Tug’s Blog

Redis, NoSQL and more…

Simple Caching Service With Redis


One of the most common use cases for Redis is to use it as a caching layer for your data, but Redis can do a lot more (I will publish new articles later)!

In this article, you will learn, using a straightforward service, how to cache the results of REST API calls to accelerate data access and reduce the number of calls to external services.

For this example, I am using the “Redis Movie Database” application, a microservice-based application that I created to showcase and explain various features of Redis and Redis Enterprise.

You can see the caching service in action in this video:

Architecture Overview

The application uses a third-party API, the “OMDb API”, to retrieve a movie’s ratings from its IMDb identifier. The frontend application calls the /caching/rating/ service to get the rating information from OMDb.

This service does the following (sketched in Java after the list):

  1. Check whether the rating data is already cached; if so, retrieve it from the cache
  2. If the information is not cached, call the OMDb API with the proper API key and movie ID
  3. Cache the result in Redis with a time-to-live (TTL) of 120 seconds
  4. Return the ratings to the client.
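
Condensed, the cache-aside logic looks more or less like this. It is a minimal sketch using the Jedis client; the key naming, Redis host, and OMDb API key are placeholders and not necessarily what the sample application uses:

```java
import redis.clients.jedis.Jedis;

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class RatingsCache {

    private static final int TTL_SECONDS = 120;                     // same TTL as described above
    private static final String OMDB_API_KEY = "your-omdb-api-key"; // hypothetical placeholder

    private final Jedis jedis = new Jedis("localhost", 6379);       // assumed local Redis

    public String getRatings(String imdbId) throws Exception {
        String cacheKey = "cache:rating:" + imdbId;                 // hypothetical key convention
        // 1. Check whether the data is already in the cache
        String cached = jedis.get(cacheKey);
        if (cached != null) {
            return cached;
        }
        // 2. Cache miss: call the OMDb API with the API key and movie ID
        URL url = new URL("http://www.omdbapi.com/?apikey=" + OMDB_API_KEY + "&i=" + imdbId);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String body;
        try (Scanner scanner = new Scanner(conn.getInputStream()).useDelimiter("\\A")) {
            body = scanner.hasNext() ? scanner.next() : "";
        }
        // 3. Cache the result with a 120-second time to live
        jedis.setex(cacheKey, TTL_SECONDS, body);
        // 4. Return the ratings to the client
        return body;
    }
}
```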

How to Use SSL/TLS With Redis Enterprise


In this article, I will explain how to secure your Redis databases using SSL (Secure Sockets Layer). In production, it is a good practice to use SSL to protect the data moving between the various computers involved (client applications and Redis servers). Transport Layer Security (TLS) guarantees that only allowed applications/computers connect to the database, and also that data is not viewed or altered by a man-in-the-middle process.

You can secure the connections between your client applications and Redis cluster using:

  • One-Way SSL: the client (your application) gets the certificate from the server (Redis cluster) and validates it; then all communications are encrypted
  • Two-Way SSL (aka mutual SSL): both the client and the server authenticate each other and validate that both ends are trusted.

In this article, I will focus on Two-Way SSL with Redis Enterprise.
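
As a preview, here is a minimal sketch of a Java client opening a mutual-TLS connection with Jedis. It assumes you already have a client certificate/key pair in a PKCS12 keystore and the cluster’s proxy certificate in a truststore; all file names, passwords, and the endpoint below are placeholders:

```java
import redis.clients.jedis.Jedis;

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import java.io.FileInputStream;
import java.security.KeyStore;

public class MutualTlsConnection {
    public static void main(String[] args) throws Exception {
        char[] password = "secret".toCharArray(); // placeholder password

        // Keystore with the client certificate and private key (the client side of mutual SSL)
        KeyStore keyStore = KeyStore.getInstance("PKCS12");
        keyStore.load(new FileInputStream("client.p12"), password);
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, password);

        // Truststore with the Redis Enterprise proxy certificate (the server side)
        KeyStore trustStore = KeyStore.getInstance("JKS");
        trustStore.load(new FileInputStream("truststore.jks"), password);
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        // Jedis accepts a custom SSLSocketFactory for encrypted connections
        Jedis jedis = new Jedis("redis-12000.mycluster.example.com", 12000,
                true, sslContext.getSocketFactory(), null, null);
        System.out.println("PING -> " + jedis.ping());
        jedis.close();
    }
}
```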

Redis Rolling Upgrade on Pivotal Cloud Foundry (PCF)


Introduction

In this article, I will show you how to upgrade Redis Enterprise on PCF and see how the Redis Enterprise cluster guarantees service continuity using its out-of-the-box failover.

If you need a Cloud Foundry application that calls Redis automatically, you can use the simple-redis-spring-demo-pcf project.

For this article, I will upgrade Redis Enterprise for PCF from version v5.4.2400147 to the latest version, currently v5.4.40700169.

Multi-Nodes Redis Cluster With Docker


As part of my onboarding/training at Redis Labs, I continue to play with the product, and today I decided to install a local three-node cluster of Redis Enterprise Software (RS) and show how easy it is to move from a single-node/single-shard database to a multi-node, highly available one.

Once your cluster is up and running, you will kill some containers to see how the system automatically fails over to guarantee service continuity.

The deployment will look more or less like the schema below (coming from the Redis Labs documentation).

This is a perfect environment for learning, developing and testing your applications, but it is not supported in production; for production, you can use:

Getting Started With Redis Streams & Java


As you may have seen, I joined Redis Labs a month ago; one of the first tasks as a new hire is to learn more about Redis. So I learned, and I am still learning.

This is when I discovered Redis Streams. I am a big fan of streaming-based applications, so it is natural that I start with a small blog post explaining how to use Redis Streams with Java.

What is Redis Streams?

Redis Streams is a Redis data type that represents a log: you add new information/messages in an append-only mode (this is not 100% accurate, since you can remove messages from the log). Using Redis Streams, you can build “Kafka-like” applications; what I mean by that is you can:

  • create applications that publish and consume messages (nothing extraordinary here; you could already do that with Redis Pub/Sub)
  • consume messages that were published even when your client application (consumer) is not running. This is a big difference from Redis Pub/Sub
  • consume messages starting at a specific offset, for example, read the whole history or only new messages

In addition to this, Redis Streams has the concept of Consumer Groups. Redis Streams Consumer Groups, like Apache Kafka’s, allow client applications to consume messages in a distributed fashion (multiple clients), providing an easy way to scale and to create highly available systems.

Enroll in the Redis University “Redis Streams” course to learn more and get certified.

Sample Application

The redis-streams-101-java GitHub repository contains sample code that shows how to:

  • post messages to a stream
  • consume messages using a consumer group (both sketched below)
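
Condensed, those two operations look roughly like this. This is a minimal sketch using the Jedis 3.x client; the stream, group, and consumer names are placeholders, not necessarily the ones used in the repository:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.StreamEntry;
import redis.clients.jedis.StreamEntryID;

import java.util.AbstractMap.SimpleEntry;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class StreamsQuickStart {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String stream = "weather"; // placeholder stream name

            // Post a message to the stream (XADD with an auto-generated ID)
            jedis.xadd(stream, StreamEntryID.NEW_ENTRY,
                    Collections.singletonMap("temperature", "22"));

            // Create a consumer group starting at the beginning of the stream
            try {
                jedis.xgroupCreate(stream, "app-group", new StreamEntryID(), false);
            } catch (Exception e) {
                // the group probably exists already
            }

            // Consume new messages as a member of the group (XREADGROUP with the '>' ID)
            List<Map.Entry<String, List<StreamEntry>>> messages = jedis.xreadGroup(
                    "app-group", "consumer-1", 10, 2000, false,
                    new SimpleEntry<>(stream, StreamEntryID.UNRECEIVED_ENTRY));

            if (messages != null) {
                for (Map.Entry<String, List<StreamEntry>> perStream : messages) {
                    for (StreamEntry entry : perStream.getValue()) {
                        System.out.println(entry.getID() + " -> " + entry.getFields());
                        // Acknowledge the message so it leaves the pending entries list
                        jedis.xack(stream, "app-group", entry.getID());
                    }
                }
            }
        }
    }
}
```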

Getting Started With MapR-DB JSON REST API


Introduction

In this project, you will learn how to use the MapR-DB JSON REST API to:

  • Create and delete tables
  • Create, read, update, and delete documents (CRUD)

MapR Ecosystem Pack 5.0 (MEP) introduced the MapR-DB JSON REST API, which allows applications to use REST to interact with MapR-DB JSON.

You can find information about the MapR-DB JSON REST API in the documentation: Using the MapR-DB JSON REST API
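
To give you an idea, inserting a document is a plain HTTP call. The sketch below follows the /api/v2/table/{table-path} pattern described in the documentation, but the host, gateway port, credentials, and table path are assumptions for illustration, and TLS certificate setup is omitted:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Base64;

public class MaprDbRestInsert {
    public static void main(String[] args) throws Exception {
        // Table path is URL-encoded and appended to the endpoint (placeholder values)
        String tablePath = URLEncoder.encode("/apps/employees", "UTF-8");
        URL url = new URL("https://mapr-node1:8243/api/v2/table/" + tablePath);

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        // Basic authentication with a cluster user (placeholder credentials)
        String auth = Base64.getEncoder().encodeToString("mapr:mapr".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + auth);

        // A MapR-DB JSON document; _id is the document key
        String document = "{\"_id\":\"user001\",\"firstName\":\"John\",\"lastName\":\"Doe\"}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(document.getBytes("UTF-8"));
        }
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}
```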

Getting Started With MapR-DB Table Replication


Introduction

MapR-DB Table Replication allows data to be replicated to another table that can be on the same cluster or on another cluster. This is different from the automatic intra-cluster replication that copies the data to different physical nodes for high availability and to prevent data loss.

This tutorial focuses on the MapR-DB Table Replication that replicates data between tables on different clusters.

Replicating data between different clusters allows you to:

  • provide another level of disaster recovery that protects your data and applications against global data center failure,
  • push data close to the applications and users,
  • aggregate the data from multiple data centers.

Replication Topologies

MapR-DB Table Replication provides various topologies to adapt the replication to the business and technical requirements:

  • Master-slave replication: in this topology, you replicate one way, from source tables to replicas. The replicas can be in a remote cluster or in the cluster where the source tables are located.
  • Multi-master replication: in this topology, there are two master-slave relationships, with each table playing the roles of both master and slave. Client applications update both tables, and each table replicates updates to the other.

In this example, you will learn how to set up multi-master replication.

Getting Started With Kafka REST Proxy for MapR Streams


Introduction

MapR Ecosystem Pack 2.0 (MEP) is coming with some new features related to MapR Streams:

MapR Ecosystem Packs (MEPs) are a way to deliver ecosystem upgrades decoupled from core upgrades, allowing you to upgrade your tooling independently of your Converged Data Platform. You can learn more about MEP 2.0 in this article.

In this blog, we describe how to use the REST Proxy to publish and consume messages to/from MapR Streams. The REST Proxy is a great addition to the MapR Converged Data Platform, allowing any programming language to use MapR Streams.

The Kafka REST Proxy provided with the MapR Streams tools can be used with MapR Streams (the default), but also in a hybrid mode with Apache Kafka. In this article, we will focus on MapR Streams.
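
To give you an idea before we dive in, publishing a message through the REST Proxy is a plain HTTP POST. The sketch below uses the v1 embedded-JSON content type; the proxy host and the stream and topic names are placeholders for illustration:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class RestProxyProducer {
    public static void main(String[] args) throws Exception {
        // With MapR Streams the "topic" is the full stream path plus the topic name,
        // e.g. /sample-stream:first-topic (placeholder names), URL-encoded
        String topic = URLEncoder.encode("/sample-stream:first-topic", "UTF-8");
        URL url = new URL("http://restproxy-host:8082/topics/" + topic);

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Embedded JSON format of the Kafka REST Proxy v1 API
        conn.setRequestProperty("Content-Type", "application/vnd.kafka.json.v1+json");

        String body = "{\"records\":[{\"value\":{\"hello\":\"world\"}}]}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes("UTF-8"));
        }
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}
```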

Getting Started With MQTT and Java


MQTT (MQ Telemetry Transport) is a lightweight publish/subscribe messaging protocol. MQTT is used a lot in Internet of Things applications, since it has been designed to run in remote locations on systems with a small footprint.

MQTT 3.1 is an OASIS standard, and you can find all the information at http://mqtt.org/

This article will guide you through the various steps to run your first MQTT application:

  1. Install and start an MQTT broker
  2. Write an application that publishes messages
  3. Write an application that consumes messages

The source code of the sample application is available on GitHub.
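
As a preview of steps 2 and 3, here is a minimal sketch using the Eclipse Paho Java client (org.eclipse.paho.client.mqttv3). It assumes a broker such as Mosquitto running locally on the default port; the topic and client IDs are placeholders:

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MqttQuickStart {
    public static void main(String[] args) throws Exception {
        String broker = "tcp://localhost:1883"; // assumed local broker
        String topic = "house/temperature";     // placeholder topic name

        // Consumer: subscribe and print incoming messages
        MqttClient subscriber = new MqttClient(broker, "demo-subscriber");
        subscriber.connect();
        subscriber.subscribe(topic, (t, msg) ->
                System.out.println("Received: " + new String(msg.getPayload())));

        // Producer: publish a message to the same topic
        MqttClient publisher = new MqttClient(broker, "demo-publisher");
        publisher.connect();
        publisher.publish(topic, new MqttMessage("21.5".getBytes()));

        Thread.sleep(1000); // give the subscriber time to receive the message
        publisher.disconnect();
        subscriber.disconnect();
    }
}
```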

Getting Started With Apache Flink and MapR Streams


Introduction

Apache Flink is an open source platform for distributed stream and batch data processing. Flink is a streaming dataflow engine with several APIs for creating data-stream-oriented applications.

It is very common for Flink applications to use Apache Kafka for data input and output.

This article will guide you through the steps to use Apache Flink with MapR Streams. MapR Streams is a distributed messaging system for streaming event data at scale, and it’s integrated into the MapR Converged Data Platform. Since MapR Streams is based on the Apache Kafka API (0.9.0), this article uses the same code and approach as the Flink and Kafka Getting Started guide.

MapR Streams and Flink.
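
To give you an idea of where we are heading, here is a minimal Flink job reading from MapR Streams through the Kafka 0.9 connector. The stream and topic names are placeholders, and it assumes the MapR Kafka client libraries are on the classpath (with plain Apache Kafka you would also need to set bootstrap.servers):

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

import java.util.Properties;

public class FlinkMaprStreamsJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties properties = new Properties();
        properties.setProperty("group.id", "flink-demo"); // placeholder consumer group

        // With MapR Streams the topic is addressed by its full path:
        // "/<stream-path>:<topic>" (placeholder names below)
        DataStream<String> messages = env.addSource(
                new FlinkKafkaConsumer09<>("/sample-stream:flink-demo",
                        new SimpleStringSchema(), properties));

        messages.print();
        env.execute("Flink / MapR Streams demo");
    }
}
```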