
· 5 min read

In the first blog post of this series, I explained how to create a custom GitHub Action, which is useful when you cannot find the action you need on the GitHub Marketplace.

I will now focus on an interesting API that you can use when building an action: Checks & Annotations.

It is important, when a workflow is running, to provide visual feedback to the user. This is where the Checks and Annotations API comes in handy: it allows you, for example, to indicate to the user that a specific step has failed ( ❌ ) or was executed successfully ( ✅ ). Using the API, it is also possible to create detailed annotations that point to a specific line of code, helping the user understand what is going on in the workflow.
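
Under the hood, a check run and its annotations are created with a single call to the GitHub REST Checks API. Here is a minimal sketch, not the action built in this post, using the JDK HTTP client: it reports a failed check with one annotation on line 1 of README.md. OWNER/REPO are placeholders, and GITHUB_TOKEN and GITHUB_SHA are the environment variables a workflow run provides.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateCheckRun {
    public static void main(String[] args) throws Exception {
        String token = System.getenv("GITHUB_TOKEN"); // injected by the workflow

        // Check run payload: a failed check with one annotation on README.md, line 1
        String payload = """
            {
              "name": "compliance-check",
              "head_sha": "%s",
              "status": "completed",
              "conclusion": "failure",
              "output": {
                "title": "Repository compliance",
                "summary": "1 rule violated",
                "annotations": [{
                  "path": "README.md",
                  "start_line": 1,
                  "end_line": 1,
                  "annotation_level": "failure",
                  "message": "README.md must contain a project description"
                }]
              }
            }""".formatted(System.getenv("GITHUB_SHA"));

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://api.github.com/repos/OWNER/REPO/check-runs"))
            .header("Accept", "application/vnd.github.v3+json")
            .header("Authorization", "Bearer " + token)
            .POST(HttpRequest.BodyPublishers.ofString(payload))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```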

The following screenshot shows the annotation API in action:

Workflow Annotations

📗 In this second post, you will learn how to:

  1. Create custom Checks
  2. Add detailed annotations that reference the source code lines containing errors
  3. Deploy the action

If you prefer the video version of this post, go to GitHub Actions: Create custom Checks and Annotations.

It is time now to dive into the example!

· 10 min read

Automation is a key element of modern software development and deployment. GitHub, with GitHub Actions, allows you to automate many tasks, starting with your continuous integration and continuous deployment... but GitHub Actions is a lot more than a CI/CD tool: you can use it to provision your environments or automate project management tasks. However, that is not the purpose of this post, where I want to focus on the development of your own GitHub Action!

An "Action" is the reusable component of a workflow, and when you create your automation you will start by searching the GitHub Marketplace to look for actions to achieve a specific task. In addition to the thousands of actions available on the marketplace, and available in open source communities, you can create your own actions.

This blog post will guide you, using a concrete example, through the steps to create your own action; it is just "my version" of the official Creating Actions documentation chapter.

Let's say, for example, that you want to enforce that your repositories always contain a README.md and a LICENSE file. When a repository does not comply with these rules, the workflow should fail and provide clear information to the user.
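
The heart of such an action is very small: test that the files exist and report. Below is a minimal sketch of that rule check in Java; the real action is built step by step in the post, so everything here is illustrative. The ::error line uses a GitHub Actions workflow command, which the runner turns into an alert in the UI, and a non-zero exit code fails the step:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ComplianceCheck {
    public static void main(String[] args) {
        // Rules: every repository must ship these files at its root
        List<String> requiredFiles = List.of("README.md", "LICENSE");
        boolean compliant = true;

        for (String name : requiredFiles) {
            if (!Files.exists(Path.of(name))) {
                // "::error::" is a GitHub Actions workflow command:
                // the runner renders it as an error alert in the workflow UI
                System.out.println("::error::Missing required file: " + name);
                compliant = false;
            }
        }

        // A non-zero exit code marks the step (and the workflow) as failed
        if (!compliant) {
            System.exit(1);
        }
        System.out.println("Repository is compliant \u2705");
    }
}
```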

The following screenshot shows messages and alerts raised by the actions during an integration workflow:

Workflow Checks

📕 In this first post, you will learn how to:

  1. Create an action
  2. Publish the action
  3. Use the action in a workflow
  4. Add some logic to control the workflow's success or failure

📗 In a second post, you will learn how to:

  1. Create custom Checks
  2. Add detailed annotations that reference the source code lines containing errors
  3. Deploy the action

If you prefer a video version of it, take a look at "Build Your Own Action" on YouTube.

It is time now to dive into the example!

· 7 min read

One of the most common use cases for Redis is to use the database as a caching layer for your data, but Redis can do a lot more (I will publish new articles later)!

In this article, you will learn, using a straightforward service, how to cache the results of some REST API calls to accelerate data access and reduce the number of calls to external services.

For this example, I am using the "Redis Movie Database" application, a microservice-based application that I created to showcase and explain various features of Redis and Redis Enterprise.
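
The caching technique at the heart of this article is the classic cache-aside pattern: look in Redis first, and only call the remote service on a miss. Here is a minimal sketch with the Jedis client; the key naming, the one-hour TTL, and the fetchFromApi() helper are illustrative placeholders, not the demo application's actual code:

```java
import redis.clients.jedis.Jedis;

public class MovieCache {
    private final Jedis jedis = new Jedis("localhost", 6379);

    public String getMovie(String movieId) {
        String cacheKey = "movie:" + movieId;

        // 1. Try the cache first
        String cached = jedis.get(cacheKey);
        if (cached != null) {
            return cached; // cache hit: no external call needed
        }

        // 2. Cache miss: call the (slow) external REST API
        String movie = fetchFromApi(movieId);

        // 3. Store the result with a TTL so stale data eventually expires
        jedis.setex(cacheKey, 3600, movie);
        return movie;
    }

    private String fetchFromApi(String movieId) {
        // Placeholder for the real HTTP call to the external service
        return "{\"id\":\"" + movieId + "\",\"title\":\"...\"}";
    }
}
```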

· 6 min read

In this article, I will explain how to secure your Redis databases using SSL (Secure Sockets Layer). In production, it is a good practice to use SSL to protect the data moving between various computers (client applications and Redis servers). Transport Layer Security (TLS) guarantees that only allowed applications/computers are connected to the database, and also that data is not viewed or altered by a man-in-the-middle process.

You can secure the connections between your client applications and Redis cluster using:

  • One-Way SSL: the client (your application) gets the certificate from the server (Redis cluster), validates it, and then all communications are encrypted
  • Two-Way SSL (aka mutual SSL): both the client and the server authenticate each other and validate that both ends are trusted.

In this article, I will focus on Two-Way SSL, using Redis Enterprise.
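
To give an idea of the client side of Two-Way SSL, here is a sketch in Java using plain JSSE with the Jedis client: the client presents its own certificate (key store) while validating the server's (trust store). The host, port, file names, and passwords are placeholders; the article walks through generating the real artifacts:

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import redis.clients.jedis.Jedis;

public class RedisMutualTls {
    public static void main(String[] args) throws Exception {
        // Key store: the CLIENT certificate + private key (client authentication)
        KeyStore keyStore = KeyStore.getInstance("PKCS12");
        keyStore.load(new FileInputStream("client-keystore.p12"), "secret".toCharArray());
        KeyManagerFactory kmf =
            KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, "secret".toCharArray());

        // Trust store: the server/cluster CA certificate (server validation)
        KeyStore trustStore = KeyStore.getInstance("PKCS12");
        trustStore.load(new FileInputStream("truststore.p12"), "secret".toCharArray());
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        // ssl=true + our socket factory: both ends are now authenticated
        try (Jedis jedis = new Jedis("redis-endpoint.example.com", 12000, true,
                sslContext.getSocketFactory(), null, null)) {
            System.out.println("PING -> " + jedis.ping());
        }
    }
}
```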

· 7 min read

Introduction

In this article, I will show you how to update Redis Enterprise on PCF and see how the Redis Enterprise cluster guarantees service continuity using out-of-the-box failover.

If you need a Cloud Foundry application that calls Redis, you can use the simple-redis-spring-demo-pcf project.

For this article, I will upgrade Redis Enterprise for PCF from version v5.4.2400147 to the latest version, currently v5.4.40700169.

· 8 min read

As part of my onboarding/training at Redis Labs, I continue to play with the product, and today I decided to install a local three-node cluster of Redis Enterprise Software (RS) and show how easy it is to move from a single-node/single-shard database to a multi-node, highly available one.

Once your cluster is up and running, you will kill some containers to see how the system automatically fails over to guarantee service continuity.

The deployment will look more or less like the schema below (coming from the Redis Labs documentation).

This is a perfect environment for learning, developing, and testing your applications, but it is not supported in production; for production, you can use:

· 8 min read

As you may have seen, I joined Redis Labs a month ago; one of the first tasks as a new hire is to learn more about Redis. So I learned, and I am still learning.

This is when I discovered Redis Streams. I am a big fan of streaming-based applications, so it is natural that I start with a small blog post explaining how to use Redis Streams with Java.

What is Redis Streams?

Redis Streams is a Redis data type that represents a log, so you can add new information/messages in an append-only mode (this is not 100% accurate, since you can remove messages from the log). Using Redis Streams you can build "Kafka-like" applications; what I mean by that is you can:

  • create applications that publish and consume messages (nothing extraordinary here, you could already do that with Redis Pub/Sub)
  • consume messages that are published even when your client application (consumer) is not running. This is a big difference from Redis Pub/Sub
  • consume messages starting from a specific offset, for example, read the whole history, or only new messages

In addition to this, Redis Streams has the concept of Consumer Groups. Redis Streams Consumer Groups, like the Apache Kafka ones, allow client applications to consume messages in a distributed fashion (multiple clients), providing an easy way to scale and create highly available systems.

Enroll in the Redis University: Redis Streams to learn more and get certified.

Sample Application

The redis-streams-101-java GitHub repository contains sample code, sketched below, that shows how to:

  • post messages to a stream
  • consume messages using a consumer group
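
Here is a minimal sketch of both operations, assuming the Jedis 3.x client; stream, group, and consumer names are illustrative, and the repository above contains the complete, runnable version:

```java
import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.List;
import java.util.Map;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.StreamEntry;
import redis.clients.jedis.StreamEntryID;

public class StreamsQuickstart {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // 1. Post a message: XADD appends to the log, the server assigns the ID
            jedis.xadd("app:events", StreamEntryID.NEW_ENTRY,
                    Map.of("sensor", "42", "temperature", "21.5"));

            // 2. Create the consumer group once, starting from the last message ($)
            try {
                jedis.xgroupCreate("app:events", "analytics",
                        StreamEntryID.LAST_ENTRY, true);
            } catch (Exception e) {
                // group already exists -- fine
            }

            // 3. Consume as a member of the group: ">" means "messages never
            //    delivered to any consumer of this group"
            List<Map.Entry<String, List<StreamEntry>>> messages = jedis.xreadGroup(
                    "analytics", "consumer-1", 10, 2000, false,
                    new SimpleImmutableEntry<>("app:events",
                            StreamEntryID.UNRECEIVED_ENTRY));

            if (messages != null) {
                for (Map.Entry<String, List<StreamEntry>> stream : messages) {
                    for (StreamEntry entry : stream.getValue()) {
                        System.out.println(entry.getID() + " -> " + entry.getFields());
                        // 4. Acknowledge so the message leaves the pending list
                        jedis.xack("app:events", "analytics", entry.getID());
                    }
                }
            }
        }
    }
}
```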

· 3 min read

Introduction

In this project you will learn how to use the MapR-DB JSON REST API to:

  • Create and Delete tables
  • Create, Read, Update and Delete documents (CRUD)

MapR Ecosystem Pack 5.0 (MEP) introduced the MapR-DB JSON REST API, which allows applications to use REST to interact with MapR-DB JSON.

You can find information about the MapR-DB JSON REST API in the documentation: Using the MapR-DB JSON REST API
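
To make this concrete, here is a sketch of a single call, inserting a JSON document, using the JDK HTTP client. The gateway host, the default port 8243, and the token handling are assumptions based on the documentation linked above; the table path is URL-encoded into the URI:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MaprDbRestInsert {
    public static void main(String[] args) throws Exception {
        // Assumptions: the Data Access Gateway listens on port 8243, a JWT was
        // already obtained from /auth/v2/token, and the table is /apps/employees
        String gateway = "https://mapr-node:8243";
        String table = "%2Fapps%2Femployees"; // "/apps/employees", URL-encoded
        String jwt = System.getenv("MAPR_JWT");

        // Insert a JSON document; _id is the document key in MapR-DB JSON
        String document = "{\"_id\":\"user001\",\"name\":\"John\",\"age\":35}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(gateway + "/api/v2/table/" + table))
            .header("Authorization", "Bearer " + jwt)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(document))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```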

· 6 min read

Introduction

MapR-DB Table Replication allows data to be replicated to another table that could be on the same cluster or in another cluster. This is different from the automatic intra-cluster replication that copies the data onto different physical nodes for high availability and to prevent data loss.

This tutorial focuses on the MapR-DB Table Replication that replicates data between tables on different clusters.

Replicating data between different clusters allows you to:

  • provide another level of disaster recovery that protects your data and applications against global data center failure,
  • push data close to the applications and users,
  • aggregate the data from multiple datacenters.

Replication Topologies

MapR-DB Table Replication provides various topologies to adapt the replication to the business and technical requirements:

  • Master-slave replication: in this topology, you replicate one way from source tables to replicas. The replicas can be in a remote cluster or in the cluster where the source tables are located.
  • Multi-master replication: in this topology, there are two master-slave relationships, with each table playing the role of both master and slave. Client applications update both tables, and each table replicates updates to the other.

In this example, you will learn how to set up multi-master replication.

· 8 min read

Introduction

MapR Ecosystem Pack 2.0 (MEP) comes with some new features related to MapR Streams:

MapR Ecosystem Packs (MEPs) are a way to deliver ecosystem upgrades decoupled from core upgrades, allowing you to upgrade your tooling independently of your Converged Data Platform. You can learn more about MEP 2.0 in this article.

In this blog post, we describe how to use the REST Proxy to publish and consume messages to/from MapR Streams. The REST Proxy is a great addition to the MapR Converged Data Platform, allowing any programming language to use MapR Streams.

The Kafka REST Proxy, provided with the MapR Streams tools, can be used with MapR Streams (the default), but also in a hybrid mode with Apache Kafka. In this article, we will focus on MapR Streams.
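
As a taste of what follows, publishing a message to a MapR Streams topic through the REST Proxy is a single POST. The host and stream path below are placeholders; note that with MapR Streams the topic name embeds the stream path, URL-encoded (here /sample-stream:topic1):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestProxyProducer {
    public static void main(String[] args) throws Exception {
        // MapR Streams topic "/sample-stream:topic1", URL-encoded for the path
        String topic = "%2Fsample-stream%3Atopic1";

        // Kafka REST Proxy v1 envelope: a list of records, each with a value
        String payload = "{\"records\":[{\"value\":{\"temp\":21.5,\"sensor\":\"42\"}}]}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://restproxy-node:8082/topics/" + topic))
            .header("Content-Type", "application/vnd.kafka.json.v1+json")
            .POST(HttpRequest.BodyPublishers.ofString(payload))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        // The proxy replies with the partition and offset of each record
        System.out.println(response.body());
    }
}
```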