
· 8 min read

Apache Drill allows users to explore any type of data using ANSI SQL. This is great, but Drill goes even further than that and allows you to create custom functions to extend the query engine. These custom functions have all the performance of any of the Drill primitive operations, but achieving that performance makes writing these functions a little trickier than you might expect.

In this article, I'll explain step by step how to create and deploy a new function using a very basic example. Note that you can find a lot of information about Drill custom functions in the documentation.

Let's create a new function that allows you to mask some characters in a string, and let's make it very simple. The new function will allow users to hide a given number of characters from the start of the string and replace them with a character of their choice. This will look like:

MASK( 'PASSWORD' , '#' , 4 ) => ####WORD

You can find the full project in the following GitHub repository.

As mentioned before, we could imagine many advanced features for this function, but my goal is to focus on the steps needed to write a custom function, not so much on what the function does.

Prerequisites

For this you will need:

  • Java Developer Kit 7 or later
  • Apache Drill 1.1 or later
  • Maven 3.0 or later

Dependencies

The following Drill dependency should be added to your Maven project:

<dependency>
  <groupId>org.apache.drill.exec</groupId>
  <artifactId>drill-java-exec</artifactId>
  <version>1.1.0</version>
</dependency>

Source

The mask function is an implementation of the DrillSimpleFunc interface.

Developers can create 2 types of custom functions:

  • Simple functions: these functions take a single row as input and produce a single value as output
  • Aggregation functions: these accept multiple rows as input and produce one value as output

Simple functions are often referred to as UDFs, which stands for user-defined functions. Aggregation functions are referred to as UDAFs, user-defined aggregation functions.
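To make the distinction concrete, here is a hedged skeleton of what an aggregate function looks like; it is based on the DrillAggFunc interface, and the my_count name and trivial counting logic are purely illustrative:

package org.apache.drill.contrib.function;

import org.apache.drill.exec.expr.DrillAggFunc;
import org.apache.drill.exec.expr.annotations.FunctionTemplate;
import org.apache.drill.exec.expr.annotations.Output;
import org.apache.drill.exec.expr.annotations.Param;
import org.apache.drill.exec.expr.annotations.Workspace;
import org.apache.drill.exec.expr.holders.BigIntHolder;

// Illustrative sketch: a trivial row counter showing the shape of a UDAF.
@FunctionTemplate(
    name = "my_count",
    scope = FunctionTemplate.FunctionScope.POINT_AGGREGATE
)
public class MyCountFunc implements DrillAggFunc {

    @Param
    BigIntHolder input;      // value read on each incoming row

    @Workspace
    BigIntHolder count;      // state accumulated across rows

    @Output
    BigIntHolder out;        // the single aggregated result

    public void setup() {
        count = new BigIntHolder();
        count.value = 0;
    }

    public void add() {      // called once per input row
        count.value++;
    }

    public void output() {   // produce the aggregated value
        out.value = count.value;
    }

    public void reset() {    // clear the state between groups
        count.value = 0;
    }
}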

In this example, we just need to transform the value of a column on each row, so a simple function is enough.

Create the function

The first step is to implement the DrillSimpleFunc interface.

package org.apache.drill.contrib.function;

import org.apache.drill.exec.expr.DrillSimpleFunc;
import org.apache.drill.exec.expr.annotations.FunctionTemplate;

@FunctionTemplate(
    name = "mask",
    scope = FunctionTemplate.FunctionScope.SIMPLE,
    nulls = FunctionTemplate.NullHandling.NULL_IF_NULL
)
public class SimpleMaskFunc implements DrillSimpleFunc {

    public void setup() {

    }

    public void eval() {

    }
}

The behavior of the function is driven by the @FunctionTemplate annotation:

  • the name of the function
  • the scope of the function, in our case Simple
  • what to do when the value is NULL; in this case the function will simply return NULL

Now we need to implement the logic of the function using the setup() and eval() methods.

  • setup is self-explanatory, and in our case we do not need to set anything up.
  • eval is the core of the function. As you can see, this method takes no parameters and returns void. So how does it work?

In fact the function will be generated dynamically (see DrillSimpleFuncHolder), and the input parameters and output are defined using holder classes, declared with annotations. Let's look into this.

import io.netty.buffer.DrillBuf;
import org.apache.drill.exec.expr.DrillSimpleFunc;
import org.apache.drill.exec.expr.annotations.FunctionTemplate;
import org.apache.drill.exec.expr.annotations.Output;
import org.apache.drill.exec.expr.annotations.Param;
import org.apache.drill.exec.expr.holders.IntHolder;
import org.apache.drill.exec.expr.holders.NullableVarCharHolder;
import org.apache.drill.exec.expr.holders.VarCharHolder;

import javax.inject.Inject;

@FunctionTemplate(
    name = "mask",
    scope = FunctionTemplate.FunctionScope.SIMPLE,
    nulls = FunctionTemplate.NullHandling.NULL_IF_NULL
)
public class SimpleMaskFunc implements DrillSimpleFunc {

    @Param
    NullableVarCharHolder input;

    @Param(constant = true)
    VarCharHolder mask;

    @Param(constant = true)
    IntHolder toReplace;

    @Output
    VarCharHolder out;

    @Inject
    DrillBuf buffer;

    public void setup() {
    }

    public void eval() {

    }

}

We need to define the parameters of the function. In this case we have 3 parameters, each defined using the @Param annotation. In addition, we also have to define the returned value using the @Output annotation.

The parameters of our mask function are:

  • A nullable string
  • The mask char or string
  • The number of characters to replace starting from the first

The function returns:

  • A string

For each of these parameters you have to use a holder class. For the strings this is managed by a VarCharHolder or NullableVarCharHolder, which provides a buffer to manage larger objects in an efficient way. Since we are manipulating a VarChar, you also have to inject another buffer (the DrillBuf field above) that will be used for the output. Note that Drill doesn't actually use the Java heap for data being processed in a query, but instead keeps this data off the heap and manages its life-cycle for us without using the Java garbage collector.

We are almost done: we have the proper class and the input/output objects, so we just need to implement the eval() method itself using these objects.

public void eval() {

    // get the mask and the value to mask
    String maskValue = org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.getStringFromVarCharHolder(mask);
    String stringValue = org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.toStringFromUTF8(input.start, input.end, input.buffer);

    int numberOfCharToReplace = Math.min(toReplace.value, stringValue.length());

    // build the masked string: the mask repeated, then the remaining characters
    String maskSubString = com.google.common.base.Strings.repeat(maskValue, numberOfCharToReplace);
    String outputValue = (new StringBuilder(maskSubString)).append(stringValue.substring(numberOfCharToReplace)).toString();

    // put the output value in the out buffer
    out.buffer = buffer;
    out.start = 0;
    out.end = outputValue.getBytes().length;
    buffer.setBytes(0, outputValue.getBytes());
}

The code is quite simple:

  • get the mask itself
  • get the value to mask
  • get the number of characters to replace
  • generate a new string with the masked values
  • create and populate the output buffer

This code does, however, look a bit strange to somebody used to reading Java code. This strangeness arises because the final code that is executed in a query will actually be generated on the fly. This allows Drill to leverage Java's just-in-time (JIT) compiler for maximum speed. To make this work, you have to respect some basic rules:

  • Do not use imports; instead, use the fully qualified class name, as is done with the Strings class (coming from the Google Guava API packaged with Apache Drill).
  • The ValueHolder classes, in our case VarCharHolder and IntHolder, should be manipulated like structs, so you must call helper methods, for example getStringFromVarCharHolder and toStringFromUTF8. Calling methods like toString will result in serious problems (see the sketch below).
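To make the second rule concrete, here is a small right/wrong illustration, reusing the mask holder from the function above:

// Wrong: toString() on a holder does not decode the value it points to.
// String maskValue = mask.toString();

// Right: decode the UTF-8 bytes between start and end through the helper,
// referenced by its fully qualified class name instead of an import.
String maskValue = org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers
        .toStringFromUTF8(mask.start, mask.end, mask.buffer);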

Starting in Apache Drill 1.3.x, it is mandatory to specify the package name of your function in the ./resources/drill-module.conf file as follows:

drill {
  classpath.scanning {
    packages : ${?drill.classpath.scanning.packages} [
      org.apache.drill.contrib.function
    ]
  }
}

We are now ready to deploy and test this new function.

Package

Once again, since Drill generates source code on the fly, you must prepare your package in such a way that both the classes and the sources of the function are present in the classpath. This is different from the way Java code is normally packaged, but is necessary for Drill to do the necessary code generation: Drill uses the compiled code to access the annotations and uses the source code to do the code generation.

An easy way to do that is to use Maven to build your project and, in particular, to use the maven-source-plugin like this in your pom.xml file:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-source-plugin</artifactId>
  <version>2.4</version>
  <executions>
    <execution>
      <id>attach-sources</id>
      <phase>package</phase>
      <goals>
        <goal>jar-no-fork</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Now, when you build using mvn package, Maven will generate 2 jars:

  • The default jar with the classes and resources (drill-simple-mask-1.0.jar)
  • A second jar with the sources (drill-simple-mask-1.0-sources.jar)

Finally, you must add a drill-module.conf file in the resources folder of your project to tell Drill that your jar contains a custom function. If you have no specific configuration to set for your function, you can keep this file empty.

We are all set; you can now package and deploy the new function. Just build and copy the jars into the Drill 3rd party folder, $DRILL_HOME/jars/3rdparty, where $DRILL_HOME is your Drill installation folder.

mvn clean package

cp target/*.jar $DRILL_HOME/jars/3rdparty

Restart Drill.

Run!

You should now be able to use your function in your queries:

SELECT MASK(first_name, '*', 3) FIRST, MASK(last_name, '#', 7) LAST FROM cp.`employee.json` LIMIT 5;
+----------+------------+
|  FIRST   |    LAST    |
+----------+------------+
| ***ri    | ######     |
| ***rick  | #######    |
| ***hael  | ######     |
| ***a     | #######ez  |
| ***erta  | #######    |
+----------+------------+

Conclusion

In this simple project you have learned how to write, deploy and use a custom Apache Drill function. You can now extend this to create your own functions.

One important thing to remember when extending Apache Drill (with a custom function, storage plugin or format) is that the Drill runtime dynamically generates a lot of code. This means you may have to follow a very specific pattern when writing and deploying your extensions. With our basic function this meant we had to:

  • deploy classes AND sources
  • use fully qualified class names
  • use value holder classes and helper methods to manipulate parameters

· 7 min read

As you know, there are many differences between relational and document databases. The biggest, for the developer, is probably the data model: row versus document. This is particularly true when we talk about "relations" versus "embedded documents (or values)". Let's look at some examples, then see the various operations provided by MongoDB to help you deal with this.
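As a quick illustration, here is a hypothetical order document that embeds its line items, where a relational model would split them across two tables and join them at query time:

db.orders.insert({
  _id : 1,
  customer : "Alice",
  items : [                                    // embedded "one-to-many" relation
    { sku : "A-100", qty : 2, price : 9.99 },
    { sku : "B-250", qty : 1, price : 24.50 }
  ]
});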

· 6 min read

Last week at the Paris MUG, I had a quick chat about security and MongoDB, and I decided to write this post to explain how to configure the out-of-the-box security available in MongoDB.

You can find all the information about MongoDB security in the following documentation chapter:

In this post, I won't go into the details of how to deploy your database in a secured environment (DMZ/network/IP/location/...).

I will focus on Authentication and Authorization, and provide the steps to secure access to your database and data.

I have to mention that by default, when you install and start MongoDB, security is not enabled, just to make it easier to start working with it.

The first part of security is Authentication, for which you have multiple choices, documented here. Let's focus on the "MONGODB-CR" mechanism.

The second part is Authorization, which selects what a user can and cannot do once connected to the database. The documentation about authorization is available here.

Let's now walk through how to:

  1. Create an Administrator User
  2. Create Application Users

For each type of user I will show how to grant specific permissions.
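As a preview, creating an administrator from the shell looks roughly like this (the user name, password and role are placeholders; the exact command depends on your MongoDB version):

use admin
db.createUser({
  user : "siteAdmin",
  pwd : "change_me",                                          // placeholder password
  roles : [ { role : "userAdminAnyDatabase", db : "admin" } ] // admin-level role
});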

· 7 min read

A few days ago I posted a joke on Twitter

So I decided to move it from a simple picture to a real project. Let's look at the two phases of this so-called project:

  • Moving the data from Couchbase to MongoDB
  • Updating the application code to use MongoDB

Look at this screencast to see it in action:

· 6 min read

TL;DR:

  • MongoDB & Sage organized an internal Hackathon
  • We used the new X3 Platform based on MongoDB, Node.js and HTML to add cool features to the ERP
  • This shows that “any” enterprise can (should) do it to:
    • look differently at software development
    • build strong team spirit
    • have fun!

Introduction

I have, like many of you, participated in multiple hackathons where developers, designers and entrepreneurs work together to build applications in a few hours/days. As you probably know, more and more companies are running such events internally; it is the case for example at Facebook and Google, but also at ING (bank), AXA (insurance), and many more.

Last week, I participated in the first Sage Hackathon!

In case you do not know, Sage is a 30+ year old ERP vendor. I have to say that I could not imagine that coming from such a company… Let me tell you more about it.

· 5 min read

In this article we will see how to create a pub/sub application (messaging, chat, notification), fully based on MongoDB (without any message broker like RabbitMQ, JMS, ...).

So, what needs to be done to achieve such a thing:

  • an application "publishes" a message. In our case, we simply save a document into MongoDB
  • another application, or thread, subscribes to these events and will receive messages automatically. In our case this means that the application should automatically receive newly created documents out of MongoDB

All this is possible with some very cool MongoDB features: capped collections and tailable cursors, sketched below.
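Here is a minimal sketch of the idea in the mongo shell (the messages collection name and size are illustrative):

// Publisher: a capped collection is a fixed-size, insertion-ordered log.
db.createCollection("messages", { capped : true, size : 1048576 });
db.messages.insert({ type : "chat", text : "hello", created : new Date() });

// Subscriber: a tailable cursor keeps returning newly inserted documents,
// much like `tail -f` on a file.
var cursor = db.messages.find()
                        .addOption(DBQuery.Option.tailable)
                        .addOption(DBQuery.Option.awaitData);
while (cursor.hasNext()) {
  printjson(cursor.next());
}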

· 7 min read

In the past 2 years, I have met many developers and architects working on "big data" projects. This sounds amazing, but quite often the truth is not that amazing.

TL;DR: You believe that you have a big data project?

  • Do not start with the installation of a Hadoop cluster -- the "how"
  • Start to talk to business people to understand their problem -- the "why"
  • Understand the data you must process
  • Look at the volume -- very often it is not "that" big
  • Then implement it, and take a simple approach, for example start with MongoDB + Apache Spark


· 9 min read

This post is a quick and simple introduction to the geospatial features of MongoDB 2.6, using a simple dataset and queries.

Storing Geospatial Information

As you know, you can store any type of data, but if you want to query it geographically you need to store coordinates and create an index on them. MongoDB supports three types of indexes for geospatial queries:

  • 2d Index: uses simple coordinate pairs (longitude, latitude). As stated in the documentation: The 2d index is intended for legacy coordinate pairs used in MongoDB 2.2 and earlier. For this reason, I won't detail anything about it in this post. Just for the record, 2d indexes are used to query data stored as points on a two-dimensional plane.
  • 2d Sphere Index: supports queries of any geometries on an earth-like sphere; the data can be stored as GeoJSON or legacy coordinate pairs (longitude, latitude). For the rest of the article I will use this type of index, focusing on GeoJSON.
  • Geo Haystack: used to query very small areas. It is less used by applications today, so I will not describe it in this post. This article will therefore focus on the 2d Sphere index with the GeoJSON format to store and query documents; creating both main index types is sketched right after this list.
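For reference, creating the two main index types from the shell looks like this (the places collection and loc field are illustrative):

// Legacy coordinate pairs on a flat, two-dimensional plane.
db.places.ensureIndex( { loc : "2d" } );

// GeoJSON geometries on an earth-like sphere (used in the rest of this post).
db.places.ensureIndex( { loc : "2dsphere" } );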

So what is GeoJSON?

You can look at the http://geojson.org/ site; here, let's just do a very short explanation. GeoJSON is a format for encoding, in JSON, a variety of geographic data structures, and it supports the following types: Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon and GeometryCollection.

The GeoJSON format is quite straightforward: for the simple geometries it is based on two attributes, type and coordinates. Let's take some examples:

The city where I spent all my childhood, Pleneuf Val-André, France, has the following coordinates (from Wikipedia):

48° 35′ 30.12″ N, 2° 32′ 48.84″ W

This notation is a point given as latitude & longitude in the WGS 84 (degrees, minutes, seconds) system. Since this is not very easy to use in application code, it is also possible to represent the exact same point using the following values for latitude & longitude:

48.5917, -2.5469

This one uses the WGS 84 (decimal degrees) system. These are the coordinates you see used in most of the applications/APIs you use as a developer (e.g. Google Maps/Earth).
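The conversion is simple arithmetic: decimal degrees = degrees + minutes/60 + seconds/3600. For our point, 48 + 35/60 + 30.12/3600 ≈ 48.5917, and since the longitude is west it becomes negative: -(2 + 32/60 + 48.84/3600) ≈ -2.5469.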

By default, GeoJSON and MongoDB use these values, but the coordinates must be stored in longitude, latitude order, so this point in GeoJSON will look like:

{
  "type": "Point",
  "coordinates": [
    -2.5469,
    48.5917
  ]
}

This is a simple Point; let's now look at a line, for example a very nice walk on the beach:

{
  "type": "LineString",
  "coordinates": [
    [-2.551082, 48.5955632],
    [-2.551229, 48.594312],
    [-2.551550, 48.593312],
    [-2.552400, 48.592312],
    [-2.553677, 48.590898]
  ]
}


So, using the same approach, you will be able to create MultiPoint, MultiLineString, Polygon and MultiPolygon geometries. It is also possible to mix all of these in a single document using a GeometryCollection. The following example is a GeometryCollection of a MultiLineString and a Polygon over Central Park:

{
  "type" : "GeometryCollection",
  "geometries" : [
    {
      "type" : "Polygon",
      "coordinates" : [
        [
          [ -73.9580, 40.8003 ],
          [ -73.9498, 40.7968 ],
          [ -73.9737, 40.7648 ],
          [ -73.9814, 40.7681 ],
          [ -73.9580, 40.8003 ]
        ]
      ]
    },
    {
      "type" : "MultiLineString",
      "coordinates" : [
        [ [ -73.96943, 40.78519 ], [ -73.96082, 40.78095 ] ],
        [ [ -73.96415, 40.79229 ], [ -73.95544, 40.78854 ] ],
        [ [ -73.97162, 40.78205 ], [ -73.96374, 40.77715 ] ],
        [ [ -73.97880, 40.77247 ], [ -73.97036, 40.76811 ] ]
      ]
    }
  ]
}

Note: you can test/visualize these JSON documents using the http://geojsonlint.com/ service.

Now what? Let's store data!

Once you have a GeoJSON document, you just need to store it in your document. For example, if you want to store a document about JFK Airport with its location, you can run the following command:

db.airports.insert(
  {
    "name" : "John F Kennedy Intl",
    "type" : "International",
    "code" : "JFK",
    "loc" : {
      "type" : "Point",
      "coordinates" : [ -73.778889, 40.639722 ]
    }
  }
);

Yes, it is that simple! You just save the GeoJSON as one of the attributes of the document (loc in this example).

Querying Geospatial Information

Now that we have the data stored in MongoDB, it is possible to use the geospatial information to run some interesting queries.

For this we need a sample dataset. I have created one using some open data found in various places. This dataset contains the following information:

  • airports collection with the list of US airports (Point)
  • states collection with the list of US states (MultiPolygon)

I have created this dataset from various OpenData sources (http://geocommons.com/, http://catalog.data.gov/dataset) and used toGeoJSON to convert them into the proper format.

Let's install the dataset:

  1. Download it from here
  2. Unzip the geo.zip file
  3. Restore the data into your MongoDB instance using the following command:

mongorestore geo.zip

MongoDB allows applications to run the following types of queries on geospatial data:

  • inclusion
  • intersection
  • proximity

Obviously, you will be able to use all the other operators in addition to the geospatial ones. Let's now look at some concrete examples.

Inclusion

Find all the airports in California. For this you need to get the California location (Polygon) and use the $geoWithin operator in the query. From the shell it will look like:

use geo
var cal = db.states.findOne( { code : "CA" } );

db.airports.find(
  {
    loc : { $geoWithin : { $geometry : cal.loc } }
  },
  { name : 1, type : 1, code : 1, _id : 0 }
);

Result:

{ "name" : "Modesto City - County", "type" : "", "code" : "MOD" }
...
{ "name" : "San Francisco Intl", "type" : "International", "code" : "SFO" }
{ "name" : "San Jose International", "type" : "International", "code" : "SJC" }
...

So the query uses the California MultiPolygon and looks in the airports collection to find all the airports that fall inside these polygons.

You can use any other query features or criteria; for example, you can limit the query to international airports only, sorted by name:

db.airports.find(
  {
    loc : { $geoWithin : { $geometry : cal.loc } },
    type : "International"
  },
  { name : 1, type : 1, code : 1, _id : 0 }
).sort({ name : 1 });

Result:

{ "name" : "Los Angeles Intl", "type" : "International", "code" : "LAX" }
{ "name" : "Metropolitan Oakland Intl", "type" : "International", "code" : "OAK" }
{ "name" : "Ontario Intl", "type" : "International", "code" : "ONT" }
{ "name" : "San Diego Intl", "type" : "International", "code" : "SAN" }
{ "name" : "San Francisco Intl", "type" : "International", "code" : "SFO" }
{ "name" : "San Jose International", "type" : "International", "code" : "SJC" }
{ "name" : "Southern California International", "type" : "International", "code" : "VCV" }

I do not know if you have looked in detail, but we are querying these documents with no index. You can run the query with explain() to see what's going on. The $geoWithin operator does not need an index, but your queries will be more efficient with one, so let's create it:

db.airports.ensureIndex( { "loc" : "2dsphere" } );

Run the explain again and you will see the difference.
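For example (a sketch; the exact output fields vary with the MongoDB version):

db.airports.find(
  { loc : { $geoWithin : { $geometry : cal.loc } } }
).explain();

// Without the index the plan shows a full collection scan;
// with it, it should reference the loc_2dsphere index.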

Intersection

Suppose you want to know which states are adjacent to California. For this we just need to search for all the states that have coordinates that "intersect" with California. This is done with the following query:

var cal = db.states.findOne( { code : "CA" } );
db.states.find(
  {
    loc : { $geoIntersects : { $geometry : cal.loc } },
    code : { $ne : "CA" }
  },
  { name : 1, code : 1, _id : 0 }
);

Result:

{ "name" : "Oregon", "code" : "OR" }
{ "name" : "Nevada", "code" : "NV" }
{ "name" : "Arizona", "code" : "AZ" }

As before, the $geoIntersects operator does not need an index to work, but it will be more efficient with the following index:

db.states.ensureIndex( { loc : "2dsphere" } );

Proximity

The last feature that I want to highlight in this post is querying with proximity criteria. Let's find all the international airports that are located less than 20 km from the reservoir in NYC's Central Park. For this you will be using the $near operator.

db.airports.find(
  {
    loc : {
      $near : {
        $geometry : {
          type : "Point",
          coordinates : [ -73.965355, 40.782865 ]
        },
        $maxDistance : 20000
      }
    },
    type : "International"
  },
  {
    name : 1,
    code : 1,
    _id : 0
  }
);

Results:

{ "name" : "La Guardia", "code" : "LGA" }
{ "name" : "Newark Intl", "code" : "EWR"}

So this query returns 2 airports, the closest being La Guardia, since the $near operator sorts the results by distance. It is also important to note that the $near operator requires an index.

Conclusion

In this first post about geospatial features you have learned:

  • the basics of GeoJSON
  • how to query documents with inclusion, intersection and proximity criteria.

You can now play more with this, for example by integrating it into an application that exposes data in some UI, or by seeing how you can use the geospatial operators in an aggregation pipeline, as sketched below.
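As a teaser for that, here is a hedged sketch of the $geoNear aggregation stage, reusing the airports collection from this post ($geoNear must be the first stage of the pipeline):

db.airports.aggregate([
  {
    $geoNear : {
      near : { type : "Point", coordinates : [ -73.965355, 40.782865 ] },
      distanceField : "distance",   // distance computed by MongoDB, in meters
      maxDistance : 20000,
      spherical : true,             // required when using a 2dsphere index
      query : { type : "International" }
    }
  },
  { $project : { name : 1, code : 1, distance : 1, _id : 0 } }
]);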

· 6 min read

Wow! It has been a while since I posted something on my blog. I have been very busy: moving to MongoDB, learning, learning, learning… finally I can breathe a little and answer some questions.

Last week I helped my colleague Norberto deliver a MongoDB Essentials Training in Paris. This was a very nice experience, and I am eager to deliver it on my own. I was happy to see that the audience was well balanced between developers and operations, mostly DBAs.

What! I still need a DBA?

This is a good opportunity to raise a point, or correct a wrong idea: the fact that you are using MongoDB, or any other NoSQL datastore, does not mean that you do not need a DBA… As in any project, an administrator is not mandatory, but if you have one it is better. So even when MongoDB is pushed by the development team, it is very important to understand the way the database works, and how to administer and monitor it.

If you are lucky enough to have real operations teams, with good system and database administrators use them! They are very important for your application.

Most DBAs/system administrators have been maintaining systems in production for many years. They know how to keep your application up and running. Most of the time they have also experienced many "disasters" and recovered from them (I hope).

Who knows, you may encounter big issues with your application and you will be happy to have them on your side at this moment.

"Great, but the DBA is slowing down my development!"

I hear this sometimes, and I had this feeling in the past too, as a developer in a large organization. Is it true?

Developers and DBAs do not live in the same world today:

  • Developers want to integrate new technologies as soon as possible, not only because it is fun and they can brag about it during meetups/conferences, but because these technologies, most of the time, make them more productive and offer a better service/experience to the consumer
  • DBAs are here to keep the applications up and running! So every time they do not feel confident about a technology they will push back. I think this is natural, and I would probably be the same in their position. Like all geeks, they would love to adopt new technologies, but they need to understand and trust them first.

System administrators and DBAs look at technology from a different angle than developers.

Based on this assumption, it is important to bring the operations team in as early as possible when the development team wants to integrate MongoDB or any new data store. Having the operations team in the loop early will ease the global adoption of MongoDB in the company.

Personally, and this will show my age, I have seen a big change in the way developers and DBAs are working together.

Back in the 90's, when most applications were built on a client/server architecture, developers and DBAs were working pretty well together, probably because they were speaking the same language: SQL was everywhere. I had regular meetings with them.

Then, since the mid 2000s, most applications have moved to a web-based architecture, with for example Java middleware, and developers stopped working with DBAs. Probably because the data abstraction layer provided by the ORM exposed the database as a "commodity" service that is supposed to just work: "Hey Mr DBA, my application has been written with the best middleware technology on the market, so now deal with the performance and scalability! I am done!"

Yes it is a cliché, but I am sure that some of you will recognize that.

Nevertheless, each time I can, I push developers to talk more to administrators and to look closely at their database!

A new era for operations and development teams

The fast adoption of MongoDB by developers is a great opportunity to fix what we broke 10 years ago in large information systems:

  • Let's talk again!

MongoDB has been built first for developers. The document-oriented approach gives a lot of flexibility to quickly adapt to change. So anytime your business users need a new feature you can implement it, even if this change impacts the data structure. Your data model is now driven and controlled by the application, not the database engine.

However, the applications still need to be available 24x7 and perform well. These topics are managed, and shared, by administrators and developers! This has always been the case but, as I described earlier, it looks like some of us have forgotten it.

Schema design and change velocity are driven by the application, and so by the business and development teams, but all this impacts the database, for example:

  • How will storage grow?
  • Which indexes must be created to speed up my application?
  • How to organize my cluster to leverage the infrastructure properly:
    • replica set organization (and related write concerns, managed by the developer)
    • sharding options
  • And the most important of them: backup/recovery strategies

These are all things that could be managed by the project team, but if you have an operations team with you, it is better to handle them as a single team.

You, the developer, are convinced that MongoDB is the best database for your projects! Now it is time to work with the ops team and convince them too. You should of course explain why MongoDB is good for you as a developer, but you should also highlight all the benefits for operations, starting with built-in high availability with replica sets, and easy scalability with sharding. MongoDB is also here to make the life of the administrator easier! I have shared in the next paragraph a list of resources that are interesting for operations people.

Let me repeat it one more time: try to involve the operations team as soon as possible, and use that as an opportunity to build/rebuild the relationship between developers and system administrators!

Resources

You can find many good resources on the site to help operations teams or to learn more about this: