
· 9 min read

This post is a quick and simple introduction to the geospatial features of MongoDB 2.6, using a simple dataset and queries.

Storing Geospatial Information#

As you know, you can store any type of data in MongoDB, but if you want to query it geographically you need to store coordinates and create an index on them. MongoDB supports three types of indexes for geospatial queries:

  • 2d Index : uses simple coordinate pairs (longitude, latitude). As stated in the documentation: "The 2d index is intended for legacy coordinate pairs used in MongoDB 2.2 and earlier." For this reason, I won't detail it in this post; just for the record, 2d indexes are used to query data stored as points on a two-dimensional plane.
  • 2dsphere Index : supports queries of any geometries on an earth-like sphere; the data can be stored as GeoJSON or legacy coordinate pairs (longitude, latitude). For the rest of the article I will use this type of index, focusing on GeoJSON.
  • geoHaystack Index : used to query very small areas. It is less used by applications today and I will not describe it in this post.

So this article will focus on the 2dsphere index with the GeoJSON format to store and query documents.

So what is GeoJSON?

You can look at the GeoJSON site for the full specification; here is a very short explanation. GeoJSON is a format for encoding, in JSON, a variety of geographic data structures, and it supports the following types: Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon and GeometryCollection.

The GeoJSON format is quite straightforward: for the simple geometries it is based on two attributes, type and coordinates. Let's take some examples.

The city where I spent all my childhood, Pleneuf Val-André, France, has the following coordinates (from Wikipedia):

48° 35′ 30.12″ N, 2° 32′ 48.84″ W

This notation is a point expressed as latitude & longitude in the WGS 84 (Degrees, Minutes, Seconds) system. It is not very easy to use from application code, which is why it is also possible to represent the exact same point with the following decimal values for latitude & longitude:

48.5917, -2.5469

This one uses the WGS 84 (Decimal Degrees) system. These are the coordinates you see used in most of the applications/APIs you work with as a developer (Google Maps/Earth, for example).
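To make the conversion between the two notations concrete, here is a small Python sketch (the function name is my own) that turns a DMS coordinate into decimal degrees:

```python
def dms_to_decimal(degrees, minutes, seconds, direction):
    """Convert a Degrees/Minutes/Seconds coordinate to decimal degrees.

    South and West coordinates become negative in decimal notation.
    """
    dd = degrees + minutes / 60.0 + seconds / 3600.0
    return -dd if direction in ("S", "W") else dd

# Pleneuf Val-Andre: 48° 35' 30.12" N, 2° 32' 48.84" W
lat = round(dms_to_decimal(48, 35, 30.12, "N"), 4)  # 48.5917
lon = round(dms_to_decimal(2, 32, 48.84, "W"), 4)   # -2.5469
```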

By default, GeoJSON and MongoDB use these decimal values, but the coordinates must be stored in longitude, latitude order; so this point in GeoJSON looks like:

{
  "type": "Point",
  "coordinates": [ -2.5469, 48.5917 ]
}

This is a simple "Point"; let's now look at a line, for example a very nice walk on the beach:

{
  "type": "LineString",
  "coordinates": [
    [ -2.551082, 48.5955632 ],
    [ -2.551229, 48.594312 ],
    [ -2.551550, 48.593312 ],
    [ -2.552400, 48.592312 ],
    [ -2.553677, 48.590898 ]
  ]
}

Using the same approach you will be able to create MultiPoint, MultiLineString, Polygon and MultiPolygon geometries. It is also possible to mix all of these in a single document using a GeometryCollection. The following example is a GeometryCollection of a MultiLineString and a Polygon over Central Park:

{
  "type" : "GeometryCollection",
  "geometries" : [
    {
      "type" : "Polygon",
      "coordinates" : [[
        [ -73.9580, 40.8003 ],
        [ -73.9498, 40.7968 ],
        [ -73.9737, 40.7648 ],
        [ -73.9814, 40.7681 ],
        [ -73.9580, 40.8003 ]
      ]]
    },
    {
      "type" : "MultiLineString",
      "coordinates" : [
        [ [ -73.96943, 40.78519 ], [ -73.96082, 40.78095 ] ],
        [ [ -73.96415, 40.79229 ], [ -73.95544, 40.78854 ] ],
        [ [ -73.97162, 40.78205 ], [ -73.96374, 40.77715 ] ],
        [ [ -73.97880, 40.77247 ], [ -73.97036, 40.76811 ] ]
      ]
    }
  ]
}

Note: you can, if you want, test and visualize these JSON documents using an online GeoJSON viewer service.

Now what? Let's store data!#

Once you have a GeoJSON geometry you just need to store it inside your document. For example, if you want to store a document about JFK Airport with its location, you can run the following command:

db.airports.insert({
  "name" : "John F Kennedy Intl",
  "type" : "International",
  "code" : "JFK",
  "loc" : {
    "type" : "Point",
    "coordinates" : [ -73.778889, 40.639722 ]
  }
});

Yes, it is that simple! You just save the GeoJSON as one of the attributes of the document (loc in this example).

Querying Geospatial Information#

Now that we have the data stored in MongoDB, it is possible to use the geospatial information to do some interesting queries.

For this we need a sample dataset. I have created one using some open data found in various places. This dataset contains the following information:

  • airports collection with the list of US airports (Point)
  • states collection with the list of US states (MultiPolygon)

I have created this dataset from various open data sources and used toGeoJSON to convert them into the proper format.

Let's install the dataset:

  1. Download it from here
  2. Unzip file
  3. Restore the data into your MongoDB instance, using the following command
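A typical restore uses the `mongorestore` tool; assuming the archive unzips into a `dump` directory (the directory name is my assumption), the command looks something like this:

```shell
# Restore the sample dataset into a local mongod instance
# (the "dump" directory name is assumed from a standard mongodump archive)
mongorestore dump/
```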

MongoDB allows applications to do the following types of query on geospatial data:

  • inclusion
  • intersection
  • proximity

Obviously, you will be able to use all the other query operators in addition to the geospatial ones. Let's now look at some concrete examples.


Find all the airports in California. For this you need to get the California location (MultiPolygon) and use the $geoWithin operator in the query. From the shell it looks like:

use geo
var cal = db.states.findOne( { code : "CA" } );
db.airports.find(
  { loc : { $geoWithin : { $geometry : cal.loc } } },
  { name : 1, type : 1, code : 1, _id : 0 }
);


{ "name" : "Modesto City - County", "type" : "", "code" : "MOD" }
...
{ "name" : "San Francisco Intl", "type" : "International", "code" : "SFO" }
{ "name" : "San Jose International", "type" : "International", "code" : "SJC" }
...

So the query takes the "California MultiPolygon" and looks in the airports collection to find all the airports that fall inside these polygons. This looks like the following image on a map:

You can use any other query features or criteria; for example, you can limit the query to international airports only, sorted by name:

db.airports.find(
  { loc : { $geoWithin : { $geometry : cal.loc } }, type : "International" },
  { name : 1, type : 1, code : 1, _id : 0 }
).sort({ name : 1 });


{ "name" : "Los Angeles Intl", "type" : "International", "code" : "LAX" }
{ "name" : "Metropolitan Oakland Intl", "type" : "International", "code" : "OAK" }
{ "name" : "Ontario Intl", "type" : "International", "code" : "ONT" }
{ "name" : "San Diego Intl", "type" : "International", "code" : "SAN" }
{ "name" : "San Francisco Intl", "type" : "International", "code" : "SFO" }
{ "name" : "San Jose International", "type" : "International", "code" : "SJC" }
{ "name" : "Southern California International", "type" : "International", "code" : "VCV" }

I do not know if you have looked in detail, but we are querying these documents with no index. You can run the query with explain() to see what is going on. The $geoWithin operator does not need an index, but your queries will be more efficient with one, so let's create it:

db.airports.ensureIndex( { "loc" : "2dsphere" } );

Run the explain again and you will see the difference.


Suppose you want to know which states are adjacent to California. For this we just need to search for all the states whose coordinates "intersect" with California. This is done with the following query:

var cal = db.states.findOne( { code : "CA" } );
db.states.find(
  { loc : { $geoIntersects : { $geometry : cal.loc } }, code : { $ne : "CA" } },
  { name : 1, code : 1, _id : 0 }
);


{ "name" : "Oregon", "code" : "OR" }
{ "name" : "Nevada", "code" : "NV" }
{ "name" : "Arizona", "code" : "AZ" }

As before, the $geoIntersects operator does not need an index to work, but it will be more efficient with the following index:

db.states.ensureIndex( { loc : "2dsphere" } );


The last feature that I want to highlight in this post is querying with proximity criteria. Let's find all the international airports that are located less than 20 km from the reservoir in NYC Central Park. For this you use the $near operator.

db.airports.find(
  {
    loc : {
      $near : {
        $geometry : { type : "Point", coordinates : [ -73.965355, 40.782865 ] },
        $maxDistance : 20000
      }
    },
    type : "International"
  },
  { name : 1, code : 1, _id : 0 }
);


{ "name" : "La Guardia", "code" : "LGA" }
{ "name" : "Newark Intl", "code" : "EWR" }

So this query returns two airports, the closest being La Guardia, since the $near operator sorts the results by distance. It is also important to note that the $near operator requires an index.


In this first post about geospatial features you have learned:

  • the basics of GeoJSON
  • how to query documents with inclusion, intersection and proximity criteria.

You can now play more with this, for example by integrating it into an application that exposes the data in some UI, or by seeing how you can use the geospatial operators in an aggregation pipeline.
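As a head start on the aggregation side, the same proximity search can be expressed with the $geoNear stage. This is a sketch based on the MongoDB 2.6 aggregation documentation, not code from this post; the distance field name is my own choice:

```javascript
// $geoNear must be the first stage of the pipeline and uses the 2dsphere index
db.airports.aggregate([
  { $geoNear : {
      near : { type : "Point", coordinates : [ -73.965355, 40.782865 ] },
      distanceField : "distance",   // computed distance in meters
      maxDistance : 20000,
      spherical : true
  } },
  { $project : { name : 1, code : 1, distance : 1, _id : 0 } }
]);
```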

· 6 min read

Wow! It has been a while since I posted something on my blog. I have been very busy moving to MongoDB, learning, learning, learning… finally I can breathe a little and answer some questions.

Last week I helped my colleague Norberto deliver a MongoDB Essentials Training in Paris. This was a very nice experience, and I am impatient to deliver it on my own. I was happy to see that the audience was well balanced between developers and operations, mostly DBAs.

What! I still need a DBA?#

This is a good opportunity to raise a point, or rather to correct a wrong idea: the fact that you are using MongoDB, or any other NoSQL datastore, does not mean that you do not need a DBA… Like any project, an administrator is not mandatory, but if you have one it is better. So even when MongoDB is pushed by the development team, it is very important to understand the way the database works, and how to administer and monitor it.

If you are lucky enough to have a real operations team, with good system and database administrators, use them! They are very important for your application.

Most DBAs/system administrators have been maintaining systems in production for many years. They know how to keep your application up and running. Most of the time they have also experienced many “disasters”, and recovered from them (I hope).

Who knows, you may encounter big issues with your application and you will be happy to have them on your side at this moment.

"Great, but the DBA is slowing down my development!"#

I hear this sometimes, and I had this feeling in the past too, as a developer in a large organization. Is it true?

Developers and DBAs today are not living in the same world:

  • Developers want to integrate new technologies as soon as possible, not only because it is fun and they can brag about it during meetups/conferences, but because these technologies, most of the time, make them more productive and offer a better service/experience to the consumer.
  • DBAs are here to keep the applications up and running! So every time they do not feel confident about a technology they will push back. I think this is natural and I would probably do the same in their position. Like all geeks, they would love to adopt new technologies, but they need to understand and trust them first.

System administrators and DBAs look at technology from a different angle than developers.

Based on this assumption, it is important to bring the operation team as early as possible when the development team wants to integrate MongoDB or any new data store. Having the operation team in the loop early will ease the global adoption of MongoDB in the company.

Personally, and this will show my age, I have seen a big change in the way developers and DBAs are working together.

Back in the 90's, when the main architecture was client/server, developers and DBAs were working pretty well together, probably because they were speaking the same language: SQL was everywhere. I had regular meetings with them.

Then, since the mid-2000s, most applications have moved to a web-based architecture, with for example Java middleware, and developers stopped working with DBAs. Probably because the data abstraction layer provided by the ORM exposed the database as a "commodity" service that is supposed to just work: "Hey Mr DBA, my application has been written with the best middleware technology on the market, so now deal with the performance and scalability! I am done!"

Yes it is a cliché, but I am sure that some of you will recognize that.

Nevertheless, each time I can, I push developers to talk more to administrators and to look closely at their database!

A new era for operations and development teams#

The fast adoption of MongoDB by developers is a great opportunity to fix what we broke 10 years ago in large information systems:

  • Let's talk again!

MongoDB has been built first for developers. The document-oriented approach gives a lot of flexibility to quickly adapt to change. So anytime your business users need a new feature you can implement it, even if this change impacts the data structure. Your data model is now driven and controlled by the application, not the database engine.

However, the applications still need to be available 24x7 and perform well. These topics are managed, and shared, by administrators and developers! This has always been the case but, as I described earlier, it looks like some of us have forgotten that.

Schema design and change velocity are driven by the application, so by the business and development teams, but all of this impacts the database, for example:

  • How will storage grow?
  • Which indexes must be created to speed up my application?
  • How to organize my cluster to leverage the infrastructure properly:
    • Replica-Set organization (and related write concerns, managed by developer)
    • Sharding options
  • And the most important of them all: backup/recovery strategies

There are so many things that could be managed by the project team alone, but if you have an operations team with you, it is better to handle all of that as a single team.

You, the developer, are convinced that MongoDB is the best database for your projects! Now it is time to work with the ops team and convince them too. You should of course explain why MongoDB is good for you as a developer, but you should also highlight all the benefits for operations, starting with built-in high availability with replica sets, and easy scalability with sharding. MongoDB is also here to make the life of the administrator easier! I have shared in the next paragraph a list of resources that are interesting for operations people.

Let’s repeat it one more time: try to involve the operations team as soon as possible, and use that as an opportunity to build/rebuild the relationship between developers and system administrators!


You can find many good resources on the MongoDB site to help operations teams or to learn more about this:

· 7 min read

If you have to deal with a large number of documents when doing queries against a Couchbase cluster, it is important to use pagination to get rows page by page. You can find some information in the documentation in the chapter "Pagination", but I want to go into more detail with sample code in this article.

For this example I will start by creating a simple view based on the beer-sample dataset; the view is used to find breweries by country:

function (doc, meta) {
  if (doc.type == "brewery" && doc.country) {
    emit(doc.country);
  }
}

This view lists all the breweries by country; the index looks like:

Doc id                        Key            Value
yellowstone_valley_brewing    United States  null
yuengling_son_brewing         United States  null
zea_rotisserie_and_brewery    United States  null
fosters_tien_gang             Viet Nam       null
hue_brewery                   Viet Nam       null

So now you want to navigate in this index with a page size of 5 rows.

Using skip / limit Parameters#

The most simplistic approach is to use the limit and skip parameters, for example:

Page 1 : ?limit=5&skip=0
Page 2 : ?limit=5&skip=5
...
Page x : ?limit=5&skip=(limit*(page-1))

You can obviously use any other parameters you need to do range or key queries (startkey/endkey, key, keys) and sort options (descending).

This is simple, but not the most efficient way, since the query engine has to read all the rows that match the query until the skip value is reached.

Some sample code in Python that paginates using this view:
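A sketch of the skip/limit loop; the Couchbase SDK call is stubbed out here (with an in-memory list standing in for the view index) so the paging logic itself can run without a server — with the SDK it would be a call like cb.query("test", "by_country", limit=..., skip=...):

```python
# Stand-in for the view index; with the SDK, rows would come from Couchbase
ROWS = ["row-%02d" % i for i in range(12)]

def query(limit, skip):
    """Stub for cb.query(...): return `limit` rows starting at `skip`."""
    return ROWS[skip:skip + limit]

def paginate(row_per_page=5):
    """Loop over all pages until the end of the index, skip/limit style."""
    pages = []
    page = 0
    while True:
        rows = query(limit=row_per_page, skip=row_per_page * page)
        if not rows:
            break
        pages.append(rows)
        page += 1
    return pages

pages = paginate()
# 12 rows with a page size of 5 -> 3 pages (5, 5 and 2 rows)
```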

This application loops over all the pages until the end of the index.

As I said before, this is not the best approach, since the system must read all the values until the skip is reached. The following example shows a better way to deal with this.

Using startkey / startkey_docid parameters#

To make this pagination more efficient, it is possible to take another approach, using the startkey and startkey_docid parameters to select the proper documents.

  • The startkey parameter is the value of the key where the query should start reading (based on the last key of the "previous page").
  • Since a key, for example "Germany", may point to one or more ids (documents), it is necessary to tell the Couchbase query engine where to start; for this you use the startkey_docid parameter, and skip this id since it is the last one of the previous page.

So if we look at the index, adding a row number to explain the pagination:

Row num   Doc id                        Key            Value
...       yellowstone_valley_brewing    United States  null
...       yuengling_son_brewing         United States  null
...       zea_rotisserie_and_brewery    United States  null
...       fosters_tien_gang             Viet Nam       null
...       hue_brewery                   Viet Nam       null

The queries for pages 1, 2 and 3 each start reading at the row just after the last row of the previous page.

So, as you can see in the examples above, the query uses the startkey and a document id, and skips that document using skip=1.

Let's now look at the application code, once again in Python:

from couchbase import Couchbase
cb = Couchbase.connect(bucket='beer-sample')

hasRow = True
rowPerPage = 5
page = 0
currentStartkey = ""
startDocId = ""

while hasRow:
    hasRow = False
    skip = 0 if page == 0 else 1
    page = page + 1
    print "-- Page %s --" % (page)
    rows = cb.query("test", "by_country", limit=rowPerPage, skip=skip,
                    startkey=currentStartkey, startkey_docid=startDocId)
    for row in rows:
        hasRow = True
        print "Country: \"%s\" \t Id: '%s'" % (row.key, row.docid)
        currentStartkey = row.key
        startDocId = row.docid
    print " -- -- -- -- \n"

This application loops over all the pages until the end of the index.

Using this approach, the application starts reading the index at a specific key (startkey parameter), and only loops over the necessary entries in the index. This is more efficient than the simple skip approach.

Views with Reduce function#

When your view uses a reduce function, if you want to paginate over the various keys only (with the reduce function), you need to use the skip and limit parameters.

If you use the startkey_docid parameter with a reduce function, the reduce will only be calculated on the subset of document ids that are part of your query.

Couchbase Java SDK Paginator#

In the previous examples, I have shown how to do pagination using the various query parameters. The Java SDK provides a Paginator object to help developers deal with pagination. The following example uses the same view with the Paginator API.

package com.couchbase.devday;

import com.couchbase.client.CouchbaseClient;
import com.couchbase.client.protocol.views.*;
import java.net.URI;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import java.util.logging.ConsoleHandler;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class JavaPaginatorSample {

    public static void main(String[] args) {
        configure();
        System.out.println("--------------------------------------------------------------------------");
        System.out.println("\tCouchbase - Paginator");
        System.out.println("--------------------------------------------------------------------------");

        List<URI> uris = new LinkedList<URI>();
        uris.add(URI.create(""));

        CouchbaseClient cb = null;
        try {
            cb = new CouchbaseClient(uris, "beer-sample", "");
            System.out.println("--------------------------------------------------------------------------");
            System.out.println("Breweries (by_name) with docs & JSON parsing");
            View view = cb.getView("test", "by_country");
            Query query = new Query();
            int docsPerPage = 5;

            Paginator paginatedQuery = cb.paginatedQuery(view, query, docsPerPage);
            int pageCount = 0;
            while (paginatedQuery.hasNext()) {
                pageCount++;
                System.out.println(" -- Page " + pageCount + " -- ");
                ViewResponse response = paginatedQuery.next();
                for (ViewRow row : response) {
                    System.out.println(row.getKey() + " : " + row.getId());
                }
                System.out.println(" -- -- -- ");
            }

            System.out.println("\n\n");
            cb.shutdown(10, TimeUnit.SECONDS);
        } catch (Exception e) {
            System.err.println("Error connecting to Couchbase: " + e.getMessage());
        }
    }

    private static void configure() {
        for (Handler h : Logger.getLogger("com.couchbase.client").getParent().getHandlers()) {
            if (h instanceof ConsoleHandler) {
                h.setLevel(Level.OFF);
            }
        }
        Properties systemProperties = System.getProperties();
        systemProperties.put("net.spy.log.LoggerImpl", "net.spy.memcached.compat.log.SunLogger");
        System.setProperties(systemProperties);

        Logger logger = Logger.getLogger("com.couchbase.client");
        logger.setLevel(Level.OFF);
        for (Handler h : logger.getParent().getHandlers()) {
            if (h instanceof ConsoleHandler) {
                h.setLevel(Level.OFF);
            }
        }
    }
}

So as you can see you can easily paginate on the results of a Query using the Java Paginator.

  • The Paginator is created from the view and query objects, and a page size is specified
  • Then you just need to use the hasNext() and next() methods to navigate in the results.

The Java Paginator is aware of whether the query uses a reduce function or not, so you can use it with all types of queries; internally it will switch between the skip/limit approach and the doc_id approach. You can see how it is done in the Paginator class.

Note that if you want to do this in a web application, between HTTP requests you must keep the Paginator object in the user session, since the current API keeps the current page in its state.


In this blog post you have learned how to deal with pagination in Couchbase views. To summarize:

  • Pagination is based on specific parameters that you send when executing a query.
  • Java developers can use the Paginator class, which simplifies pagination.

I invite you to look at the new Couchbase query language, N1QL, still under development, which will provide more options to developers, including pagination using the LIMIT & OFFSET parameters, for example:

SELECT fname, age
FROM tutorial
WHERE age > 30
LIMIT 2
OFFSET 2

If you want to learn more about N1QL:

· 7 min read


Developers often ask me how to "version" documents with Couchbase 2.0. The short answer is: the clients and server do not expose such a feature, but it is quite easy to implement.

In this article I will use a basic approach, and you will be able to extend it depending on your business requirements.

· 6 min read


Clever Cloud is the first PaaS to provide Couchbase as a service, allowing developers to run applications in a fully managed environment. This article shows how to deploy an existing application to Clever Cloud.

I am using a very simple Node application that I have documented in a previous article: “Easy application development with Couchbase, Angular and Node”.

Clever Cloud provides support for various databases, MySQL and PostgreSQL, but also, and this is most important for me, Couchbase. Not only does Clever Cloud allow you to use database services, but you can also deploy and host your application, which can be developed in the language/technology of your choice: Java, Node, Scala, Python, PHP, … and all this in a secure, scalable and managed environment.

· 6 min read

TL;DR: Look at the project on Github.


During my last interactions with the Couchbase community, I was often asked how to easily import data from an existing database into Couchbase. And my answer was always the same:

  • Use an ETL such as Talend to do it
  • Just write a small program to copy the data from your RDBMS to Couchbase...

So I have written a small program that allows you to import the content of an RDBMS into Couchbase. This tool can be used as is, or you can look at the code and adapt it to your application.

The Tool: Couchbase SQL Importer#

The Couchbase SQL Importer, available here, allows you, with a simple command line, to copy all, or part of, your SQL schema into Couchbase. Before explaining how to run this command, let's see how the data are stored in Couchbase when they are imported:

  • Each table row is imported as a single JSON document
    • where each table column becomes a JSON attribute
  • Each document has a key made of the name of the table and a counter (increment)

The following concrete example, based on the MySQL World sample database, will help you understand how it works. This database contains 3 tables: City, Country, CountryLanguage. The City table looks like:

+-------------+----------+------+-----+---------+----------------+
| Field       | Type     | Null | Key | Default | Extra          |
+-------------+----------+------+-----+---------+----------------+
| ID          | int(11)  | NO   | PRI | NULL    | auto_increment |
| Name        | char(35) | NO   |     |         |                |
| CountryCode | char(3)  | NO   |     |         |                |
| District    | char(20) | NO   |     |         |                |
| Population  | int(11)  | NO   |     | 0       |                |
+-------------+----------+------+-----+---------+----------------+

The JSON document that matches this table looks like the following:

city:3805
{
  "Name": "San Francisco",
  "District": "California",
  "ID": 3805,
  "Population": 776733,
  "CountryCode": "USA"
}

You see that here I am simply taking all the rows and "moving" them into Couchbase. This is a good first step to play with your dataset in Couchbase, but it is probably not the final model you want to use for your application; most of the time you will have to decide when to use embedded documents, lists of values, etc. in your JSON documents.

In addition to the JSON documents, the tool creates views based on the following logic:

  • a view that lists all imported documents with the name of the "table" (aka type) as key
  • a view for each table with the primary key columns

View: all/by_type

{
  "rows": [
    {"key": "city", "value": 4079},
    {"key": "country", "value": 239},
    {"key": "countrylanguage", "value": 984}
  ]
}

As you can see, this view allows you to get, with a single Couchbase query, the number of documents by type.

Also, for each table/document type, a view is created where the key of the index is built from the table primary key. Let's for example query the "City" documents.

View: city/by_pk?reduce=false&limit=5

{
  "total_rows": 4079,
  "rows": [
    {"id": "city:1", "key": 1, "value": null},
    {"id": "city:2", "key": 2, "value": null},
    {"id": "city:3", "key": 3, "value": null},
    {"id": "city:4", "key": 4, "value": null},
    {"id": "city:5", "key": 5, "value": null}
  ]
}

The index key matches the value of the City.ID column. When the primary key is made of multiple columns, the key looks like:

View: CountryLanguage/by_pk?reduce=false&limit=5

{
  "total_rows": 984,
  "rows": [
    {"id": "countrylanguage:1", "key": ["ABW", "Dutch"], "value": null},
    {"id": "countrylanguage:2", "key": ["ABW", "English"], "value": null},
    {"id": "countrylanguage:3", "key": ["ABW", "Papiamento"], "value": null},
    {"id": "countrylanguage:4", "key": ["ABW", "Spanish"], "value": null},
    {"id": "countrylanguage:5", "key": ["AFG", "Balochi"], "value": null}
  ]
}

This view is built from the CountryLanguage table primary key, made of the CountryLanguage.CountryCode and CountryLanguage.Language columns.

+-------------+---------------+------+-----+---------+-------+
| Field       | Type          | Null | Key | Default | Extra |
+-------------+---------------+------+-----+---------+-------+
| CountryCode | char(3)       | NO   | PRI |         |       |
| Language    | char(30)      | NO   | PRI |         |       |
| IsOfficial  | enum('T','F') | NO   |     | F       |       |
| Percentage  | float(4,1)    | NO   |     | 0.0     |       |
+-------------+---------------+------+-----+---------+-------+

How to use the Couchbase SQL Importer tool?#

The importer is a simple Java-based command line utility, quite simple to use:

1- Download the CouchbaseSqlImporter.jar file from here. This file contains all the dependencies needed to work with Couchbase: the Java Couchbase Client and GSON.
2- Download the JDBC driver for the database you are using as data source. For this example I am using MySQL, so I have downloaded the driver from the MySQL site.
3- Configure the import using a properties file.

## SQL Information ##
sql.connection=jdbc:mysql://

## Couchbase Information ##
cb.uris=http://localhost:8091/pools
cb.bucket=default
cb.password=

## Import information
import.tables=ALL
import.createViews=true
import.typefield=type
import.fieldcase=lower

This sample properties file contains three sections:

  • The first two sections are used to configure the connections to your SQL database and your Couchbase cluster (note that the bucket must be created first)
  • The third section allows you to configure the import itself

4- Run the tool!

java -cp "./CouchbaseSqlImporter.jar:./mysql-connector-java-5.1.25-bin.jar" com.couchbase.util.SqlImporter

So you run the Java command with the proper classpath (-cp parameter).

And you are done, you can get your data from your SQL database into Couchbase.

If you are interested in how it works internally, take a look at the next paragraph.

The Code: How it works?#

The main class of the tool, com.couchbase.util.SqlImporter, is really simple; the process is:

  1. Connect to the SQL database
  2. Connect to Couchbase
  3. Get the list of tables
  4. For each table, execute a "select * from table"
    4.1. Analyze the ResultSetMetaData to get the list of columns
    4.2. Create a Java Map for each row, where the key is the name of the column and the value… is the value
    4.3. Serialize this Map into a JSON document with GSON and save it into Couchbase

The code is available in the ImportTable(String table) Java method.
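For illustration, the per-table loop can be sketched in a few lines of Python, with sqlite3 standing in for the JDBC source and a plain dict standing in for the Couchbase bucket — a stand-in sketch of the same idea, not the tool's actual Java code:

```python
import json
import sqlite3

def import_table(conn, table, bucket):
    """Run 'select * from <table>' and store one JSON document per row."""
    cursor = conn.execute("SELECT * FROM %s" % table)
    columns = [desc[0] for desc in cursor.description]  # ResultSet metadata
    for counter, row in enumerate(cursor, start=1):
        doc = dict(zip(columns, row))  # column name -> value
        # key = lower-cased table name + counter, e.g. "city:1"
        bucket["%s:%d" % (table.lower(), counter)] = json.dumps(doc)

# Demo with an in-memory database in place of a real RDBMS
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE City (ID INTEGER, Name TEXT)")
conn.execute("INSERT INTO City VALUES (1, 'San Francisco')")
bucket = {}
import_table(conn, "City", bucket)
# bucket now holds one document under the key 'city:1'
```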

One interesting point is that you can use and extend the code to adapt it to your application.


I created this tool quickly to help some people in the community; if you are using it and need new features, let me know using a comment or a pull request.

· 6 min read

TL;DR: Look at the Couchbase Ansible Playbook on my Github.


When I was looking for a more effective way to create my cluster, I asked some sysadmins which tool I should use. The answer I got during OSDC was not Puppet, nor Chef, but Ansible.

This article shows you how you can easily configure and create a Couchbase cluster deployed on many Linux boxes... and the only thing you need on these boxes is an SSH server!

Thanks to Jan-Piet Mens, who was one of the people who convinced me to use Ansible and answered the questions I had about it.

You can watch the demonstration below, and/or look at all the details in the next paragraph.


Ansible is open-source software that allows administrators to configure and manage many computers over SSH.#

I won't go into all the details about the installation; just follow the steps documented in the Getting Started Guide. As you can see from this guide, you just need Python and a few other libraries, and to clone the Ansible project from Github. So I am expecting that you have Ansible working with the various servers on which you want to deploy Couchbase.

Also, for these first scripts I am using root on my servers for all the operations. So be sure you have registered the root SSH keys on your administration server, from which you run the Ansible scripts.

Create a Couchbase Cluster#

Before going into the details of the Ansible script, it is interesting to explain how you create a Couchbase cluster. Here are the five steps to create and configure a cluster:

  1. Install Couchbase on each node of the cluster, as documented here.
  2. Take one of the nodes and "initialize" the cluster, using the cluster-init command.
  3. Add the other nodes to the cluster, using the server-add command.
  4. Rebalance, using the rebalance command.
  5. Create a bucket, using the bucket-create command.

So the goal now is to create an Ansible Playbook that does these steps for you.

Ansible Playbook for Couchbase#

The first thing you need is the list of hosts you want to target, so I have created a hosts file that contains all my servers organized in two groups:


The [couchbase-main] group contains just one of the nodes, which will drive the installation and configuration. As you probably already know, Couchbase does not have any master: all nodes in the cluster are identical.

To ease the configuration of the cluster, I have created another file that contains all the parameters that must be passed to the various commands. This file is located at group_vars/all; see the section Splitting Out Host and Group Specific Data in the documentation.

# Administrator user and password
admin_user: Administrator
admin_password: password

# ram quota for the cluster
cluster_ram_quota: 1024

# bucket and replicas
bucket_name: ansible
bucket_ram_quota: 512
num_replicas: 2

Use this file to configure your cluster.

Let's describe the playbook file:

- name: Couchbase Installation
  hosts: all
  user: root
  tasks:
  - name: download Couchbase package
    get_url: url= dest=~/.
  - name: Install dependencies
    apt: pkg=libssl0.9.8 state=present
  - name: Install Couchbase .deb file on all machines
    shell: dpkg -i ~/couchbase-server-enterprise_x86_64_2.0.1.deb

As expected, the installation has to be done on all servers as root; then we need to execute three tasks:

  1. Download the product; the get_url command will only download the file if it is not already present
  2. Install the dependencies with the apt command; state=present makes the system install the package only if it is not already present
  3. Install Couchbase with a simple shell command (here I am not checking whether Couchbase is already installed)

So we have now installed Couchbase on all the nodes. Let's now configure the first node and add the others:

- name: Initialize the cluster and add the nodes to the cluster
  hosts: couchbase-main
  user: root
  tasks:
  - name: Configure main node
    shell: /opt/couchbase/bin/couchbase-cli cluster-init -c --cluster-init-username=${admin_user} --cluster-init-password=${admin_password} --cluster-init-port=8091 --cluster-init-ramsize=${cluster_ram_quota}
  - name: Create shell script for configuring main node
    action: template src=couchbase-add-node.j2 dest=/tmp/ mode=750
  - name: Launch config script
    action: shell /tmp/
  - name: Rebalance the cluster
    shell: /opt/couchbase/bin/couchbase-cli rebalance -c -u ${admin_user} -p ${admin_password}
  - name: create bucket ${bucket_name} with ${num_replicas} replicas
    shell: /opt/couchbase/bin/couchbase-cli bucket-create -c --bucket=${bucket_name} --bucket-type=couchbase --bucket-port=11211 --bucket-ramsize=${bucket_ram_quota} --bucket-replica=${num_replicas} -u ${admin_user} -p ${admin_password}

Now we need to execute specific tasks on the "main" server:

  • Initialization of the cluster using the Couchbase CLI, on lines 6 and 7

Then the system needs to ask all the other servers to join the cluster. For this it needs to get the various IP addresses and, for each IP address, execute the server-add command. As far as I know it is not possible to get the IP addresses from the main playbook YAML file, so I ask the system to generate a shell script that adds each node, and then execute that script.

This is done from lines 9 to 13.

To generate the shell script, I use an Ansible template; the template is available in the couchbase-add-node.j2 file.

{% for host in groups['couchbase-nodes'] %}
/opt/couchbase/bin/couchbase-cli server-add -c -u ${admin_user} -p ${admin_password} --server-add={{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}:8091 --server-add-username=${admin_user} --server-add-password=${admin_password}
{% endfor %}

As you can see, this script loops over each server in the [couchbase-nodes] group and uses its IP address to add the node to the cluster.

Finally, the script rebalances the cluster (line 16) and adds a new bucket (line 19).

You are now ready to execute the playbook using the following command:

./bin/ansible-playbook -i ./couchbase/hosts ./couchbase/couchbase.yml -vv

I am adding the -vv parameter so you can see more information about what is happening during the execution of the script.

This will execute all the commands described in the playbook, and after a few seconds you will have a new cluster ready to be used! You can, for example, open a browser, go to the Couchbase Administration Console, and check that your cluster is configured as expected.

As you can see it is really easy and fast to create a new cluster using Ansible.

I have also created a script to properly uninstall the cluster. Just launch:

./bin/ansible-playbook -i ./couchbase/hosts ./couchbase/couchbase-uninstall.yml

· 6 min read

Already six months! It has already been six months since I joined Couchbase as a Technical Evangelist. This is a good opportunity to take some time to look back.

So first of all what is a Developer/Technical Evangelist?

Hmm, it depends on the company/product, but let me tell you what it means for me, inside Couchbase. This is one of the most exciting jobs I have ever had, and I think it is the best job you can have when you are passionate about technology and like to share this passion with others. My role as a Technical Evangelist is to help developers adopt NoSQL technologies in general and, as you can guess, Couchbase in particular.

Let's now see in more detail what I have done during these past six months and why I am so happy about it. I have organized the different activities into three types:

  • Outbound activities: meet the developers
  • Online activities: reach even more developers
  • Inbound activities: make the product better!

Outbound activities: meet the developers!#

A large part of my activities during this first semester was made up of conferences and meetups. All these events are great opportunities for me to talk about NoSQL and get more people to use Couchbase Server 2.0. Here is a short list of what I have done:

  • participated in many Couchbase Developer Days in various cities (Portland, Seattle, Vancouver, Oslo, Copenhagen, Stockholm, Munich, Amsterdam, Barcelona, Paris, ...); these are one-day workshops where I help developers get their hands dirty with Couchbase
  • participated in Couchconf Berlin and Couchbase [UK], our main European events, where I met many customers and key members of the community
  • submitted talks to conferences, adapted them to each event, then spoke at various conferences about NoSQL and Couchbase (33Degree Warsaw, NoSQL & Big Data Israel, Devoxx France, NoSQL Matters, and many others)
  • met many developers at user groups and meetups. I have to say that I have been very active there, and quite happy to see that NoSQL is a very hot topic for developers, across all languages
  • delivered BrownBagLunches to various technical teams in companies.

Yes! Being a Technical Evangelist means, at least for me, being on the road. It is very nice to meet developers from various countries, with different cultures and languages, and... this also means tasting many different types of food!

Another interesting thing when you work on a database/infrastructure layer is that it is technology agnostic; you can access Couchbase from multiple programming languages: Java, .NET, JavaScript/Node, Ruby, PHP, Python, C, ... and even Go. So with this job I have met developers with different backgrounds and views on application development. So yes, when I am at a conference or meetup I am supposed to "teach" something to people, but I have also learned a lot of things, and I am still learning.

Online activities: reach even more developers!#

Meeting developers at conferences is great, but it is also very important to produce content to reach even more people, so I have:

  • written blog posts about Couchbase usage, most of them based on feedback/questions from the community
  • created sample code to show how things work
  • monitored and answered questions on various sites and mailing lists: the Couchbase discussion forums, mailing lists, Stack Overflow, Quora, and others...

This task is quite interesting because it lets you reach many developers, get feedback from users, and understand how they are using the product. I have to say that I was not as productive as I expected, mainly because I was traveling a lot during this period.

Another important thing about online activities is the "Couchbase Community" itself: many users of Couchbase are creating content (blog posts, samples, new applications, or features). For example, I am talking with a person who is developing a Dart client for Couchbase. So as a Technical Evangelist I also work closely with the most active contributors.

Inbound activities: make the product better!#

So the ultimate goal of a Technical Evangelist at Couchbase is to "convert" developers to NoSQL/Couchbase and get them to talk about Couchbase. Meeting them online or during events is one way of achieving this, but it is also great to do it directly through the product. This means participating in the development of the product or its ecosystem. Here are some of the things I have done on this topic:

  • talked a lot with the development team: core developers, product managers, architects, ... Quite exciting to work with so many smart people and have access to them. During these discussions I was able to comment on the roadmap and influence features, but it was also a constant opportunity to learn new things about Couchbase, and many other things about architecture and programming languages; take a look, for example, at this nice post from Damien Katz.
  • contributed some code; yes, remember that Couchbase is an open source project, and it is quite easy to participate in its development. Obviously, based on my skills, I have only helped a little bit with the Java and JavaScript SDKs. So if, like me, you are interested in contributing to the project, take a look at this page: "Contributing Changes"
  • but my biggest contributions to the product are things like doc reviews, testing, and writing bug reports; this is very important and interesting, since once again it helps a lot with product adoption by developers.

So what?#

As you can see, the Technical Evangelist job is quite exciting, and one of the reasons I really love it is simply that it allows me to do many different things, all related to technology. Six months is still a very short period; I still have many things to learn and to do with the team to be successful, such as being more present online (blog posts, sample code, technical articles, screencasts, ...), getting accepted at more conferences, and coding a little more (I have to finish, for example, the Couchbase Data Provider for Hibernate OGM, and I have many other ideas around the application development experience).

Finally, Couchbase needs you! This is a good opportunity to say that Couchbase is always looking for talent, especially in the Technical/Developer Evangelist team, so do not hesitate to look at the different job openings and join the team!

· One min read

I have created this simple screencast to show how you can, using Couchbase, do some realtime analysis based on a Twitter feed.

The key steps of this demonstration are:

  1. Inject tweets using a simple program available on my Github: Couchbase-Twitter-Injector
  2. Create views to index and query the tweets by:
    • User name
    • Tags
    • Date

The views that I used in this demonstration are available at the bottom of this post.
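The views themselves are not reproduced in this excerpt, but to give an idea, a map function indexing tweets by hashtag could look like the sketch below. The field names follow Twitter's JSON format (`entities.hashtags`); the actual views used in the screencast may differ.

```javascript
// Hypothetical Couchbase map function: index tweets by hashtag.
// Field names are assumptions based on Twitter's JSON format,
// not necessarily the schema used in the demonstration.
function mapByTag(doc, meta) {
  if (doc.entities && doc.entities.hashtags) {
    doc.entities.hashtags.forEach(function (tag) {
      emit(tag.text, null); // one view row per hashtag
    });
  }
}
```

Similar functions keyed on the user's screen name or on the tweet date give the other two indexes.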


· 14 min read

Note: this article was written in March 2013; since then, Couchbase and its drivers have changed a lot. I am not working with/for Couchbase anymore and have no time to update the code.

A friend of mine wanted to build a simple system to capture ideas and votes. Even though you can find many online services that do this, I think it is a good opportunity to show how easy it is to develop a new application using Couchbase and Node.js.

So how to start?

Some of us start with the UI, others with the data; in this example I am starting with the model. The basic steps are:

  1. Model your documents
  2. Create Views
  3. Create Services
  4. Create the UI
  5. Improve your application by iteration

The sources of this sample application are available on Github:

Use the following command to clone the project locally:

git clone

Note: my goal is not to provide a complete application, but to describe the key steps of developing one.

Model your documents#

For this application you need three types of documents:

  • Idea: describes the idea, with an author, title, and description
  • Vote: the author and a comment. Note that it is a deliberate choice not to put a value on the vote; in this first version, if the vote exists it means the user likes the idea.
  • User: contains all the information about the user (not used in this first version of the application)

You can argue that it is possible to store the votes as a list of elements inside the idea document. In this case I prefer to use separate documents and reference the idea from the vote, since we do not know how many votes/comments an idea will have. Using separate documents is also interesting here for the following reasons:

  • No "concurrent" access: when a user wants to vote, he does not change the idea document itself, so there is no need to put optimistic locking in place.
  • The documents will be smaller and easier to cache in memory.

So documents will look like:
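The original post showed the documents at this point; as a stand-in, hedged examples of the three document types could look like this (all field names and values are my assumptions based on the descriptions above, not the exact schema from the repository):

```javascript
// Hypothetical sample documents; field names are assumptions.
var idea = {
  id: "idea:1",
  type: "idea",
  user_id: "john",
  title: "Add a REST API",
  description: "Expose the data through a simple REST API"
};

var vote = {
  id: "vote:1",
  type: "vote",
  idea_id: "idea:1", // reference to the idea, since votes live in their own documents
  user_id: "jane",
  comment: "Great idea!"
};

var user = {
  id: "user:john",
  type: "user",
  name: "John Doe"
};
```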

What I really like is the fact that I can quickly create a small dataset to validate that the model is correct and to help me design the views. The way I do it: I start my server, launch the Couchbase Administration Console, create a bucket, and finally insert documents manually to validate the model and the views.

Create Views#

Now that I have created some documents, I can think about how I want to get the information out of the database. For this application I need:

  • The list of ideas
  • The votes by idea

The list-of-ideas view for this first version is very simple; we just need to emit the title:
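The actual view code is in the repository; a minimal sketch of such a map function could be:

```javascript
// Sketch of the "list of ideas" map function: one row per idea,
// keyed by title (an approximation of the view in the repository).
function mapIdeas(doc, meta) {
  if (doc.type == "idea" && doc.title) {
    emit(doc.title, null);
  }
}
```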

For the votes by idea, I chose to create a collated view; this will give me some interesting options when I expose it in an API/view layer. For this view I am also using the sum() reduce function to capture the number of votes.
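To illustrate the collation idea, here is a sketch of what such a map function can look like. Ideas and votes share the same leading key (the idea id), so the idea row sorts immediately before its vote rows; the key layout below is my assumption, not necessarily the exact view from the repository:

```javascript
// Sketch of a collated map function (key layout is an assumption):
// the idea row sorts first because 0 < 1, and carries the title in
// its key so the title survives a sum() reduce on the values.
function mapVotesByIdea(doc, meta) {
  if (doc.type == "idea") {
    emit([doc.id, 0, doc.title], 0);
  } else if (doc.type == "vote") {
    emit([doc.idea_id, 1], 1); // one numeric row per vote, summed by the reduce
  }
}
```

With a sum() reduce and a suitable group_level, each vote group collapses into the number of votes for that idea.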

I have my documents, and I have views that allow me to retrieve the list of ideas and count the votes by idea... So I am ready to expose all this information to the application using a simple API layer.

Create Services#

Lately I have been playing a lot with Node.js, partly because it is nice to learn new stuff and partly because it is really easy to use with Couchbase. Think about it: Couchbase loves JSON, and Node.js's object format is JSON, which means I do not have any marshaling/unmarshaling to do.

My API layer is quite simple; I just need to create a set of REST endpoints to deal with:

  • CRUD operations on each type of document
  • Listing the different documents

The code of the services is available in branch 01-simple-services:

You can run the application with the simple services using the following commands:

> git checkout -f 01-simple-services
> node app.js

and open the application in your browser.

About the project

For this project I am using only two Node modules: Express and Couchbase. The package.json file looks like:

{
  "name": "couchbase-ideas-management",
  "version": "0.0.1",
  "private": true,
  "dependencies": {
    "express": "3.x",
    "couchbase": "0.0.11"
  }
}

After running npm install, let's code the new API interface. As said before, I am using an iterative approach, so for now I am not dealing with security; I just want to get the basic actions to work.

I am starting with the endpoints to get and set the documents. I am creating generic endpoints that take the type as a URI parameter, allowing the user/application to do a GET/POST on /api/vote, /api/idea. The following code captures this:
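The endpoint code itself is in the repository; the sketch below shows the idea, with the Express routes as comments (the exact route shapes and helper names are my assumptions) and the shared type check as a small function:

```javascript
// Supported document types for the generic /api/:type endpoints.
var TYPES = ["idea", "vote", "user"];

// Check used by every endpoint before touching Couchbase.
function isSupportedType(type) {
  return TYPES.indexOf(type) !== -1;
}

// With Express (3.x, as in package.json) the endpoints would look
// roughly like this; handler details are assumptions:
//
//   app.get('/api/:type/:id', function (req, res) {
//     if (!isSupportedType(req.params.type)) return res.send(400);
//     get(req.params.type, req.params.id, res);   // read from Couchbase
//   });
//
//   app.post('/api/:type', function (req, res) {
//     if (!isSupportedType(req.params.type)) return res.send(400);
//     upsert(req.params.type, req.body, res);     // create or update
//   });
```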

In each case I start by testing whether the URI is one of the supported types (idea, vote, user); if it is, I call the get() or upsert() method that performs the call to Couchbase.

The get() and upsert() methods use more or less the same approach: I test whether the document exists, check that the type is correct, and perform the operation against Couchbase. Let's focus on the upsert() method. I call it upsert() since the same operation is used to create and update the document.

In this function I start by testing whether the document contains a type and whether the type is the one expected (line 3).

Then I check whether the document id is present, to see if I need to create it or not. This is one of the reasons why I like to keep the id/key in the document: yes, I duplicate it, but it makes development really easy. So if I have to create a new document, I have to generate a new id. I chose to create a counter for each type; this is why I call the incr function (line 7) and then use the returned value to create the document (line 10).

Note: as you can see, my documents contain an ID as part of their attributes. This ID has the same value as the one used to set the document (the "key"). It is not necessarily good practice to duplicate this information, and in many cases the application only uses the document key itself. I personally like to put the ID in the document too, because it simplifies development a lot.

If the ID is present, I just call the update operation to save the document (line 15).
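Putting the pieces of this description together, the upsert logic can be sketched with an in-memory stand-in for the Couchbase client (the method and key names here are illustrative, not the real driver API):

```javascript
// In-memory stand-in for the Couchbase client (illustrative only).
var db = {
  data: {},
  counters: {},
  incr: function (key) {
    this.counters[key] = (this.counters[key] || 0) + 1;
    return this.counters[key];
  },
  set: function (key, doc) { this.data[key] = doc; }
};

// Create-or-update: new documents get an id from a per-type counter,
// and the id is kept inside the document as well as used as the key.
function upsert(type, doc) {
  if (doc.type !== type) {
    throw new Error("unexpected document type: " + doc.type);
  }
  if (!doc.id) {
    var seq = db.incr("counter:" + type);
    doc.id = type + ":" + seq;
  }
  db.set(doc.id, doc);
  return doc;
}
```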

The delete operation is equivalent to the get, using the DELETE HTTP method.

So now I can get, insert, and update documents. I still need to do some work to deal with the lists; as you can guess, here I need to call the views. I won't go into the details of the simple list of ideas; let's focus on the view that shows the result of the votes.

For this part of the application I use a small trick with the collated view. The /api/results/ call returns the list of ideas with their titles and total number of votes. The result looks like the following:
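The sample output was lost from this excerpt; the shape of the response is roughly the following (the ids, titles, and counts here are made up):

```javascript
// Hypothetical /api/results response (values are made up):
var results = [
  { "id": "idea:1", "title": "Add a REST API", "votes": 3 },
  { "id": "idea:2", "title": "Improve search", "votes": 1 }
];
```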

Note that it is also possible to select only one idea; you just need to pass the id to the call, for example.

If you look at the function in more detail, not only do I call the view, but I also build an array in which I put the idea id and label; then, on the next loop iteration, I add the number of votes. This is possible because the view is a collated view of the ideas and their votes.
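That folding logic can be sketched as a pure function; the row shape used here (idea rows keyed [id, 0, title], vote rows keyed [id, 1] carrying a summed value) is my assumption about the collated view, not the app's exact code:

```javascript
// Fold collated view rows into the results array: an idea row opens
// a new entry; the vote row that follows for the same idea carries
// its summed vote count.
function buildResults(rows) {
  var results = [];
  rows.forEach(function (row) {
    if (row.key[1] === 0) {
      // idea row: id and title, no votes seen yet
      results.push({ id: row.key[0], title: row.key[2], votes: 0 });
    } else if (results.length > 0) {
      // vote row for the idea pushed just before it
      results[results.length - 1].votes = row.value;
    }
  });
  return results;
}
```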

I now have my REST services, including advanced query capabilities. It is time to use these services and build the user interface.

Create the UI#

For the view I am using AngularJS, which I am packaging in the same Node.js application for simplicity.

Simple UI without Login/Security#

The code of the application without login is available in the branch 02-simple-ui-no-login

You can run the application with the simple services using the following commands:

> git checkout -f 02-simple-ui-no-login
> node app.js

The application is based on AngularJS and Twitter Bootstrap.

I am using basic AngularJS features and packaging:

  • /public/js/app.js contains the module declaration and all the routes to the different views/controllers
  • /public/js/controllers.js contains all the controllers. I will show some of them, but basically this is where I call the services created above.
  • /views/partials/ contains the different pages/screens used by the application.

Because the application is quite simple, I have not done any packaging of directives or other functions. This is true for both the AngularJS and Node.js parts.

Dummy user management

In this first version of the UI I have not yet integrated any login/security, so I fake the user login using a global-scope variable, $scope.user, which you can see in the AppCtrl() controller. Since I have not yet implemented login/security, I have added a text field at the bottom of the page where you can enter a "dummy" username to test the application. This field is inserted in the /views/index.html page.

List Views and Number of Votes

The home page of the application contains the list of ideas and number of votes.

Look at the EntriesListCtrl controller and the view/index.html file. As you can guess, this is based on the Couchbase collated view that returns the list of ideas and the number of votes.

Create/Edit an idea

When the user clicks the New link in the navigation, the application calls the view /view/partials/idea-form.html. This form is reached using the "/#/idea/new" URL.

Just look at the IdeaFormCtrl controller to see what is happening:

function IdeaFormCtrl($rootScope, $scope, $routeParams, $http, $location) {
  $scope.idea = null;
  if ($routeParams.id) {
    $http({method: 'GET', url: '/api/idea/' + $routeParams.id}).success(function(data, status, headers, config) {
      $scope.idea = data;
    });
  }

  $scope.save = function() {
    $scope.idea.type = "idea"; // set the type
    $scope.idea.user_id = $scope.user;
    $http.post('/api/idea', $scope.idea).success(function(data) {
      $location.path('/');
    });
  };

  $scope.cancel = function() {
    $location.path('/');
  };
}
IdeaFormCtrl.$inject = ['$rootScope', '$scope', '$routeParams', '$http', '$location'];

First of all, I test whether the controller was called with an idea identifier in the URL ($routeParams.id, line 3). If the ID is present, I call the REST API to get the idea and set it into the $scope.idea variable.

Then on line 9 you can see the $scope.save() function, which calls the REST API to save/update the idea in Couchbase. Lines 10 and 11 set the type and the user on the idea.

Note: it is interesting to look at these lines. By adding the two attributes (user & type) I modify the "schema" of my data: I am adding new fields to my document that will be stored as-is in Couchbase. Once again, you can see that I drive the data types from my application. I could take another approach and enforce the type in the service layer; for this example I chose to put this logic in the application layer, which is expected to send the proper data types.

Other Interactions

The same approach is used to create a vote associated with a user/idea, as you can see in the VoteFormCtrl controller.

I won't go into the details of all the operations; I invite you to look at the code of the application, and feel free to add a comment to this blog post if I need to clarify other parts of it.

Iterative Development : adding a value to the vote!#

The code of this version is available in the branch 03-vote-with-value:

You can run this version of the application using the following commands:

> git checkout -f 03-vote-with-value
> node app.js

Adding the field in the form

Something that I really like about working with AngularJS, Node, and Couchbase is the fact that the developer uses JSON all the way from the database to the browser.

So let's implement a new feature where, instead of giving only a comment, the user can give a rating with his vote, from 1 to 5. Doing this is quite easy; here are the steps:

  • Modify the UI: add a new field
  • Modify the Couchbase view to use the new field

That's it! AngularJS deals with the binding of the new field, so I just need to edit /views/partials/idea-form.html to add it. For this I need to add the list of values in the controller and expose it in a select box in the form.

The list of values is located in the $scope.ratings variable:
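That variable was not captured in this excerpt; a plausible version (the labels are my invention) would be:

```javascript
// A plausible list of rating values; in the controller this array
// would be assigned to $scope.ratings so ng-options can render it.
var ratings = [
  { value: 1, label: "Poor" },
  { value: 2, label: "Fair" },
  { value: 3, label: "Good" },
  { value: 4, label: "Very good" },
  { value: 5, label: "Excellent" }
];
```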

Once this is done, you can add a select box to your view using the following code:

To add the select box to the form, I just use AngularJS features:

  • the list of values described in my controller, via the ng-options attribute
  • the binding to the vote.rating field, via the ng-model attribute.

I add the field to my form, I bind this field to my JavaScript object, and... nothing else! Since my REST API just consumes the JSON object as-is, AngularJS will send the vote object with the new attribute.

Update the view to use the rating

Now that my database is dealing with a new attribute in the vote, I need to update my view to use it in the sum function. (I could calculate an average too, but here I want the sum of all the votes/ratings.)

The only line that I have changed is line 7. The logic is simple: if the rating is present I emit it; if not, I emit 2, which is a medium rating for an idea.

This small tip gives me a working view/system without having to update any existing documents.
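The changed vote branch of the map function can be sketched as follows (the surrounding function is approximated; only the emitted value matters here):

```javascript
// Vote branch of the map function after the change: emit the rating
// when present, otherwise a medium rating of 2 for older documents
// created before the rating field existed.
function mapVote(doc, meta) {
  if (doc.type == "vote") {
    emit([doc.idea_id, 1], doc.rating ? doc.rating : 2);
  }
}
```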

I'll stop here for now, and will add new features later, such as user authentication and user management using, for example, Passport.

Version and Upgrade Management#

If you looked closely at the code of the application, you saw that the views are automatically imported from the app.js file when the application starts.

In fact, I have added a small function that checks the currently installed version and updates the views when needed.

You can look at the initApplication() function:

  • Load the version number from Couchbase (document with ID "app.version")
  • Check whether this version differs from the application's version
  • Update/create the views (I am doing it in production mode here; in a real application it would be better to use dev mode, i.e., prefix the design document ID with "dev_")
  • Once the views are created, update/create the "app.version" document with the new version number.
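The four steps above can be sketched like this, using an in-memory stand-in for the Couchbase client (the real initApplication() talks to Couchbase and creates design documents; all names here are illustrative):

```javascript
// In-memory stand-in for the Couchbase client (illustrative only).
var db = {
  data: {},
  designDocs: {},
  get: function (key) { return this.data[key]; },
  set: function (key, value) { this.data[key] = value; },
  setDesignDoc: function (name, dd) { this.designDocs[name] = dd; }
};

var APP_VERSION = 2; // the version this build of the app expects

function initApplication() {
  var stored = db.get("app.version");                  // 1. load stored version
  if (!stored || stored.version < APP_VERSION) {       // 2. compare versions
    db.setDesignDoc("ideas", { views: { /* map/reduce code */ } }); // 3. update views
    db.set("app.version", { version: APP_VERSION });   // 4. record new version
  }
}
```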


In this article we have seen how you can quickly develop an application/prototype and leverage the flexibility NoSQL gives developers. The steps to do this are:

  1. Design your document model and API (REST)
  2. Create the UI that consumes the API
  3. Modify your model by simply adding fields in the UI
  4. Update the views to adapt your lists to your new model

In addition, I have also quickly explained how you can control the version of your application from your code and deploy new views (and other things) automatically.

I will post another blog post in a few days to explain how you can easily integrate user management and security into your application and database.