Using Microservices Architecture as API Enablement Strategy


Today, microservices architecture has become a hype topic in the software development industry. It is an approach to modularity that functionally decomposes an application into a set of services. Development teams can adopt the most appropriate technology stack to solve each specific problem. The microservices architecture also improves service scalability by enabling features such as auto-scaling and a micro-container approach.

APIs and microservices are closely related, since the most common facade for an API is a set of fully compliant RESTful services. But we still face issues when we need to integrate with legacy systems to externalize their functionality as services, since most of these systems do not expose standard protocols such as Web Services or REST interfaces. Let's explore this issue.

The Problem and Microservice Approach

Most companies want to expose APIs internally or externally, but their systems or applications were not built for this purpose. Most applications are based on one of the following architectures:

  • Custom monolithic web applications using a single database, e.g. Java (JSF) with an Oracle database
  • Product- or platform-based applications, such as SAP applications
  • Mainframe applications, such as COBOL/CICS programs
  • Client-server applications, for example VB6 with a SQL Server database

When facing this kind of scenario, a common approach is to build an adapter component that exposes the service over a standard protocol. This component should look like the diagram below:


The service adapter is the key component of the solution, since it enables legacy service externalization. To provide the expected standardization, the following capabilities should be implemented:

  • RESTful compliant
  • Organized around common business domains
  • Easily scalable
  • Lightweight packages

Since service adapters are a kind of integration application, they should follow an architectural style that is compliant with common standards. The architectural style that best suits the capabilities above, and other common requirements, is the microservices approach.

Read more about [1] Microservices.

Microservice Implementation Strategy

Once you decide that the service adapter implementation will be based on the microservices architecture style, some capabilities are required, such as:

  • Small packages and low memory consumption
  • Fast application startup, so new container instances load quickly
  • Fast and simple development based on common standards, such as the Swagger specification
  • Easy security integration to provide features such as Basic Auth or OAuth 2.0
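To make the adapter idea concrete, here is a minimal sketch using only the JDK's built-in HTTP server: a lightweight, fast-starting process that fronts a legacy system with a RESTful endpoint. The `legacyLookup` method and the `/customers/` route are hypothetical stand-ins for a real legacy call; a production adapter would use one of the frameworks listed below.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal service-adapter sketch: a small process that exposes a legacy
// function as a RESTful JSON endpoint. legacyLookup() stands in for the
// real legacy call (JDBC, RPC, screen scraping, etc.).
public class CustomerAdapter {

    static String legacyLookup(String id) {
        // A real adapter would translate this call into the legacy protocol.
        return "{\"id\":\"" + id + "\",\"name\":\"Customer " + id + "\"}";
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/customers/", exchange -> {
            String id = exchange.getRequestURI().getPath().replace("/customers/", "");
            byte[] body = legacyLookup(id).getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // GET /customers/42 now returns the legacy record as JSON
    }
}
```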

Some frameworks provide most of the capabilities listed above. On the JVM, the recommended ones are:

  • Spring Boot
  • Spark Framework
  • Play Framework
  • Dropwizard
  • Apache Camel

Outside the JVM, the recommendation is:

  • Node.js

Another crucial capability when enabling API endpoints through a microservice implementation is full integration with legacy systems. This kind of feature requires specific frameworks that implement enterprise integration patterns. The recommendation here is to use the well-known Java framework Apache Camel.

Read more about [2] Apache Camel.

Microservice Deployment Strategy

Once the package is built, it needs to be deployed. The recommended strategy is to deploy it into a PaaS (Platform as a Service), because a PaaS offers built-in features such as:

  • Containerization
  • Container Orchestration
  • Storage
  • Monitoring
  • Logging

In addition, two other crucial capabilities should be provided:

  • Scalability, in order to support traffic spikes
  • Automation APIs, to create an autonomous deployment pipeline

The main PaaS offerings on the market that should be considered as a deployment strategy are:

  1. Pivotal Cloud Foundry: a good choice when using the Spring technology stack, because it has native integration with it
  2. Red Hat OpenShift: an alternative when using Red Hat technologies. It also uses Docker and Kubernetes for containerization
  3. Salesforce Heroku: it abstracts feature implementations such as containers and logging. It is a good choice when building applications using the Twelve-Factor App methodology

Other choices to consider are Amazon Elastic Beanstalk and Google App Engine. Both are interesting because they have native integration with cloud services and infrastructure, since their vendors are strong IaaS providers.

For more details about PaaS features, see the [3] PaaS comparator.

Read more about [4] Twelve Factor App.

But the best alternative for deploying and running microservices is a solution that provides both the runtime platform (PaaS) and full integration with an API management platform. In this case, the Sensedia API Management Suite offers a built-in feature called BaaS (Backend as a Service), which is compliant with PaaS features and capabilities.

The BaaS feature should be used to deploy and run microservices that expose APIs from legacy systems, or when creating new applications or services using this architectural style. The Sensedia BaaS platform natively supports the following technologies:

  • Java
  • Node.js
  • Apache Camel

Read more about [5] Sensedia API Manager Suite.

Microservices and API Management Platforms

Once microservices are deployed and running, their interfaces should be exposed as APIs and managed using an API management suite, because this kind of solution offers many advantages. Most API management solutions have the features below:

  • Security and resilience: to protect backend microservices from unmanaged consumers. When an API is open to partners or a community, the microservices behind it should be protected from being overloaded by traffic spikes, using capabilities such as rate limiting, payload size limits, or spike arrest.
  • Access control: consumers should use the API under access policies. The API manager must support standard protocols such as OAuth 2.0 or JSON Web Tokens for consumer authentication, and generate those tokens. Policies such as token expiration or rate limits may also be configured.
  • Monitoring and tracing: operations teams use this capability for platform health checks and debugging.
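As a rough illustration of the rate-limit capability mentioned above, a fixed-window limiter can be sketched in a few lines. This is a simplified model: real gateways keep one window per consumer token and roll the window over by clock, both of which are reduced here to an explicit `resetWindow` call.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Fixed-window rate-limit sketch: each consumer gets a quota of calls per
// time window; calls beyond the quota are rejected until the window resets.
public class RateLimiter {
    private final int limit;
    private final AtomicInteger used = new AtomicInteger(0);

    public RateLimiter(int limitPerWindow) {
        this.limit = limitPerWindow;
    }

    // Returns true if the call fits within the current window's quota.
    public boolean allow() {
        return used.incrementAndGet() <= limit;
    }

    // In a real gateway this is driven by the clock when the window rolls over.
    public void resetWindow() {
        used.set(0);
    }
}
```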

All the capabilities listed above are common in API gateway solutions, but other features are crucial for full API management solutions, such as:

  • Caching: should be used to avoid unnecessary microservice calls and latency, since some read operations can be cached. Note that in some cases the backend services are metered per call, and this feature can reduce that cost.
  • Analytics: API usage can be monitored in real time. This kind of feature should provide dashboards for metric extraction.

As mentioned above, some API management platforms offer full integration and management for microservice deployment and runtime. This provides an end-to-end management platform, so it is not necessary to provision separate infrastructure to run the microservices.

The Sensedia API Manager Suite provides a solution that looks like the diagram below:



Using the microservices architecture style is a development approach for enabling RESTful interfaces on top of legacy systems that do not expose these kinds of endpoints natively. The first challenge is choosing the right implementation tools: there are many frameworks and language flavors that can help with microservice implementation. Which one to choose depends on the scenario being faced, but the recommendations above can help.

After the development toolkit is chosen, the next decision is to establish the microservice runtime and deployment platform. Once more, the decision depends on the scenario. But in this case, the main goal is to expose legacy functionality as a RESTful API, and for this reason it makes sense to deploy the microservices on the same platform that manages the APIs.

The Sensedia API Management Suite is an API management platform that provides a Backend as a Service (BaaS) feature, which acts as the microservices runtime with full Platform as a Service (PaaS) capabilities. Furthermore, the platform offers standard features such as an API gateway, caching, and analytics.

In short, the recommendation is to use this kind of platform, which provides full API management along with a microservice runtime in an all-in-one solution.


[1] Microservices –

[2] Apache Camel –

[3] PaaS Comparator –

[4] Twelve Factor App –

[5] Sensedia API Manager Suite –

15 minutes about Docker

In this post I share a video and a presentation about what Docker is and how to start using it.

The goal here is to show how easy it is to get started, by sharing the reference documentation and showing Docker being run from the command line.

This post is for those who have heard about Docker and want to start playing with it. It is a simple guide for beginners.

So I describe the following items:

1. Basic Concepts
2. Installation
3. Image repositories
4. Image building
5. Running containers based on images

I hope you enjoy it!

See the video:

Presentation Slides:




Scalable Architectures: Spotify's Example Applied to E-commerce

How can Spotify's architecture help us define the architecture of our e-commerce platform?

In this video I present a comparison between Spotify's needs and the needs of an e-commerce platform. I detail how the solutions applied to solve scalability problems at Spotify can be applied to e-commerce.

Here are the slides:

And if you want to learn a bit more about Spotify's architecture, see this post.

Lessons Learned: Handling ACID properties with Apache Cassandra


If you choose Apache Cassandra as your NoSQL database, keep in mind that you must consider the following characteristics when designing and implementing your model.

Apache Cassandra is not fully compliant with the ACID properties; however, when designing the application and the database model, all of these properties, and how Cassandra handles them, should be considered.

See the following topics:

Isolation and Consistency

Apache Cassandra does not provide isolation, so you must design as if there were no concurrency control. It has no locking feature comparable to relational databases, so the approach must be different. Consider the following scenario and how to handle it:

Many events arrive at your system simultaneously, and you must process them in parallel to ensure performance. However, each event reads and writes the same data stored in Cassandra. Naturally you read the record and then write it back, but since there is no isolation guarantee, the processing may use outdated data and end up inconsistent. The best way to handle this scenario is to read all the data you need, process it in memory, and then write the updates in a batch or serialized fashion, to avoid any chance of conflict.
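The pattern above can be sketched as follows. The `Map` stands in for the Cassandra row; the point is that the per-event read-modify-write cycles, which race without isolation, are replaced by one in-memory pass and a single write.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the "process in memory, write once" pattern: instead of each
// event doing its own read-modify-write against Cassandra, all events are
// folded in memory and the result is written in a single, serialized step.
public class InMemoryAggregation {

    public static int aggregateThenWrite(List<Integer> eventDeltas) {
        Map<String, Integer> row = new HashMap<>(); // stand-in for the Cassandra row

        // Process every event in memory first...
        int total = eventDeltas.stream().mapToInt(Integer::intValue).sum();

        // ...then perform one write, leaving no window for conflicting updates.
        row.put("counter", total);
        return row.get("counter");
    }
}
```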


Atomicity

One of the strongest features of the relational approach is atomicity: within a transactional context, when an exception occurs, all database updates are undone. It is the famous commit-or-rollback rule. However, Cassandra does not provide this transactional feature coupled with an application method's transaction, so implementing these scenarios becomes complex. See the following situations and how to handle them:

# 1 – Within an application method that updates multiple tables in Cassandra, you need to ensure that all updates are made atomically. In this scenario it is worth using the batch feature that the API provides, placing the writes to all tables within the same batch.
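A sketch of the grouping idea for scenario #1, with illustrative table names; in the real Java driver this list would become a logged `BatchStatement`, which Cassandra applies atomically.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of scenario #1: every table touched by the method contributes its
// write to the same batch, mirroring CQL's BEGIN BATCH ... APPLY BATCH, so
// the updates are applied together or not at all.
public class OrderBatchBuilder {

    public static List<String> buildOrderBatch(String orderId, String customerId) {
        List<String> batch = new ArrayList<>();
        batch.add("INSERT INTO orders (id, customer_id) VALUES ('"
                + orderId + "', '" + customerId + "')");
        batch.add("INSERT INTO orders_by_customer (customer_id, order_id) VALUES ('"
                + customerId + "', '" + orderId + "')");
        return batch; // submitted as one logged batch, never statement by statement
    }
}
```

The string-concatenated CQL is for illustration only; real code would use prepared, bound statements.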

# 2 – Within an application method, you must account for other points where errors may occur (e.g. a REST call), after which the context should be rolled back. In this scenario it is worth creating compensation routines: an exception block that undoes the update, or a retry routine for the point that has not yet been updated.
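Scenario #2 can be sketched as a compensation block. Here `writeOrder`, `undoOrder`, and `notifyPartner` are hypothetical stand-ins for the Cassandra write, its compensating delete, and the external REST call.

```java
// Sketch of scenario #2: if a step after the Cassandra write fails (e.g. a
// REST call), a compensation routine undoes the write by hand, since there
// is no transactional rollback to rely on.
public class CompensationSketch {

    static boolean orderStored = false; // stand-in for the Cassandra row

    static void writeOrder() { orderStored = true; }
    static void undoOrder()  { orderStored = false; } // compensating delete

    static void notifyPartner(boolean fail) {
        if (fail) throw new RuntimeException("partner REST call failed");
    }

    public static boolean process(boolean partnerFails) {
        writeOrder();
        try {
            notifyPartner(partnerFails);
            return true;
        } catch (RuntimeException e) {
            undoOrder(); // compensation: roll the context back manually
            return false;
        }
    }
}
```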

Asynchronous/Parallel Programming

One of Cassandra's strongest features is its write/read optimization. However, the write strategy used in the client matters: some approaches are more effective than a serialized approach or even batches. See the following scenario and how to handle it:

Suppose you need to write a few million records to Cassandra. The scenario can be implemented in different ways:

# 1 – Write the records serially, one by one. This approach is not the most performant, because the application must wait for an execution confirmation for each record.
# 2 – Write the records in batches, sending blocks of records. This approach may increase network latency when handling very large blocks.

But there is a third approach that is quite interesting, especially when a transactional context is not required: use the parallel programming features provided by the API. In the Java API, the driver exposes future-based features (using Guava), and in our tests this seemed much more performant than the other approaches above. However, be careful: this feature uses threads, so your application must be set up to manage a large number of threads.
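The third approach can be sketched with `CompletableFuture` standing in for the driver's future-based API; `asyncWrite` is a hypothetical stand-in for one asynchronous insert (the real driver's `executeAsync`).

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

// Sketch of the parallel approach: issue all writes asynchronously, then
// wait once at the end, instead of blocking on each record (serialized) or
// shipping huge blocks over the network (batch).
public class AsyncWriteSketch {

    // Stand-in for the driver's async execute of one bound insert.
    static CompletableFuture<String> asyncWrite(int record) {
        return CompletableFuture.supplyAsync(() -> "ok-" + record);
    }

    public static long writeAll(List<Integer> records) {
        List<CompletableFuture<String>> inFlight = records.stream()
                .map(AsyncWriteSketch::asyncWrite)   // fire every write in parallel
                .collect(Collectors.toList());

        // Single synchronization point: join all futures and count successes.
        return inFlight.stream()
                .map(CompletableFuture::join)
                .filter(r -> r.startsWith("ok"))
                .count();
    }
}
```

Remember the warning above: each in-flight future occupies executor resources, so real code should bound the number of concurrent writes.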


The design and implementation of your application must take a totally different approach compared to a relational, transactional database. Keep in mind that all of these characteristics should be carefully considered during design, since we are used to the relational approach.

See more: 

Spotify's Scalability Secrets

How does Spotify manage to deliver an audio service with quality and performance? Well, the "secret" is no secret to anyone. See the teaser below:


As you can see, the approach is quite old and still works today. I will try to summarize here what I saw at #devcamp2015. The presentation is by Niklas Gustavsson (@protocol7), Lead Engineer at Spotify.

Basically, there are three major points to highlight in the approach used by Spotify:

  1. Service-oriented architecture: microservices
  2. Content delivery techniques
  3. "Hot" technologies

Now, let's detail each of these points:

1. Service-oriented architecture: microservices


Yes, microservices are one of Spotify's "secrets"! Basically, by giving well-defined responsibilities to components, they can scale each of these components, or evolve each one with less impact. Let me explain the responsibility of each:

– Access Point Services: services that route to other services, with the responsibility of orchestrating and aggregating them. The initial entry point, responsible for crucial concerns such as security and load balancing.

– View Services: services that "prepare" data for the view layer, i.e. they shape the data so it can be easily consumed by their clients, e.g. the mobile app and the website.

– Data Services: data services responsible for reads and writes.

– Meta Data Services: services that give meaning to the "raw" data from the data services. They generate canonical data, data composition/aggregation, etc.

2. Content delivery techniques

Content delivery is a simple task, right? We have many market solutions, right? Well, the task is not so simple when it comes to streaming, and those market solutions do not work as well as they promise. Let's see what Spotify does about this.

– Latency matters!!! Yes, it matters a lot, because your customer wants to open the app and start listening to music without waiting and without interruptions. The closer to the customer, the better: latency decreases and content is delivered faster. For this reason, Spotify always pays attention to hosting its services as close as possible to its consumers.

– CDNs help! Static files such as images are stored in CDNs, which guarantee high availability and low latency for the most popular files.

– Streaming starts with a file download. You thought that when you start a song, you are already live streaming, right? Not quite: the trick is downloading a small initialization file with the first stretch of the song, called the "Headfile".

3. "Hot" technologies

When we talk about "hot" technologies, we imagine something innovative and very distant, right? Basically, what Spotify uses is Java!! So much so that even one of their main databases, Cassandra, is written in Java. So let's talk a bit about Cassandra and Java.

– The choice of Cassandra: characteristics such as high availability, multi-site per region, and easy data replication between regions make a lot of sense for them. Another favorable point is the high performance of writing the data streams that come continuously from the client, as well as data retrieval through views.

– Taking advantage of the new Java 7/8 features: intensive use of the Streams API, lambdas, and the Futures API is fundamental.

I will also add a note here about a characteristic I consider fundamental for applications to scale and perform: use and abuse asynchronism and parallelism! Stop skimping on threads and blocking them!

And finally, I close with an ironic quote from Niklas about Java. It is worth remembering that he is a Pythonista!

Java is a shit language in an awesome VM while Python is an awesome language in a shit VM