Harvee

Philipp’s Playground in the Cloud

09 Apr 2018

Getting To Know Microservices – Building the API Gateway

The last month has been super busy on my microservice adventure. It was finally time to build the API gateway (using Kong).

Before doing so I spent time on some housekeeping for my existing application containers. I hadn’t looked into how the features around docker-compose had evolved lately, so it was good to spend time on this again. I introduced dependencies between (Rancher) services, which makes bootstrapping an environment more reliable.

version: '2'
services:
  worker:
    image: ${registry}/${service_name}:${version}
    links:
      - harveees:harveees
    depends_on:
      - harveees

I also tried to find a good way of moving containers to other hosts. I recently migrated from EC2 to Hetzner Cloud and there was still some downtime involved. My solution for now is to use Rancher scheduling rules, which should prevent an additional container instance from being spun up on a host where a container of the same service is already running.

labels:
  io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}

Finally, I looked into ways of making my images as independent as possible of the environment they are deployed to, so I no longer have to distinguish between containers running in my staging environment and those running in production.
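As a minimal sketch of what this means in practice (assuming the common approach of injecting environment-specific settings as variables at deploy time; the variable names and the gunicorn entrypoint below are made up for illustration, not my actual setup):

#!/bin/sh
# Hypothetical entrypoint: the image carries no environment-specific
# configuration; everything is injected as variables when the container starts.
: "${DATABASE_URL:?DATABASE_URL must be set by the environment}"
: "${ALLOWED_HOSTS:?ALLOWED_HOSTS must be set by the environment}"
exec gunicorn app.wsgi:application --bind 0.0.0.0:8000

The same image can then be started in staging or production; only the variables handed to it differ.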

Let’s come back to the topic of API gateways, however. In my previous blog post I evaluated a few solutions and decided to go forward with Kong. Kong itself offers Docker as an installation option; I figured, though, that I might need a customized container image. In the spirit of continuous deployment there should be a build pipeline that takes care of rolling out new versions of my microservice containers. Since I won’t be spending time for now on implementing automatic service discovery (a feature Kong would provide), I had to look into ways to register these new services at the API gateway. By default one would use HTTP calls, executed e.g. via curl, to talk to Kong and create/update an API. Since the production instance of the Kong gateway will be on the Internet, I decided to add more security-related features to my customized Kong image (so I can securely talk to my API gateway from the build pipeline).
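To give an idea of what those plain HTTP calls look like (a rough sketch against the pre-1.0 admin API, where APIs live under /apis; the name, host and URL values are placeholders, not my real configuration):

# Create an API on the Kong admin API (placeholder values throughout)
curl -X POST http://localhost:8001/apis/ \
  --data "name=harvee-events" \
  --data "hosts=api.example.com" \
  --data "uris=/events" \
  --data "strip_uri=false" \
  --data "upstream_url=http://proxy.mystack"

# Update the same API on a later run instead of recreating it
curl -X PATCH http://localhost:8001/apis/harvee-events \
  --data "upstream_url=http://proxy.mystack"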

As just mentioned, Kong administration is done through HTTP calls (against the admin API), and the Kong documentation shows how to invoke them using curl. It felt, however, like I needed an abstraction layer on top of the individual API calls. Again, this requirement resulted mainly from my wish to continuously manage the gateway from the build pipeline. My focus here was on the creation of an API: in case it already existed (because it was created during a prior pipeline run) it would only need to be updated (curl -X PATCH); if the API was being registered for the first time, it would have to be created (curl -X POST). The name of an API (on its initial creation) should be derived from its unique parameters (in my case a URL pattern combined with the host name it is called for) instead of being assigned a predefined name. To accommodate all this (as well as other features) I invented carl, which is still heavily reminiscent of curl but has some extra features built in. I named it carl in remembrance of Carl Denham, who brought King Kong to New York. I found this a funny analogy, but I do hope that my CLI has more luck handling Kong! The following command shows how I am using my CLI to create/update an API from my build pipeline:

carl update_api --server $kong_admin --strip_uri false --uri $kong_uri --upstream_url http://proxy.$stack --host $kong_target_host --client_ssl true --cert $kong_cert --key $kong_key

It does look very similar to how you would invoke curl, but it combines the create and update calls and names the API based on the provided uri and host parameters. carl also supports client SSL certificates as a means of authentication. Another call I am using as part of every pipeline run is for “documenting” the entire API configuration on a Kong gateway. I write the output into a script and make it available as an artefact attached to the pipeline run, so I can use it later in case I need to bootstrap the gateway again:

carl dump_apis --server $kong_admin --client_ssl true --cert $kong_cert --key $kong_key > createKongApi.${CI_COMMIT_SHA:0:8}.sh

I have mentioned the build pipeline a few times now; so far I used Jenkins running on my local server for that. Since I had wanted to try alternatives for quite some time, I had a look at GitLab. I have to admit that I immediately became a huge fan of it. Composing the pipeline definition in a single file (.gitlab-ci.yml) with a shared context, rather than having various jobs and worrying about how to hand over metadata or artefacts, felt very easy to do. GitLab’s native Docker support (both for operating the runners and for running the scripts on the runners) is just super cool. I now have a carl CLI container that I am using on a containerised GitLab runner to register the APIs from the pipeline (feels a bit like inception). Their proposed, simplified GitLab flow also sounded appealing to me; up until now I have mainly used git flow.

In the end I have what I think is a generic pipeline definition for all my microservices, which ends with proper registration at the API gateway: first, a container with the application code (based on the Django REST Framework) is built; second, the container is deployed into my Rancher estate; and finally, the service URL is registered with the Kong gateway.
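Boiled down to plain commands, the three stages look roughly like this (a sketch only; the image and stack names as well as the rancher-compose invocation are assumptions about my setup, not the literal pipeline definition):

# Stage 1: build and push the application image (placeholder names)
docker build -t ${registry}/${service_name}:${version} .
docker push ${registry}/${service_name}:${version}

# Stage 2: roll the new image out to the Rancher stack
rancher-compose -p ${stack_name} up -d --pull --force-upgrade worker

# Stage 3: register (or update) the service URL at the Kong gateway
carl update_api --server $kong_admin --strip_uri false --uri $kong_uri \
  --upstream_url http://proxy.$stack --host $kong_target_host \
  --client_ssl true --cert $kong_cert --key $kong_key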

The pipeline furthermore always deploys to “green”, which in my case is where new code lands first; I can then use a specific carl CLI command to take the green environment “live” for all users. That command triggers the change (or creation) of an API accessible via the production host name, pointing it to the upstream service that the “green” version of the API also refers to.
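At the Kong level, going live boils down to something like the following (a hypothetical sketch only; carl hides the details, and the variable names here are assumptions):

# Point the API registered under the production host name at the upstream
# that the "green" API already uses (placeholder names throughout).
curl --cert "$kong_cert" --key "$kong_key" \
  -X PATCH "$kong_admin/apis/$production_api_name" \
  --data "upstream_url=http://proxy.$green_stack"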