We regularly publish articles on our blog, written by the technical teams that bring Keecker to life in our HQ in the heart of Paris. Happy reading!


Switching from Jenkins to Gitlab CI


Keecker is not like other companies. Not only do we build a brand new and innovative robot from scratch, but we also need to develop and ship quality applications for iOS and Android. Those projects are interdependent, yet each has its own complexity and dependencies, and requires different skills. This means we have a fairly complex build process, which is also essential to our day-to-day work.

Historically, Jenkins had been our solution for building our projects locally. But facing a growing number of projects and developers, we passed the point of no return: we needed a better Continuous Integration (CI) infrastructure.

Following our migration to Gitlab, we decided to give Gitlab CI a go as a replacement for Jenkins in our continuous integration pipeline, hoping to simplify the build process and make it more reliable for faster iteration.

After a few weeks of use, we decided to share our thoughts on the process, its pros and its cons.


“Move fast and break things”


Jenkins was definitely a good solution to start with. Well documented and backed by a huge community, it helped us in the beginning, but it became a problem on many levels.


Slow builds

We had a limited number of machines to build on (three, actually), which meant that whenever three builds were running concurrently, any additional builds were queued. In a team of roughly 15 developers, that happened a lot!

One way to fix this problem was to buy and configure more machines, but we had no desire to set up a build farm in our office.

As if that wasn’t enough, our Jenkins instance was running on one of the build machines, which meant that during a build, the Jenkins dashboard could take minutes to load. Hosting a Jenkins instance also meant maintaining it, keeping Jenkins and its plugins up to date to avoid security breaches.

Finally, not only were the builds slow, but when a build failed, we had no way to see at a glance what exactly had failed.


Bad developer experience

On the developer side, the experience with Jenkins was kind of unpleasant.

We were using one Dockerfile for a lot of different projects, so it was hard to know which project required which dependency or package, and bumping a package version could break one project but not another.


A developer at Keecker trying to update the Dockerfile


As if the Dockerfile weren’t enough, we also had to deal with a mix of local and Docker configuration: some environment variables were set in the Dockerfile, others locally in a .bashrc file. Hard to maintain, did you say?

Even building on a day-to-day basis could be painful: the only way to find out what had happened was to dig through the console logs (thousands of lines!) and find the relevant one. This was definitely keeping us from iterating as quickly as we could; something had to be done.

As a result, the growing number of projects combined with the stagnant number of mediocre PCs made us look for a more viable solution. We needed many machines, multiple configurations and a better interface. Gitlab was the answer.


How does Gitlab CI work (at Keecker)?


Gitlab CI mainly relies on two components: the Gitlab runner and the .gitlab-ci.yml file.

The Gitlab runner is in charge of picking up jobs and running them. At Keecker we use three types of runners:

1) Gitlab shared runners: free to use, but with limited usage (environment configuration and build time)

2) Google Cloud Platform virtual machines: they offer great scalability and are fully configurable

3) Physical machines: they allow us to have physical devices plugged in, and are mandatory for iOS builds, which need to run in an OSX environment
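A job selects its runner through tags. For instance, pinning an iOS job to one of our physical machines could look like the sketch below (the tag name and build command are hypothetical, not our actual configuration):

```yaml
build:ios:
  stage: build
  tags:
    - osx                           # only runners registered with the "osx" tag pick up this job
  script:
    - xcodebuild -scheme App build  # illustrative build command
```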

The .gitlab-ci.yml file has the sole purpose of describing how to build the project.

At Keecker we use Docker, so almost all our .gitlab-ci.yml files start with a base Docker image.

Using a custom Docker image instead of the classic ubuntu:16.04 saves us build time.

All the required packages, SDKs and environment variables are already baked into our Docker image, so we don’t have to install them at the start of each job.
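As an illustration, such a base image might look like the sketch below (the package list and paths are hypothetical examples, not our exact setup):

```dockerfile
# Hypothetical base image: dependencies are baked in once, not installed on every job
FROM ubuntu:16.04

# Build tools and SDK dependencies pre-installed
RUN apt-get update && apt-get install -y \
        git curl unzip openjdk-8-jdk \
    && rm -rf /var/lib/apt/lists/*

# Environment variables live in the image, not in a local .bashrc
ENV ANDROID_HOME=/opt/android-sdk
```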

After that, we define our stages and jobs:
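A minimal sketch of what this looks like (the image path, job names and commands are illustrative):

```yaml
image: registry.example.com/keecker/project-base:latest  # custom base image (hypothetical path)

stages:
  - build
  - test

build:app:
  stage: build
  script:
    - ./gradlew assembleDebug

# Both test jobs belong to the same stage, so they run in parallel
test:unit:
  stage: test
  script:
    - ./gradlew test

test:lint:
  stage: test
  script:
    - ./gradlew lint
```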


Which translates in Gitlab to:


Fairly simple, right?

Having migrated five projects to Gitlab CI, we already have a few insights into what is working, what is not, and the difficulties we have faced, and will have to face, in order to migrate the remaining projects.



The pros

- Faster builds

Most of the performance improvements come from the fact that we only install and clone what is necessary: we have one Docker image and one .gitlab-ci.yml per project. Running jobs in parallel also improves build time.

- Dev experience improved

Having small, specific jobs allows us to see at a glance which part failed and to restart only that part of the build. Having everything on one website also makes things easier: merge requests are updated with test status and coverage, and everything is one click away.

- Better scaling

Using Google Cloud Platform machines makes it very easy to scale, by increasing resources or spinning up a new machine.

- Easier to maintain

Previously, updating our Jenkins job would impact every branch. The .gitlab-ci.yml file, on the other hand, is versioned in the same repository as the project and works with branches. This means that anyone who wants to update the build can do it on their own branch, without risking breaking the build for everyone.



The cons

- Gitlab on Kubernetes was a pain to set up

Gitlab offers a “one click” integration with Google Cloud Platform Kubernetes but, going this route, we weren’t able to pull our private Docker image. Hence we had to configure our Kubernetes cluster manually, a process which is not very well documented and during which we stumbled across a lot of unresolved Gitlab issues. To this day, we still have issues caching Gradle dependencies.
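For reference, the manual workaround essentially boils down to registering the registry credentials as a Kubernetes image-pull secret, along the lines of the sketch below (the secret name and credential variables are placeholders):

```shell
# Create a docker-registry secret so the cluster can pull the private image
kubectl create secret docker-registry gitlab-registry \
    --docker-server=registry.gitlab.com \
    --docker-username="$CI_DEPLOY_USER" \
    --docker-password="$CI_DEPLOY_PASSWORD"

# Attach it to the default service account so pods use it automatically
kubectl patch serviceaccount default \
    -p '{"imagePullSecrets": [{"name": "gitlab-registry"}]}'
```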

- Docker pull on each stage

Each job being independent means that it starts from a clean base, but it also means that it pulls the Docker image and re-installs packages every time. That’s why we decided to go with a custom Docker image for each project and to limit the number of clones and package updates we do in the .gitlab-ci.yml.
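Another mitigation is Gitlab CI’s cache keyword, which can keep dependencies between jobs. A sketch for a Gradle project might look like this (an assumption on our part, not a fix we have fully working yet):

```yaml
# Keep the Gradle home inside the project directory so Gitlab CI can cache it
variables:
  GRADLE_USER_HOME: "$CI_PROJECT_DIR/.gradle"

cache:
  key: "$CI_COMMIT_REF_SLUG"   # one cache per branch
  paths:
    - .gradle/wrapper
    - .gradle/caches
```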

- Validate multiple projects changes

As we said in the introduction, at Keecker we have interdependent projects. They live in different repositories but need to be built together, or at least validated together. Gitlab CI is designed to work well with a single repository, not to have one build validate two merge requests on two different repositories.

We are not there yet, but we can already see that this is going to be a challenging issue.


A step forward


Keecker moved fast to provide new content and experiences for its users, but we accumulated technical debt and needed to replace our build system with a solution better adapted to our needs. This move to Gitlab is an important step toward a more mature company and a better ability to release new features faster.

After all, Keecker is an intelligent robot; we ourselves had to be smart to create a build system that enables more intelligence to be built into our robots, today and tomorrow.


Keecker’s new build system