
Sharing configuration between your CI, build and development environments

This post is a follow-up to our presentation at the OpenStack Summit in Berlin, where we discussed maintaining unified CI and build pipelines using the open source CI system, Zuul. You’ll find a recording of the talk here and a related OpenStack Superuser writeup here.

In this post, I’d like to expand the concept of unifying environments to also include the development environment, and explain why this can help ensure your project progresses smoothly.


Common obstacles to streamlining software development processes

DevOps practitioners are always on the lookout for ways to optimize and bulletproof their development workflows. By optimizing, I mean making the work of contributors more efficient, and by bulletproofing I mean making sure that the systems that guard the source code repositories from accepting malfunctioning changes are good enough to prevent breakage in production environments.

So, to make things easier, here are three steps I’d recommend you take:

  • Devise a way for developers to quickly set up their sandbox, try out their changes and run automated tests.
  • Keep the list of preparation steps short, so people can get up to speed in minutes, even if they decide to recreate their environment from scratch.
  • Store the instructions as a regular executable script, not as a text manual for copying and pasting, so they are ready to be tested and reused (a sketch follows this list).
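As a minimal sketch of what such a script might look like (all paths and package names here are hypothetical, just to show the shape):

```bash
#!/usr/bin/env bash
# scripts/setup-env.sh -- a hypothetical one-shot sandbox bootstrap.
# The same script is meant to be run by developers, the CI and the build jobs.
set -euo pipefail

# Install system build dependencies (package names are illustrative).
sudo apt-get update -qq
sudo apt-get install -y build-essential python3-venv

# Create an isolated environment for the project's own tooling.
python3 -m venv .venv
.venv/bin/pip install -r requirements.txt

echo "Sandbox ready. Run ./scripts/run-tests.sh to verify your change."
```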

Perhaps you are wondering why reuse matters so much here. The CI system is sometimes treated as a separate location for build scripts. Whether it's Groovy in Jenkins or YAML in CircleCI or Travis, build commands are often copied over to the CI job definitions and start to live lives of their own. Even worse, the source of those commands can be word-of-mouth between two developers from different business units, or some magic snippet sent on a chat channel. Communication and collaboration are an important part of the DevOps cultural attitude, but they may need to be formalized, and the decisions that have been made should be recorded for other team members.

This can lead to two undesirable situations:

First, when a change to the build process requires modifications in multiple places to keep them in sync.

Second, when the only way for a user or developer to know how the software is built or deployed is to reverse-engineer the CI system. This is particularly problematic when a job cannot easily be run outside of that system.

The first problem will give you a throbbing maintenance headache, while the second can easily prevent people from recreating and debugging problems that have occurred in the CI. It's best to tackle these problems as early in the project as possible: establish a well-defined set of locations for scripts and dependencies, and assign them particular responsibilities. This can be especially beneficial when you decide to also pursue an infrastructure-as-code approach. Below I will tell you how we have dealt with it.
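Recording that decision can be as simple as making the CI definition a thin wrapper around the scripts kept in the repository. Here's a hedged sketch in CircleCI-style YAML (the file names and job are hypothetical, not taken from a real project):

```yaml
# .circleci/config.yml -- illustrative sketch only. The job body is a
# couple of one-liners that delegate to scripts versioned with the code,
# so the CI definition never accumulates its own copy of the build commands.
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: ./scripts/setup-env.sh
      - run: ./scripts/run-tests.sh
workflows:
  main:
    jobs:
      - build
```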


The road to high-performing software delivery

In our project, we used a twofold approach to making the CI, build and dev environments consistent. The first step was to unify the CI and build processes. Zuul, the CI system used in Tungsten Fabric, with its focus on sharing configuration, made it easy to reuse pipelines and their associated Ansible code. We used the same Zuul setup as both a CI and a release platform. Our CI and build pipelines use exactly the same jobs and job definitions, so the build steps are stored in only one location (and since it's under CI, it's automatically tested when changed!).
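In Zuul's YAML, the idea looks roughly like the sketch below: one job definition attached to two pipelines, so the build steps live in exactly one place. The job, playbook and pipeline names here are hypothetical, not the actual Tungsten Fabric configuration:

```yaml
# .zuul.yaml -- an illustrative sketch of sharing a job between pipelines.
- job:
    name: project-build
    description: Build the project artifacts.
    run: playbooks/build.yaml

# The same job is used both for CI checks and for cutting releases.
- project:
    check:
      jobs:
        - project-build
    release:
      jobs:
        - project-build
```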

The next task was to establish the development environment, reproducing the CI conditions as closely as possible. As Zuul jobs can’t really be run outside of Zuul, we had to find another way to provide the build pipeline for developers. (For a nice example of how developers can directly run CI jobs on their own machines, check out CircleCI's awesome local CLI.)
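With that CLI, a developer can run a job from the project's CI config locally, along these lines (the exact flags have varied between CLI releases, so treat this as an assumption and check the docs):

```bash
# Run the "build" job from .circleci/config.yml in a local container
# instead of on the CI servers. Flag syntax is version-dependent; newer
# CLI releases accept the job name as a positional argument.
circleci local execute --job build
```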

We decided to minimize the amount of CI-specific code in the Zuul playbooks and leave only "one-line" invocations of code stored in the source repos. The interface to building and testing is therefore now nicely wrapped in Makefiles, Dockerfiles, RPM spec files and the like. These take care of setting up their runtime environment, so that they run smoothly without any prior manual setup. With this setup in place, it was easy to pack the scripts into a container image and publish it as a base platform for building the project. The contact surface between the container and the sources is really small, which minimizes the copying of commands and also removes the need to duplicate hard-coded dependencies (except for some basic bootstrapping tools like the version control system used to pull further projects).
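As a hedged illustration of what such a wrapper can look like (target, image and file names are hypothetical, not our actual setup; recipes are indented with tabs):

```makefile
# Makefile -- a sketch of the "one-line invocation" interface.
IMAGE ?= project-build-env

.PHONY: image build test

image:          # Build the base container holding all build dependencies.
	docker build -t $(IMAGE) -f Dockerfile.build .

build: image    # Compile inside the container; sources are bind-mounted.
	docker run --rm -v $(CURDIR):/src -w /src $(IMAGE) ./scripts/build.sh

test: image
	docker run --rm -v $(CURDIR):/src -w /src $(IMAGE) ./scripts/run-tests.sh
```

A Zuul playbook, just like a developer at their desk, then only ever runs "make build" or "make test".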

Docker, which was used in this example, and containerization in general are not universal cures for everything, and are in fact quite flawed. However, they excel in one area: making environments isolated and reproducible, which is a very appealing trait when you are designing your development workflow. Regardless of whether you decide to treat the container images as final artifacts, use something along the lines of the build container pattern, or provide an interactive container with all the development tools required for working on your project, containers will help you keep the dependencies clean. They will also allow you to support users working on different platforms, even if you prepare only one "official" environment. People using Linux, Windows and macOS can run the same containers and benefit from automating their setups. This means less work for you and more convenience for them. Just remember not to make any assumptions about the host system, and keep all the important actions inside the containers. Your users won’t love you if you ask them to execute some Ubuntu-specific commands before starting the container workflow; doing so makes life harder for everybody not using that particular system.
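A base image for the build-container approach can be as small as this sketch (the distribution and package list are hypothetical; the point is that everything lives in the image, so the host only needs Docker itself):

```dockerfile
# Dockerfile.build -- a hypothetical base image for building the project.
FROM ubuntu:20.04

# Only basic bootstrapping tools plus the toolchain; sources are mounted
# at run time, which keeps the image/source contact surface small.
RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        git make gcc python3 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /src
CMD ["./scripts/build.sh"]
```

Developers on any host OS can then get an interactive build environment with a single command such as: docker run --rm -it -v "$PWD":/src -w /src project-build-env bash (the image name is, again, hypothetical).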

What to remember to deliver faster develop-build-test-deploy workflows

Start with some basic DevOps practices. Think about how many building and testing guides are stored in non-scripted locations like documentation, wikis and readmes. Manual instructions are very likely to become outdated, as they can't be automatically verified and can break when the sources change. Keep in mind the inevitable fate of mistreated sources: unused code will rot and untested code will break. It's better to keep all the commands in a script alongside the source code and run it during testing. This way, every change will have to conform to the procedure defined in the script, or modify the script so that it still works after the change merges. Finally, the scripts can be used in multiple places, like the CI, build and development environments, so that you won't have to think about keeping them in sync.

So let’s boil this all down to a handful of essential tips:

  1. Make sure you keep all your dependency information and environment preparation instructions along with the source code.
  2. Make sure that it's scripted and that you're able to test it in CI, so it's always up-to-date (it’s better to use the scripts in CI and build systems so that they're required to work at all times).
  3. Choose a platform (distribution/version) that you'll support for the development environment. Extra tip: tools like Docker and Vagrant can make it easier to expand the support to other OSes.
  4. Try to step into the developer's shoes from time to time and prepare a change in the development environment you have created. Any deficiencies or design flaws will become obvious, giving you the opportunity to make the tool even better.

Jarosław Łukow

Senior DevOps Engineer

As an engineer at CodiLime, Jarek handles infrastructure, automation and streamlining software development processes. His multidisciplinary expertise covers cloud technologies, virtualization, computer networks and SDN solutions. He is passionate about cloud-native technologies and the newest trends in...
