On the face of it, people could be forgiven for thinking that this headline might be true: surely DevOps replaces the need for a lot of the old-world processes and tools? In smaller-scale organisations or teams it may well be that your DevOps processes and tools encompass the necessary configuration management (CM) processes and principles. But at any scale, you will find that a sound understanding of CM is central to successful DevOps: keeping track of who did what, when, how and why, and of its deployment status, requires slick, modern CM tools and processes that operate at the speed DevOps demands.
Many organisations at least aspire to implement DevOps, even if that doesn’t yet reflect all their development and deployment practices. As Gene Kim writes: ‘DevOps is more like a philosophical movement, and not yet a precise collection of practices, descriptive or prescriptive.’ The CALMS framework (culture, automation, lean, measurement and sharing) is one way of understanding it.
Early DevOps leaders were online (webops) organisations such as Flickr and Netflix, deploying to servers over which they had total control. Amazon, with its web services (AWS) offerings, is both an enabler for others and an exemplar of good practice itself: 50 million deploys a year was a recent headline figure (including deployments to internal environments).
Your development tool stack might include Ruby on Rails, Java/.NET or other languages and frameworks, topped with a layer of JavaScript. You will have plenty of automated tests, including unit and integration tests, and continuous integration servers providing feedback on every build. With a DevOps pipeline, you treat infrastructure as code and version it through to deployment.
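As a minimal sketch of what ‘infrastructure as code’ means in practice (Terraform and the AWS resource shown are illustrative assumptions - the article names no particular tool, and the region, AMI ID and instance type are placeholders), the definition of a server lives in a versioned text file rather than being configured by hand:

    # main.tf - an illustrative server definition, versioned alongside the
    # application code; region, AMI ID and instance type are placeholders
    provider "aws" {
      region = "eu-west-1"
    }

    resource "aws_instance" "web" {
      ami           = "ami-0123456789abcdef0"
      instance_type = "t2.micro"

      tags = {
        Name = "web-server"
      }
    }

Because the definition is just text, it can be reviewed, versioned and promoted through the pipeline exactly like application code.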
If you are doing ‘good DevOps’ then you are most likely satisfying basic CM principles. These include configuration identification (defining the items under control, classifying them, naming conventions and so on) and configuration control and status accounting (the management of versions and the creation of baselines).
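As a minimal illustration of identification and baselining in a Git-based workflow (the ‘release/2.3.0’ tag naming convention is an assumption, not something the article prescribes), a release baseline can be recorded as a named, annotated tag and accounted for later:

    # Record an immutable, named baseline of everything that makes up a release.
    # 'release/2.3.0' is an illustrative naming convention.
    git tag -a release/2.3.0 -m "Baseline for release 2.3.0"
    git push origin release/2.3.0

    # Status accounting: list existing baselines and inspect what each contains.
    git tag --list "release/*"
    git show release/2.3.0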
So how much CM experience do you need in your team - isn’t it all being handled for you? The issues start to surface particularly with larger teams, more complex and older systems, and the existing processes you already use successfully to manage change.
The rise of the DevOps engineer
DevOps came from a desire to increase co-operation between teams with different skill sets. This requires an understanding of the different disciplines, but communication across specialities isn’t always easy. In a smaller startup, a multi-skilled (development, testing and operations) DevOps engineer may be able to address all your needs.
Some larger organisations try to adopt this solution, gently pushing out the door some of their existing people - in operations, security, CM or database administration - who may have difficulty cross-skilling. But their depth of experience is then missed. Experienced CM people understand the following sorts of issues.
Unconscious dependencies
To build a Java or JavaScript project with open source components, you configure your system to point at the relevant third-party repositories and run the relevant build command (e.g. Maven’s ‘mvn’ or ‘npm install’). All the dependencies are auto-magically fetched and built.
A single top-level library may depend on tens or even thousands of other components. It is not always easy to see this tree of dependencies, or to appreciate that each included item is a particular version (and that other versions may not work - or at least haven’t been tested).
If you run the same command again tomorrow, some of those items may have been updated and new versions will be fetched. Most of the time this works. But the ease with which it can be done can disguise two issues relevant to CM:
- reproducibility of the build (see the sketch after this list);
- relying on third-party sites/repositories that you don’t directly control.
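As a sketch of how the reproducibility issue can be mitigated on the npm side (‘example-lib’ and its version are placeholders, and ‘npm shrinkwrap’ is one mechanism - newer npm versions generate a package-lock.json automatically), you can pin exact versions and commit the resolved dependency tree:

    # Pin an exact version instead of a floating range such as "^2.1.0", then
    # record the fully resolved dependency tree so every build fetches
    # identical versions. 'example-lib' is a placeholder package name.
    npm install --save --save-exact example-lib@2.1.0
    npm shrinkwrap
    git add package.json npm-shrinkwrap.json
    git commit -m "Lock dependency tree for reproducible builds"

Maven builds are pinned by the explicit versions declared in the POM, provided you avoid version ranges and SNAPSHOT dependencies in releases.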
In March 2016 there was a high-profile incident in which a developer removed all of his JavaScript packages from the npm package repository. It was quickly discovered that one of them (left-pad) had been included by thousands of other projects and packages, which would not build without it - a major disruption.
So how do we mitigate this sort of risk? A straightforward way is to put a proxy between your developers and the public repositories (or at least between your build servers and those repositories). The proxy downloads the required items and also caches them, enabling you to reproduce the build without further external dependencies. You can keep the cache for as long as your internal processes require.
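As a sketch of the npm side of this (the hostname and repository path are placeholders for your own caching proxy, such as a Nexus or Artifactory instance), a checked-in .npmrc redirects dependency resolution through the internal proxy:

    ; .npmrc - point npm at an internal caching proxy instead of the public registry
    ; (the hostname and repository path below are placeholders)
    registry=https://nexus.internal.example.com/repository/npm-proxy/

Maven builds can be routed the same way with a <mirror> entry in settings.xml whose <mirrorOf>*</mirrorOf> sends all repository traffic through the proxy.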
Versioning of binary assets - how many sources of truth?
Large binary assets are common in games development as well as other markets (games often generate terabytes of graphics, video and rendered assets). These need to be managed in coordination with the source code your developers are producing so that you can reproduce a particular release of your software together with all its dependencies.
What tools are you using for version control of the source, and for versioning the binary assets? The more tools you have, the greater the management overhead of maintaining multiple sources of truth.
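One possible approach (Git LFS is an assumption here - the article prescribes no particular tool - and the file patterns and paths are illustrative) is to keep a single source of truth by versioning the large binaries through the same repository as the source, with the heavy content held in a dedicated large-file store:

    # Track large asset types with Git LFS so code and assets share one history.
    git lfs install
    git lfs track "*.psd" "*.fbx" "*.mp4"
    git add .gitattributes assets/
    git commit -m "Version large art assets alongside the source that uses them"

The trade-off is that the large-file store itself becomes another system to back up and manage, so it needs to be part of the same CM thinking.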
Shiny tools and technologies
Like magpies, developers can be attracted to the latest shiny tool or technology without always considering the wider implications.
One example is containerisation technology such as Docker, which has grown very rapidly in adoption, particularly in development and test phases. The principles are very attractive: once created, the container and all its dependencies, down to operating system packages, move as one unit through the lifecycle from development to production - so it makes CM of your environments easier. But the technology is still evolving very rapidly, and it is not yet as easy to achieve rock-solid production reliability with it as with more mature technologies.
As with other binary assets, you also have to store and version these containers and manage them in the longer term.
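A minimal sketch of what that can look like (the image name, packages and registry host are placeholders) is to pin the base image to an explicit version rather than a floating ‘latest’ tag, and to store each built container in your registry under an immutable version tag so it can be traced like any other configuration item:

    # Dockerfile - pin the base image explicitly so the environment is reproducible
    FROM ubuntu:16.04
    RUN apt-get update \
        && apt-get install -y --no-install-recommends openjdk-8-jre-headless \
        && rm -rf /var/lib/apt/lists/*
    COPY target/app.jar /opt/app/app.jar
    CMD ["java", "-jar", "/opt/app/app.jar"]

    # Build and push under an explicit version tag (names are placeholders).
    docker build -t registry.internal.example.com/myteam/app:2.3.0 .
    docker push registry.internal.example.com/myteam/app:2.3.0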
Other tools that tend to change frequently include discussion and workflow tools - maybe a SaaS tool such as Slack. They work very well, but it can be a challenge to ensure that discussions are linked to the requirements and issues that are then implemented as changes to code. Without this you lack traceability from requirements through to testing and deployment - important for most organisations, and vital in some market sectors.
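A simple way to preserve that thread (the ‘PROJ-1234’ issue key is a placeholder - most issue trackers support some equivalent) is to reference the requirement or issue ID in every commit, so the chain from discussion to deployed change can be reconstructed:

    # Reference the driving issue or requirement in each commit message...
    git commit -m "PROJ-1234: validate upload size before rendering"

    # ...so that every change made for that requirement can be recovered later.
    git log --oneline --grep="PROJ-1234"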
Another factor to consider with such tools is their migration capabilities - what happens to all your data if you switch to a different tool next year?
Existing change management
Most large organisations have implemented aspects of ITIL for their infrastructure and services: the various service management processes are in place and risk has been reduced. For example, CAB (Change Advisory Board) approval may be mandatory before a certain type of release can be deployed.
A DevOps implementation has to integrate with these existing processes to be successful - and there is no reason it can’t! The key is a good understanding of the principles involved and then finding appropriate ways to satisfy those principles while achieving the goals of DevOps.
Intelligent evolution
The larger your organisation, the more important it is to have deep skills in all relevant areas, including CM, and to make sure they are all involved in your DevOps implementation. There is no one-size-fits-all solution, but rather an intelligent evolution which ensures the core principles are satisfied while still achieving the gains of a DevOps approach. Knowing the right questions to ask, and ensuring they are asked at the right time, is vital.