In this article we will discuss some of the many challenges that present themselves as organizations adopt microservices, Docker containers, and continuous delivery practices. This blog post isn’t aimed at solving all your problems, but at giving you an idea of where you will likely encounter friction and how you might go about solving these issues organically, in a way that aligns with your organization.
When to use Docker Containers
You’ll find hordes of articles about leveraging Docker containers and microservice patterns, so I won’t repeat what’s already been said in at least 20 other places. Something that is often overlooked is that Docker allows us to package almost any application built in the past 10 years (longer, if you’re in for a challenge) and deliver that application using a vast landscape of mature tooling: Kubernetes, Marathon on DC/OS, Docker Swarm, and now Netflix’s Titus. This is the secret sauce behind Docker: it allows for an opinionated approach to delivering software. That software can be microservices or legacy services, each with its own set of challenges. If you can master and automate the tooling for packaging your applications, you will be in a great position to deliver the next generation of services for your business: microservices.
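To make the packaging point concrete, here is a minimal sketch of a Dockerfile for a legacy application. The base image, JAR name, and port are illustrative assumptions, not details from any particular project:

```dockerfile
# Hypothetical example: packaging a legacy Java service that ships as a fat JAR.
# The image, file name, and port below are placeholders for your own app.
FROM eclipse-temurin:8-jre
WORKDIR /app
COPY target/legacy-billing-service.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

A few lines like these are often all it takes to make a decade-old service deliverable through the same tooling as your newest microservices.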
What really matters? Pipelines.
As an engineer, I will take every opportunity to experiment with the newest and most popular tools, languages, frameworks, and platforms available. This is understandable, as tools such as Bamboo or Mercurial can certainly feel outdated, or burdened with what seems like excessive manual intervention to automate properly. Be careful here: every new technology you adopt adds complexity for the entire organization as it assimilates the new process and ecosystem.
Carefully evaluate the technologies that already exist in your organization; if you can find ways to fully automate with those solutions, you should. Nevertheless, you will likely be forced to upgrade some older technologies and replace others outright. If setting up or defining your build process requires any sort of manual intervention, it will not be sustainable. A small team of engineers (10) can produce several new microservices a month. There is no time for GUIs in this brave new world. Adopting containers and microservices requires that every component in the software build and delivery life cycle be described via configuration, and that actions be triggered through an API or from an event.
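What “described via configuration” looks like in practice is a pipeline definition that lives in source control. Here is a hedged sketch of a declarative Jenkinsfile; the stage layout, registry URL, and script names are illustrative assumptions, not a prescription:

```groovy
// Hypothetical declarative Jenkinsfile: the registry, image name, and
// test script are placeholders — substitute your own.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t registry.example.com/orders-svc:${GIT_COMMIT} .'
            }
        }
        stage('Test') {
            steps {
                sh 'docker run --rm registry.example.com/orders-svc:${GIT_COMMIT} ./run-tests.sh'
            }
        }
        stage('Publish') {
            steps {
                sh 'docker push registry.example.com/orders-svc:${GIT_COMMIT}'
            }
        }
    }
}
```

Because the whole process is declared in a file, every new microservice gets a pipeline by copying a few lines of configuration rather than by clicking through a GUI.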
I cannot overstate how important it is to automate your build and delivery pipeline. In many cases you’ll find yourself only partially automated at first, but you will quickly recognize the patterns needed to realize end-to-end automation. Don’t let this work fall by the wayside. Constantly improve and standardize your pipeline, or you will feel the consequences later, typically as bottlenecks and increased complexity once your engineers begin to rapidly author new microservice applications. There is something else that can be non-trivial for organizations to automate, so let’s dive into it.
Legacy Change Management
Organizations with existing change management policies will quickly become inundated by the rapid pace of innovation. Your teams will realize that the bulk of their change management must be done at the onset of their projects and tasks. The need to properly measure and manage the effects of each change will become increasingly apparent as you realize how quickly you can make changes to your infrastructure and applications. There is no magic bullet for this; careful planning and communication are required to evolve this very important business process.
A number of organizations have learned to leverage feature toggles and canary deployments, and have mastered the art of deploying thousands of changes a day. It can be challenging for most organizations to fathom leveraging these techniques, let alone begin to implement them. Start small: a feature flag here or there. Modern tooling such as Jenkins and Kubernetes is beginning to make canary deployments manageable for even small teams.
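Starting small really can mean a handful of lines. Below is a minimal in-process feature-flag sketch in Python, assuming a simple dictionary as the flag store; the flag names and percentage-rollout logic are illustrative, not any vendor’s API:

```python
import hashlib

# Minimal in-process feature-flag store. Each flag is either off, or on
# for a percentage of users — a small step toward canary-style releases.
# Flag names and percentages here are purely illustrative.
FLAGS = {
    "new-checkout": {"enabled": True, "rollout_percent": 25},
    "dark-mode": {"enabled": False, "rollout_percent": 0},
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Return True if `flag` is on for this user.

    Users are bucketed deterministically by hashing flag + user id,
    so the same user always gets the same answer for a given flag.
    """
    config = FLAGS.get(flag)
    if not config or not config["enabled"]:
        return False
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < config["rollout_percent"]
```

Roughly a quarter of users would see “new-checkout” with this configuration, and flipping `enabled` to `False` acts as an instant kill switch — which is exactly the kind of lever change management needs.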
The point here is that automated change management must be striven for in any organization leveraging microservices, but sometimes we’re faced with the fact that these approaches are not yet democratized enough for trivial implementation.
Monitoring and observing the change in your organization
You may not have canary deployments, or your change management process fully pipelined, but what you do have is a business demanding that you innovate and release more software, quickly. More! Regardless of where you are in your container journey, the only way you’re going to be successful is if you can observe and monitor the health of your infrastructure and of the applications running on your container platform.
While you’re deciding how to approach container monitoring, there are some big questions you’ll need to answer. As the complexity of your application rises, do you have a team in place to constantly configure, refine, and operate your monitoring stack? Do you even want a team dedicated to managing your monitoring tools? If so, there are a number of open source tools that can help you observe custom business metrics and answer questions about the general health of your organization. Teams are realizing the benefits of leveraging Prometheus to instrument key business metrics and graph them over time with applications such as Grafana. These solutions are not turnkey, however; they require significant effort to implement, maintain, and evolve over the course of your application’s lifetime.
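To make “instrumenting a key business metric” concrete, here is a rough Python sketch that renders a custom counter in Prometheus’s plain-text exposition format. In practice you would use the official prometheus_client library; the metric name `orders_total` and its labels are illustrative assumptions:

```python
from collections import Counter

# Sketch only: a real service would use the prometheus_client library.
# The metric name `orders_total` and the `region` label are made up
# for illustration.
orders = Counter()

def record_order(region: str) -> None:
    """Increment the per-region order counter from business code."""
    orders[region] += 1

def render_metrics() -> str:
    """Render the counters in Prometheus text exposition format,
    the payload a /metrics endpoint serves for Prometheus to scrape."""
    lines = [
        "# HELP orders_total Total orders processed.",
        "# TYPE orders_total counter",
    ]
    for region, count in sorted(orders.items()):
        lines.append(f'orders_total{{region="{region}"}} {count}')
    return "\n".join(lines) + "\n"
```

Once Prometheus scrapes output like this, a Grafana panel graphing `orders_total` over time turns a business question (“are we still taking orders?”) into a dashboard.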
Most companies want to focus on building software for their business instead of babysitting their monitoring solutions. This is where vendors such as Instana come in: they automatically collect information about your ecosystem and, through tracing, machine learning, and artificial intelligence, provide deep insight into the life cycle of your containerized applications and meaningful data about the highly complex interactions between your container workloads.
As your organization revolutionizes its software engineering teams by institutionalizing DevOps practices such as continuous deployment, testing, and integration, and by leveraging cloud-native twelve-factor applications, Docker containers, and serverless, always keep in mind that you need a solution in place to monitor and observe these new systems. Given how heavily we’ve spoken about the benefits of automation throughout this article, shouldn’t your monitoring be just as automated?