What is AWS Fargate?
AWS Fargate is a layer underneath ECS and EKS that provides additional automation, managing the provisioning of the compute resources that run orchestrated workloads. Fargate allows you to run containers without having to manage the hosts that the cluster runs on. Currently Fargate is available for ECS (Elastic Container Service), with support for EKS (Elastic Container Service for Kubernetes) coming in 2018.
ECS – The Elastic Container Service
Elastic Container Service is Amazon’s own Docker-compatible container orchestration service. Clusters are created from EC2 instances, and tasks (batch jobs or long-running services) are then defined to run on those clusters. EC2 resources may be added to and removed from clusters to scale capacity up and down. An ECS agent runs on each host in the cluster and reports back its resource availability and task status. ECS looks at the resources required to run the defined tasks and distributes those tasks across the available resources. The ECS agent running on each host pulls the requested Docker image from the registry and runs it. This master-and-agent model is similar to that used by other container orchestration engines such as Kubernetes, Docker Swarm and Apache Mesos.
ECS also provides a private Docker image registry with full permission controls, though it’s possible to use any other Docker registry, public or private. In summary, the basic steps involved in launching your microservice on ECS are:
- Create your Docker image
- Push it to a registry
- Create an ECS cluster of EC2 hosts
- Create a task definition, for either a batch job or a long-running service
- Launch the task
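As a sketch, the same steps look roughly like this with the AWS CLI. All names, the account ID and the region are placeholders, an ECR repository called my-app is assumed to already exist, and a real setup needs the task definition JSON and networking details filled in:

```shell
# 1. Build the Docker image locally
docker build -t my-app:latest .

# 2. Push it to a registry (here, ECR)
$(aws ecr get-login --no-include-email)
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest

# 3. Create an ECS cluster (EC2 hosts are then registered into it)
aws ecs create-cluster --cluster-name my-cluster

# 4. Register a task definition from a JSON file
aws ecs register-task-definition --cli-input-json file://task-def.json

# 5. Launch the task as a long-running service
aws ecs create-service --cluster my-cluster --service-name my-service \
    --task-definition my-app --desired-count 2
```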
EKS – Elastic Container Service for Kubernetes
Elastic Container Service for Kubernetes is Amazon’s managed container orchestration service built with Kubernetes; see the previous introduction to Kubernetes to learn more. When an EKS cluster is created, three masters are automatically created in separate availability zones. These masters are fully managed by Amazon to ensure that they are available and fully patched. Workers are added to EKS by manually creating EC2 instances and joining them to the cluster. You now have a full Kubernetes environment up and running without having to do a lot of work yourself or being a genius. The resulting Kubernetes service is just like any other, and the regular tools and extensions will work. This is quite cool. You can develop your microservices application on your laptop using minikube, then easily deploy it at large scale to EKS using the same toolchain.
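As a small illustration of that portability, a standard Kubernetes Deployment manifest like the sketch below (the name, image and port are placeholders) can be applied unchanged to a local minikube cluster or to EKS with `kubectl apply -f deployment.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      containers:
      - name: my-microservice
        # Placeholder image; on EKS this would typically point at ECR
        image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
        ports:
        - containerPort: 8080
```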
Fargate Automated Provisioning
ECS and EKS both provide different flavours of container orchestration, so what does Fargate do? You hopefully noticed an important step when creating both ECS and EKS services: you need to manually add EC2 resources to provide the compute power to run your containerised services. While this gives you plenty of control, it would be nice to have that work done for you. That’s what Fargate does. It’s the option to have the EC2 resources provisioned automatically for your ECS or EKS instances. You specify how much resource each microservice needs and Fargate will ensure there is enough EC2 capacity available to run that load. As microservices are scaled up and down, Fargate will scale the underlying EC2 resources up and down to match.
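For example, on ECS a Fargate task declares its CPU and memory requirements in the task definition itself, rather than relying on you sizing the EC2 hosts. A minimal sketch (the family, image and sizes are placeholders) might look like:

```json
{
  "family": "my-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "portMappings": [{ "containerPort": 8080 }]
    }
  ]
}
```

The task-level `cpu` and `memory` values are what Fargate uses to provision capacity; with the classic EC2 launch type those same needs are met by whatever instances you add to the cluster yourself.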
Ubiquitous Kubernetes and the Orchestration Battle
With recent announcements from Mesos, Docker and Amazon stating their support, it appears that Kubernetes has taken a big lead in the battle to become the ubiquitous orchestration engine. If you want to use Kubernetes orchestration, there is a wide choice of service providers whose offerings include Kubernetes:
- AWS EKS
- Google Kubernetes Engine
- IBM Cloud
- Microsoft Azure
- Red Hat OpenShift
While all these providers offer vanilla Kubernetes, they also differentiate their offerings with additional service add-ons; Amazon has the widest range of options here. The wide choice available should result in healthy competition between the providers.
Amazon has done a good job of reducing the complexity of managing container orchestration and resource allocation when using Fargate with ECS and EKS. CloudWatch and X-Ray provide raw metrics and distributed tracing respectively, which can help with troubleshooting performance problems. However, threshold alarms need to be manually configured and maintained, and collected traces must be sorted through by hand. Acquiring a gigantic bucket of observed data about your application and its environment is a good start, but comprehending this volume of data is difficult if not impossible for the human mind; you need machine intelligence to make sense of it all.
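To illustrate the manual effort involved, here is a sketch of configuring a single threshold alarm on an ECS service’s CPU with the AWS CLI; every name, the threshold and the SNS topic ARN are placeholders, and one such alarm is needed for each metric on each service you care about:

```shell
# Alarm when the service averages over 80% CPU for two 5-minute periods.
aws cloudwatch put-metric-alarm \
    --alarm-name my-service-high-cpu \
    --namespace AWS/ECS \
    --metric-name CPUUtilization \
    --dimensions Name=ClusterName,Value=my-cluster Name=ServiceName,Value=my-service \
    --statistic Average \
    --period 300 \
    --evaluation-periods 2 \
    --threshold 80 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```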
Stay tuned – soon I’ll publish a post on how to effectively monitor these highly dynamic and extensively automated Fargate environments, and on the bigger question of ensuring that your monitoring solution can keep up with the pace of constant change and make sense of the large pool of monitoring data such systems produce. In the meantime, you can get a feel for this by reading our post on monitoring OpenShift applications.