Since its release in 2014, Kubernetes has revolutionized the world of application orchestration. Its platform-agnostic design has driven adoption everywhere from traditional data centers to the public cloud, and even to satellites in outer space! However, due to its complexity, special consideration should be given to its security configuration. This is especially true when leveraging a cloud service provider (CSP)-managed cluster such as Amazon Elastic Kubernetes Service (EKS). For starters, Amazon EKS is compliant with major security standards such as SOC, HIPAA, and PCI DSS, and is in progress toward FedRAMP and DoD CC SRG accreditation. The service is also available in the AWS GovCloud regions for ITAR-compliant workloads and applications.
EKS is a fully managed service from Amazon Web Services (AWS) that offers easy out-of-the-box integrations with services such as IAM and KMS, and monitoring through CloudTrail and GuardDuty, among others. These integrations greatly simplify implementing FIPS 140-2 encryption, authentication, authorization, and policy-based security models.
Here are the Top 10 security considerations for organizations deploying Amazon EKS. This list is based on lessons we have learned advising our customers on proper EKS configurations. It isn’t meant to be exhaustive, and it is by no means in order of importance.
1. Secret Encryption – This is a biggie and can only be done at cluster launch. Kubernetes secrets are a way of storing sensitive configuration information so it can be accessed by the pods in your cluster. Commonly used secrets are passwords for applications, TLS certificates, and license files. By default, all secrets are Base64-encoded in etcd to prevent them from being readily exposed to human eyes. But Base64 encoding is far from encryption – would you really want to store your admin passwords or the private key to your web application on someone else’s server in an unencrypted format? There are several third-party Kubernetes secrets encryption tools that you can deploy, but when using Amazon EKS the easiest way is to enable secrets encryption via AWS KMS. This will encrypt your secrets inside etcd so only someone with the key can decrypt the data. And since KMS uses FIPS 140-2 validated ciphers and is FedRAMP Moderate authorized, you can be sure that the encryption and key storage are held to the highest standards.
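If you provision clusters with eksctl, envelope encryption can be declared at creation time. A minimal sketch of the relevant ClusterConfig fragment – the cluster name, region, and KMS key ARN below are placeholders, not values from this article:

```yaml
# eksctl ClusterConfig fragment: KMS envelope encryption of Kubernetes secrets
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster          # placeholder cluster name
  region: us-east-1         # placeholder region
secretsEncryption:
  # Customer-managed KMS key used to envelope-encrypt secrets in etcd
  keyARN: arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
```

The same result can be achieved with the `encryptionConfig` parameter of the EKS CreateCluster API.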
2. Private Control Plane – Controlling access to your control plane at the network layer should be the first thing you consider when launching your cluster. Best practice is to limit access to the control plane to internal networks only; that way, only users on VPNs or inside your corporate network will be able to administer the cluster. Hybrid public/private clusters should be avoided as much as possible and limited to scenarios where an external system, such as a CI/CD SaaS product, needs to communicate with your cluster.
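In eksctl terms, a private-only control plane endpoint looks roughly like this – a sketch of the `vpc` section of a ClusterConfig, assuming private connectivity (VPN or Direct Connect) to the VPC already exists:

```yaml
# eksctl ClusterConfig fragment: restrict the API server endpoint to the VPC
vpc:
  clusterEndpoints:
    privateAccess: true     # reachable from inside the VPC / over VPN
    publicAccess: false     # no control-plane access from the internet
```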
3. RBAC – Role-based access control is essential to Kubernetes security. Kubernetes offers native Role and ClusterRole objects to control how administrators and service accounts interact with the API. Always follow the principle of least privilege. Using namespaces to segregate the different applications in your cluster allows for more granularity when creating RBAC roles.
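As an illustration of namespace-scoped least privilege, here is a minimal Role and RoleBinding; the `web` namespace and the `jane` user are hypothetical:

```yaml
# Role granting read-only access to pods in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: web            # hypothetical application namespace
  name: pod-reader
rules:
- apiGroups: [""]           # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind the Role to a single user – no cluster-wide rights granted
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: web
  name: read-pods
subjects:
- kind: User
  name: jane                # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```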
4. Audit Logging – Any security engineer will tell you that audit logs are essential to any system, and Kubernetes is no different. EKS has several logging options:
a. API Server – all cluster API requests
b. Audit – all changes and requests to Kubernetes
c. Authenticator – authentication requests
d. Controller Manager – state and actions of cluster controllers
e. Scheduler – Scheduling decisions
It is best to enable all logging options and ship the logs to a SIEM. However, since EKS ships the logs to CloudWatch (said to be the secret cash cow of AWS), when cost must be considered, enabling audit logs should be the bare minimum for security in EKS.
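The five log types above map directly onto eksctl’s `cloudWatch.clusterLogging` setting. A sketch enabling all of them – trim the list if CloudWatch cost is a concern:

```yaml
# eksctl ClusterConfig fragment: ship all control-plane log types to CloudWatch
cloudWatch:
  clusterLogging:
    enableTypes:
      - api
      - audit               # the bare minimum when cost must be considered
      - authenticator
      - controllerManager
      - scheduler
```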
5. Deny-All Default Network Policy – Zero-trust architectures are becoming the new standard for security. The simplest way to implement zero trust is to start by denying all inter-pod communication with a network policy (think of it as AWS Security Groups for Kubernetes), then add allow policies for each individual service that needs to reach another service – e.g., NGINX pods communicating with MySQL pods.
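Sketched as manifests, that pattern is a default-deny policy plus one allow rule per flow. The `web` namespace, pod labels, and MySQL port are illustrative assumptions; a network policy engine such as Calico or Cilium must be installed for these to take effect:

```yaml
# Deny all ingress and egress for every pod in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: web            # hypothetical namespace
spec:
  podSelector: {}           # empty selector = every pod
  policyTypes:
  - Ingress
  - Egress
---
# Explicitly allow NGINX pods to reach MySQL pods on port 3306
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nginx-to-mysql
  namespace: web
spec:
  podSelector:
    matchLabels:
      app: mysql            # assumed label on the MySQL pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: nginx        # assumed label on the NGINX pods
    ports:
    - protocol: TCP
      port: 3306
```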
6. Limit access to AWS API – There may be times when individual pods need to interact with AWS services to perform their function. Every pod on a worker node may not require the same level of access to the AWS API, so relying on the instance profile may grant pods unnecessary access. AWS recommends its native IAM Roles for Service Accounts (IRSA), which lets you create a service account, assign it to a pod, and allow the service account to assume an IAM role via OIDC. However, this usually requires updating the container images to a recent AWS SDK. When that is not feasible, consider deploying a tool such as kube2iam that allows each individual pod in the cluster to use its own IAM role rather than relying on the instance profile of your worker nodes.
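With IRSA, the wiring is an annotated service account referenced from the pod spec. A sketch – the names, namespace, image, and role ARN are placeholders, and the cluster needs an OIDC provider associated with it:

```yaml
# ServiceAccount annotated with the IAM role its pods may assume
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader
  namespace: web            # hypothetical namespace
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/s3-read-only
---
# Pod that runs under that service account instead of the node instance profile
apiVersion: v1
kind: Pod
metadata:
  name: reporting
  namespace: web
spec:
  serviceAccountName: s3-reader
  containers:
  - name: app
    image: my-app:latest    # must bundle a recent enough AWS SDK for IRSA
```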
7. Use encrypted PersistentVolumes and StorageClasses – It should go without saying, but encrypting data at rest should be the default for any application these days. This is no different when storing persistent data in Kubernetes. If you are using the EBS CSI driver, make sure the volumes are encrypted; if you are using EFS, make sure the file system is encrypted; any data at rest should be encrypted. This can be a deal-breaker for many compliance auditors, especially in HIPAA, PCI, and FedRAMP scenarios.
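For the EBS CSI driver, encryption can be made the default by baking it into the StorageClass. A sketch – the class name and KMS key ARN are placeholders:

```yaml
# StorageClass that provisions KMS-encrypted gp3 EBS volumes by default
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-encrypted       # placeholder name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"         # every PersistentVolume from this class is encrypted
  kmsKeyId: arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```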
8. Update your control plane and workers regularly – Like all systems in the IT world, vulnerabilities in Kubernetes will arise, and they will need to be patched. Keeping your Kubernetes environment on the latest version supported by your application manifests is essential. And if your manifests use very old Kubernetes APIs, it may be time to update them.
9. Use Managed Node Groups – While AWS does offer the ability to bring your own worker nodes in EKS, why bother? Managed Node Groups provide an excellent security posture – you can configure them to block all remote access, and they can be upgraded seamlessly using the AWS API. Worker nodes require a very specific sequence to upgrade properly while minimizing downtime, and even the most seasoned Kubernetes admins can make mistakes. With Managed Node Groups, AWS automates the cordoning, draining, and rescheduling of pods on your worker nodes during an upgrade. Don’t reinvent the wheel.
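In eksctl, a locked-down managed node group is a few lines. A sketch with illustrative sizing – the group name and instance type are assumptions:

```yaml
# eksctl ClusterConfig fragment: managed node group with no remote access
managedNodeGroups:
  - name: secure-workers    # placeholder name
    instanceType: m5.large  # placeholder instance type
    minSize: 2
    maxSize: 5
    privateNetworking: true # place nodes in private subnets only
    ssh:
      allow: false          # block all remote SSH access to the nodes
```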
10. Admission Controllers – Especially in CI/CD-driven environments, it is essential to control what pods are allowed into your cluster. Admission controllers can prevent you from deploying insecure containers, containers with undesired storage configurations, containers with overly permissive OS access, and much more. While most companies are shifting security to the “left,” that doesn’t mean security should be offloaded entirely onto developers.
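One built-in option on current Kubernetes versions (1.23 and later) is the Pod Security Admission controller, which enforces the Pod Security Standards per namespace via labels; third-party policy engines such as OPA Gatekeeper or Kyverno offer finer-grained rules. A sketch using a hypothetical namespace:

```yaml
# Namespace that rejects pods violating the "restricted" Pod Security Standard
apiVersion: v1
kind: Namespace
metadata:
  name: web                 # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted  # block non-compliant pods
    pod-security.kubernetes.io/warn: restricted     # also warn on kubectl apply
```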
Have questions about Amazon EKS? Contact us and our Cloud Solutions Specialist will get in touch with you!