How to deploy Helm charts to an EKS cluster through AWS CloudFormation

Problem statement

There are several ways to create an EKS cluster in AWS:

  • Web console or CLI
  • eksctl tool
  • Terraform, CloudFormation or other IaC tools
  • Third-party products

In most cases an empty Kubernetes cluster is not enough: we usually also need an Ingress Controller, Cluster Autoscaler, External DNS, Prometheus, etc. included in a default cluster set.

As you probably know, when an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM entity can make calls to the Kubernetes API server.
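Access for additional IAM entities is granted by adding them to the “aws-auth” ConfigMap in the kube-system namespace. For reference, a typical mapping looks like this (the account ID and role name are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/AdminEKSRole # illustrative ARN
      username: admin
      groups:
        - system:masters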

For example, in Terraform we can use the Kubernetes/Helm providers, or even a “local-exec” provisioner, to deploy the required Kubernetes resources into the cluster and patch the “aws-auth” ConfigMap to add extra IAM entities for cluster administration. Unfortunately, there is no such built-in functionality in AWS CloudFormation. If we have a multi-account (multi-region) AWS environment, we have to use CloudFormation StackSets, and in that case we cannot use anything other than CloudFormation templates. We can create the EKS cluster itself, a managed node group, and a Fargate profile, but we cannot manipulate Kubernetes entities such as pods, deployments, ConfigMaps, services, etc. Potentially this could be solved with a Lambda-backed custom resource, but there is another way.

The proposed solution

AWS CloudFormation has a registry where we can find public extensions from Amazon and third parties. The AWS Quick Start team has developed and published several CloudFormation extensions related to Kubernetes:

  • AWSQS::EKS::Cluster – github link
  • AWSQS::Kubernetes::Get – link
  • AWSQS::Kubernetes::Helm – link
  • AWSQS::Kubernetes::Resource – link

Before using an extension, you need to activate it in each relevant AWS region.

You have to choose an appropriate execution role, created in advance; every CloudFormation extension provides a template for the execution role with the required policies (example here). You can also configure logging for the extension, which can be useful for debugging if anything goes wrong during stack provisioning, and you can enable or disable automatic extension updates.
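When activating from a template, the same logging settings can be declared on the TypeActivation resource. Here is a minimal sketch, assuming a pre-created CloudWatch log group and logging role (both names are hypothetical):

  EKSClusterExtensionActivation: # sketch: activation with logging enabled
    Type: AWS::CloudFormation::TypeActivation
    Properties:
      AutoUpdate: false
      ExecutionRoleArn: !GetAtt EKSClusterExtensionRole.Arn
      LoggingConfig:
        LogGroupName: awsqs-eks-cluster-extension-logs # hypothetical log group
        LogRoleArn: !GetAtt ExtensionLoggingRole.Arn # hypothetical role CloudFormation uses to write logs
      PublicTypeArn: !Sub "arn:aws:cloudformation:${AWS::Region}::type/resource/408988dff9e863704bcc72e7e13f8d645cee8311/AWSQS-EKS-Cluster"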

As mentioned before, we use CloudFormation StackSets for multi-account (multi-region) provisioning, so we've created a separate CloudFormation template that deploys the required IAM execution roles and activates the required extensions.

# omitted code
  EKSClusterExtensionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: [ resources.cloudformation.amazonaws.com ] # plus any other principals required by the extension's role template
            Action: sts:AssumeRole
      Path: "/"
      Policies:
        - PolicyName: ResourceTypePolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - "eks:CreateCluster"
                  - "eks:DeleteCluster"
                  - "iam:PassRole"
                  - "sts:AssumeRole"
                  - "lambda:InvokeFunction"
                  - "lambda:CreateFunction"
                  # other required permissions
                Resource: "*"

  EKSClusterExtension:
    Type: AWS::CloudFormation::TypeActivation
    DependsOn: EKSClusterExtensionRole
    Properties:
      AutoUpdate: false
      ExecutionRoleArn: !GetAtt EKSClusterExtensionRole.Arn
      PublicTypeArn: !Sub "arn:aws:cloudformation:${AWS::Region}::type/resource/408988dff9e863704bcc72e7e13f8d645cee8311/AWSQS-EKS-Cluster"
# omitted code

The same procedure applies to the other required extensions, e.g. AWSQS::Kubernetes::Helm, as sketched below.
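Activating the Helm extension looks almost identical; a minimal sketch, assuming the same Quick Start publisher ID as above and a dedicated execution role (verify the exact PublicTypeArn in the registry):

  HelmExtension:
    Type: AWS::CloudFormation::TypeActivation
    Properties:
      AutoUpdate: false
      ExecutionRoleArn: !GetAtt HelmExtensionRole.Arn # hypothetical execution role for this extension
      PublicTypeArn: !Sub "arn:aws:cloudformation:${AWS::Region}::type/resource/408988dff9e863704bcc72e7e13f8d645cee8311/AWSQS-Kubernetes-Helm"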
Once the extension is activated, we can use it to create an EKS cluster with many more options than the default CloudFormation resource for EKS offers, e.g. extra IAM roles/users added to the aws-auth ConfigMap, control plane logging configuration, and API endpoint configuration.

# omitted code
  AdminEKSRole:
    Type: "AWS::IAM::Role" # Extra role added to aws-auth
    # omitted code

  EKSKMSKey:
    Type: "AWS::KMS::Key" # KMS key for envelope encryption
    # omitted code

  EKSCluster:
    Type: "AWSQS::EKS::Cluster"
    Properties:
      Name: !Sub "${ProjectName}-${ProductName}-${Env}-${ClusterName}"
      Version: !Ref KubernetesVersion
      RoleArn: !GetAtt serviceRole.Arn
      LambdaRoleName: !ImportValue ... # extra role needed if the EKS API endpoint is private (check docs)
      ResourcesVpcConfig:
        SubnetIds:
          - !ImportValue ...
          # omitted code
        SecurityGroupIds:
          - !Ref EKSSecurityGroup
        EndpointPrivateAccess: true
        EndpointPublicAccess: false
      EnabledClusterLoggingTypes: !If [ LoggingEnabled, !Ref EKSClusterLoggingTypes, !Ref "AWS::NoValue" ]
      KubernetesApiAccess: # Extra roles/users added to the aws-auth ConfigMap
        Roles:
          - Arn: !GetAtt AdminEKSRole.Arn
            Username: "AdminRole"
            Groups: ["system:masters"]
          - Arn: !ImportValue ... # Execution role for the Helm extension
            Username: "DeployRole"
            Groups: ["system:masters"]
      EncryptionConfig: !If
        - EnableEncryption
        - - Resources: [ secrets ]
            Provider:
              KeyArn: !GetAtt EKSKMSKey.Arn # KMS key defined above
        - !Ref "AWS::NoValue"

  MetricsServer: # Example for applying a Kubernetes manifest
    Type: "AWSQS::Kubernetes::Resource"
    Properties:
      ClusterName: !Ref EKSCluster
      Namespace: kube-system
      # Manifest: | ... (omitted code)

  KubeStateMetrics: # Example for deploying a Helm chart
    Type: "AWSQS::Kubernetes::Helm"
    Properties:
      ClusterID: !Ref EKSCluster
      Name: kube-state-metrics
      Namespace: kube-state-metrics
      Chart: prometheus-community/kube-state-metrics
      Repository: https://prometheus-community.github.io/helm-charts # repository that hosts the chart
      ValueYaml: |
        # omitted code
        enabled: true
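For completeness, the AWSQS::Kubernetes::Get extension from the list above can read values back from the cluster once it is up, e.g. to export them as stack outputs. A minimal sketch, assuming a hypothetical service to query (check the extension's schema for the exact property names):

  ServiceHostname: # Example for reading a value back from the cluster
    Type: "AWSQS::Kubernetes::Get"
    Properties:
      ClusterName: !Ref EKSCluster
      Namespace: kube-system
      Name: service/my-service # hypothetical Kubernetes resource to query
      JsonPath: "{.status.loadBalancer.ingress[0].hostname}" # jsonpath expression for the value to fetch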


The AWS CloudFormation extensions described in this post help us deploy EKS clusters with all the necessary Kubernetes resources in a fully automated way across all AWS accounts/OUs within an AWS Landing Zone where EKS is required. The whole configuration is stored in a Git repository and deployed by AWS CodePipeline and AWS CloudFormation StackSets.