Overview of third-party addons for EKS (Kubecost, Dynatrace, Istio)


AWS introduced EKS add-ons in EKS v1.20. Only a few add-ons were available back then, e.g. the VPC CNI plugin, CoreDNS, and kube-proxy. All Amazon EKS add-ons include the latest security patches and bug fixes and are validated by AWS to work with Amazon EKS. Add-ons let you consistently ensure that your Amazon EKS clusters are secure and stable, and they reduce the amount of work needed to install, configure, and update them.

Later, AWS added the Amazon EBS CSI driver and AWS Distro for OpenTelemetry, and at the end of 2022 third-party add-ons were officially presented at AWS re:Invent 2022. In this post, we will take a look at several of the available add-ons, their capabilities, and their value.

EKS add-ons and Marketplace

If you choose EKS v1.24, three add-ons will be automatically installed with the cluster.

You can select different versions according to your requirements and easily update them later.
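The same version management is also available from the AWS CLI. The helpers below are a sketch; the cluster name, Kubernetes version, and target version are placeholders:

```shell
# Sketch: manage an EKS add-on version from the AWS CLI.
# "my-cluster" and the version strings are placeholders.

list_vpc_cni_versions() {
  # Show which VPC CNI add-on versions are available for a given Kubernetes version
  aws eks describe-addon-versions \
    --addon-name vpc-cni \
    --kubernetes-version 1.24 \
    --query 'addons[].addonVersions[].addonVersion'
}

update_vpc_cni() {
  # Move the add-on to a specific version; EKS performs a rolling update
  aws eks update-addon \
    --cluster-name my-cluster \
    --addon-name vpc-cni \
    --addon-version "$1" \
    --resolve-conflicts OVERWRITE
}
```

`--resolve-conflicts OVERWRITE` tells EKS to overwrite any manual changes to the add-on's default configuration during the update.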

There are several add-ons available at the time of writing, but this list is constantly growing.

Add-ons may require permissions to call the AWS API. They can either use the IAM role of the node they run on or assume an IAM role using the IRSA (IAM Roles for Service Accounts) approach, which is preferable.
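To illustrate the IRSA approach, an add-on's role typically carries a trust policy like the one below, which only lets the cluster's OIDC provider assume the role on behalf of a specific service account. The account ID, OIDC provider ID, region, namespace, and service account name are all placeholders:

```shell
# Sketch of an IRSA trust policy. All identifiers below are placeholders:
# substitute your account ID, OIDC provider ID, region, and service account.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:aws-node"
        }
      }
    }
  ]
}
EOF
# Sanity-check that the file is valid JSON
python3 -m json.tool trust-policy.json >/dev/null && echo "valid JSON"
```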

Add-on update process

The purpose was to check how smoothly an update goes. I chose Amazon VPC CNI and wanted to make sure that the update process would not break networking and that all pods would continue running.

You can select a newer or older version

I monitored all pods and nodes during the VPC CNI update

The update has started

We can see that only the target application, VPC CNI (the aws-node DaemonSet), was recreated; all other pods are stable and all nodes are Ready.
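The checks above can be reproduced with commands along these lines, defined here as small helpers; the cluster name is a placeholder:

```shell
# Helpers for watching the cluster while the add-on updates.

watch_workloads() {
  # Stream pod recreations/restarts across all namespaces
  kubectl get pods -A -o wide -w
}

check_nodes() {
  # All nodes should stay Ready throughout the update
  kubectl get nodes
}

addon_status() {
  # "my-cluster" is a placeholder; status should return to ACTIVE when done
  aws eks describe-addon \
    --cluster-name my-cluster \
    --addon-name vpc-cni \
    --query 'addon.{version:addonVersion,status:status}'
}
```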


Kubecost

Kubecost started in early 2019 as an open-source tool to give developers visibility into Kubernetes spend. Kubecost provides real-time cost visibility and insights by uncovering patterns that create overspending on infrastructure to help teams prioritize where to focus optimization efforts. By identifying root causes for negative patterns, customers using Kubecost save 30-50% or more of their Kubernetes cloud infrastructure costs.

You can try it for free and install it as an EKS add-on.

All you need to get started is to subscribe in the AWS Marketplace and install the add-on.

Kubecost comes bundled with a Prometheus installation. However, if you wish to integrate with an external Prometheus deployment, provide your local Prometheus service address in the format http://<service>.<namespace>.svc.
Note: integrating with an existing Prometheus is only officially supported under Kubecost paid plans and requires some extra configuration of your Prometheus.
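If you go that route, the relevant Helm values look roughly like this; the Prometheus service address below is an example, not a required value:

```yaml
# Sketch of Helm values for pointing Kubecost at an existing Prometheus.
# The fqdn is an example address; substitute your own service and namespace.
global:
  prometheus:
    enabled: false   # skip deploying the bundled Prometheus
    fqdn: http://prometheus-server.prometheus.svc
```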

$ kubectl get po -n kubecost
NAME                                       READY   STATUS    RESTARTS  AGE
kubecost-cost-analyzer-74955f9d46-g2m4n    2/2     Running   0         43h
kubecost-prometheus-server-f4dd75668-82whb 1/1     Running   0         43h

$ kubectl port-forward --namespace kubecost deployment/kubecost-cost-analyzer 9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
Handling connection for 9090

Then you can visit http://localhost:9090 via your web browser

The Kubecost Cost Allocation dashboard allows you to quickly see allocated spend across all native Kubernetes concepts, e.g. namespace, k8s label, and service. It also allows for allocating cost to organizational concepts like team, product/project, department, or environment.

Here you can aggregate cost by namespace, deployment, service, and other native Kubernetes concepts. When selecting Single Aggregation, you can only choose one concept at a time; with Multi Aggregation, you can filter by multiple concepts at the same time.

The Kubecost Assets view shows Kubernetes cluster costs broken down by the individual backing assets in your cluster (e.g. cost by node, disk, and other assets). It’s used to identify spend drivers over time and to audit Allocation data. This view can also optionally show out-of-cluster assets by service, tag/label, etc.

Kubecost automatically generates recommendations you can use to save 30-50% or more on infrastructure spend.

The health score starts at 100. Penalties reduce the score. There are three penalty types:

SevereErrorPenalty = 50
ErrorPenalty       = 15
WarningPenalty     = 3
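As a hypothetical illustration of how the score is computed, assuming each triggered penalty is simply subtracted from the starting score:

```shell
# Hypothetical example: the health score starts at 100 and every
# triggered penalty is subtracted from it.
score=100
score=$((score - 3))    # WarningPenalty: cluster runs in a single region
score=$((score - 15))   # ErrorPenalty: a node is under MemoryPressure
echo "health score: $score"
```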

WarningPenalty is applied when:

  • Single Cluster (a master exists on the cluster – for kops-based Kubernetes deployments on AWS)
  • Single Region
  • Predictive Disk Growth crosses a 90% threshold

ErrorPenalty is applied when:

  • Any Nodes in the Cluster are Not Ready
  • Any Nodes are under MemoryPressure

SevereErrorPenalty is applied when:

  • Memory Usage exceeds 90% of Available Memory on the Cluster

Kubecost alerts allow teams to receive updates on real-time Kubernetes spend. They are configurable via the Kubecost UI or Helm values. They can be sent via email, Slack, and Microsoft Teams using Kubecost Helm chart values.

Alerts are created to monitor specific data sets and trends, and they can be toggled on or off. The following alert types are supported:

  • Allocation Budget: Sends an alert when spending crosses a defined threshold
  • [Beta] Allocation Efficiency: Detects when a Kubernetes tenant is operating below a target cost-efficiency threshold
  • Allocation Recurring Update: Sends an alert with cluster spending across all or a subset of Kubernetes resources.
  • Allocation Spend Change: Sends an alert reporting unexpected spend increases relative to moving averages
  • Asset Budget: Sends an alert when spend for a particular set of assets crosses a defined threshold.
  • Cloud Report: Sends an alert with asset spend across all or a subset of cloud resources.
  • Monitor Cluster Health: Used to determine if the cluster’s health score changes by a specific threshold.
  • Monitor Kubecost Health: Used for production monitoring for the health of Kubecost itself.
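For example, a budget alert can be declared directly in the Helm values; the namespace filter and Slack webhook below are placeholders:

```yaml
# Sketch of a Kubecost budget alert via Helm values.
# The webhook URL and namespace filter are placeholders.
notifications:
  alertConfigs:
    globalSlackWebhookUrl: https://hooks.slack.com/services/EXAMPLE
    alerts:
      - type: budget          # alert when spend crosses the threshold
        threshold: 50         # in configured currency units
        window: daily
        aggregation: namespace
        filter: my-team       # hypothetical namespace to watch
```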


Dynatrace

Dynatrace provides software intelligence to simplify cloud complexity and accelerate digital transformation. With advanced observability, AI, and complete automation, the all-in-one platform provides answers, not just data, about the performance of applications, the underlying infrastructure, and the experience of all users.

With Dynatrace, you can:

  • Monitor your full stack with no manual configuration: end-to-end monitoring of your AWS applications and infrastructure
  • Automatically discover all EC2 instances running in Availability Zones by leveraging CloudWatch API
  • Migrate into AWS faster with automation and intelligence
  • Optimize delivery pipeline with an AI-driven DevOps methodology
  • Improve mean time to resolution with precise root cause analysis showing causation and correlation
  • Analyze highly complex and dynamic ecosystems and billions of events in real-time

Out of the box, Dynatrace works with Amazon EC2, Elastic Container Service, Elastic Kubernetes Service, Fargate, and serverless solutions like Lambda.

This add-on simply deploys the Dynatrace Operator (https://github.com/dynatrace/dynatrace-operator).

$ kubectl get all -n dynatrace 
NAME                                    READY  STATUS   RESTARTS       AGE
pod/dynatrace-operator-6d6457bc86-g5hdl 1/1    Running  1 (6h39m ago)   9h
pod/dynatrace-webhook-5fb848c58f-h8dzc  1/1    Running  0               9h
pod/dynatrace-webhook-5fb848c58f-r2cst  1/1    Running  0               9h

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/dynatrace-webhook   ClusterIP   <none>        443/TCP          9h

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dynatrace-operator   1/1     1            1           9h
deployment.apps/dynatrace-webhook    2/2     2            2           9h

NAME                                          DESIRED CURRENT READY   AGE
replicaset.apps/dynatrace-operator-6d6457bc86 1       1       1       9h
replicaset.apps/dynatrace-webhook-5fb848c58f  2       2       2       9h

Extra steps should be performed for the complete configuration. First of all, you need to sign up; you can try Dynatrace for 15 days free of charge.

There are many integrations for different clouds and workloads; e.g. you can connect AWS accounts via an IAM user or IAM role.

AWS workloads appear after that.

For the Kubernetes cluster, you need to create a token and apply the provided manifest.

Here is an example of the downloaded dynakube.yaml; we provide the token and API URL for the connection:

apiVersion: v1
kind: Secret
metadata:
  name: demo
  namespace: dynatrace
data:
  apiToken: ZHQwYzAxLlEy*********TZVTTJQQkY1
  dataIngestToken: ZHQwYzAxLjVLQkIyNUNEUEM0TEV********xZRk40
type: Opaque
---
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: demo
  namespace: dynatrace
  annotations:
    feature.dynatrace.com/automatic-kubernetes-api-monitoring: "true"
spec:
  apiUrl: https://a******5.live.dynatrace.com/api
  skipCertCheck: true
  oneAgent:
    classicFullStack:
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
          operator: Exists
          value: "false"
  activeGate:
    capabilities:
      - routing
      - kubernetes-monitoring
      - dynatrace-api
    image: ""
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: 1000m
        memory: 1.5Gi

New objects appear after that:

$ kubectl get all -n dynatrace 
NAME                                    READY  STATUS   RESTARTS      AGE
pod/demo-activegate-0                   1/1    Running  0            2m52s
pod/demo-oneagent-csk6m                 1/1    Running  0            2m55s
pod/demo-oneagent-swmdz                 1/1    Running  0            2m55s
pod/dynatrace-operator-6d6457bc86-g5hdl 1/1    Running  1 (6h39m ago)  9h
pod/dynatrace-webhook-5fb848c58f-h8dzc  1/1    Running  0              9h
pod/dynatrace-webhook-5fb848c58f-r2cst  1/1    Running  0              9h

NAME                      TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)          AGE
service/demo-activegate   ClusterIP  <none>       443/TCP,80/TCP   2m55s
service/dynatrace-webhook ClusterIP  <none>       443/TCP          9h

NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/demo-oneagent   2         2         2       2            2           <none>          2m56s

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dynatrace-operator   1/1     1            1           9h
deployment.apps/dynatrace-webhook    2/2     2            2           9h

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/dynatrace-operator-6d6457bc86   1         1         1       9h
replicaset.apps/dynatrace-webhook-5fb848c58f    2         2         2       9h

NAME                               READY   AGE
statefulset.apps/demo-activegate   1/1     2m53s

The Kubernetes cluster, nodes, and applications appear in the Dynatrace console:

Logs are also available with filtering capabilities:

Smartscape is a map for your application topology. As the Dynatrace OneAgent discovers all the components and dependencies in your application environment, Smartscape technology simultaneously builds an interactive map of how everything is interconnected:

  • Visualizations get built dynamically and automatically without any need for manual configuration, additional instrumentation, or scripts.
  • Intuitive infographics make it easy to understand the complexities of your application stack and delivery chain.
  • Smartscape provides 100% end-to-end observability into all application components and dependencies up, down, and across all tiers of your stack—no gaps or blind spots.

Moreover, Dynatrace offers many more capabilities beyond those shown here, and several pricing options are available.

This is quite an interesting product, with many functions and features, that can be a worthy competitor to other monitoring and tracing solutions.

Tetrate Istio Distro

Tetrate Istio Distro is an open-source project from Tetrate that provides vetted builds of Istio tested against all major cloud platforms. TID provides extended Istio version support beyond upstream Istio (release date plus 14 months). It also includes GetMesh, a lifecycle and change management CLI.

The TID Istio distributions are hardened and performant and are full distributions of the upstream Istio project.
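Outside of the add-on, the GetMesh CLI can also be used to manage Istio versions. The helpers below are a sketch; the version and flavor values are examples:

```shell
# Sketch of GetMesh usage for managing Istio distributions.
# Version/flavor values below are examples.

list_distros() {
  # Show available Istio distributions
  getmesh list
}

fetch_istio() {
  # Download a specific Tetrate-flavored Istio version
  getmesh fetch --version 1.15.3 --flavor tetrate
}
```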

Nothing special happens here: the add-on simply installs the Istio control plane, and you are ready to configure your service mesh.

$ kubectl get po -n istio-system
NAME                      READY   STATUS    RESTARTS   AGE
istiod-7997d87f64-t5ms7   1/1     Running   0          32h

$ kubectl api-resources | grep istio
NAME                     SHORTNAMES   APIGROUP              NAMESPACED   KIND
wasmplugins                           extensions.istio.io   true         WasmPlugin
istiooperators           iop,io       install.istio.io      true         IstioOperator
destinationrules         dr           networking.istio.io   true         DestinationRule
envoyfilters                          networking.istio.io   true         EnvoyFilter
gateways                 gw           networking.istio.io   true         Gateway
proxyconfigs                          networking.istio.io   true         ProxyConfig
serviceentries           se           networking.istio.io   true         ServiceEntry
sidecars                              networking.istio.io   true         Sidecar
virtualservices          vs           networking.istio.io   true         VirtualService
workloadentries          we           networking.istio.io   true         WorkloadEntry
workloadgroups           wg           networking.istio.io   true         WorkloadGroup
authorizationpolicies                 security.istio.io     true         AuthorizationPolicy
peerauthentications      pa           security.istio.io     true         PeerAuthentication
requestauthentications   ra           security.istio.io     true         RequestAuthentication
telemetries              telemetry    telemetry.istio.io    true         Telemetry
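With the control plane running, traffic is configured through the usual Istio resources; a minimal VirtualService sketch (the names and host below are hypothetical):

```yaml
# Minimal routing sketch; "demo-routes", "demo.example.com" and "demo-svc"
# are hypothetical names.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demo-routes
spec:
  hosts:
    - demo.example.com
  http:
    - route:
        - destination:
            host: demo-svc
            port:
              number: 80
```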


In this post, we looked at EKS add-ons from the AWS Marketplace, such as Kubecost, Dynatrace, and Istio. They look very interesting and are convenient to install and manage. Of course, add-ons are already supported by Terraform and CloudFormation, and in the next posts we will check the others.
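As a sketch of the Terraform support mentioned above, an add-on can be managed with the `aws_eks_addon` resource; the cluster name and version below are placeholders:

```hcl
# Sketch: managing the Kubecost add-on with Terraform.
# "my-cluster" and the version string are placeholders.
resource "aws_eks_addon" "kubecost" {
  cluster_name  = "my-cluster"
  addon_name    = "kubecost_kubecost"
  addon_version = "v1.97.0-eksbuild.1" # example version
}
```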