Boosting Kubernetes Platform Engineering with EKS Capabilities - Chapter 1
kro and ACK

Welcome to the first chapter of this series. In this blog post, we are going to explore:
What is kro (Kube Resource Orchestrator)
What is ACK (AWS Controllers for Kubernetes)
EKS capabilities advantages in building a platform for developers
How the workload for platform engineers can be reduced (operational efficiency)
The primary focus of this chapter is to cover kro and ACK capabilities, and how to deploy an application using ACK and Kubernetes APIs abstracted with kro. In the demo, I'll use a single-account setup, but don't worry: later in this series, I will cover multi-account setups and more.
Okay, let me try to simplify kro and ACK. Imagine your Kubernetes cluster is a high-end restaurant kitchen.
The Developers are the Hungry Customers. They don't want to know how the stove works or where you bought the carrots; they just want a meal (an app with a database) delivered to their table, fast.
ACK is the Fully Stocked Pantry. ACK connects your kitchen to the massive warehouse of ingredients (AWS resources).
kro is the Head Chef's Recipe Card (The Menu). kro is the tool the Head Chef (Platform Engineer) uses to combine those raw ingredients into a finished dish. It abstracts APIs in the Kubernetes cluster and creates single-unit, reusable APIs for developers.
What are kro and ACK?
ACK (AWS Controllers for Kubernetes)
ACK allows you to define AWS resources directly as Kubernetes objects. Instead of logging into the AWS Console or running Terraform, you apply a YAML file for an S3Bucket or Table (DynamoDB), and the ACK controller provisions it for you.
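As a quick illustration, here is roughly what an ACK-managed S3 bucket manifest looks like — a minimal sketch using the ACK S3 controller's Bucket API; the bucket name is a made-up placeholder (bucket names must be globally unique):

```yaml
# Sketch: an S3 bucket defined as a Kubernetes object via ACK.
# "my-demo-bucket-0001" is a placeholder name, not from the demo.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-demo-bucket
  namespace: default
spec:
  name: my-demo-bucket-0001
```

Applying this manifest prompts the ACK S3 controller to create (and keep reconciling) the bucket in your AWS account.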
Users create Kubernetes manifest files for standard Kubernetes objects and, with ACK, for AWS resources as well.
No IaC tools like CloudFormation or Terraform are needed to create the AWS resources; a Kubernetes manifest file is enough. To do that, we need to grant the right IAM permissions to ACK. Don't worry, we will revisit this part in more detail later.
Take a look at the manifest below, which creates a Lambda execution role using ACK. When we deploy the manifest, an IAM role is created in the AWS account. Simply put, using a Kubernetes manifest we can create AWS resources without any IaC tools. While this is a simplified example, I will expand on the advanced details later in this post.
apiVersion: iam.services.k8s.aws/v1alpha1
kind: Role
metadata:
  name: lambda-execution-role
  namespace: default
spec:
  name: lambda-execution-role
  assumeRolePolicyDocument: |
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  policies:
    - arn:aws:iam::<account_id>:policy/lambda-execution-policy
  tags:
    - key: App
      value: hello-world-lambda
    - key: Environment
      value: test
Think back to the high-end restaurant kitchen analogy I mentioned earlier; here is how it applies:
Need a DynamoDB Table? ACK brings you a crate of raw potatoes.
Need an S3 Bucket? ACK brings you a bag of flour.
The Problem: You can't serve a customer a raw potato and a bag of flour. It’s too messy, complicated, and they don't know how to cook it. That is where kro comes into the picture.
kro (Kube Resource Orchestrator)
If ACK provides the ingredients, kro provides the recipe. kro is an open-source project that lets you create custom, single-unit, reusable APIs. It acts as the abstraction layer, bundling multiple resources (like a Deployment, a Service, and a DynamoDB Table [via ACK APIs]) into one high-level API. kro uses CEL (Common Expression Language), the same language used by Kubernetes webhooks, for logical operations.
In short, kro manages groups of resources as a single, reusable custom API.
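To give a taste of the CEL side, kro template and status fields can embed CEL expressions inside `${...}`. The fragment below is a hypothetical status field (the field name `allReplicasReady` and the resource id `deployment` are assumptions for illustration):

```yaml
# Hypothetical kro status fragment: a CEL comparison inside ${...}
status:
  allReplicasReady: ${deployment.status.availableReplicas == deployment.spec.replicas}
```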
Building on that same high-end restaurant kitchen analogy,
You define a recipe called "The Microservice Special".
The recipe says: "Take 1 bag of flour (S3 via ACK), 2 potatoes (DynamoDB via ACK), and cook them with a side of Compute (Deployment)."
The Magic: Now, you provide the customer (Developer) a Menu. They don't order "flour and potatoes"; they just point to "The Microservice Special."
Resource Graph Definition (RGD, Cluster-scoped):
A Resource Graph Definition is a reusable template that defines a developer-facing schema (API) and a resource graph with dependencies. Here's a simplified example showing the core structure:
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: webappstack.kro.run
spec:
  schema:
    apiVersion: v1alpha1
    kind: WebAppStack
    spec:
      name: string
      team: string
      image: string | default="nginx"
      replicas: integer | default=2
      bucket:
        enabled: boolean | default=false
        name: string | default=""
        region: string | default="us-west-2"
    status:
      deploymentStatus: ${deployment.status.conditions}
      bucketStatus: ${s3bucket.status.ackResourceMetadata.arn}
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: ${schema.spec.replicas}
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: ${schema.spec.name}
                  image: ${schema.spec.image}
    # ACK S3 Bucket resource, created only when the developer enables it
    - id: s3bucket
      includeWhen:
        - ${schema.spec.bucket.enabled}
      template:
        apiVersion: s3.services.k8s.aws/v1alpha1
        kind: Bucket
        metadata:
          name: ${schema.spec.bucket.name}
          namespace: ${schema.metadata.namespace}
          labels:
            team: ${schema.spec.team}
        spec:
          name: ${schema.spec.bucket.name}
          createBucketConfiguration:
            locationConstraint: ${schema.spec.bucket.region}
Highlighted Components:
Abstraction Layer: The schema is the developer-facing interface; developers don't see the underlying Kubernetes complexity.
Dependency Resolution: The platform resolves references between resources automatically.
Conditional Resources: The includeWhen directive creates a resource conditionally, based on developer input. In the example above, a developer who needs an S3 bucket simply sets bucket.enabled to true.
Status Aggregation: The status section exposes runtime information back to developers.
Resource Group Instance (Namespace-scoped):
Developers create instances using the RGD schema. They focus on the desired state, leaving the implementation to the controllers. Developers simply need to apply the Resource Group Instance YAML.
apiVersion: kro.run/v1alpha1
kind: WebAppStack
metadata:
  name: blog-api
  namespace: production
spec:
  name: blog-api
  team: backend-team
  image: my-registry.io/blog-api:v1.2.3
  replicas: 3
  # Request the ACK-managed S3 bucket
  bucket:
    enabled: true
    name: blog-api-assets-prod-001
Platform Engineering Benefits:
Self-Service: Developers provision infrastructure without platform team tickets
Consistency: All web apps follow the same patterns and best practices
Governance: Platform team controls what can be created via the RGD
Abstraction: Developers think in application terms (replicas, image), not Kubernetes resources
If the RGD is the recipe, the instance is the customer order.
If you've come this far, you might be thinking, "Where are the EKS Capabilities? All I've read about is kro and ACK." Don't worry, we are getting to that soon. However, before we jump into EKS Capabilities, let's make sure we're all on the same page about how these two tools function independently.
The Old Way: Self-Managing ACK and kro
Before EKS Capabilities, platform engineers installed and managed ACK and kro themselves.
Additional operational burden:
Running the ACK and kro controller pods
Managing ACK and kro upgrades and compatibility
The Reality:
Platform engineers spent significant time on:
Controller installation and configuration
Security patches and vulnerability management
Capacity planning for controller workloads
Debugging controller reconciliation loops
This operational overhead diverts time and resources away from developing new tools and enhancing the platform.
You can watch this episode from The Zacs’ Show, where we share how to install ACK and kro controllers to an EKS cluster: https://www.youtube.com/watch?v=Fex-xxKTMC4
The New Way: EKS Capabilities
EKS Capabilities provides fully managed ACK and kro, running on AWS-owned infrastructure outside your cluster. It also includes Argo CD for GitOps (covered in upcoming posts).
What Changed:
No In-Cluster Controllers: ACK and kro run in AWS-managed infrastructure
GitOps Ready: ArgoCD is included as a managed capability (keep an eye out in future chapters for a detailed explanation)
Automatic Lifecycle Management: AWS handles scaling, patching, and upgrades
But now:
No controller pods in your cluster
No manual upgrades or patches
Focus on building platform abstractions
Platform Engineering Benefits:
No controller management
More time on developer experience
AWS managed security patches
AWS handles availability and scaling
This shift allows platform engineers to concentrate on creating abstractions that empower developers to help themselves, instead of managing the underlying infrastructure tools.
Let's get hands-on.
Demo Time: Building a Full-Stack Application with EKS Capabilities
We'll deploy a full-stack application that demonstrates the EKS Capabilities (kro and ACK) in action. The platform team defines a Resource Graph Definition (RGD) that abstracts infrastructure complexity, and the development team provisions their application with a single Kubernetes manifest.
The Scenario:
Platform team: Creates a reusable Resource Graph Definition (RGD) that sets up a complete application stack.
Development team: Uses the RGD to deploy their application.
Result: A functioning application with AWS resources, Kubernetes workloads, and networking all from one simple manifest.
When you apply the Resource Group Instance, you'll see:
Automatic Resource Creation: kro manages the creation of multiple resources in the correct order.
Dependency Resolution: Resources that rely on others (like an IAM Role referencing an IAM Policy) are created in sequence.
Status Propagation: The RGD status fields are automatically filled as resources become available.
Conditional Resources: The Ingress resource is created only when ingress.enabled: true.
End-to-End Integration: Your application pods automatically receive AWS credentials through Pod Identity and can access DynamoDB.
Readiness: Using readyWhen ensures a resource is fully ready before dependent resources are deployed.
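As a rough sketch of that readiness guard, a kro resource can carry a readyWhen CEL expression; the condition below assumes the ACK controller's ACK.ResourceSynced condition, and the table name and key schema are illustrative placeholders:

```yaml
# Sketch: hold back dependents until ACK reports the table as synced
- id: table
  readyWhen:
    - ${table.status.conditions.exists(c, c.type == "ACK.ResourceSynced" && c.status == "True")}
  template:
    apiVersion: dynamodb.services.k8s.aws/v1alpha1
    kind: Table
    metadata:
      name: demo-table
    spec:
      tableName: demo-table
      attributeDefinitions:
        - attributeName: id
          attributeType: S
      keySchema:
        - attributeName: id
          keyType: HASH
      billingMode: PAY_PER_REQUEST
```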
Resources created in this Demo
PS: You might notice Feijoa mentioned in this demo. Feijoa is a fruit available in New Zealand 🇳🇿 and it's my favorite fruit too 😛
Here's the complete inventory of resources that will be provisioned:
kro Resources
ResourceGraphDefinition (feijoaappstack.kro.run) - The platform abstraction template. We are going to create a single API called feijoaappstack.
ACK Resources
IAM Policy (iam.services.k8s.aws/v1alpha1) - Grants DynamoDB permissions
IAM Role (iam.services.k8s.aws/v1alpha1) - Assumable by EKS pods
PodIdentityAssociation (eks.services.k8s.aws/v1alpha1) - Links the IAM role to a Kubernetes ServiceAccount
DynamoDB Table (dynamodb.services.k8s.aws/v1alpha1) - Application data store
Kubernetes Native Resources
ServiceAccount - Used by pods for Pod Identity
Deployment - Application workload with configurable replicas
Service - ClusterIP service exposing the deployment
PodDisruptionBudget - Ensures availability during disruptions
IngressClass - Defines ALB ingress controller (created once)
Ingress - Application Load Balancer (conditional, only if ingress.enabled: true)
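To make the Pod Identity link concrete, here is roughly what an ACK PodIdentityAssociation manifest looks like — the field names follow the ACK EKS controller's API as I understand it, and the cluster name, ServiceAccount, and role ARN are all placeholders:

```yaml
# Sketch: link an IAM role to a Kubernetes ServiceAccount via EKS Pod Identity
apiVersion: eks.services.k8s.aws/v1alpha1
kind: PodIdentityAssociation
metadata:
  name: store-pod-identity
  namespace: default
spec:
  clusterName: my-eks-cluster   # placeholder cluster name
  namespace: default            # namespace of the ServiceAccount
  serviceAccount: store-sa      # placeholder ServiceAccount name
  roleARN: arn:aws:iam::<account_id>:role/store-dynamodb-role
```

Pods running under that ServiceAccount then receive AWS credentials for the associated role automatically, with no static keys in the cluster.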
Demo Steps:
Step 1: EKS Cluster
- I used an EKS cluster with Auto Mode enabled, so I don't have to set up the core add-ons and other dependencies. If you are new to EKS Auto Mode, check this: https://blog.awsfanboy.com/lets-explore-amazon-eks-auto-mode
Step 2: Enable EKS Capabilities
There are several ways to do this. In this demo, I used the AWS Console to enable it.
Go to the EKS Console and click on capabilities, and you will see this page.
Select the kro and ACK capabilities, then create a Capability role here. For ACK, this role is what creates the AWS resources on your behalf.
Follow the "Create admin role" instructions to create the capability roles for both ACK and kro.

In this demo, we will deploy everything in the default namespace. However, later in this series, I will also share how to deploy with namespace-specific permissions using IAMRoleSelector. This lets us assign a specific IAM role to each namespace, allowing developers with access to that namespace to deploy resources using the designated role.
Having completed these steps, you will see that the capabilities are now active.

Step 3: Explore the Available APIs
Now, if you check the available APIs in the EKS cluster, you will see over 200 available APIs.
You can see the resourcegraphdefinitions API from kro.
Also, you can see APIs available for AWS resources from ACK.
And there are more ;)
Step 4: Configure RBAC for KRO
Important note: The kro capability can manage ResourceGraphDefinitions and their instances by default through the AmazonEKSKROPolicy, but it needs additional permissions to manage other Kubernetes resources, such as ACK resources. These permissions must be granted through access entry policies or Kubernetes RBAC.
For this demo, I am creating a ClusterRoleBinding to the cluster-admin ClusterRole. Ensure that /KRO is in uppercase, as the name is case-sensitive. Reference: Kubernetes RoleBinding Example. In a production environment, however, you should grant more granular access. I will cover a real production setup in future posts in this series, so stay tuned.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kro-cluster-admin
subjects:
  - kind: User # Replace name: with your capability IAM role
    name: arn:aws:sts::<account_id>:assumed-role/AmazonEKSCapabilityKRORole/KRO
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Step 5: Create Resource Graph Definition (Platform Team)
Platform team creates the RGD template that defines the application stack abstraction.
# Apply the Resource Graph Definition
kubectl apply -f chapter-1/platform-team/feijoa-rgd.yaml
# Verify RGD was created
kubectl get resourcegraphdefinition feijoaappstack.kro.run
# Check RGD details
kubectl describe resourcegraphdefinition feijoaappstack.kro.run
If your RGD is ready, you will see the status as Active, as in the screenshot below.
Step 6: Create Resource Group Instance (Application Team)
The application team uses the RGD to deploy their application with a single manifest.
# Apply the Resource Group Instance
kubectl apply -f chapter-1/dev-team/feijoa-instance.yml
# Watch resources being created
kubectl get feijoaappstack store -w
# Check instance status
kubectl get feijoaappstack store
Our Store instance is ready
Step 7: Verify Resource Creation and Dependencies
Confirm that all resources were created correctly and that dependencies are resolved.
We can list the created resources with a single kubectl get covering all,serviceaccount,ingress,pdb,table,roles.iam.services.k8s.aws,policies.iam.services.k8s.aws (the full command is included below). The output will look something like this; we can see that all the resources have been created successfully.
You can use the following commands to check and play around.
# Check RGD status fields are populated
kubectl get feijoaappstack store -o jsonpath='{.status}'
# Verify IAM role ARN is in PodIdentityAssociation
kubectl get podidentityassociation store -o jsonpath='{.spec.roleARN}'
# Verify DynamoDB table ARN in deployment env vars
kubectl get deployment store -o jsonpath='{.spec.template.spec.containers[0].env}'
# Verify ACK resources
kubectl get policy,role
kubectl get table
kubectl get podidentityassociation
# Verify Kubernetes resources
kubectl get deployment,service,serviceaccount
kubectl get ingress
kubectl get poddisruptionbudget
# Check all resources eg:
kubectl get all,serviceaccount,ingress,pdb,table,roles.iam.services.k8s.aws,policies.iam.services.k8s.aws
Step 8: Test the Application
Verify that the application is accessible and can interact with DynamoDB.
# Get Ingress URL
kubectl get ingress store-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
Now that you have the ALB URL, you can access the app. Try adding some Feijoa fruits to the bucket.
We can see that DynamoDB is updating, which means everything is working end-to-end.
Step 9: Cleanup
To clean up the demo environment, make sure to delete the cluster resources first before disabling the EKS capabilities.
# Delete the instance (cascades to all resources)
kubectl delete feijoaappstack store
# Verify resources are deleted
kubectl get all,serviceaccount,ingress,pdb -l app=store
# Delete RGD (if needed)
kubectl delete resourcegraphdefinition feijoaappstack.kro.run
Once the resources are deleted, we can follow these steps to disable the EKS Capabilities from the EKS cluster.
Summary
In this demo, we have successfully:
Enabled EKS Capabilities (ACK, kro)
Configured IAM roles and access entries
Set up RBAC for kro capability role
Created a Resource Graph Definition (platform abstraction)
Used the API we created to deploy a complete application stack
Verified all resources and dependencies
Tested the application end-to-end
Key Takeaway
From a single developer-friendly manifest, we managed 10 resources across AWS and Kubernetes, showing the strength of platform engineering with EKS Capabilities. With kro, we can easily create reusable, single-unit APIs that give developers a standard way to set up multiple instances. And with ACK, we can create the AWS resources and dependencies our Kubernetes workloads need.





