John

Senior Cloud Engineer & Technical Lead

Understanding AWS Application Load Balancers with EKS: From Ingress to Pod

I was setting up a new EKS cluster last week and found myself staring at the AWS Load Balancer Controller documentation, trying to piece together how all the components fit together. The magic of “apply an Ingress manifest and an ALB appears” seemed too opaque. I needed to understand the complete picture - from a user typing a domain name to traffic hitting my pods.

The Problem

When you’re new to EKS networking, the relationship between Kubernetes Ingress resources and AWS Application Load Balancers can feel like a black box. You create an Ingress with ingressClassName: alb, and somehow an actual AWS ALB materializes in your account. But what’s happening under the hood? How does Route 53 fit in? What are target groups actually targeting?

I decided to trace the complete network flow and document everything I learned.

The AWS Load Balancer Controller

The AWS Load Balancer Controller is the bridge between Kubernetes and AWS load balancing services. It’s a Kubernetes controller that runs as a deployment in your cluster and watches for Ingress and Service resources.

Here’s the mental model: the controller acts as a translator between Kubernetes-native resources and AWS infrastructure.

flowchart TB
  subgraph EKS["EKS Cluster"]
    subgraph ControlPlane["Control Plane"]
      API["Kubernetes API Server"]
    end
    subgraph WorkerNodes["Worker Nodes"]
      ALBC["AWS Load Balancer Controller<br/>(Deployment)"]
      Pod1["Application Pod"]
      Pod2["Application Pod"]
      Pod3["Application Pod"]
    end
    Ingress["Ingress Resource<br/>ingressClassName: alb"]
    Service["Service<br/>(ClusterIP or NodePort)"]
  end
  subgraph AWS["AWS Account"]
    ALB["Application Load Balancer"]
    TG["Target Group"]
    Listener["Listener (HTTP/HTTPS)"]
    Rules["Listener Rules"]
  end
  API --> ALBC
  ALBC -->|"Watches"| Ingress
  ALBC -->|"Watches"| Service
  ALBC -->|"Creates/Updates"| ALB
  ALBC -->|"Creates/Updates"| TG
  ALBC -->|"Creates/Updates"| Listener
  ALBC -->|"Creates/Updates"| Rules
  ALB --> Listener
  Listener --> Rules
  Rules --> TG
  TG -->|"IP Mode"| Pod1
  TG -->|"IP Mode"| Pod2
  TG -->|"IP Mode"| Pod3

How the Controller Works

The controller uses a reconciliation loop pattern:

  1. Watch Phase: It continuously watches the Kubernetes API for Ingress resources with ingressClassName: alb
  2. Diff Phase: When it detects a new or modified Ingress, it compares the desired state (from the manifest) with the current state (in AWS)
  3. Act Phase: It makes AWS API calls to create, update, or delete load balancer resources to match the desired state
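
A quick way to watch this loop in practice, assuming the controller is installed under the Helm release name used below and you've applied the example Ingress from the later sections:

# Follow the controller's reconciliation logs
kubectl logs -n kube-system deploy/aws-load-balancer-controller --follow

# Ingress events record each reconcile attempt and any errors
kubectl describe ingress my-app-ingress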

Installing the Controller

Before you can use ALB Ingress, you need the controller running in your cluster:

# Add the EKS Helm repository
helm repo add eks https://aws.github.io/eks-charts
helm repo update

# Install the controller
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

The controller needs IAM permissions to manage AWS resources. This is typically handled through IAM Roles for Service Accounts (IRSA). Because the Helm command above sets serviceAccount.create=false, create the policy and the service account before installing the chart:

# Download the controller's IAM policy document (check the repo for the current version tag)
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.2/docs/install/iam_policy.json

# Create the IAM policy
aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json

# Associate the policy with a service account
eksctl create iamserviceaccount \
  --cluster=my-cluster \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --attach-policy-arn=arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve
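
A quick sanity check that the controller is running and IRSA is wired up (names match the install commands above):

# The controller deployment should be available
kubectl get deployment -n kube-system aws-load-balancer-controller

# The service account should carry the eks.amazonaws.com/role-arn annotation created by eksctl
kubectl describe serviceaccount -n kube-system aws-load-balancer-controller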

Creating an ALB with Ingress

Here’s where the magic happens. When you apply an Ingress resource with the ALB class, the controller springs into action:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/healthcheck-path: /health
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:111122223333:certificate/abc123
spec:
  ingressClassName: alb
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: my-api-service
                port:
                  number: 8080

Key Annotations Explained

  • scheme: internet-facing - creates a public ALB (use internal for a private one)
  • target-type: ip - routes directly to pod IPs (recommended for most cases)
  • healthcheck-path - the path the ALB uses to check pod health
  • listen-ports - the ports the ALB listens on
  • ssl-redirect - redirects HTTP to HTTPS
  • certificate-arn - the ACM certificate used for HTTPS

What Gets Created in AWS

When you apply this Ingress, the controller creates:

  1. Application Load Balancer - The actual load balancer in your VPC
  2. Target Group - One per backend service, containing pod IPs
  3. Listeners - HTTP (80) and HTTPS (443) listeners
  4. Listener Rules - Path-based routing rules matching your Ingress paths
  5. Security Group - Attached to the ALB for traffic control

You can verify the creation:

# Check the Ingress status
kubectl get ingress my-app-ingress -o wide

# The ADDRESS field will show the ALB DNS name once provisioned
# Example: k8s-default-myapping-abc123-1234567890.us-west-2.elb.amazonaws.com
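
You can also confirm the resources from the AWS side; the name prefix below matches the example ALB above, and <alb-arn> is a placeholder for the ARN returned by the first command:

# Find the ALB the controller created (its name is derived from the namespace and Ingress name)
aws elbv2 describe-load-balancers \
  --query "LoadBalancers[?starts_with(LoadBalancerName, 'k8s-default-myapping')]"

# List its target groups - one per backend service in the Ingress
aws elbv2 describe-target-groups --load-balancer-arn <alb-arn>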

Target Groups: IP Mode vs Instance Mode

The controller supports two target types, and understanding the difference is crucial for troubleshooting.

flowchart LR
  subgraph IPMode["IP Mode (target-type: ip)"]
    ALB1["ALB"] --> TG1["Target Group"]
    TG1 --> P1["Pod IP: 10.0.1.15"]
    TG1 --> P2["Pod IP: 10.0.1.16"]
    TG1 --> P3["Pod IP: 10.0.2.22"]
  end
  subgraph InstanceMode["Instance Mode (target-type: instance)"]
    ALB2["ALB"] --> TG2["Target Group"]
    TG2 --> N1["Node: 10.0.1.5:32145"]
    TG2 --> N2["Node: 10.0.2.8:32145"]
    N1 --> NP1["NodePort Service"]
    N2 --> NP2["NodePort Service"]
    NP1 --> P4["Pod"]
    NP2 --> P5["Pod"]
  end

IP Mode

alb.ingress.kubernetes.io/target-type: ip
  • Registers pod IPs directly in the target group
  • Traffic goes straight to pods (fewer hops)
  • Requires pods to be in VPC-routable subnets (works with AWS VPC CNI)
  • Better for performance and observability

Instance Mode

alb.ingress.kubernetes.io/target-type: instance
  • Registers EC2 node IPs with NodePort
  • Requires a NodePort Service
  • Traffic flows: ALB -> Node -> kube-proxy -> Pod
  • Works with any CNI plugin
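
If you're unsure which mode an existing target group uses, its TargetType field tells you:

# Show each target group's name, target type (ip or instance), and port
aws elbv2 describe-target-groups \
  --query 'TargetGroups[].{Name:TargetGroupName,Type:TargetType,Port:Port}'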

Route 53 Integration

Once your ALB is created, you need to point your domain to it. There are two approaches:

Manual DNS Record

# Get the ALB DNS name
ALB_DNS=$(kubectl get ingress my-app-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# Look up the ALB's canonical hosted zone ID (required for the alias target, and region-specific)
ALB_ZONE=$(aws elbv2 describe-load-balancers \
  --query "LoadBalancers[?DNSName=='$ALB_DNS'].CanonicalHostedZoneId" --output text)

# Create an alias record in Route 53 (the --hosted-zone-id is your own domain's zone)
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1234567890ABC \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "myapp.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z35SXDOTRQ7X7K",
          "DNSName": "'$ALB_DNS'",
          "EvaluateTargetHealth": true
        }
      }
    }]
  }'
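
To confirm the alias record resolves as expected (hostnames are the examples used above):

# The application hostname should resolve to the same IPs as the ALB's DNS name
dig +short myapp.example.com
dig +short $ALB_DNS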

Automated with External DNS

For production, I recommend External DNS to automatically sync DNS records:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      # External DNS needs Route 53 permissions, typically via an IRSA-backed service account
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.14.0
          args:
            - --source=ingress
            - --domain-filter=example.com
            - --provider=aws
            - --aws-zone-type=public
            - --registry=txt
            - --txt-owner-id=my-cluster

With --source=ingress, External DNS picks up the host field from your Ingress rules automatically; you can also set (or override) the hostname explicitly with this annotation:

metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com

External DNS will automatically create and update the Route 53 record.
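
You can watch it work and confirm the record landed in the hosted zone (<zone-id> is a placeholder):

# External DNS logs show each record it creates or updates
kubectl logs deploy/external-dns --follow

# Confirm the record exists (Route 53 record names end with a trailing dot)
aws route53 list-resource-record-sets --hosted-zone-id <zone-id> \
  --query "ResourceRecordSets[?Name=='myapp.example.com.']"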

The Complete Network Flow

Now let’s trace a request from a user’s browser all the way to a pod. This is the mental model I wish I had when starting out.

sequenceDiagram
  participant User as User Browser
  participant DNS as Route 53
  participant ALB as Application Load Balancer
  participant TG as Target Group
  participant Pod as Application Pod
  User->>DNS: 1. DNS Query: myapp.example.com
  DNS-->>User: 2. Returns ALB DNS (k8s-xxx.elb.amazonaws.com)
  User->>ALB: 3. HTTPS Request to ALB IP
  Note over ALB: 4. SSL Termination<br/>(using ACM certificate)
  ALB->>ALB: 5. Listener receives request
  ALB->>ALB: 6. Evaluate listener rules<br/>(path matching)
  ALB->>TG: 7. Forward to Target Group
  Note over TG: 8. Health check status<br/>determines available targets
  TG->>Pod: 9. Route to healthy pod IP
  Note over Pod: 10. Pod processes request
  Pod-->>TG: 11. Response
  TG-->>ALB: 12. Response
  ALB-->>User: 13. HTTPS Response

Detailed Flow Breakdown

Step 1-2: DNS Resolution

  • User’s browser queries myapp.example.com
  • Route 53 returns the ALB’s DNS name
  • Browser resolves ALB DNS to actual IP addresses
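
You can observe this two-step resolution yourself; the hostnames below are the examples from earlier:

# The application hostname resolves (via the alias record) straight to the ALB's IPs,
# typically one address per availability zone the ALB is deployed in
dig +short myapp.example.com

# Resolving the ALB's own DNS name should return the same addresses
dig +short k8s-default-myapping-abc123-1234567890.us-west-2.elb.amazonaws.com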

Step 3-4: TLS Termination

  • Request hits the ALB on port 443
  • ALB terminates SSL using the ACM certificate
  • Connection is now decrypted at the ALB
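
To confirm it's the ALB terminating TLS with your ACM certificate (hostname is the example one; ACM-issued certificates show Amazon as the issuer):

# Print the subject, issuer, and validity of the certificate the ALB presents
openssl s_client -connect myapp.example.com:443 -servername myapp.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates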

Step 5-6: Listener Rules

  • HTTPS listener (port 443) receives the request
  • ALB evaluates rules in priority order
  • Matches the /api path to the my-api-service target group
  • Matches the / path to the my-app-service target group
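
The generated rules are visible with the AWS CLI; the ARNs are placeholders you'd copy from aws elbv2 describe-load-balancers and the listener output:

# Find the listeners on the ALB
aws elbv2 describe-listeners --load-balancer-arn <alb-arn>

# List the rules created from the Ingress paths, in priority order
aws elbv2 describe-rules --listener-arn <https-listener-arn>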

Step 7-9: Target Group Routing

  • Target group contains healthy pod IPs
  • The ALB picks a target using the configured algorithm (round robin by default, or least outstanding requests)
  • Request is forwarded to the pod’s IP and port

Step 10-13: Response Path

  • Pod processes the request
  • Response travels back through the same path
  • The ALB sends the response back to the user, encrypted over the client's existing TLS connection

Architectural Overview

Here’s the complete architecture showing all components working together:

flowchart TB
  subgraph Internet["Internet"]
    User["User Browser"]
  end
  subgraph AWS["AWS Cloud"]
    subgraph Route53["Route 53"]
      DNS["myapp.example.com<br/>A Record (Alias)"]
    end
    subgraph VPC["VPC"]
      subgraph PublicSubnets["Public Subnets"]
        ALB["Application Load Balancer<br/>internet-facing"]
      end
      subgraph PrivateSubnets["Private Subnets"]
        subgraph EKS["EKS Cluster"]
          subgraph NS["Namespace: default"]
            Ingress["Ingress<br/>ingressClassName: alb"]
            SVC["Service<br/>ClusterIP"]
            Deploy["Deployment"]
          end
          subgraph System["Namespace: kube-system"]
            ALBC["AWS LB Controller"]
            ExtDNS["External DNS"]
          end
          subgraph Nodes["Worker Nodes"]
            Node1["Node 1"]
            Node2["Node 2"]
            Pod1["Pod<br/>10.0.1.15"]
            Pod2["Pod<br/>10.0.1.16"]
            Pod3["Pod<br/>10.0.2.22"]
          end
        end
      end
      TG["Target Group<br/>(Pod IPs)"]
      SG["Security Group<br/>(ALB)"]
    end
  end
  User -->|"1. DNS Query"| DNS
  DNS -->|"2. ALB DNS"| User
  User -->|"3. HTTPS"| ALB
  ALB --> SG
  ALB -->|"4. Forward"| TG
  TG -->|"5. Route"| Pod1
  TG -->|"5. Route"| Pod2
  TG -->|"5. Route"| Pod3
  ALBC -->|"Watches"| Ingress
  ALBC -->|"Creates"| ALB
  ALBC -->|"Manages"| TG
  ExtDNS -->|"Watches"| Ingress
  ExtDNS -->|"Updates"| DNS
  SVC --> Pod1
  SVC --> Pod2
  SVC --> Pod3
  Node1 -.- Pod1
  Node1 -.- Pod2
  Node2 -.- Pod3

Troubleshooting Common Issues

After setting this up a few times, I’ve run into these issues:

ALB Not Creating

# Check controller logs
kubectl logs -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller

# Common issues:
# - Missing IAM permissions
# - Subnet tags not set correctly
# - IngressClass not created
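
For the last item, confirm the IngressClass actually exists (recent versions of the Helm chart create it by default, but it can be disabled):

# An "alb" IngressClass must exist for ingressClassName: alb to match anything
kubectl get ingressclass alb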

Subnets need these tags for ALB discovery:

# Public subnets (for internet-facing ALBs)
kubernetes.io/role/elb = 1

# Private subnets (for internal ALBs)
kubernetes.io/role/internal-elb = 1
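
To check whether your subnets carry these tags (<vpc-id> is a placeholder for your cluster's VPC):

# List subnet IDs, availability zones, and tags in the cluster's VPC
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=<vpc-id>" \
  --query 'Subnets[].{ID:SubnetId,AZ:AvailabilityZone,Tags:Tags}'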

Targets Showing Unhealthy

# Check target group health in AWS Console or CLI
aws elbv2 describe-target-health --target-group-arn arn:aws:elasticloadbalancing:...

# Common issues:
# - Security group not allowing health check traffic
# - Health check path returning non-200
# - Pod not ready

502 Bad Gateway

Usually means the ALB can’t reach the targets:

# Verify pod is running and ready
kubectl get pods -l app=my-app

# Check if the pod is listening on the right port (try ss -tlnp if netstat isn't in the image)
kubectl exec -it my-pod -- netstat -tlnp

# Verify the node/pod security groups allow traffic from the ALB's security group on the target port

Complete Working Example

Here’s a complete, copy-paste example that ties everything together:

---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx-app
  ports:
    - port: 80
      targetPort: 80
---
# Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:111122223333:certificate/your-cert
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
spec:
  ingressClassName: alb
  rules:
    - host: nginx.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80

Apply it:

kubectl apply -f nginx-complete.yaml

# Watch the ALB get created
kubectl get ingress nginx-ingress -w

# After a few minutes, you'll see the ALB address
NAME            CLASS   HOSTS               ADDRESS                                                    PORTS   AGE
nginx-ingress   alb     nginx.example.com   k8s-default-nginxing-abc123-1234567890.us-west-2.elb.amazonaws.com   80      2m
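
Once DNS has propagated and the targets report healthy, a quick end-to-end check (this assumes the ACM certificate covers nginx.example.com):

# Should return a 200 from nginx via the ALB over HTTPS
curl -I https://nginx.example.com

# HTTP should redirect to HTTPS because of the ssl-redirect annotation
curl -I http://nginx.example.com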

Key Learnings

  • AWS Load Balancer Controller is the bridge - It watches Kubernetes Ingress resources and translates them into AWS ALB configurations through the AWS API
  • IP mode targets pods directly - Using target-type: ip eliminates the extra hop through NodePort, improving performance and simplifying troubleshooting
  • Subnet tags are critical - ALBs won’t provision correctly without kubernetes.io/role/elb or kubernetes.io/role/internal-elb tags on your subnets
  • External DNS automates Route 53 - Instead of manually managing DNS records, External DNS watches Ingress resources and syncs hostnames to Route 53
  • Target groups track pod IPs - The controller updates target group registrations as pods scale up/down or get rescheduled
  • SSL termination happens at the ALB - ACM certificates are attached to the ALB, not the pods, simplifying certificate management
  • Health checks determine routing - Unhealthy pods are automatically removed from rotation; always configure appropriate health check paths
  • The complete flow is: User -> Route 53 -> ALB -> Target Group -> Pod - Understanding this path makes troubleshooting much more straightforward

The key insight that clicked for me: the AWS Load Balancer Controller is essentially running a continuous reconciliation loop, ensuring that whatever you declare in your Ingress manifests becomes reality in AWS. It’s infrastructure as code, but the controller is doing the heavy lifting of translating Kubernetes abstractions into AWS resources.