Blue-Collar Engineering Dispatch #0: "You Don’t Need Kubernetes (Yet)"

Hi there, and welcome!

Let’s kick things off with one of the most overused (and misunderstood) tools in modern engineering: Kubernetes.

The Tale of Brewly: A Coffee Startup Brewing Complexity

Brewly was a hot new startup with a simple idea: deliver freshly brewed, sour artisanal coffee to people’s doorsteps in under 10 minutes. Think of it as Uber for caffeine-deprived millennials. With $1 million in seed funding and a team of five scrappy engineers, they were ready to change the world—or at least wake it up.

The Brewly MVP (Minimum Viable Product) was a hit. Customers could place orders on their app, and the backend—hosted on a single t3.micro EC2 instance—sent notifications to nearby delivery people. The system ran on Node.js with a simple PostgreSQL database, and everything worked like a charm. “We’re scaling like crazy!” their CEO, Rishi, proclaimed after they got 37 orders in one day. “We need to future-proof our platform.”

Jaden, one of the starting five engineers, had been to a tech conference once, where someone mentioned Kubernetes. He didn’t fully understand it but remembered phrases like “containers,” “scaling,” “cloud-native,” and something about “crypto.” It sounded like something Brewly needed. After all, you couldn’t just run the future of coffee on one server, right? The engineers weren’t so sure. “Kubernetes seems like overkill for what we’re doing,” said Lani, the lead engineer. “Overkill and gen AI are what investors love,” Rishi replied. “Make it happen.”

The team dove in. Kubernetes wasn’t just a tool—it was a lifestyle. The engineers were even considering matching K8s tattoos if this worked. For weeks, they worked tirelessly to containerize their app, learn Helm, and figure out how to connect pods to their database. They set up an EKS cluster on AWS, complete with auto-scaling, rolling updates, egress, and a custom monitoring dashboard using Prometheus and Grafana. It only cost $5,000 a month.

Their moment of triumph came when they deployed the new Kubernetes-powered system. Orders came rolling in! Unfortunately, so did the errors.

  • One pod randomly restarted in a crash loop, causing customers’ coffee orders to disappear mid-transaction.

  • Another pod wasn’t connecting to the database because the Helm chart YAML had a typo: “Password1!” instead of “P@ssword1!”.

  • The Horizontal Pod Autoscaler (HPA) spun up 20 pods after VJ placed a test order, maxing out their AWS budget in under an hour.

“I thought Kubernetes was supposed to fix scaling problems,” Lani muttered as she combed through logs trying to figure out why their pods kept getting OOM-killed. Things reached a boiling point when a delivery person named Mark placed an order for a double mocha latte drip to test the app. Instead of sending one notification, the system sent 10,000 pings to every delivery biker in the city. Chaos ensued. The delivery people’s phones buzzed non-stop, some of them crashed, and one unlucky person who lived near the AWS data center claimed his phone caught fire. (He later admitted it was already broken, but still, it wasn’t a great look for Brewly.)

Rishi called an emergency meeting. “Why is this so hard? We used to be able to deploy in five minutes!” he yelled. “Well,” Lani replied, “we’ve added a lot of complexity. Half the team is working just to maintain Kubernetes. The other half is debugging why it broke in the first place.”

“So what’s the solution?” Rishi asked.

Lani looked him in the eye. “We go back to the t3.micro. Engineering is hard sometimes.”

With a reluctant nod, Rishi agreed. The engineers tore down their Kubernetes cluster, migrated the app back to a single EC2 instance, and used Docker Compose to handle the containers. The cost dropped to $30 a month. Deployments were fast again. The app worked…for now.
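For the curious, here’s a minimal sketch of what that Docker Compose setup might look like. The service names, image, and credentials below are hypothetical stand-ins for Brewly’s Node.js app and PostgreSQL database; the shape is what matters: one app container, one database, and restart policies for basic resilience.

cat > docker-compose.yml <<'EOF'
services:
  app:
    image: brewly/app:latest            # hypothetical app image
    ports:
      - "80:3000"                       # expose the Node.js app on port 80
    environment:
      DATABASE_URL: postgres://brewly:changeme@db:5432/brewly
    depends_on:
      - db
    restart: unless-stopped             # survive crashes and reboots
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: brewly
      POSTGRES_PASSWORD: changeme       # use a real secret in practice
      POSTGRES_DB: brewly
    volumes:
      - pgdata:/var/lib/postgresql/data # keep data across container restarts
    restart: unless-stopped
volumes:
  pgdata:
EOF

docker compose up -d                    # one command, and deploys take five minutes again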

Orders skyrocketed to 53 a day, and customers were thrilled. Rishi went back to investors with a new pitch: “We’ve reinvented simplicity in cloud architecture.”

I know, the story is contrived and silly, but the struggle is real for a lot of companies. Even larger companies can feel this pain when they lack the skill set to enjoy the benefits of what Kubernetes can offer. This is just an observation, of course. Your mileage may vary, and maybe you do need Kubernetes to run your coffee-ordering app. But for the majority of us, simpler solutions often work just fine. Over-engineering adds unnecessary cognitive load, burns through budgets, and—let’s face it—leaves us staring at our code thinking, “Why is this so complicated?”

Enough with the lead-in, let’s talk about Kubernetes!

The Concept: Avoiding Over-Engineering with Kubernetes

Kubernetes is amazing—for companies like Google, Netflix, or anyone running a sprawling ecosystem of microservices. But for most startups or side projects, it’s like bringing a 20-ton crane to plant a flower bed.

Here’s why you probably don’t need Kubernetes (yet):

  • Overhead:

    Running Kubernetes isn’t just about spinning up a cluster—it’s about maintaining it.

    • You’ll need to configure tools like Helm, manage YAML files for every deployment, set up ingress controllers, and troubleshoot why pods are in CrashLoopBackOff at 2 AM (the sketch after this list shows what that looks like).

    • Plus, there is the operational cost: managed Kubernetes services like EKS or GKE still need thoughtful monitoring, scaling policies, and network configurations. If your current deployment process is something like "push to GitHub and deploy to a single VM," Kubernetes is a step function increase in complexity.

  • Underutilization:

    Kubernetes shines when orchestrating hundreds or thousands of services across a dynamic environment. But if you’re hosting a few APIs, a database, and maybe a frontend, you’re likely paying a "Kubernetes tax" without reaping the benefits.

    • Load balancers, auto-scaling groups, and even simple cloud-hosted container runtimes (like AWS Fargate or ECS) can handle traffic spikes with less fuss.

    • Starting with a small footprint allows you to spend less money and engineering effort in the beginning, when time and money are at a premium.

  • Team Knowledge Gap:

    Kubernetes requires a deep understanding of concepts like pod affinity, tolerations, network policies, and persistent volume claims—not to mention a working knowledge of debugging distributed systems.

    • Unless your team already has Kubernetes expertise, you’ll spend weeks or months learning and configuring it instead of delivering business value.

    • It also creates hiring challenges: finding engineers comfortable with Kubernetes is harder and often more expensive. Why introduce that pain early on when simpler alternatives exist?
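To make the overhead point concrete, here’s a sketch of the 2 AM CrashLoopBackOff session the first bullet alludes to. The pod name is hypothetical; the commands are the standard kubectl triage loop:

kubectl get pods                                           # spot the pod stuck in CrashLoopBackOff
kubectl describe pod brewly-api-7d4b9cf5-x2k8q             # check events: OOMKilled? bad image? failed probe?
kubectl logs brewly-api-7d4b9cf5-x2k8q --previous          # logs from the crashed container, not the restarted one
kubectl get events --sort-by=.metadata.creationTimestamp   # a rough timeline of what went wrong

None of this is hard in isolation; the tax is that somebody on your five-person team now has to be good at it.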

When Kubernetes Might Be the Right Tool

The fact is, I really like Kubernetes. It can become an invaluable tool as your systems and teams grow. If you’re managing dozens of services that need fine-grained control over deployment, scaling, and fault tolerance, Kubernetes can help you orchestrate and automate. So, here is a short list of reasons why you should use Kubernetes in your environment:

  • If you are deploying across multiple cloud providers or hybrid environments, Kubernetes provides a consistent abstraction layer, making your workloads portable.

  • If your traffic patterns are highly variable, Kubernetes can dynamically scale your workloads to handle spikes without over-provisioning resources.

  • If your workload has specific needs that aren’t covered by standard cloud services, Kubernetes has Operators and robust APIs for building the right tooling and mechanisms.

  • And for larger teams, it creates a standard workflow, ensuring engineers follow the same deployment practices, regardless of what they’re working on.

The key is to let your needs—not FOMO—drive the decision. Another lesson I have learned when unleashing the nautical beast is to be opinionated from the jump. It is easy to try to accommodate every team and service for the sake of speed. However, this causes fragmentation and a herd of pets (when what you want is cattle). Pick tooling and processes early; your future self will thank you. Remember, Kubernetes is powerful, but power without purpose is just overhead.
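To be fair, once you are all-in on Kubernetes, some of that power really is one command away. As a point of contrast with the Auto Scaling policy we’ll build in the hands-on below, here’s the rough Kubernetes equivalent (the deployment name is hypothetical, and it assumes your pods declare CPU requests, which the autoscaler needs to do its math):

# Keep average CPU at 50% across 2-10 replicas of a hypothetical deployment
kubectl autoscale deployment brewly-api --cpu-percent=50 --min=2 --max=10

The catch, as always, is everything you had to build and understand before that one command could work.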

Hands-On: A Simple Path to Production

Let's contrast the Kubernetes complexity with a straightforward application deployment on a cloud provider like AWS. I have provided all the necessary files and a sample application for you to try this on your own; everything can be found in the companion GitHub repository.

Goal: Deploy a resilient, auto-scaling web application without Kubernetes

What You'll Need:

  • AWS account

  • IAM user with appropriate permissions

  • AWS CLI configured with your credentials

📢 The following commands target the AWS Free Tier where possible, but your actual costs may differ. To avoid unexpected charges, thoroughly review each command and adjust instance types and counts to fit your budget before executing anything.
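Before running anything, it’s also worth a quick sanity check that your CLI is pointed at the account and region you expect:

aws sts get-caller-identity   # confirms your credentials work and shows the account ID
aws configure get region      # the region every command below will run in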

Once you have the basic prerequisites configured in AWS (user, access, VPC), it’s fairly trivial to launch some compute and add fault tolerance to it. AWS provides an easy way to set up a blueprint for all new instances (a launch template) and a controller (an Auto Scaling group) to manage those instances. Here's a condensed version of the commands to get your application up and running:

Step 1: Launch Your First Instance

DOCKER_IMAGE="your-registry/your-app:latest"   # placeholder: point this at your app's container image

USER_DATA=$(printf '#!/bin/sh\napt update\napt install -y docker.io\nsystemctl start docker\ndocker pull %s\ndocker run -d -p 80:80 %s' "$DOCKER_IMAGE" "$DOCKER_IMAGE" | base64 -w 0)   # -w 0 stops GNU base64 from wrapping lines; omit it on macOS

# Note: the JSON is double-quoted so $USER_DATA actually expands
aws ec2 create-launch-template \
  --launch-template-name my-launch-template \
  --version-description "Initial version" \
  --launch-template-data "{
      \"ImageId\": \"ami-0d4eea77bb23270f4\",
      \"InstanceType\": \"t4g.micro\",
      \"KeyName\": \"my-key-pair\",
      \"SecurityGroupIds\": [\"sg-12345678\"],
      \"UserData\": \"$USER_DATA\"
  }"

This creates a blueprint of what every instance will look like. For example, every instance will have Docker installed and, on start, will pull and run the container image named in $DOCKER_IMAGE.
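If you want to double-check what the template actually captured, including the user data, you can inspect the version you just created:

aws ec2 describe-launch-template-versions \
  --launch-template-name my-launch-template \
  --versions 1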

Step 2: Enable Auto-Scaling


aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name "SimpleAppASG" \
    --launch-template "LaunchTemplateName=my-launch-template" \
    --min-size 1 \
    --max-size 3 \
    --desired-capacity 2 \
    --vpc-zone-identifier "subnet-12345678"

This command creates an Auto Scaling group based on the launch template above. We set the initial capacity to 2 instances, with room to scale up to 3 under load. If any instance becomes unhealthy or fails, the ASG will automatically replace it to maintain the desired capacity.
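A quick way to watch the group do its thing is to list the instances it's managing:

aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names SimpleAppASG \
  --query "AutoScalingGroups[0].Instances[*].[InstanceId,HealthStatus,LifecycleState]" \
  --output table

Terminate one of those instances by hand and run it again; you should see a replacement come up on its own.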

Step 3: Set Scaling Policy

aws autoscaling put-scaling-policy \
  --auto-scaling-group-name SimpleAppASG \
  --policy-name my-scaling-policy \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
      "TargetValue": 50.0,
      "PredefinedMetricSpecification": {
          "PredefinedMetricType": "ASGAverageCPUUtilization"
      }
  }'

This last command attaches a target-tracking scaling policy to the ASG above. The policy tries to maintain an average CPU utilization of 50% across all instances. So, if one instance’s CPU spikes, another instance will be created to bring the group average back down (stopping at max-size). Once the average CPU utilization falls back below 50%, the policy will remove instances until the target is met or min-size is hit.
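You can confirm the policy is attached, and later watch the decisions it makes, with:

aws autoscaling describe-policies \
  --auto-scaling-group-name SimpleAppASG          # the target-tracking policy we just created

aws autoscaling describe-scaling-activities \
  --auto-scaling-group-name SimpleAppASG          # a human-readable log of scale-out and scale-in events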

OK, there are missing glue pieces that I am conveniently glossing over (most notably a load balancer in front of the ASG), but for this purpose, that's basically it! You now have a scalable setup that can:

  • Automatically handle traffic spikes

  • Replace failed instances

  • Scale based on CPU usage

  • Cost less than a basic Kubernetes cluster

This setup is more approachable than diving straight into Kubernetes. Another advantage is that if you decide to transition to Kubernetes in the future, you’ll already have a scalable, containerized environment in place.
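One last note in the spirit of the cost warning above: when you’re done experimenting, tear everything down so it doesn’t quietly bill you. Using the same resource names as above:

aws autoscaling delete-auto-scaling-group \
  --auto-scaling-group-name SimpleAppASG \
  --force-delete                                  # also terminates the instances the group is managing

aws ec2 delete-launch-template \
  --launch-template-name my-launch-template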

📢 Check out the GitHub repository for all the scripts and commands for this sample lab.

The Takeaway

A manageable system is one that uses just enough engineering. Start small, ship fast, and layer complexity as you grow. Kubernetes might have its time and place—but only when your needs demand it.

Reader Challenge

What’s a tool or technology you’ve seen overused in small systems? Reply to this email and let me know—I might feature your story in a future issue!

Until next time,

Bradley

Chief Advocate for Keeping It Simple