
Developer-created AWS account - best practices implemented

  • mooneya9
  • Mar 1, 2024
  • 3 min read

Updated: Jul 22

It is common to see early-stage applications running in AWS environments that were originally set up by the same developers who built the product. While such configurations may function well in the short term, they often fall short of long-term operational and security best practices - especially as the system grows or begins to handle sensitive data.


One new client operated a Kubernetes-based application on AWS (EKS) that stored and processed personally identifiable information (PII). During a security review, several serious issues were identified. All pods were assigned public IP addresses, and the associated security groups were open to the world. This effectively exposed the entire application surface to the internet with no meaningful restriction beyond application-level security.


This design introduced significant risk. The only defence in place was the assumption that developers would never make a mistake - an unreliable foundation in any production environment.

To address this, we applied the principle of defence-in-depth: the concept of layering multiple security controls so that if one layer fails, others still prevent compromise. For example, even if a developer mistakenly opens a port on a security group, a subnet-level NACL can still block traffic, providing a second line of defence.
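The layering principle can be sketched as a toy model (illustrative only, not an AWS API): traffic reaches a workload only if every layer permits it, so a mistake at one layer is not enough to create exposure.

```python
def traffic_allowed(port: int, sg_open_ports: set, nacl_open_ports: set) -> bool:
    """Defence-in-depth as a logical AND: traffic passes only if BOTH the
    security group AND the subnet-level NACL allow the port."""
    return port in sg_open_ports and port in nacl_open_ports

# A developer mistakenly opens the database port on the security group...
sg = {80, 443, 3306}
# ...but the NACL still permits only the standard web ports.
nacl = {80, 443}

assert traffic_allowed(443, sg, nacl) is True    # legitimate traffic still flows
assert traffic_allowed(3306, sg, nacl) is False  # the second layer blocks the mistake
```

The point of the model is the AND: a single misconfiguration changes one operand, but compromise requires every operand to fail.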


A common issue seen in similar environments is the use of administrative tools, such as phpMyAdmin, installed alongside the web application containers. If these are left unprotected and exposed to the internet, the risk of compromise becomes high. Without multiple security boundaries, even a minor configuration oversight can lead to full environment exposure.


For this client, we re-architected the environment to isolate workloads in private subnets. Only the application load balancers remained in public subnets, exposing only standard web ports (80 and 443). Security groups were tightened to allow only necessary traffic, and subnet-level NACLs were configured to block non-required ports by default.


To improve detection and governance, we deployed AWS Config rules that triggered alerts if insecure configurations - such as overly permissive security groups - were applied. AWS Security Hub was also integrated with the client's ticketing system to ensure security findings were reviewed and actioned.
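The evaluation logic behind such a rule can be sketched as follows. This is a hypothetical simplification of what a custom Config rule's Lambda function would compute, not the rules actually deployed (AWS also ships managed rules in this space, such as `restricted-common-ports`); the rule set and port list here are illustrative.

```python
def evaluate_security_group(ingress_rules: list) -> str:
    """Flag a security group as non-compliant if any ingress rule is open
    to the world (0.0.0.0/0) on anything other than the standard web ports."""
    ALLOWED_PUBLIC_PORTS = {80, 443}
    for rule in ingress_rules:
        if rule.get("cidr") == "0.0.0.0/0" and rule["port"] not in ALLOWED_PUBLIC_PORTS:
            return "NON_COMPLIANT"
    return "COMPLIANT"

# A group exposing MySQL to the internet is flagged...
assert evaluate_security_group([{"cidr": "0.0.0.0/0", "port": 3306}]) == "NON_COMPLIANT"
# ...while public HTTPS plus internal-only database access passes.
assert evaluate_security_group(
    [{"cidr": "0.0.0.0/0", "port": 443}, {"cidr": "10.0.0.0/16", "port": 3306}]
) == "COMPLIANT"
```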


After the mini re-architecture, the client’s network had four layers of defence-in-depth:

  • The application load balancer did not listen on unnecessary ports.

  • Network ACLs blocked disallowed traffic at the subnet level.

  • Security groups tightly controlled allowed traffic per resource.

  • The workloads were moved to private subnets, eliminating direct internet exposure.


With this design, even if a developer installed an administrative tool that listened on an obscure port (e.g. 9000), there would be no viable path for external access without multiple layers of failure or misconfiguration.
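The four layers above can be expressed as a toy reachability check (the values and helper are illustrative, not real AWS configuration): external access exists only where every layer lines up.

```python
# The four layers after the re-architecture (illustrative values).
ALB_LISTENERS   = {80, 443}   # layer 1: the load balancer only listens on web ports
NACL_ALLOWED    = {80, 443}   # layer 2: subnet NACL blocks everything else
SG_ALLOWED      = {80, 443}   # layer 3: security groups allow only required traffic
WORKLOAD_PUBLIC = False       # layer 4: workloads sit in private subnets

def externally_reachable(port: int) -> bool:
    """A port is reachable from the internet only if every layer permits it."""
    direct  = WORKLOAD_PUBLIC and port in NACL_ALLOWED and port in SG_ALLOWED
    via_alb = port in ALB_LISTENERS and port in NACL_ALLOWED and port in SG_ALLOWED
    return direct or via_alb

assert externally_reachable(443) is True   # intended web traffic
# A rogue admin tool listening on port 9000 has no path in:
assert externally_reachable(9000) is False
```

Exposing port 9000 would require changing several independent layers at once, which is exactly the failure mode defence-in-depth is designed to prevent.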


Whilst these are the basics of defence-in-depth, AWS offers several further controls that can enhance the security posture; the right mix depends on the workload. Given the sensitivity of the data this client handled, we advised going beyond these measures. In the immediate term, however, they opted for the network re-architecture, with a task to consider the additional recommendations later.


Additional improvements were also introduced to raise operational maturity. One of these was the implementation of deployment approval controls. Previously, developers could push changes directly to production. We implemented a CI/CD workflow that required all production deployments to pass automated tests and include an approval step by designated change controllers. This not only enforced change governance but also aided future root cause analysis, as it allowed the team to rule out unapproved deployments as the source of potential outages.
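The gate enforced by the pipeline can be summarised as a simple predicate (a sketch of the policy, not the CI/CD tooling itself; the names are hypothetical): a production deployment proceeds only when automated tests pass and a designated change controller has approved.

```python
def deployment_approved(tests_passed: bool, approvers: set, change_controllers: set) -> bool:
    """Gate a production deployment: automated tests must pass AND at least
    one designated change controller must have approved the change."""
    return tests_passed and bool(approvers & change_controllers)

CHANGE_CONTROLLERS = {"alice", "bob"}  # hypothetical designated approvers

# A developer pushes with passing tests but no approval: blocked.
assert deployment_approved(True, set(), CHANGE_CONTROLLERS) is False
# Tests pass and a change controller approves: the deployment proceeds.
assert deployment_approved(True, {"alice"}, CHANGE_CONTROLLERS) is True
```

Because every production change must satisfy this predicate, any outage investigation can immediately rule out unapproved deployments as a cause.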


This case illustrates how adhering to best practices in cloud architecture can dramatically reduce risk and improve operational clarity. Many of the most valuable improvements go unnoticed - because they prevent incidents from occurring in the first place. In secure, well-managed systems, the absence of incidents is often the clearest indicator of the value that thoughtful design provides.

 
 
