
34% Reduction - AWS Cost Optimisation

  • Adrian Mooney
  • Feb 28, 2024
  • 2 min read

Updated: Jun 12

As AWS environments grow organically over time, many organisations accumulate inefficiencies that go unnoticed until spending becomes a concern. This is especially common in environments where developers have focused primarily on functionality and delivery, with less attention paid to cost structure. Without regular optimisation, these inefficiencies compound year after year - resulting in avoidable and often substantial waste.


In this case, a client engaged us to review a long-standing AWS account hosting their core platform. The system had been built several years earlier, with various developers deploying services over time. Like many such environments, the account had not undergone a structured cost optimisation review since its inception.


Several common inefficiencies were identified:

  • Old snapshots of EBS volumes and RDS databases, some dating back several years and no longer associated with active resources.

  • Oversized EC2 and RDS instances, chosen without alignment to actual workload requirements. In many cases, general-purpose instances were used where memory-optimised or CPU-optimised types would have been more cost-effective.

  • Multi-AZ configurations in development environments, doubling costs where high availability was unnecessary.

  • Lack of commitment-based discounts, such as Savings Plans or Reserved Instances, for predictable and stable workloads.

  • No use of spot instances for suitable workloads like Kubernetes jobs or other stateless compute tasks.

  • Aurora Serverless used for steady database workloads, resulting in higher long-term costs compared to reserved instances.
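The commitment and spot items above come down to simple arithmetic on monthly spend. A minimal sketch of that estimate, where the spend figure, eligible fractions, and discount rates are all illustrative assumptions rather than actual AWS pricing:

```python
def monthly_savings(on_demand_monthly, discounted_fraction, discount_rate):
    """Estimate savings from shifting a fraction of spend to a discounted model."""
    return on_demand_monthly * discounted_fraction * discount_rate

# Illustrative figures only - not real rates or this client's numbers.
spend = 10_000                               # $/month at on-demand pricing
spot = monthly_savings(spend, 0.30, 0.65)    # assume 30% of fleet is spot-eligible at ~65% off
ri = monthly_savings(spend, 0.50, 0.40)      # assume 50% is steady and reservable at ~40% off

print(f"Estimated: ${spot:.0f}/mo from spot, ${ri:.0f}/mo from commitments")
```

Even rough numbers like these are useful for prioritising which of the items above to tackle first.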


In the client's specific case, large volumes of aged EBS and RDS snapshots were discovered - many labelled along the lines of "backup before patching" - that had sat in storage, forgotten, for years. Several EC2 AMIs with no recent launches associated with them were also found. Each snapshot and AMI was assessed to confirm whether it could be safely deleted or needed to be retained for recovery or audit purposes.
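The first pass of that assessment can be automated: filter snapshots past an age cutoff and review the survivors by hand. A minimal sketch using plain dicts shaped like the output of boto3's `describe_snapshots` (the sample data here is made up for illustration):

```python
from datetime import datetime, timedelta, timezone

def find_stale_snapshots(snapshots, max_age_days=365):
    """Return snapshots older than the cutoff, oldest first.

    Each snapshot is a dict with at least "SnapshotId" and a
    timezone-aware "StartTime", mirroring boto3's describe_snapshots output.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    stale = [s for s in snapshots if s["StartTime"] < cutoff]
    return sorted(stale, key=lambda s: s["StartTime"])

# Illustrative data; in practice this would come from
# boto3.client("ec2").describe_snapshots(OwnerIds=["self"]).
snapshots = [
    {"SnapshotId": "snap-old", "StartTime": datetime(2019, 3, 1, tzinfo=timezone.utc),
     "Description": "backup before patching"},
    {"SnapshotId": "snap-new", "StartTime": datetime.now(timezone.utc),
     "Description": "nightly backup"},
]
for s in find_stale_snapshots(snapshots):
    print(s["SnapshotId"], "-", s["Description"])
```

The age threshold only shortlists candidates; the human review step for recovery and audit requirements still applies before anything is deleted.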


The RDS database was running on Aurora Serverless with autoscaling enabled. While appropriate for unpredictable traffic patterns, the client's system had shown consistent usage over multiple years. This made it a strong candidate for a standard instance type. We selected an instance class with sufficient capacity and reserved it under a three-year term, significantly reducing ongoing costs.
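The break-even logic behind that decision is worth making explicit. A hedged sketch comparing steady serverless capacity against a reserved instance - the per-unit prices and capacity figure below are hypothetical placeholders, not AWS rates or the client's actual numbers:

```python
HOURS_PER_MONTH = 730  # average hours in a month

# Assumed illustrative prices - check current AWS pricing for real values.
serverless_acu_hour = 0.12   # $/ACU-hour (hypothetical)
avg_acus = 8                 # steady average capacity observed over time
reserved_hourly = 0.45       # effective $/hour for a 3-year reserved class (hypothetical)

serverless_monthly = serverless_acu_hour * avg_acus * HOURS_PER_MONTH
reserved_monthly = reserved_hourly * HOURS_PER_MONTH
savings_pct = 100 * (serverless_monthly - reserved_monthly) / serverless_monthly

print(f"Serverless: ${serverless_monthly:.0f}/mo, "
      f"Reserved: ${reserved_monthly:.0f}/mo, saving {savings_pct:.0f}%")
```

The comparison only favours the reserved instance because usage is steady; with spiky or intermittent traffic, serverless scaling to zero can win instead.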


Additionally, many EC2 instances were running at only a fraction of their capacity. These were right-sized and converted to Reserved Instances where long-term use was expected, immediately reducing the per-hour cost.
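Right-sizing candidates are typically identified from utilisation metrics. A minimal sketch that flags instances whose average CPU stays under a threshold, using hard-coded sample datapoints where real ones would come from CloudWatch's CPUUtilization metric:

```python
def underutilised(instances, cpu_threshold=20.0):
    """Flag instance IDs whose average CPU sits below the threshold.

    `instances` maps instance IDs to lists of average-CPU datapoints (%),
    as one might collect via CloudWatch's CPUUtilization metric.
    """
    return [iid for iid, points in instances.items()
            if points and sum(points) / len(points) < cpu_threshold]

# Illustrative datapoints; instance IDs are made up.
metrics = {
    "i-web-1":   [12.0, 9.5, 14.2, 11.1],    # mostly idle - right-sizing candidate
    "i-batch-1": [78.0, 85.5, 90.2, 74.0],   # well utilised - leave as-is
}
print(underutilised(metrics))
```

CPU alone can mislead for memory-bound workloads, so in practice memory and I/O metrics should be checked before downsizing an instance.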


Overall, the optimisation reduced the client’s monthly AWS spend by approximately 34%. The savings quickly offset the cost of the analysis and implementation, providing a tangible return on investment within just a few billing cycles.


This case illustrates the value of regular cost optimisation in cloud environments, especially those that have evolved over time. Small inefficiencies can persist undetected for years, but when addressed methodically, they offer significant cost-saving opportunities without compromising system reliability or performance.
