Which AWS Services Are Overkill for Your Drupal Site (and What to Use Instead)

Running Drupal on AWS gives you flexibility, scale, and speed. But it also gives you a big opportunity to overspend, especially when you start using services that don’t match your real needs. A lot of teams plug in high-end AWS services, thinking they’re “best practice,” when in reality, they’re just unnecessary for how Drupal actually works.

If you’re on a Drupal on AWS setup, it’s time to clean house. This article breaks down what’s overkill, what’s better, and how to avoid paying for things that add zero value to your site.

AWS RDS with Provisioned IOPS: Overkill for Most Drupal Sites

Unless you're running a high-transaction commerce platform or have unpredictable spikes in database queries, you likely don’t need RDS with provisioned IOPS. Drupal’s queries are mostly read-heavy and can be heavily cached. For most business sites, standard RDS with general-purpose SSD storage (gp3) works just fine.

Instead of overprovisioning for speed you won’t use, optimize your Drupal Views and caching layers. You’ll reduce the query load and get better performance with fewer resources. And if you must scale, consider Aurora Serverless instead; it adjusts to load automatically and often costs less.
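If you’re already paying for io1 or io2 storage, stepping down is a single API call. Here’s a minimal boto3 sketch, assuming a hypothetical instance identifier of drupal-db; test the change on staging first, since storage modifications can take a while to apply:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Check what storage the instance is using today ("drupal-db" is a placeholder).
db = rds.describe_db_instances(DBInstanceIdentifier="drupal-db")["DBInstances"][0]
print(db["StorageType"], db.get("Iops"))

# Switch from provisioned IOPS (io1/io2) to general-purpose gp3.
rds.modify_db_instance(
    DBInstanceIdentifier="drupal-db",
    StorageType="gp3",
    ApplyImmediately=True,
)
```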

Amazon OpenSearch Service (Formerly Elasticsearch): Too Much for Search

Elasticsearch is powerful but expensive, and for most Drupal sites, it’s simply too much. If you’re using it just to improve basic site search, you’re wasting money. It also comes with overhead: memory tuning, index monitoring, and unplanned outages that can break search entirely.

Stick with Search API Solr, which integrates natively with Drupal and runs well on smaller EC2s or even managed Solr platforms. You get fast, relevant search without a heavyweight bill. And if your site doesn’t need deep filtering or faceted search, Drupal’s built-in search can still be good enough with a bit of tuning.
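Once Solr is up, keeping an eye on it doesn’t require heavy tooling either. A small sketch using Solr’s standard ping handler; the host, port, and core name are placeholders:

```python
import json
import urllib.request

# Placeholder URL: adjust host, port, and core name ("drupal") for your setup.
SOLR_PING = "http://solr.internal:8983/solr/drupal/admin/ping?wt=json"

with urllib.request.urlopen(SOLR_PING, timeout=5) as resp:
    status = json.load(resp)["status"]

print("Solr status:", status)  # "OK" when the core is healthy
```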

AWS Redshift: A Misfit for Drupal Reporting

Redshift is built for massive-scale analytics and data warehouses, not CMS reporting. If you’ve plugged Redshift into your Drupal stack to run basic content reports or user dashboards, you’re misapplying the tool.

Instead, log structured data to S3, then query it with Athena or pipe it into a lightweight BI tool. Most of Drupal’s reporting needs, like content trends, user engagement, or editorial performance, can be handled with native database queries or external analytics tools like Matomo or GA4.
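As an example, once an Athena table is defined over those S3 logs (via CREATE EXTERNAL TABLE or a Glue crawler), a weekly content report is a single query. A sketch with hypothetical database, table, and bucket names:

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# "drupal_logs", "page_views", and the results bucket are placeholders.
response = athena.start_query_execution(
    QueryString="""
        SELECT path, COUNT(*) AS views
        FROM page_views
        WHERE event_date >= date_add('day', -7, current_date)
        GROUP BY path
        ORDER BY views DESC
        LIMIT 20
    """,
    QueryExecutionContext={"Database": "drupal_logs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/drupal/"},
)
print("Query started:", response["QueryExecutionId"])
```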

AWS Lambda for Drupal Cron Jobs: More Complex Than It’s Worth

Yes, you can run your Drupal cron jobs in AWS Lambda. But should you? Probably not. Cron jobs in Drupal are already handled by its native queue system or scheduled via standard Linux crontab on EC2. Moving this to Lambda adds unnecessary complexity and makes debugging harder.

If your cron jobs are bloated, the solution isn’t Lambda. It’s streamlining what you’re doing in them. Break up large jobs, monitor execution time, and keep them stateless. You’ll avoid timeouts and still run them efficiently on a basic EC2 instance.
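A cheap way to monitor execution time is a thin wrapper around drush cron, called from an ordinary crontab entry. A minimal sketch; the docroot and schedule below are placeholders:

```python
#!/usr/bin/env python3
# Run from crontab, e.g.:
#   */15 * * * * /usr/local/bin/drupal_cron.py >> /var/log/drupal-cron.log 2>&1
import subprocess
import time

DRUPAL_ROOT = "/var/www/html"  # placeholder: point at your Drupal docroot

start = time.monotonic()
result = subprocess.run(
    ["drush", "cron"], cwd=DRUPAL_ROOT, capture_output=True, text=True
)
elapsed = time.monotonic() - start

print(f"drush cron exited {result.returncode} in {elapsed:.1f}s")
if result.returncode != 0:
    print(result.stderr)
```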

Using a Dedicated ELB for Every Environment: Just Burning Money

Many teams set up a full-blown Elastic Load Balancer for dev, test, and staging environments. That’s a fast way to inflate costs without getting real benefit. These environments don’t need full-scale load balancing or autoscaling; they just need access and uptime for testing.

Instead, run dev and staging environments on smaller single EC2 instances or even containers. Use an Application Load Balancer only where it matters: in production, where real users access the site.
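A quick audit can show which non-production load balancers you’re paying for. A boto3 sketch that assumes you tag resources with an Environment key:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

lbs = elbv2.describe_load_balancers()["LoadBalancers"]
arns = [lb["LoadBalancerArn"] for lb in lbs]

# describe_tags accepts at most 20 ARNs per call, so walk the list in chunks.
for i in range(0, len(arns), 20):
    for desc in elbv2.describe_tags(ResourceArns=arns[i:i + 20])["TagDescriptions"]:
        tags = {t["Key"]: t["Value"] for t in desc["Tags"]}
        if tags.get("Environment") in ("dev", "test", "staging"):
            print("Candidate for removal:", desc["ResourceArn"])
```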

CloudFront for Admin Interfaces: Unnecessary and Risky

CloudFront is excellent for caching and performance, but it’s not designed to sit in front of admin panels or backend logins. It introduces caching behaviors that can mess with authenticated sessions and form submissions. Plus, you’ll be paying for global edge delivery where it’s not needed.

Use CloudFront where it shines: on public assets such as images, documents, and other static files. For your admin URLs, route traffic directly through your load balancer or EC2 instance to keep things predictable.
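It’s worth verifying that admin paths really do bypass the CDN. A small check with a placeholder hostname; CloudFront reports its cache decision in the X-Cache response header:

```python
import urllib.error
import urllib.request

# Placeholder hostname; a 403 on admin paths is expected when not logged in.
for path in ("/user/login", "/admin/content"):
    req = urllib.request.Request(f"https://www.example.com{path}", method="HEAD")
    try:
        resp = urllib.request.urlopen(req, timeout=10)
    except urllib.error.HTTPError as err:
        resp = err  # HTTPError still carries the response headers
    # "Hit from cloudfront" here means the admin page was served from cache.
    print(path, resp.headers.get("X-Cache"), resp.headers.get("Cache-Control"))
```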

ECS or EKS for a Simple Drupal Site? Wait.

Containerizing Drupal makes sense if you're deploying frequently or managing dozens of microservices. But for a single or even multi-site Drupal setup with moderate changes, running ECS or EKS is often unnecessary. You end up spending more time maintaining containers, writing Dockerfiles, and debugging infrastructure than you save.

Stick with a standard EC2-based auto-scaling setup unless your DevOps maturity truly demands container orchestration. Simplicity saves money and downtime.
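For comparison, scaling a plain EC2 Auto Scaling group is one API call, with no cluster, scheduler, or Dockerfile involved. A sketch using a hypothetical group name of drupal-web:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# "drupal-web" is a placeholder Auto Scaling group name.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["drupal-web"]
)["AutoScalingGroups"][0]
print("Running:", len(group["Instances"]), "of", group["DesiredCapacity"])

# Bump capacity for a traffic spike; one call, nothing to orchestrate.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="drupal-web",
    DesiredCapacity=3,
    HonorCooldown=True,
)
```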

S3 Without Lifecycle Rules: A Silent Budget Killer

Using S3 for media and backups is smart. But forgetting to set up lifecycle policies? That’s how bills quietly rise. Drupal doesn’t auto-clean old assets or temp files stored in S3. Without rules, you’re paying for every unused MB sitting there forever.

Set up S3 lifecycle policies to move files to infrequent access or archive storage after a set period. Better yet, routinely audit your buckets and clear unused files from temporary folders or deprecated sites.
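Setting this up takes minutes. A boto3 sketch with a placeholder bucket name and prefixes; tune the day thresholds to your own retention needs:

```python
import boto3

s3 = boto3.client("s3")

# "my-drupal-assets" and the prefixes are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-drupal-assets",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-media",
                "Status": "Enabled",
                "Filter": {"Prefix": "media/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
                    {"Days": 90, "StorageClass": "GLACIER"},      # archive
                ],
            },
            {
                "ID": "expire-temp",
                "Status": "Enabled",
                "Filter": {"Prefix": "tmp/"},
                "Expiration": {"Days": 7},  # clear temp files after a week
            },
        ]
    },
)
```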

What This Means for You

If you’re using Drupal on AWS, this isn’t about cutting corners. It’s about aligning what you’re using with what your Drupal site actually needs. Cloud spending only becomes a problem when teams start plugging in services because they seem “enterprise-grade” or “future-ready.”

Drupal is powerful, but it’s also modular. You can build high-performing, scalable, and cost-efficient systems without drowning in AWS complexity. The real savings come from knowing when not to use a service.
