DynamoDB Pricing and Cost Optimization Guide
Amazon DynamoDB, a fully managed NoSQL database service, is excellent for applications that require low-latency data access, such as web, mobile, IoT, and gaming apps. It handles large amounts of data quickly and efficiently, and its features include built-in caching, security, and backup support for web applications. Because DynamoDB supports ACID transactions, it can scale to the enterprise level, allowing for the development of business-critical applications.
AWS DynamoDB does offer a free tier, but paid usage is billed based on six factors:
- The amount of data storage
- The amount of data you read and write
- The amount of data transfer
- Backup and restore operations you perform
- DynamoDB streams
- The number of write request units replicated when using global tables
This article will explain DynamoDB's pricing structure and how to reduce your DynamoDB expenses by taking the above factors into account, so you can get the best performance at the lowest cost.
DynamoDB can become extremely expensive if not managed carefully. There are two pricing structures to choose from: provisioned capacity and on-demand capacity.
DynamoDB Provisioned Capacity
In this pricing plan, you're billed hourly for the read and write capacity units you provision. You can control costs by specifying the maximum throughput each database table is allowed to consume. Provisioned capacity can also adapt dynamically to increases in traffic, but only if you enable the autoscaling feature; otherwise, sudden spikes in data traffic may be throttled.
DynamoDB On-demand Pricing
This plan is billed per request unit (read and write request units). You're only charged for the requests you make, making this a truly serverless choice. It can become expensive when handling large production workloads, though. On-demand capacity scales automatically, which makes it a good fit when you're not sure how much traffic to expect.
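To see how per-request billing adds up, here is a minimal cost sketch. The per-million-request prices below are illustrative assumptions (on-demand rates vary by region and change over time; check the AWS pricing page for current figures):

```python
# Rough monthly cost estimate for DynamoDB on-demand mode.
# Prices are assumed placeholder values, not authoritative rates.
PRICE_PER_MILLION_WRITES = 1.25  # USD, assumed
PRICE_PER_MILLION_READS = 0.25   # USD, assumed

def on_demand_monthly_cost(write_requests: int, read_requests: int) -> float:
    """Estimate monthly cost from total write/read request units."""
    writes = write_requests / 1_000_000 * PRICE_PER_MILLION_WRITES
    reads = read_requests / 1_000_000 * PRICE_PER_MILLION_READS
    return round(writes + reads, 2)

# Example: 10 million writes and 50 million reads in a month
print(on_demand_monthly_cost(10_000_000, 50_000_000))  # 25.0
```

The linear relationship is the point: with on-demand mode, doubling your traffic doubles your bill, with no idle-capacity cost when traffic is zero.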
Knowing which capacity best suits your requirements is the first step in optimizing your costs with DynamoDB. Here are some factors to consider before making your choice.
You should use provisioned capacity when:
- You have an idea of the maximum workload your application will have
- Your application's traffic is consistent and does not require scaling (you can still enable the autoscaling feature, though scaling up will raise your bill)
You should use on-demand capacity when:
- You’re not sure about the workload your application will have
- You don’t know how consistent your application’s data traffic will be
- You only want to pay for what you use
You can learn more about how costs are calculated for both pricing structures on AWS's DynamoDB pricing page.
DynamoDB Pricing Calculator
There are a few options available to help estimate and calculate what you might pay for AWS DynamoDB. The best we've found is AWS's own AWS Pricing Calculator, which has a dedicated DynamoDB section. With this calculator, you can easily pick and choose DynamoDB features, the provisioned capacity, and read/write settings, and then get a clear estimate.
DynamoDB’s Read and Write Capacity
The read capacity of a DynamoDB table indicates how much you can read from it. Read capacity units (RCUs) are used to measure the read capacity of a table. For an item up to 4 KB in size, one RCU equals one strongly consistent read per second or two eventually consistent reads per second.
The write capacity of a DynamoDB table indicates how much you can write into it. Write capacity units (WCUs) denote one write per second for an item up to 1 KB in size.
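These rules translate directly into a sizing calculation: larger items consume multiples of the 4 KB (read) or 1 KB (write) quanta. A small sketch of the arithmetic:

```python
import math

def rcus_needed(item_size_kb: float, reads_per_second: int,
                strongly_consistent: bool = True) -> float:
    """RCUs needed to sustain a read rate. One RCU covers one strongly
    consistent read/second (or two eventually consistent reads/second)
    of an item up to 4 KB; larger items consume multiples of 4 KB."""
    units_per_read = math.ceil(item_size_kb / 4)
    if not strongly_consistent:
        units_per_read /= 2  # eventually consistent reads cost half
    return units_per_read * reads_per_second

def wcus_needed(item_size_kb: float, writes_per_second: int) -> int:
    """WCUs needed to sustain a write rate: one WCU per 1 KB written
    per second, rounded up to the next whole kilobyte."""
    return math.ceil(item_size_kb) * writes_per_second

# 6 KB items read 100 times/second, strongly consistent -> 200 RCUs
print(rcus_needed(6, 100))   # 200
# 2.5 KB items written 50 times/second -> 150 WCUs
print(wcus_needed(2.5, 50))  # 150
```

Note how rounding up matters: a 4.1 KB item costs as much to read as an 8 KB one, so keeping items just under a size boundary can meaningfully reduce capacity needs.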
Database behaviors can be tricky to measure, which makes scaling problematic. Underscaling your database can lead to catastrophe, while overscaling can lead to a waste of resources. The DynamoDB autoscaling functionality configures suitable read and write throughput to meet the request rate of your application. This means that when your workload changes, DynamoDB automatically adjusts and dynamically redistributes your database partitions to better fit changes in read throughput, write throughput, and storage.
Autoscaling is enabled by default when you create a DynamoDB table through the console, and you can activate it on any existing table. In DynamoDB, you define autoscaling by specifying the minimum and maximum levels of read and write capacity, as well as the desired usage percentage. When the amount of consumed reads or writes exceeds the desired usage percentage for two minutes in a row, the upper threshold alarm is activated. The lower threshold alarm is triggered when traffic falls below the desired utilization, minus twenty percent, for fifteen minutes in a row. When either alarm is raised, the autoscaling process begins.
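The alarm logic described above can be sketched as a simple function. This is a simplified illustration of the thresholds, not the actual service implementation (the real mechanism uses CloudWatch alarms and Application Auto Scaling target tracking):

```python
def autoscaling_signal(utilization_history, target=0.70):
    """Simplified sketch of DynamoDB autoscaling's alarm thresholds.
    utilization_history: per-minute consumed/provisioned ratios,
    most recent last. Returns 'scale_up', 'scale_down', or 'hold'."""
    # Upper alarm: consumption above the target for 2 consecutive minutes
    if len(utilization_history) >= 2 and all(
            u > target for u in utilization_history[-2:]):
        return "scale_up"
    # Lower alarm: traffic below target minus 20 points
    # for 15 consecutive minutes
    if len(utilization_history) >= 15 and all(
            u < target - 0.20 for u in utilization_history[-15:]):
        return "scale_down"
    return "hold"

print(autoscaling_signal([0.65, 0.80, 0.85]))  # scale_up
print(autoscaling_signal([0.40] * 15))         # scale_down
print(autoscaling_signal([0.60, 0.75]))        # hold
```

The asymmetry is deliberate: scaling up reacts within minutes to protect your application, while scaling down requires a sustained quiet period to avoid thrashing.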
Monitoring DynamoDB Resources
Resource monitoring means tracking AWS resources such as DynamoDB for latency, traffic, errors, and saturation. It makes scaling your DynamoDB database easier since you can get the metrics you need, like network throughput, CPU utilization, or read/write operations. For example, monitoring might reveal that your database experienced a high surge in traffic, suggesting that a large amount of data is being either read from or written to it. You could then increase the read capacity of your database to accommodate more read requests, or increase the write capacity to accommodate more write requests.
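In practice, the consumed-capacity metrics come from CloudWatch as sums over fixed periods. Below is a hypothetical helper, operating on pre-collected sample data rather than a live CloudWatch call, that turns such datapoints into a capacity recommendation:

```python
def capacity_recommendation(datapoints, provisioned_units, target=0.70):
    """Given CloudWatch-style ConsumedReadCapacityUnits datapoints
    (sums over 60-second periods) and the table's provisioned RCUs,
    flag whether average utilization exceeds a target threshold.
    Hypothetical helper; real data would come from CloudWatch."""
    per_second = [dp / 60 for dp in datapoints]  # convert sums to rates
    avg_utilization = sum(per_second) / len(per_second) / provisioned_units
    if avg_utilization > target:
        return "increase capacity"
    return "capacity ok"

# 100 provisioned RCUs, consuming roughly 90 RCU/s on average
print(capacity_recommendation([5400, 5520, 5280], 100))  # increase capacity
```

The divide-by-60 step trips people up: CloudWatch reports consumed capacity as a sum per period, so you must convert it to a per-second rate before comparing it against provisioned units.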
DynamoDB Cost Optimization
Now that you know some of the factors behind how you are billed when using AWS DynamoDB, here are some suggestions for making these factors work in your favor and ensuring DynamoDB is cost-effective.
Picking the Right Capacity
You may already know which capacity structure you’d like to adopt, but keep these points in mind as you make your final choice.
DynamoDB On-Demand Capacity Cost Optimization
According to Yan Cui's blog, on-demand tables cost roughly five to six times more per request than provisioned tables. If your workload maintains consistent usage with no unexpected spikes, but you're unsure about future usage, consider using provisioned mode with autoscaling enabled.
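You can reproduce a comparison like this yourself. The sketch below uses assumed single-region prices (check the AWS pricing page for current rates) and exploits the fact that one fully utilized WCU can perform 3,600 writes per hour:

```python
# Compare on-demand vs provisioned write costs per million requests.
# Both prices are illustrative assumptions for a single region.
ON_DEMAND_PER_MILLION_WRITES = 1.25  # USD, assumed
PROVISIONED_WCU_PER_HOUR = 0.00065   # USD, assumed

def provisioned_cost_per_million_writes(utilization: float) -> float:
    """Effective provisioned cost per million writes at a given
    utilization: one WCU can do 3600 writes/hour when fully used."""
    writes_per_wcu_hour = 3600 * utilization
    return PROVISIONED_WCU_PER_HOUR / writes_per_wcu_hour * 1_000_000

# At 100% utilization, provisioned writes cost about $0.18 per million,
# several times cheaper than the assumed on-demand rate.
full = provisioned_cost_per_million_writes(1.0)
print(round(ON_DEMAND_PER_MILLION_WRITES / full, 1))
```

Real tables never run at 100% utilization, which is why the observed gap varies; the lower your average utilization, the narrower the advantage of provisioned mode becomes.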
DynamoDB Provisioned Capacity Cost Optimization
If you use provisioned capacity and your capacity exceeds 100 units, consider purchasing reserved capacity. Compared to standard provisioned throughput pricing, reserved capacity delivers roughly a seventy-six percent discount over a three-year term and a fifty-three percent discount over one year.
Finding Unused DynamoDB Tables
Unused DynamoDB tables waste resources and unnecessarily raise your costs. You have two options for handling them. You can use the on-demand capacity mode to ensure you only pay for the database tables to which you make read/write requests. Alternatively, detect the unused tables and eliminate them. To do this, review the read/write operations on your tables: if a table has had no read/write activity in the last ninety days, treat it as unused.
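Once you have last-activity timestamps for each table (in practice derived from CloudWatch's consumed-capacity metrics), the ninety-day filter is straightforward. A sketch over pre-collected sample data:

```python
from datetime import datetime, timedelta, timezone

def find_unused_tables(table_activity, days=90, now=None):
    """Flag tables with no read/write activity in the last `days` days.
    table_activity: dict mapping table name -> datetime of last
    read/write, or None if never used. Hypothetical helper; in practice
    the timestamps would come from CloudWatch metrics."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    return sorted(
        name for name, last_used in table_activity.items()
        if last_used is None or last_used < cutoff
    )

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
activity = {
    "orders": datetime(2024, 5, 20, tzinfo=timezone.utc),        # active
    "legacy-events": datetime(2023, 11, 2, tzinfo=timezone.utc),  # stale
    "tmp-import": None,                                           # never used
}
print(find_unused_tables(activity, now=now))  # ['legacy-events', 'tmp-import']
```

Before deleting a flagged table, take an on-demand backup; the savings from removing an idle table dwarf the small cost of keeping a safety copy.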
Reduce DynamoDB Backup Needs
A custom backup pipeline that automatically backs up and prunes your data can considerably raise your costs. Such a pipeline constantly consumes WCUs to write backup copies of your DynamoDB tables and RCUs whenever the backup tables are read back, and because the backup tables have no specified resource limits, they grow without bounds. Instead, use DynamoDB's native backup feature, which creates backups without consuming your table's read/write capacity and bills predictably by the size of the backup, so you know in advance how much backup storage you need.
Using AWS Cheaper Regions
Some AWS regions are more expensive than others. If you’re not concerned about your data location, choose the cheapest region you can get.
The cheapest AWS regions include us-east-1, us-east-2, and us-west-2, where provisioned-capacity pricing is $0.25 per GB/month for storage, $0.00065 per WCU/hour, and $0.00013 per RCU/hour.
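Plugging the figures cited above into a monthly estimate shows how the three components combine (verify current rates on the AWS pricing page before relying on the numbers):

```python
# Monthly provisioned-capacity estimate using the us-east-1 figures
# cited above: $0.25 per GB-month, $0.00065 per WCU-hour,
# $0.00013 per RCU-hour. Rates are illustrative and may change.
HOURS_PER_MONTH = 730

def monthly_cost(storage_gb, wcus, rcus,
                 gb_month=0.25, wcu_hour=0.00065, rcu_hour=0.00013):
    """Combine storage, write, and read charges into one estimate."""
    storage = storage_gb * gb_month
    writes = wcus * wcu_hour * HOURS_PER_MONTH
    reads = rcus * rcu_hour * HOURS_PER_MONTH
    return round(storage + writes + reads, 2)

# A 50 GB table provisioned with 100 WCUs and 200 RCUs
print(monthly_cost(50, 100, 200))  # roughly $79/month
```

Notice that write capacity dominates here; WCUs cost five times as much per hour as RCUs, so write-heavy workloads benefit most from capacity tuning.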
Amazon DynamoDB can be an important tool for your software projects, but it can also be an expensive tool if you’re not careful. Following these tips to optimize your costs can help you keep your budget down and free you up to focus on other aspects of your business.
Manage, track, and report your AWS spending in seconds — not hours
CloudForecast's focused daily AWS cost monitoring reports help busy engineering teams understand their AWS costs, rapidly respond to any overspends, and surface opportunities to save.