With NoSQL databases leading the next generation of data stores, MongoDB and DynamoDB are two very viable options for quite a number of use cases. While MongoDB is an open-source document store, DynamoDB is a managed key-value store offered as a service by Amazon. MongoDB, being open source with support for various platforms (cloud and in-house), offers higher control over the database compared to a managed datastore like DynamoDB. My colleague wrote an article, 5 Reasons Why DynamoDB Is Better Than MongoDB. In response, below I give five reasons to choose MongoDB over DynamoDB.
Reason 1: It’s all about the data model and indexes
Proper indexing is arguably the most important part of NoSQL database design. MongoDB allows you to have an arbitrary index on any of the fields. It also supports many other kinds of indexes, including compound indexes, multikey indexes on arrays, and geospatial indexes. As of version 2.6, MongoDB supports full-text indexes to enable faster text pattern searches.
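To make this concrete, here is a minimal sketch of the index variety MongoDB offers, written as pymongo-style index specifications. The field names (`name`, `city`, `tags`, and so on) are hypothetical; with pymongo you would pass each spec to `collection.create_index(spec)`.

```python
# pymongo-style index specifications: (field, direction-or-type) pairs.
# Direction 1 = ascending, -1 = descending; strings select special index types.
single_field = [("name", 1)]                         # index on any arbitrary field
compound     = [("city", 1), ("rating", -1)]         # compound index on two fields
multikey     = [("tags", 1)]                         # multikey when "tags" holds an array
geospatial   = [("location", "2dsphere")]            # geospatial index
full_text    = [("description", "text")]             # full-text index (MongoDB 2.6+)

# With a live collection, each would be applied like:
#   db.places.create_index(compound)
```

DynamoDB has no equivalent for the multikey, geospatial, or full-text variants above.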
DynamoDB, on the other hand, provides limited indexing capabilities. The primary key is indexed, and it now allows indexing on other attributes via Local Secondary Indexes (LSI) and Global Secondary Indexes (GSI), but it is nowhere near the capabilities of MongoDB. If you want efficient queries, it may be that DynamoDB's limited capabilities are simply not going to work for you.
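For comparison, every DynamoDB index key has to be declared as a typed attribute up front. The sketch below builds the parameters for a CreateTable call (with boto3 you would run `client.create_table(**params)`); the table and attribute names are hypothetical.

```python
# Request parameters for DynamoDB CreateTable (boto3: client.create_table(**params)).
# Every indexed attribute must be declared in AttributeDefinitions up front;
# an LSI in particular cannot be added after the table exists.
params = {
    "TableName": "GameScores",
    "AttributeDefinitions": [
        {"AttributeName": "UserId",    "AttributeType": "S"},
        {"AttributeName": "GameTitle", "AttributeType": "S"},
        {"AttributeName": "TopScore",  "AttributeType": "N"},
    ],
    "KeySchema": [
        {"AttributeName": "UserId",    "KeyType": "HASH"},   # hash (partition) key
        {"AttributeName": "GameTitle", "KeyType": "RANGE"},  # range (sort) key
    ],
    "LocalSecondaryIndexes": [{
        "IndexName": "ScoreIndex",
        "KeySchema": [
            {"AttributeName": "UserId",   "KeyType": "HASH"},  # must share the hash key
            {"AttributeName": "TopScore", "KeyType": "RANGE"}, # alternate sort order
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
    "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
}
```

Note how even the LSI is limited to reordering items within the same hash key; there is nothing here resembling MongoDB's arbitrary field indexing.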
Reason 2: Change is the only constant thing in life
You designed your tables and indexes according to your current knowledge of the application and how your users will use it. But guess what: as the application evolves, so should the database. Most of the time you will need to change the schema. Because MongoDB supports a flexible schema, developers have the freedom to change the structure of a document according to current requirements, without the overhead of updating a collection schema on the backend. Migrating already-present documents and indexes to a new schema, however, would require some downtime.
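The flexible-schema point can be illustrated with two document shapes in the same collection. The field names below are hypothetical; with pymongo, both documents insert as-is with no backend schema change.

```python
# Two documents destined for the same MongoDB collection. The second adds a
# nested object and an array field that the first never declared.
user_v1 = {"name": "Ada", "email": "ada@example.com"}

user_v2 = {
    "name": "Grace",
    "email": "grace@example.com",
    "preferences": {"theme": "dark"},   # new nested field, added later
    "tags": ["admin", "beta"],          # new array field, added later
}

# With pymongo, both inserts succeed without any schema migration:
#   db.users.insert_many([user_v1, user_v2])
```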
DynamoDB, on the other hand, is very rigid when it comes to changes in indexes. You pretty much have to delete and recreate the entire table, which is usually not an option for most production systems. Many users instead create new tables with the desired indexes, backfill the data, and change the application to point to the new tables. This can be an operational nightmare, to say the least.
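The backfill step alone is non-trivial. Below is a minimal, single-threaded sketch of it, written against the boto3 Table-resource interface (`scan` / `put_item`); a production migration would also need rate limiting, retries, and a strategy for writes arriving during the copy.

```python
def backfill(source_table, dest_table):
    """Copy every item from source_table to dest_table.

    Both arguments follow the boto3 DynamoDB Table-resource interface:
    scan() returns {"Items": [...]} plus an optional "LastEvaluatedKey"
    when more pages remain, and put_item(Item=...) writes one item.
    """
    resp = source_table.scan()
    while True:
        for item in resp["Items"]:
            dest_table.put_item(Item=item)
        # Scan results are paginated; follow LastEvaluatedKey until exhausted.
        if "LastEvaluatedKey" not in resp:
            break
        resp = source_table.scan(ExclusiveStartKey=resp["LastEvaluatedKey"])
```

Even this toy version has to handle scan pagination correctly, and it says nothing about keeping the two tables consistent while live traffic continues.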
Reason 3: In this case, size matters
DynamoDB has a lot of operational limits that most new users are not aware of. When they reach that scale, they can be in for a shock that their data can no longer grow. For example, DynamoDB supports a composite primary key consisting of a hash (partition) key and a range (sort) key. A lot of users love this kind of key because it allows them to:
1) Have a hash key that is not unique; only the combination of hash key + range key has to be unique.
2) Keep all data for the same hash key in the same partition, which makes queries limited to a single hash key fast.
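The single-hash-key query pattern in point 2 looks like the sketch below, expressed as the parameters for a Query call (boto3: `client.query(**params)`). The table and attribute names are hypothetical and carried over from the earlier example.

```python
# Request parameters for DynamoDB Query (boto3: client.query(**params)):
# fetch every item sharing one hash key, sorted by the range key descending.
params = {
    "TableName": "GameScores",
    "KeyConditionExpression": "UserId = :u",            # fixed hash key
    "ExpressionAttributeValues": {":u": {"S": "user-42"}},
    "ScanIndexForward": False,                          # newest/highest range key first
}
```

Because all matching items live in one partition, this query is cheap and fast, which is exactly why the pattern is popular.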
DynamoDB also supports local secondary indexes, but such indexes impose a restriction: a single hash key collection cannot grow beyond 10 GB.
There is also a restriction on the size of a single item, which is just 64 KB. Compared to MongoDB's limit of 16 MB per BSON document, that is really low, and it can put artificial restrictions on your database design.
Reason 4: Planning capacity for individual tables can be taxing
In theory, being able to plan capacity in terms of reads and writes per second sounds great. In DynamoDB, however, capacity planning is per table. When your application has around 100 tables, it can be quite taxing just to come up with the right read/write capacity for each one. User behavior also shifts with small changes to the frontend, which can result in some tables being used more than others. The onus is on you to find this out and change the capacity allocation.

For example, say you wrote a fantasy football app. The app is going well, and you have adjusted capacity based on how users are using the system. Now you decide to make a small UI change: a user's position in the leaderboard is displayed on the main page. This triggers a change in behavior where more users now visit the leaderboard, and suddenly your leaderboard requests are failing because the leaderboard table does not have enough capacity provisioned. In MongoDB, such a trivial change in behavior does not usually require you to change anything, as long as average engagement remains the same.
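Reacting to such a shift means issuing a per-table capacity change, sketched below as the parameters for an UpdateTable call (boto3: `client.update_table(**params)`). The table name and numbers are illustrative; with ~100 tables, this tuning exercise repeats for every table that misbehaves.

```python
# Request parameters for DynamoDB UpdateTable (boto3: client.update_table(**params)):
# bump the leaderboard table's read capacity after the UI change drove more reads.
params = {
    "TableName": "Leaderboard",
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 200,   # raised from an illustrative previous value of 50
        "WriteCapacityUnits": 50,   # writes unchanged
    },
}
```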
Reason 5: People want platform independence
While MongoDB can be deployed on any server, whether on AWS, Azure, Google Cloud, an in-house datacenter, or any other IaaS provider, DynamoDB is a proprietary solution developed and managed by AWS, so it can only be used on AWS (in most use cases). Also, once you start using DynamoDB you are married to the AWS platform: moving away from AWS will mean finding a replacement for DynamoDB (no database has feature parity with DynamoDB at the time of this writing) and almost rewriting your application. As a result, a lot of companies I have consulted for are staying away from DynamoDB, just in case they ever have to move out of AWS.
If you are looking for a NoSQL data store for enterprise needs, with complete control over the data and constant, reliable support, MongoDB is definitely a viable option, so give it a try.
WRITTEN BY CloudThat