Big data and in-memory technology company Terracotta has introduced BigMemory Go, a tool designed to help IT architects capitalize on "inexpensive" server RAM by moving data in-memory to accelerate application performance.
Deployable on as many servers as a development shop wishes, BigMemory Go ships with a 32GB per-instance production license. Hyoun Park, principal analyst at Nucleus Research, described it as a route for developers to in-memory solutions that can "substantially improve enterprise-grade data" without making a significant financial investment.
"BigMemory Go maps nicely to the volume and velocity aspects of Big Data. People want the real-time access that in-memory solutions can provide — and business data is growing at an incredible rate, so organizations have to face the challenge of scaling up their data infrastructure to meet this new pressure," said Nathaniel Rowe, research analyst at Aberdeen Group.
"Advances in server hardware and application design have led to a potential solution to this issue: in-memory computing. Aberdeen's research into Big Data showed that organizations with in-memory computing were not only able to analyze larger amounts of data in less time than their competitors — they were literally orders of magnitude faster," added Rowe.
Terracotta's major play with BigMemory Go hinges on a promise of a less costly and more scalable alternative to disk-backed relational databases and specialized appliances with limited capacity. The product also provides Ehcache users with an upgrade including "substantially more" in-memory capacity as well as robust search and management capabilities.
NOTE: Ehcache is an open source, standards-based cache used to boost performance, offload the database, and simplify scalability. Its developers say Ehcache is robust, proven, and full-featured, qualities that have made it the most widely used Java-based cache.
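To make the database-offloading idea concrete, here is a minimal sketch of the cache-aside pattern that caches like Ehcache implement: check the in-memory store first, and hit the slower backing source only on a miss. This is an illustration using a plain Java map, not the Ehcache API itself, and the key and loader names are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal cache-aside sketch: serve reads from RAM and fall back to the
// slower backing source (e.g. a database) only on a cache miss.
public class CacheAside {
    private final Map<String, String> store = new ConcurrentHashMap<>();

    public String get(String key, Function<String, String> loader) {
        // computeIfAbsent invokes the loader only when the key is missing
        return store.computeIfAbsent(key, loader);
    }

    public static void main(String[] args) {
        CacheAside cache = new CacheAside();
        // First call misses and "loads" from the backing source
        System.out.println(cache.get("user:42", k -> "loaded:" + k));
        // Second call is served from memory; the loader never runs
        System.out.println(cache.get("user:42", k -> "should-not-run"));
    }
}
```

The second lookup returns the cached value without touching the backing source, which is where the performance and offloading benefits come from.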
BigMemory Go includes an in-memory data store that lets users keep as much data in memory as their servers can hold. The product boasts fast, predictable searches of in-memory data with extremely low latencies, and it also provides a fault-tolerant, persistent store that supports the latest SSD and disk technologies.
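The low-latency search claim rests on the data already being resident in RAM, so a query is a memory scan rather than a disk read. The sketch below illustrates that idea with a predicate scan over an in-memory map; it is an assumption-laden illustration only, not BigMemory Go's actual search API, and the `Order` type and threshold query are invented for the example.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

// Illustrative in-memory search: because entries live in RAM, a
// predicate scan runs with predictable, low latency.
public class InMemorySearch {
    // Hypothetical record type standing in for cached business data
    record Order(String id, String customer, double total) {}

    private final Map<String, Order> store = new ConcurrentHashMap<>();

    public void put(Order o) { store.put(o.id(), o); }

    // Return all orders above a threshold, scanned entirely in memory
    public List<Order> ordersOver(double threshold) {
        return store.values().stream()
                .filter(o -> o.total() > threshold)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        InMemorySearch idx = new InMemorySearch();
        idx.put(new Order("1", "acme", 120.0));
        idx.put(new Order("2", "globex", 40.0));
        System.out.println(idx.ordersOver(100.0).size()); // prints 1
    }
}
```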