Storage I/O can be a significant performance bottleneck for Hadoop* clusters, especially in hyperscale deployments like those at Twitter, where a single cluster can have up to 10,000 nodes and nearly 100 PB of logical storage. A typical Hadoop cluster at Twitter contains over 100,000 hard disk drives (HDDs), but this configuration was reaching an I/O performance limit: while HDD capacity has increased over time, HDD performance has not changed significantly.2 Because a drive's throughput and IOPS stay roughly flat as its capacity grows, I/O per GB falls with each capacity increase. Simply adding more, bigger HDDs therefore would not solve Twitter's scaling challenges; it would make things worse. Adding more spindles per node was not feasible either, due to space and power limitations.
Working in collaboration with an Intel engineering team, Twitter engineers conducted a series of experiments showing that storing the temporary files managed by YARN* (Yet Another Resource Negotiator*) on a fast SSD delivered significant performance improvements on existing hardware, up to a 50 percent reduction in runtime.3 The team also found that removing the storage I/O bottleneck let them use larger hard drives while increasing processor utilization, which in turn made higher-core-count processors worthwhile. Together, these changes improved storage performance and contributed to higher data center density by reducing the number of HDDs required. A configuration sketch for placing YARN temporary files on an SSD follows below.
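One common way to place YARN's temporary files on a fast SSD is to point the NodeManager's local directories at the SSD mount. The fragment below is a minimal yarn-site.xml sketch: yarn.nodemanager.local-dirs is the standard YARN property controlling where NodeManagers keep localized resources and intermediate data, while the /mnt/ssd0 path is a hypothetical example, not Twitter's actual layout.

    <configuration>
      <!-- Store NodeManager scratch data (localized resources, shuffle
           spill, and other temporary files) on the SSD. The path is a
           hypothetical mount point; adjust it to your hardware. -->
      <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/mnt/ssd0/yarn/local</value>
      </property>
    </configuration>

The property accepts a comma-separated list of directories, so nodes with multiple SSDs can spread temporary data across them; NodeManagers must be restarted for the change to take effect.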
Higher density leads to total cost of ownership (TCO) savings through energy efficiency, fewer racks, and a smaller data center footprint. Overall, Twitter expects that caching temporary data and increasing core counts will result in approximately 30 percent lower TCO and over 50 percent faster runtimes, compared to its legacy production cluster configuration.1
Read the white paper: Boosting Hadoop* Performance and Cost Efficiency with Caching, Fast SSDs, and More Compute