The evolving digital landscape, the energy required to support it, and the resulting carbon emissions are daunting. Over the past decade, the worldwide number of data centers grew from 500,000 to more than 8 million. By 2025, the IT industry could consume 20 percent of all electricity produced and account for up to 5.5 percent of the world’s carbon emissions.
While a majority of companies understand that climate change poses significant risk to their facilities and finances, only half of the companies surveyed by the Task Force on Climate-related Financial Disclosures factor climate change into their current risk management. This illustrates a real gap between knowing there is a problem and knowing how to fix it.
The proliferation of AI, machine learning, advanced data analytics, and other data-intensive technologies makes high-performance infrastructure that much more urgent, and future sustainable data center architectures will have to be optimized for these rapidly evolving workloads. For example, organizations still deploying storage area networks (SANs) and other disk-based technologies will need to transition to solid-state solutions built on flash memory. Ultra-fast, low-power flash is required to feed data to accelerators such as GPUs, which process AI calculations efficiently. Mechanical media simply cannot meet the performance requirements of these algorithms.
As the COVID-19 pandemic demonstrated, interruptions in the global supply chain put additional indirect pressure on organizations to optimize infrastructure. To avoid interruptions in workflow, organizations must maximize the use of existing and new infrastructure alike, and future-proof it by deploying resources that remain reliably in production longer.
Data center sustainability requires lower-power data performance at scale
The so-called “rip and replace” approach to infrastructure refresh in the tech industry is no longer reasonable, much less sustainable. That is not just because of budgets, government policies, or the quarterly requirement to report on corporate sustainability efforts. Application-level technologies are evolving so rapidly that planning infrastructure for the next three to five years is far less predictable than it was … well, three to five years ago.
In addition, during the past decade-plus of infrastructure refreshes, the industry had the chance to simply replace SATA/SAS HDDs with SATA/SAS SSDs. For a long time, however, flash memory was far more expensive than HDD, which made flash suitable primarily for workloads demanding the highest performance and difficult to justify at scale.
The industry made compromises on price and performance. Though HDDs could not compete on performance, they were highly competitive in overall TCO at scale. Now that flash is nearing HDD prices, could the industry return to the model of scaling large numbers of slow, low-capacity drives? In short, no, because the capacity of flash drives has increased exponentially over that time, enabling IT users to fit as much as 1 PB of data performance on a single shelf, compared to what used to take an entire refrigerator-sized rack of HDDs.
This approach of using fewer (but faster) drives saves enormous amounts of power, and it does not require building a new energy-efficient data center. Instead, IT users equip each shelf with SSDs that are far faster than HDDs. Individually, an SSD draws more power than an HDD, but because only a few NVMe drives are needed, the collective power savings are extremely compelling and performance increases significantly.
While it’s true that HDDs have been chasing capacities above 32 TB, it has taken a decade to reach that density. Phison is already shipping 32 TB SSDs this year and plans to make 64 TB drives available next year. So IT leaders have a limited set of choices for the 1 PB scenario:
- Use a large number of slow HDDs to achieve high bandwidth, but burn a lot of power and take up a lot of rack space.
- Use a small number of high-density HDDs but have low bandwidth for 1 PB.
- Use a small number of very fast SSDs and exceed the DRAM bandwidth on the server.
No matter how you envision it, however, the collective performance of HDDs will not achieve the price, performance or low-power footprint of flash in this new scenario. In short, the HDD has already lost this contest.
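The trade-off can be made concrete with rough, back-of-the-envelope numbers. In the sketch below, every per-drive figure (capacity, sustained bandwidth, power draw) is an illustrative assumption, not a measured or vendor-published specification; the point is the shape of the comparison when a 1 PB tier must also deliver accelerator-class bandwidth, not the exact wattages.

```python
import math

def drives_needed(capacity_tb, bandwidth_gbps, drive_tb, drive_gbps):
    """Drives required to satisfy BOTH a capacity target and a bandwidth target."""
    return max(math.ceil(capacity_tb / drive_tb),
               math.ceil(bandwidth_gbps / drive_gbps))

# Assumed per-drive figures (illustrative only):
#   HDD: 20 TB, ~0.25 GB/s sustained, ~8 W
#   SSD: 32 TB, ~7 GB/s sustained, ~20 W
TARGET_TB, TARGET_GBPS = 1000, 200   # 1 PB tier with accelerator-class bandwidth

n_hdd = drives_needed(TARGET_TB, TARGET_GBPS, 20, 0.25)
n_ssd = drives_needed(TARGET_TB, TARGET_GBPS, 32, 7)

print(f"HDD: {n_hdd} drives, {n_hdd * 8} W")   # 800 drives, 6400 W
print(f"SSD: {n_ssd} drives, {n_ssd * 20} W")  # 32 drives, 640 W
```

Under these assumptions the HDD tier is sized by bandwidth, not capacity: matching the SSD tier's throughput takes hundreds of spindles and roughly an order of magnitude more power, even though each individual SSD draws more than each individual HDD.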
AI is key to the ongoing evolution of the data center
As taxing as AI is on infrastructure, it is also being built into the heart of new sustainable data center operations to significantly improve efficiency. Hyperscalers are leading the charge in developing new data center models that address the reality of climate change and meaningfully reduce emissions and power usage, creating a more sustainable model for growth.
Phison invests in innovative sustainability solutions for data centers
Phison has pioneered advancements in low-power, custom flash memory solutions, reaching new levels of design flexibility with its IMAGIN+ design service. IMAGIN+ delivers industry-leading data performance in lighter, smaller devices with lower power consumption than competing solutions.
Phison also collaborates closely with customers to ensure each storage system is designed to support their current needs. The company tracks the lifespan of individual SSDs so that components can be deployed, serviced and replaced predictably. Specialized configurations can be tailored to meet the data performance requirements of accelerator technologies, delivering maximum efficiency with a minimum footprint. In addition, the power states of Phison IMAGIN+ devices can be customized to run at peak efficiency for a given workload, ensuring that devices do not sit idle and waste power.
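A simple duty-cycle model shows why workload-aware power-state tuning matters. The sketch below computes time-weighted average drive power from an active draw, an idle draw, and the fraction of time the drive is busy; the specific wattages and the single-idle-state simplification are assumptions for illustration, not Phison specifications.

```python
def average_power(active_w, idle_w, duty_cycle):
    """Time-weighted average power for a drive that is busy `duty_cycle`
    of the time and sits in a low-power idle state otherwise."""
    if not 0 <= duty_cycle <= 1:
        raise ValueError("duty_cycle must be between 0 and 1")
    return duty_cycle * active_w + (1 - duty_cycle) * idle_w

# Assumed figures: 20 W active; 5 W in a default idle state versus 1.5 W in a
# deeper, workload-tuned low-power state; drive busy 30% of the time.
default_avg = average_power(20, 5.0, 0.3)   # ≈ 9.5 W
tuned_avg   = average_power(20, 1.5, 0.3)   # ≈ 7.05 W
print(f"default idle: {default_avg:.2f} W, tuned idle: {tuned_avg:.2f} W")
```

Because most drives in a large fleet spend the majority of their time idle, even a few watts saved per drive in the idle state compounds across thousands of devices.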
As the data center ecosystem continues to evolve, engineers will continue to wring efficiencies out of the hardware required to keep the world running. New inefficiencies will be introduced and will require further engineering to address them. Infrastructure designed around fast, flexible, low-power storage, tailored to meet the specific compute requirements, will continue to be foundational to the evolution of the data center.