April 6, 2016 | By Larry Loeb

The role of the data center is changing in the cloud-based world. Once the exclusive repository for an enterprise's information, it has broadened to enable the storage and use of data that physically resides elsewhere. There is still a need for data to be controlled on-premises for security as well as operational reasons, but the elements that make up these data centers are changing.

It has long been an underlying assumption in computing that CPUs are fast and expensive while input/output (I/O) devices are slow and cheap. But storage class memories (SCMs), high-speed nonvolatile storage devices, may upend that assumption.

The Standard I/O Ways

Over the last 30 years, processor speeds have improved faster than those of I/O devices such as disks: CPUs became fast while disks stayed slow. That gap led to stratagems for hiding disk latency, such as keeping many I/O requests in flight and caching results so the processor is not left idle waiting on storage.
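As a rough illustration of that pattern (not drawn from the article itself), the sketch below keeps many reads in flight at once so the processor does not stall on any single slow request; the file path, block size and worker count are hypothetical.

```python
# Illustrative sketch: issue many disk reads concurrently instead of one at a time.
from concurrent.futures import ThreadPoolExecutor

def read_block(path, offset, size=4096):
    """Read one block from a (hypothetical) data file."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(size)

def prefetch_blocks(path, offsets, workers=16):
    """Keep many I/O requests in flight so the CPU is not stalled on any one of them."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(read_block, path, off) for off in offsets]
        return [f.result() for f in futures]

# Hypothetical usage: blocks = prefetch_blocks("data.bin", [0, 4096, 8192])
```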

There is a whole hierarchy of caching, depending on the level at which processing occurs: CPUs cache the contents of RAM, operating systems cache disk sectors in internal buffer caches, and application-level architectures load back-end databases into storage caches. All share the goal of reducing data latency so that data is available when needed for processing.
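At the application end of that hierarchy, the idea can be as simple as a small least-recently-used cache sitting in front of a slower backing store. The sketch below is a minimal, hypothetical example of that idea, not code from any particular product.

```python
# Illustrative sketch: an application-level LRU cache in front of a slower backing store.
from collections import OrderedDict

class TinyCache:
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.entries = OrderedDict()  # key -> value, kept in recency order

    def get(self, key, load_from_backend):
        if key in self.entries:
            self.entries.move_to_end(key)        # fast path: mark as recently used
            return self.entries[key]
        value = load_from_backend(key)           # slow path: hit the database or disk
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)     # evict the least recently used entry
        return value
```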

However, solid-state drives have already closed much of that gap, and their up-and-coming cousins, nonvolatile DIMMs, go further by interfacing with the CPU as if they were DRAM rather than disks, according to McObject. As a result, these devices can deliver a massive performance upgrade, and the technology is still improving in both capacity and speed. This upsets the apple cart.
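To see why DRAM-like access matters, consider the difference between issuing block I/O and simply reading or writing bytes in memory. The sketch below uses an ordinary memory-mapped file as a stand-in; on a real nonvolatile DIMM this would typically be a file on a DAX-capable filesystem, which is an assumption here rather than something the article specifies.

```python
# Illustrative sketch: byte-addressable access to storage via mmap.
import mmap

def update_record(path, offset, payload: bytes):
    """Store bytes directly into a memory-mapped file: a memory write, not a block write."""
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), 0) as mem:
            mem[offset:offset + len(payload)] = payload  # looks like a store to memory
            mem.flush()  # ask the OS to persist the dirty pages
```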

A paper published in the Journal of the ACM looked at the implications of this change for data center architecture. The authors enumerated different kinds of systems that can take advantage of SCM characteristics, going beyond simply throwing faster memory at the problem.

How to Balance I/O

One of the first options is balanced systems, which address the capacity shortfalls and other bottlenecks that surface once SCMs are present. That means enough CPU cores must be available, and the network must provide enough connectivity between CPUs and storage, for data to be served out of that storage at full speed.

Replacing a slow disk with a faster SCM merely shifts the performance bottleneck elsewhere; eliminating bottlenecks through balance deals with the logjams wherever they occur. TritonSort is one example of this kind of system.
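A quick back-of-envelope check makes the balance argument concrete. The figures below are made-up placeholders, not measurements from the paper, but they show how one might size network capacity against the aggregate throughput of a node's SCM devices.

```python
# Illustrative sketch with hypothetical numbers: size the NICs to match SCM throughput.
def required_nics(scm_devices, scm_gbps_each, nic_gbps):
    """How many NICs does a node need to ship data off as fast as its SCM serves it?"""
    total_storage_gbps = scm_devices * scm_gbps_each
    return -(-total_storage_gbps // nic_gbps)  # ceiling division

# e.g. 4 devices at 20 Gbit/s each against 25 Gbit/s NICs -> 4 NICs
print(required_nics(scm_devices=4, scm_gbps_each=20, nic_gbps=25))
```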

The second type of system is contention-free I/O-centric scheduling, which allows multiple CPUs to access a single SCM without serializing accesses across all the CPUs involved. Recent research on mTCP showed how this approach can work.
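A minimal sketch of the underlying idea, using per-worker queues so that no global lock serializes access, appears below. It is a hypothetical toy in the spirit of mTCP's per-core design, not mTCP's actual API.

```python
# Illustrative sketch: shard requests across per-worker queues to avoid a global lock.
import queue
import threading

NUM_WORKERS = 4
request_queues = [queue.Queue() for _ in range(NUM_WORKERS)]

def submit(request_id, payload):
    """Route each request to one worker's queue so workers never contend with each other."""
    request_queues[hash(request_id) % NUM_WORKERS].put((request_id, payload))

def worker(q):
    while True:
        request_id, payload = q.get()
        # ... perform the I/O against this worker's own partition of the device ...
        q.task_done()

for q in request_queues:
    threading.Thread(target=worker, args=(q,), daemon=True).start()
```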

Horizontal scaling and placement awareness distribute data across the cluster and proactively move it for better load balancing. Higher-speed SCM can cache data from lower-speed sources, with placement changing as the workload progresses. Distributed storage systems have long dealt with the same issues, but SCM makes them more acute.
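One common way to make placement both deterministic and easy to rebalance is rendezvous (highest-random-weight) hashing. The sketch below is a simplified, hypothetical illustration of that technique, with made-up node names.

```python
# Illustrative sketch: rendezvous hashing picks a stable home node for each key.
import hashlib

NODES = ["node-a", "node-b", "node-c"]

def place(key: str, nodes=NODES) -> str:
    """Choose the node responsible for a key; only keys owned by a removed node move."""
    def score(node):
        return hashlib.sha256(f"{node}:{key}".encode()).hexdigest()
    return max(nodes, key=score)

print(place("customer:42"))  # always maps to the same node for a given cluster
```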

Organizations may also use workload-aware storage tiering, which uses knowledge of local access patterns to balance performance, capacity and cost requirements.
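In its simplest form, such tiering can amount to counting accesses and promoting hot objects to the fast tier. The sketch below illustrates that idea with a made-up threshold; real systems would use far more sophisticated policies.

```python
# Illustrative sketch: promote frequently read objects to the SCM tier.
from collections import Counter

access_counts = Counter()
HOT_THRESHOLD = 100  # reads per tiering interval (hypothetical figure)

def record_access(obj_id):
    access_counts[obj_id] += 1

def choose_tier(obj_id):
    """Hot objects go to fast SCM; cold objects stay on cheaper capacity storage."""
    return "scm" if access_counts[obj_id] >= HOT_THRESHOLD else "capacity-disk"
```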

It Comes Down to the Storage Mix

All these techniques share the common goal of rebalancing workloads away from the older timing assumptions. SCMs are here to stay, and experts believe their performance will only increase in the future.

The challenge of rethinking how a data center functions is inextricably bound to the use of new technologies that aim to improve how real-world computing gets done.
