It's Still 'Intel Inside', But Now Intel's Inside The Cloud

Intel re-architects x86 toward the needs of the cloud and builds in management features for large-scale server power savings.

Charles Babcock, Editor at Large, Cloud

February 1, 2017



Intel used to be driven by the needs of the latest personal computer application or the data center's growing appetite for database applications and network connectivity. Those drivers are old hat now. The cloud is in the driver's seat, and like IT managers inside the enterprise, Intel's executives know they would be ill-advised to ignore it.

The need to manage servers in the cloud by the rack, or more likely by the hundreds of racks, is dictating changes to the way Intel builds chips and circuit boards that are slated to end up in the heart of the cloud. One of the main changes is the way it is instrumenting the CPUs and boards so that they produce more data on how they're running and how they can be managed.

Intel, for example, has placed sensors at both ends of the circuit board to report the temperature of cooling air as it enters the board and again as it exits. Multiple makers of power monitoring and control devices need that information. Intel makes it available through an API gateway and supplies a software development kit to the likes of Schneider Electric, Siemens or Emerson Network Power so they can capture the data flow and use it in their power management systems, said Jeff Klaus, general manager of Intel data center management solutions.
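A minimal sketch of what consuming that kind of sensor feed might look like, assuming a hypothetical gateway URL, endpoint path and field names (Intel's actual SDK and API are not detailed in this article):

```python
# Hypothetical example: poll a board-level telemetry gateway for inlet/outlet
# air temperatures and node power draw. The endpoint path and field names
# below are illustrative assumptions, not Intel's actual API.
import json
import urllib.request

GATEWAY = "https://dcm-gateway.example.com/api/v1"  # placeholder address

def read_board_telemetry(node_id: str) -> dict:
    """Fetch one node's thermal and power readings from the gateway."""
    with urllib.request.urlopen(f"{GATEWAY}/nodes/{node_id}/telemetry") as resp:
        return json.load(resp)

if __name__ == "__main__":
    reading = read_board_telemetry("rack07-node12")
    inlet = reading["inlet_temp_c"]    # air temperature entering the board
    outlet = reading["outlet_temp_c"]  # air temperature leaving the board
    watts = reading["power_watts"]     # instantaneous node power draw
    print(f"inlet {inlet} C, outlet {outlet} C, delta {outlet - inlet} C, {watts} W")
```

A facilities-side power management system would feed readings like these into its airflow, cooling and rack-level power decisions.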

Google is a leader in the use of machine learning to cut data center power consumption. See Google's Deepmind AI Cuts Data Center Power Bills.

"There's a flow of information out to the facilities infrastructure and power management systems" and it tells those systems what the needs are for airflow and cooling and how electricity being supplied to a rack might be conserved, Klaus said in an interview.

When running a data center in a country such as India, where a power outage may occur at any time of day, being able to monitor power consumption and issue commands affecting it can be a key part of operations. Indian data centers have generators on site that crank out backup power if the flow of electricity from the grid is interrupted. During the period when the generators are being fired up and phased in, most data centers rely on a reserve of battery power good for 12 or 15 minutes. If a problem is encountered phasing in the replacement power, extending the life of that battery reserve can be a big help, and that can be accomplished by commanding servers to reduce their cycle frequency, Klaus said.

Today, that's possible through the combination of Intel's circuit board instrumentation, software development kit and API gateway interfaced to a third party's tools and power management system, he said.
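As an illustration of what such a "reduce cycle frequency" command could translate to on a Linux host, the sketch below caps each core's maximum clock through the kernel's cpufreq interface. It is a generic stand-in, assuming a cpufreq-enabled system, not Intel's or any vendor's specific mechanism:

```python
# Illustrative sketch: cap CPU frequency on a Linux server during a power event,
# using the kernel's cpufreq sysfs interface. Requires root and an enabled
# cpufreq driver; this stands in for whatever command a power management
# system actually sends, and is not a specific vendor mechanism.
import glob

def cap_cpu_frequency(khz: int) -> None:
    """Lower scaling_max_freq on every core to reduce power draw."""
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_max_freq"):
        with open(path, "w") as f:
            f.write(str(khz))

def restore_cpu_frequency() -> None:
    """Restore each core's cap to its hardware maximum once grid power is back."""
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq"):
        with open(f"{path}/cpuinfo_max_freq") as f:
            hw_max = f.read().strip()
        with open(f"{path}/scaling_max_freq", "w") as f:
            f.write(hw_max)

# Example: drop all cores to 1.2 GHz while the facility is running on battery.
# cap_cpu_frequency(1_200_000)
```

A workload slows down under a cap like this, but the battery reserve stretches further while generators come online.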

Another place with frequent, unpredictable power outages is Turkey. "In Turkey, they need to switch to diesel power frequently. So when they turn on the backup power, they cycle down the servers to conserve their batteries. A workload might take longer, but there's no catastrophic failure," he noted.

But a third party's power management system must be in place and ready to issue commands to the baseboard controller on server circuit boards to slow their cycles and cut their power consumption. Klaus isn't sure exactly how Amazon Web Services' EC2 or Microsoft's Azure cloud operations do this. The major cloud suppliers "hate any black box code inside their environment" or third-party code that the supplier has neither written nor understands. Consequently, "they look at what we've done and go and write their own tools" to manage their data centers, he said.
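For out-of-band control through the baseboard controller, one widely used route is the DCMI power-capping commands that tools such as ipmitool expose. The wrapper below is a rough sketch of that idea; the host name, credentials and 400 W limit are placeholder values:

```python
# Sketch: ask a server's baseboard management controller to enforce a power cap,
# using ipmitool's DCMI power commands over the network. The host, credentials
# and wattage are placeholders for illustration only.
import subprocess

def set_power_cap(bmc_host: str, user: str, password: str, watts: int) -> None:
    """Set and activate a DCMI power limit on one node's BMC."""
    base = ["ipmitool", "-I", "lanplus", "-H", bmc_host, "-U", user, "-P", password]
    subprocess.run(base + ["dcmi", "power", "set_limit", "limit", str(watts)], check=True)
    subprocess.run(base + ["dcmi", "power", "activate"], check=True)

# Example: cap one node at 400 W while the facility runs on backup power.
# set_power_cap("rack07-node12-bmc", "admin", "secret", 400)
```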

On the other hand, co-location providers and managed services providers, such as Rackspace, have collaborated closely with Intel to achieve advanced power management. Intel has produced its own Data Center Manager software from what it's learned of their practices in the field. "The leading co-location suppliers have helped us advance our power usage models. They've taken us in new directions that we would not have thought of on our own," he said.

Rackspace in particular "has helped us with our dynamic power mapping in the data center," he said.

IT managers need these modern systems to squeeze expense out of their data center operations, but to accomplish that, they need a way to talk to the facilities managers who supervise power delivery and the devices that manage that delivery.

High-performance servers in the cloud consume at least half of their peak power even when they're sitting idle, Klaus continued. Through a power management system that can bridge the gap between facilities management and IT operations management, IT managers can define activity thresholds and draw up actions that respond to slowdowns in the data center, scaling back power when full power is no longer needed.
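Such a threshold policy can be as simple as mapping recent utilization bands to power caps; the function below sketches the idea with invented bands and wattages, not figures from Intel or Klaus:

```python
# Illustrative threshold policy: translate a node's recent average utilization
# into a power cap, so mostly idle servers are scaled back instead of burning
# roughly half their peak power doing nothing. Bands and ratios are invented.
def choose_power_cap(avg_utilization: float, peak_watts: int) -> int:
    """Return a power cap in watts for the given average utilization (0.0-1.0)."""
    if avg_utilization < 0.10:       # effectively idle
        return int(peak_watts * 0.55)
    if avg_utilization < 0.40:       # light load
        return int(peak_watts * 0.75)
    return peak_watts                # busy: leave the node uncapped

# Example: a 500 W node averaging 8% utilization would be capped near 275 W.
print(choose_power_cap(0.08, 500))
```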

"We're adding instrumentation with each iteration of the chipset. In the future, there's going to be more information on how your data center is running," he said.

The Chinese gaming and ecommerce company Tencent is one of the early beneficiaries of more sophisticated stacking of servers in a rack combined with power management. The firm's data centers typically installed 15-18 servers in a 42U rack. Intel illustrated how upping the count to 20-25 servers yielded big savings in space, power usage, cabling and capital expense.

Intel's Data Center Manager software is licensed to Schneider Electric, Fujitsu, IBM, Lenovo and Dell, among others. Dell markets it as the Dell OpenManage Power Center.

With the growing sophistication of CPU and circuit board feedback, the market for data center infrastructure management systems is expected to grow at a compound annual rate of 21% to $2.86 billion by 2024, according to an Aug. 19 report by Transparency Market Research.

What once had to be done by hunch and experienced intuition is likely to become more of an exact science, with multiple tools and management systems ensuring that whatever power can be saved will be.


About the Author(s)

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive Week. He is a graduate of Syracuse University where he obtained a bachelor's degree in journalism. He joined the publication in 2003.
