The “No Power Struggles Project” sounds like some utopian political system where different factions work for the common good. In fact, it's the name an HP researcher gives to his dream of a harmonious data center.
Researcher Parthasarathy Ranganathan foretells a future in which power management features will be built into the processor, memory, server, software and cooling systems. Coordination will be paramount. “What happens if you turn all these elements on at the same time?” the principal research scientist at HP Labs asks. “How do I make sure that the system doesn't explode?”
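One way to picture the coordination Ranganathan describes is a shared power budget that each subsystem has to check against before ramping up. The sketch below is purely illustrative (the component names and wattage figures are hypothetical, not HP's design): a coordinator grants headroom only while the total stays under the facility's limit, so everything cannot simply turn on at the same time.

```python
# Purely illustrative sketch of a holistic power coordinator
# (hypothetical; not HP's design). Subsystems request headroom from
# a shared budget before raising their power state, so they cannot
# all ramp up at once past the facility's limit.

class PowerCoordinator:
    def __init__(self, budget_watts):
        self.budget = budget_watts      # total power the facility can deliver
        self.granted = {}               # subsystem -> watts currently granted

    def used(self):
        return sum(self.granted.values())

    def request(self, subsystem, watts):
        """Grant the increase only if the shared budget still has room."""
        if self.used() + watts <= self.budget:
            self.granted[subsystem] = self.granted.get(subsystem, 0) + watts
            return True
        return False                    # caller must stay in a lower power state


coordinator = PowerCoordinator(budget_watts=10_000)
for subsystem, demand in [("processors", 4_000), ("memory", 2_000),
                          ("cooling", 3_000), ("storage", 2_500)]:
    ok = coordinator.request(subsystem, demand)
    print(f"{subsystem}: {'granted' if ok else 'deferred'}")
# The last request is deferred rather than pushing the total past 10 kW.
```

In a scheme like this, a subsystem that cannot get headroom waits in a lower power state instead of tripping the facility's limit.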
Such power management systems will have to operate holistically, with no one component working against another, Ranganathan says.

He is just one of many researchers at the tech industry's biggest labs looking beyond virtualization, multicore processors and other established technologies to see how future data centers will handle increasing demands for processing capability and energy efficiency while simplifying IT. Another is Laura Anderson, IS manager at IBM's Almaden Research Center. “I think we're on the cusp of another revolution,” she says. “We're talking about doing something to simplify and integrate these things in a way so that mere mortals can manage them.”
Cloud computing
Cloud computing, one approach Almaden researchers are pursuing, has already manifested itself in the Blue Cloud initiative IBM launched three months ago. Under the Blue Cloud architecture, enterprises get Internet-like access to processing capacity drawn from a large pool of servers, both physical and virtual. Because they do not have to add machines locally, enterprises save the cost of powering up and outfitting new computing facilities. Cloud computing also could help reduce ongoing energy consumption, because enterprises no longer need to provision capacity they use only part of the time.
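The energy argument comes down to simple arithmetic: capacity sized for peak demand draws power around the clock, while pooled cloud capacity can track the load that actually shows up. The numbers below are made up for illustration, not IBM's figures:

```python
# Back-of-the-envelope comparison (illustrative numbers only, not
# IBM's) of energy used by locally provisioned peak capacity vs.
# cloud capacity that tracks actual hourly demand.

hourly_load = [20] * 16 + [100] * 8    # servers needed each hour: 16 quiet hours, 8 peak hours
watts_per_server = 300                 # assumed average draw per server

# Local provisioning: enough servers for the peak, powered all day.
local_kwh = max(hourly_load) * watts_per_server * 24 / 1000

# Cloud-style provisioning: only the servers actually needed each hour.
cloud_kwh = sum(hour * watts_per_server for hour in hourly_load) / 1000

print(f"peak-provisioned: {local_kwh:.0f} kWh per day")   # 720 kWh
print(f"load-tracking:    {cloud_kwh:.0f} kWh per day")   # 336 kWh
```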
This spring IBM will take the concept further, offering BladeCenter servers with power and x86 processors, and service management software – a “'Cloud in a Box,' so to speak,” says Dennis Quan, senior technical staff member at IBM's Silicon Valley Lab.
Cloud computing will mature in coming years as enterprises increasingly turn to IT to serve their markets, Quan says. Certainly Web 2.0 sites posting user-generated content will proliferate, driving the need for cloud computing. But demand will come from mainstream enterprises, too. “Financial services firms are saying, 'We've run out of space . . . so what can we do?'” he says. “They need to have a compute infrastructure that's scalable.”
Liquid cooling
Liquid cooling, once featured in IBM mainframes and Cray supercomputers, may be returning to data centers as an alternative to air conditioning, says Tommy Minyard, assistant director of advanced computing systems at the Texas Advanced Computing Center at the University of Texas at Austin.
In a white paper, data-center solutions provider 42U describes a variety of liquid-cooling approaches under development. They include modular liquid-cooling units placed between racks of servers; a replacement rear door for server racks laced with tubes carrying chilled water; and server racks with integrated power supply, power distribution and liquid cooling.
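The appeal of the chilled-water designs is basic thermodynamics: the heat a water loop carries away equals the flow rate times water's specific heat times how much the water warms up as it passes through. A rough estimate with assumed figures (not taken from the 42U paper):

```python
# Rough estimate (assumed figures, not from the 42U white paper) of
# how much heat a chilled-water rear door can carry away:
# heat removed = mass flow * specific heat of water * temperature rise.

flow_liters_per_minute = 30        # assumed chilled-water flow through the door
temperature_rise_c = 10            # assumed warming of the water across the door
specific_heat_water = 4186         # J per kg per degree C

mass_flow_kg_per_s = flow_liters_per_minute / 60   # 1 liter of water is about 1 kg
heat_removed_kw = mass_flow_kg_per_s * specific_heat_water * temperature_rise_c / 1000

print(f"about {heat_removed_kw:.0f} kW removed per rack")   # roughly 21 kW
```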
Sun Labs is researching liquid cooling but is looking for an environmentally friendlier alternative to Freon, says Ali Alasti, vice president of engineering for the systems group at Sun Labs.
“You're going to see a lot more of [liquid cooling] in the next five years, but [in a form] that is a little more friendly to the idea that we don't want people choking on some gas that may be dangerous to them,” Alasti says.
Computing without wires
Sun Labs is looking at a way to eliminate the copper wiring between processors with what it calls “proximity communication.” Today, signals travel from one chip to another over copper wires. With proximity communication, processor dies are placed so close to one another that data passes between them through capacitive coupling, eliminating the need for wiring. “The basic principle is to use capacitor coupling directly on the die to transfer data from one chip to another chip,” says Hans Eberle, a distinguished engineer at Sun Labs.
The technology is a couple of years away from being used in a product, Eberle says. But once in use, the result would be a hundredfold increase in I/O density and lower power consumption.
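Eberle's description gives enough to see where the power savings come from: each bit is moved by charging a tiny on-die coupling capacitor instead of a much larger copper trace and I/O pad, and switching energy grows with that capacitance (on the order of capacitance times voltage squared). The capacitance values below are illustrative assumptions, not Sun's measurements:

```python
# Rough per-bit energy comparison (illustrative capacitance values,
# not Sun's figures). Dynamic switching energy scales roughly with
# capacitance * voltage**2, so a tiny on-die coupling capacitor
# costs far less to charge than a copper trace between packages.

def switching_energy_joules(capacitance_farads, voltage):
    return capacitance_farads * voltage ** 2

copper_trace = switching_energy_joules(10e-12, 1.0)   # assume ~10 pF trace plus I/O pad
proximity    = switching_energy_joules(0.1e-12, 1.0)  # assume ~0.1 pF coupling capacitor

print(f"copper trace: {copper_trace * 1e12:.2f} pJ per bit")
print(f"proximity:    {proximity * 1e12:.2f} pJ per bit")
print(f"roughly {copper_trace / proximity:.0f}x less charge to move per bit")
```

The density claim presumably follows from geometry: capacitive coupling pads can be made much smaller than conventional off-chip I/O pads, so many more signals fit in the same area of the die.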