In its Global Cloud Index released Oct. 15, Cisco forecasts that global cloud traffic will grow 4.5-fold, a 35 percent compound annual growth rate, by 2017. Today’s data centers, in which rows of servers sprawl over hundreds of thousands of square feet, already consume some 30 billion watts of electricity, The New York Times reported last year.
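As a quick sanity check on Cisco's figures, a 4.5-fold increase works out to roughly a 35 percent compound annual growth rate if the forecast window is five years (2012 to 2017, an assumption based on the report's baseline year):

```python
# Sanity check: a 4.5-fold increase over an assumed 5-year window
# (2012-2017) implies a ~35% compound annual growth rate (CAGR).
growth_factor = 4.5
years = 5

cagr = growth_factor ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # -> CAGR: 35.1%
```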
That is all the more reason to rethink data center design and size so that data centers require far less power and space, argued two University of California, San Diego researchers in the Oct. 11 issue of the journal Science.
Cloud computing is the fastest-growing segment of data center traffic, totaling about 1.2 zettabytes of annual traffic in 2012, according to Cisco’s annual report. Meanwhile, overall global data center traffic will grow threefold to 7.7 zettabytes by 2017. Cisco offers a little social math to illustrate just how much data traffic that is.
“A zettabyte is one billion terabytes. For context, 7.7 zettabytes is equivalent to:
- 107 trillion hours of streaming music -- about 1.5 years of continuous music streaming for the world’s population in 2017.
- 19 trillion hours of business web conferencing -- about 14 hours of daily web conferencing for the world’s workforce in 2017.
- 8 trillion hours of online high-definition (HD) video streaming -- about 2.5 hours of daily streamed HD video for the world’s population in 2017.”
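Cisco's music-streaming equivalence can be roughly verified. The sketch below assumes a world population of about 7.5 billion in 2017 (a figure not given in the article) and derives both the implied audio bitrate and the per-person listening time:

```python
# Rough check of Cisco's "social math": mapping 7.7 zettabytes onto
# 107 trillion hours of streamed music. The ~7.5 billion world
# population for 2017 is an assumption, not a figure from the article.
ZETTABYTE = 10**21            # bytes (1 ZB = one billion terabytes)
total_bytes = 7.7 * ZETTABYTE
music_hours = 107e12          # 107 trillion hours
population_2017 = 7.5e9       # assumed world population

# Implied audio bitrate if all 7.7 ZB were streamed music
bits_per_second = total_bytes * 8 / (music_hours * 3600)
print(f"implied bitrate: {bits_per_second / 1000:.0f} kbps")  # -> 160 kbps

# Continuous listening time per person
years_per_person = music_hours / population_2017 / (24 * 365)
print(f"per person: {years_per_person:.1f} years")  # -> 1.6 years
```

The implied ~160 kbps is a plausible streaming-audio bitrate, and the per-person result lands close to Cisco's "about 1.5 years"; the small gap reflects the rounded population assumption.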
In their commentary, electrical engineer Yeshaiahu (Shaya) Fainman and computer scientist George Porter proposed condensing the racks and racks of servers in today’s data centers onto a single chip. “These ‘rack-on-chips’ will be networked, internally and externally, with both optical circuit switching to support large flows of data and electronic packet switching to support high-priority data flows,” Fainman and Porter write.
Fainman is professor and chair of the Department of Electrical and Computer Engineering, and Porter is a research scientist in the Center for Networked Systems at the UC San Diego Jacobs School of Engineering.
[Image: Graduate student Qing Gu at work in the Fainman lab.]
Their proposed solution will require several significant technology advances. The most significant, said Porter, are how to network all the individual processors on a single chip and how to network multiple rack-on-chips to one another. “To handle Big Data processing and data-intensive applications, you've got to have an enormous amount of network bandwidth, and we're developing new technologies to deliver that bandwidth cheaper, with less power and heat, and in a smaller form factor than existing approaches,” Porter said.

Fainman’s lab, in collaboration with several other universities as part of the Center for Integrated Access Networks, has been developing the densely integrated electronic, photonic and nanophotonic technology required to achieve this vision. Last year the lab built the smallest no-waste laser to date, a significant step toward future computer chips with optical communications. That breakthrough was reported in the journal Nature.
Motherboard reported on Fainman and Porter’s idea for “nanoservers” this week:
“But the meat and potatoes of Yeshaiahu Fainman and George Porter’s server-rack-on-a-chip vision is really about taking the existing framework for a server rack and recreating it at the nano-level. They say that miniaturizing all server components so that several servers can fit onto a computer chip would increase processing speed. Making circuit systems to support all these mini-components using advanced lithography is already feasible, but scientists have yet to realize nano-transceivers and circuit-switchers—the key components that transmit data. And while silicon chips are increasingly being used to transmit data-carrying light waves in fiber optic networks, efficiently generating light on a silicon chip is still early in its development. The researchers offer some solutions, like including light-generating nanolasers in the chip design.”