Data Transfer Hits Record of 186 Gb/s
PASADENA, Calif., Dec. 15, 2011 — Researchers have set a new world record with a sustained data transfer rate of 186 Gb/s, ushering in the next generation of high-speed network technology.
At the SuperComputing 2011 conference in Seattle in mid-November, an international team of high-energy physicists, computer scientists and network engineers — led by the California Institute of Technology (Caltech), the University of Victoria, the University of Michigan, the European Organization for Nuclear Research (CERN), Florida International University, and other partners — transferred data in opposite directions at a combined rate of 186 Gb/s in a wide-area network circuit. This is equivalent to moving 2 million gigabytes per day — fast enough to transfer nearly 100,000 full Blu-ray discs, each with a complete movie and all the extras.
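As a quick sanity check, the quoted figures can be worked out directly. This is a sketch using the article's 186 Gb/s rate; the 8-bits-per-byte decimal conversion and the per-disc capacity inference are assumptions, not from the article.

```python
# Sanity check of the throughput figures quoted in the article.
# 186 Gb/s is from the article; 1 GB = 8 Gb (decimal units) is assumed.

rate_gbps = 186                          # combined wide-area transfer rate, Gb/s
seconds_per_day = 24 * 60 * 60           # 86,400 seconds

gb_per_day = rate_gbps / 8 * seconds_per_day
print(f"Data moved per day: {gb_per_day:,.0f} GB")   # about 2 million GB

# The article's "nearly 100,000 Blu-ray discs" figure implies roughly
# 20 GB per disc, close to a single-layer Blu-ray's 25 GB nominal capacity.
implied_disc_gb = gb_per_day / 100_000
print(f"Implied size per disc: {implied_disc_gb:.1f} GB")
```

The result, just over 2,000,000 GB per day, matches the "2 million gigabytes per day" claim in the article.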
The researchers said the achievement will help establish new ways to transport the increasingly large quantities of data that traverse continents and oceans via global networks of optical fibers. These new methods are needed for the next generation of network technology — allowing transfer rates of 40 and 100 Gb/s — that will be built in the next couple of years.
An international team of researchers transferred data in opposite directions at a combined rate of 186 Gb/s in a wide-area network circuit. The rate is equivalent to moving 2 million gigabytes per day — fast enough to transfer nearly 100,000 full Blu-ray discs, each with a complete movie and all the extras. (Image: Stock)
“Our group and its partners are showing how massive amounts of data will be handled and transported in the future,” said Harvey Newman of Caltech, head of the physics team. “Having these tools in our hands allows us to engage in realizable visions others do not have. We can see a clear path to a future others cannot yet imagine with any confidence.”
The fast transfer rate is also crucial for dealing with the tremendous amounts of data coming from the Large Hadron Collider (LHC) at CERN. More than 100 petabytes of data have been processed, distributed and analyzed using a global grid of 300 computing and storage facilities located at laboratories and universities around the world. That data volume is expected to rise a thousandfold as physicists crank up the collision rates and energies at the LHC.
The video shows the 100G network demonstration between the University of Victoria and the Caltech booth at the SuperComputing 2011 Conference in Seattle. (Primary video production by Holly Leavett-Brown.)
The researchers said the key to discovery lies in picking out the rare signals that may indicate new physics from a sea of potentially overwhelming background noise caused by already understood particle interactions. To do this, individual physicists and small groups located around the world must repeatedly access — and sometimes extract and transport — multiterabyte data sets on demand from petabyte data stores. That is equivalent to grabbing hundreds of Blu-ray movies all at once from a pool of hundreds of thousands. The HEP team hopes the demonstration will pave the way toward more effective distribution and use of the masses of LHC data for discoveries.
For more information, visit: www.caltech.edu