Caltech has an amazing history of scientific discoveries and breakthroughs. For many people, Caltech is synonymous with “rocket science.” Caltech and its partners in the LIGO collaboration made the first direct detection of gravitational waves, and Caltech astronomers uncovered evidence for a possible ninth planet. Caltech researchers also discovered the largest and farthest reservoir of water ever detected in the universe, some 30 billion trillion miles away in a quasar. The list is long and ever growing.
One exciting project underway is mapping the first light in the universe. In essence, this means establishing a snapshot of what the universe looked like as it came into being. Obviously, this is a monumental task, and its completion almost defies the imagination.
The project is based on measuring and studying the polarization spectrum of the cosmic microwave background (CMB). Building that understanding involves vast amounts of data and the combined work of scientists and powerful computers. Our advanced testing effort is distributed, so having a flexible, high-performance network is essential. We use a Niagara Networks packet broker as a key element in aggregating data from different sources.
We are designing an array of 20,000 detectors to capture analog CMB images and send them to graphics processing units that render digital data and derive useful information. Data from the detectors needs to reach the processing server reliably and efficiently. Any packet loss is devastating; even a single lost packet compromises the integrity of the data.
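To make that requirement concrete, here is a minimal sketch of how a receiving server might flag incomplete detector data. It assumes a hypothetical readout format in which each packet carries a detector ID and a sequence number; the actual detector and network formats are not described here.

```python
# Hypothetical illustration only: assumes each detector tags its readout
# packets with a (detector_id, sequence_number) header.

from collections import defaultdict

def find_gaps(packets):
    """Group packets by detector and report any missing sequence numbers.

    `packets` is an iterable of (detector_id, seq_no) tuples in arrival order.
    Returns a dict mapping detector_id -> sorted list of missing seq_nos.
    """
    seen = defaultdict(set)
    for det_id, seq_no in packets:
        seen[det_id].add(seq_no)

    gaps = {}
    for det_id, seqs in seen.items():
        expected = set(range(min(seqs), max(seqs) + 1))
        missing = sorted(expected - seqs)
        if missing:
            gaps[det_id] = missing
    return gaps

# Example: detector 7 dropped packet 3 somewhere between capture and server.
stream = [(7, 1), (7, 2), (7, 4), (12, 1), (12, 2)]
print(find_gaps(stream))   # {7: [3]}
```

The point is simply that a gap is detectable and, for our data, unacceptable; the network itself has to keep such gaps from happening in the first place.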
Right away, you can understand that the network needs the necessary throughput and speed and that, above all, it must be reliable. Those qualities have always been important to most organizations. But there is another quality that is just as essential. Because the research effort changes and grows, we need complete flexibility in the network. Basically, we need to be able to connect anything with anything and ensure that the speed, capacity and reliability continue as expected.
Perhaps we increase the number of detectors or their data rate? Maybe there is a change to the processing back-end? Different applications will have different speeds and capacities, so being able to “mix and match” various speeds and throughputs is important. We need the network to accommodate these changes, and we need that flexibility without a lot of overhead or work. The ease of making changes is something I think of as agility. To us, the network must provide both flexibility and agility.
With Niagara Networks, flexible provisioning means more than having a high-performance intelligent cross-connect and high-performance ports that can accommodate links of different speeds. Using an intuitive graphical user interface, the Niagara Networks packet broker lets us mix and match input and output ports instantly, with a few mouse clicks.
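As a conceptual illustration only (this is not the Niagara Networks interface or API), the idea of an any-to-any cross connect with mix-and-match speeds can be sketched as a remappable port table:

```python
# Conceptual sketch only -- not the Niagara Networks product interface.
# It models an any-to-any cross connect where input and output ports of
# different speeds can be re-mapped on the fly.

from dataclasses import dataclass, field

@dataclass
class CrossConnect:
    # port name -> line rate in Gbps, e.g. {"detectors-A": 10, "gpu-farm": 40}
    port_speeds: dict
    # input port -> output port
    mapping: dict = field(default_factory=dict)

    def connect(self, src, dst):
        """Map an input port to an output port, warning on a speed mismatch."""
        if self.port_speeds[src] > self.port_speeds[dst]:
            print(f"warning: {src} ({self.port_speeds[src]}G) is faster than "
                  f"{dst} ({self.port_speeds[dst]}G); oversubscription possible")
        self.mapping[src] = dst

    def disconnect(self, src):
        """Remove an existing mapping so the port can be repurposed."""
        self.mapping.pop(src, None)

# Re-provisioning amounts to editing the map rather than recabling.
xc = CrossConnect(port_speeds={"detectors-A": 10, "detectors-B": 10, "gpu-farm": 40})
xc.connect("detectors-A", "gpu-farm")
xc.connect("detectors-B", "gpu-farm")
print(xc.mapping)
```

That, in essence, is why changes that once meant recabling and downtime can instead be handled in minutes.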
You could say that the network is a key part of scaling our R&D capabilities. Most people wouldn’t think of it, and maybe they shouldn’t have to, but the network must serve as an enabler. Perhaps that is how it is in most organizations, including enterprises. People just expect that the network will work: that it will always be available and will always have the capacity and speed they need. Most commonly, you hear about network equipment in terms of speeds and feeds, but not a lot about flexibility and agility. For us, especially for aggregating data sources in an R&D environment, the network has to meet our performance requirements, and it has to offer flexibility and agility. I suspect we are not alone in these network aggregation needs. Whether in a research lab or in a datacenter, these are the new network values.