• Oak Ridge National Laboratory, the University of California, San Diego and The University of Texas at Austin partner with Dell to expand research computing and harness the power of Big Data for discovery
  • Institutions secure $18 million in National Science Foundation (NSF) funding for new supercomputing initiatives

At SC13, Dell reaffirmed its long-standing commitment to improving access to and use of high-performance computing (HPC) in research computing. Over the last five years, Dell and its research computing partners have combined integrated server, storage and networking solutions designed specifically for hyperscale and research computing environments with scalable, cost-effective usage models such as HPC-as-a-Service and HPC in the cloud. Together, these offerings simplify collaborative science, improve access to compute capacity and accelerate discovery for the research computing community.

Earlier this year, Dell took its commitment a step further, introducing Active Infrastructure for HPC Life Sciences, a converged solution designed specifically for genomics analysis, a highly specialized and rapidly growing area of research computing. The new solution integrates computing, storage and networking to shorten lengthy implementation timelines and can process up to 37 genomes per day, or 259 genomes per week.

Oak Ridge National Laboratory, the University of California, San Diego, The University of Texas at Austin, the University of Florida, Clemson University, the University of Wisconsin at Madison and Stanford University are a few of the hundreds of organizations using Dell's HPC solutions today to harness the power of data for discovery.

Oak Ridge National Laboratory Supercomputer Achieves I/O Rate of More Than One Terabyte Per Second
To boost the productivity of its Titan supercomputer, the fastest computer in America dedicated solely to scientific research, and to better support its 1,200 users and more than 150 research projects, the Oak Ridge National Laboratory (ORNL) Leadership Computing Facility needed a file system with high-speed interconnects to match the supercomputer's peak theoretical performance of 27 petaflops, or 27,000 trillion calculations per second. Working with Dell and other technology partners, ORNL upgraded its Lustre-based file system "Spider" to Spider II, quadrupling the file system's size and speed. It also upgraded the interconnects between Titan and Spider to a new InfiniBand fourteen data rate (FDR) network designed to be seven times faster and to support an I/O rate in excess of one terabyte per second.

The University of California, San Diego to Deploy XSEDE's First Virtualized HPC Cluster, Comet
The San Diego Supercomputer Center (SDSC) at the University of California, San Diego is deploying Comet, a new virtualized petascale supercomputer designed to meet pent-up demand for computing in areas such as the social sciences and genomics, where a broader set of researchers needs growing computing capacity. Funded by a $12 million NSF grant and scheduled to begin operations in early 2015, Comet will be a Dell-based cluster featuring next-generation Intel Xeon processors. With a peak performance of nearly two petaflops, Comet will be the first XSEDE production system to support high-performance virtualization and is uniquely designed to support many modest-scale jobs: each node will be equipped with two processors, 128 gigabytes (GB) of traditional DRAM and 320 GB of flash memory. Comet will also include some large-scale nodes, as well as nodes with NVIDIA GPUs to support visualization, molecular dynamics simulations and genome assembly.

"Comet is all about HPC for the 99 percent," said SDSC Director Michael Norman, Comet principal investigator. "As the world's first virtualized HPC cluster, Comet is designed to deliver a significantly increased level of computing capacity and customizability to support data-enabled science and engineering at the campus, regional and national levels."

The University of Texas at Austin to Deploy Wrangler, An Innovative New Data System
The Texas Advanced Computing Center (TACC) at The University of Texas at Austin recently announced plans to build Wrangler, a groundbreaking data analysis and management system for the national open science community, funded by a $6 million NSF grant. Featuring 20 petabytes of storage on the Dell C8000 platform and using PowerEdge R620 and R720 compute nodes, Wrangler is designed for high-performance access to community data sets. When completed in January 2015, it will support the popular MapReduce software framework and a full ecosystem of analytics for Big Data. Wrangler will integrate with TACC's Stampede supercomputer and, through TACC, will be extended to NSF Extreme Science and Engineering Discovery Environment (XSEDE) resources around the country.

"Wrangler is designed from the ground up for emerging and existing applications in data-intensive science," said Dan Stanzione, Wrangler's lead principal investigator and TACC deputy director. "Wrangler will be one of the largest secure, replicated storage options for the national open science community."

Dell at SC13
Hear experts from the University of Florida, Clemson University, the University of North Texas, the University of Wisconsin at Madison and Stanford University discuss how they are harnessing the power of data for discovery at the "Solving the HPC Data Deluge" session on Nov. 20, 1:30-2:30 p.m., at Dell Booth #1301. Then learn about HPC virtualization from the University of California, San Francisco, Florida State University, Cambridge University, Oklahoma University and Australian National University from 3-4 p.m. For more information on Dell's presence at SC13, visit this blog and follow the conversation at HPCatDell.

Dell World
Join us at Dell World 2013, Dell's premier customer event exploring how technology solutions and services are driving business innovation. Learn more at www.dellworld.com, attend our virtual Dell World: Live Online event or follow #DellWorld on Twitter.
