Contact: Jon Bashor, [email protected]
When researchers at the Production Genome Facility of DOE's Joint Genome Institute found they were generating data faster than they could store it, let alone make it easily accessible for analysis, they turned to NERSC's Mass Storage Group. Together, the two groups developed strategies that improved the reliability of data storage while making retrieval easier.
The Department of Energy's Joint Genome Institute (JGI) is one of the world's leading facilities in the scientific quest to unravel the genetic data intrinsic to all living things. With advances in automated sequencing of genomic information, however, scientists at the JGI's Production Genome Facility (PGF) found themselves overrun with sequencing data: production capacity had grown so rapidly that the data overflowed the existing storage capacity. Since the resulting data are used by researchers around the world, ensuring that the data are both reliably archived and easily retrievable is a key issue.

As one of the world's largest public DNA sequencing facilities, the PGF produces 2 million files per month of trace data (25 to 100 kb each), 100 assembled projects per month (50 to 250 mb each), and several very large assembled projects per year, on the order of 50 gb. In aggregate, this averages about 2,000 gb per month. (The sequence of a strand of DNA or RNA is the order of its base pairs. Kb stands for kilobase, a thousand base pairs; mb for megabase, a million base pairs; and gb for gigabase, a billion base pairs.)

Beyond the sheer volume of data, the way the data are produced poses a major challenge to storage and retrieval. Data from the sequencing of many different organisms are produced in parallel each day, so a daily “archive” spreads the data for a particular organism across many tapes.

DNA sequences are the fundamental building blocks of the rapidly expanding field of genomics. Constructing a genomic sequence is an iterative process. The trace fragments are assembled, and the sequence is then refined by comparing it with other sequences to confirm the assembly. Once the sequence is assembled, information about its function is gleaned by comparing and contrasting it with other sequences from both the same organism and other organisms.
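One way to avoid that scattering is to regroup each day's output by organism before it is written to tape, so each organism's traces land together. The source does not describe JGI's actual tooling; the sketch below is a minimal illustration in Python, with hypothetical filenames that encode the organism as the first underscore-separated token:

```python
from collections import defaultdict
from pathlib import Path

def group_traces_by_organism(trace_files):
    """Group one day's trace files by organism so each organism's
    data can be archived contiguously instead of being scattered
    across many daily tapes. Assumes (hypothetically) filenames
    like '<organism>_<run>_<well>.trace'."""
    groups = defaultdict(list)
    for f in trace_files:
        organism = Path(f).name.split("_")[0]
        groups[organism].append(f)
    return dict(groups)

daily = ["fugu_r1_a01.trace", "fugu_r1_a02.trace", "human_r7_b03.trace"]
print(group_traces_by_organism(daily))
```

With a grouping pass like this, the nightly archive writes one organism's files in sequence rather than interleaving all organisms in arrival order.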
Current sequencing methods generate a large volume of trace files that must be managed, typically 100,000 files or more. To check for errors in a sequence or make detailed comparisons with other sequences, researchers often need to refer back to these traces. Unfortunately, the traces are usually provided as a group of files lacking information about where each trace occurs in the sequence, making the researcher's job more difficult. The problem was compounded by the PGF's lack of sufficient online storage, which made organization and retrieval of data difficult and led to unnecessary replication of files. The situation consumed significant staff time in moving files and reorganizing file systems to find space for ongoing production, and it required auxiliary tape storage that was not particularly reliable.

Enter NERSC's Archiving Expertise

Staff from the PGF and the Mass Storage Group at the National Energy Research Scientific Computing Center (NERSC) agreed to work together on the two key issues facing the genome researchers. The immediate goal was for a NERSC High Performance Storage System (HPSS) to become the archive for the JGI data, replacing the less reliable local tape operation and freeing up disk space at the PGF for more immediate production needs. The second goal was to collaborate with JGI to improve the data handling capabilities of the sequencing and distribution processes.

NERSC storage systems are robust and available 24 hours a day, seven days a week, as well as highly scalable and configurable. Through ESnet, the Energy Sciences Network, NERSC has high-quality, high-bandwidth connectivity to other DOE laboratories and major universities. Most of the low-level data produced by the PGF are now routinely archived at NERSC, with roughly 50 gb of raw trace data transferred from JGI to NERSC each night. This archive forms the foundation for further steps to enhance the utility of the data.
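A nightly transfer to a remote archive is only as trustworthy as its verification step. The source does not say how the JGI/NERSC transfers are checked, but a common pattern is to build a checksum manifest of each night's trace files so the archived copies can be verified end to end; the Python sketch below is an illustrative stand-in, not the actual production tooling:

```python
import hashlib
from pathlib import Path

def checksum_manifest(trace_dir):
    """Build a checksum manifest for one night's trace files so the
    copy stored at the remote archive can be verified against the
    originals. (Illustrative only; the actual verification used in
    the JGI/NERSC transfers is not described in the source.)"""
    manifest = {}
    for f in sorted(Path(trace_dir).glob("*.trace")):
        manifest[f.name] = hashlib.md5(f.read_bytes()).hexdigest()
    return manifest
```

The manifest travels with the data; after the transfer, recomputing the checksums on the archive side and comparing entries flags any file corrupted in transit.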
To accomplish the archiving process, NERSC staff developed solutions to address the main challenges of scale and retrieval.
Using these techniques, the archiving system can scale as the amount of data continues to increase; up to billions of files can be handled. The data have been aggregated into larger collections that hold tens of thousands of files in a single file in the NERSC storage system. The data can now be accessed as one large file, or each individual file can be retrieved without fetching the whole aggregate.

Not only can the new techniques handle future data, they also saved the day when the PGF staff discovered a major problem: raw data had been processed by software with an undetected bug. By rough estimate, the affected collection comprised up to 100,000 files a day at a cost of a dollar a file, some $1.2 million worth of processing over a period of six months. But rather than going back to the sequencing machines, the JGI staff were able to retrieve the raw data from NERSC and reprocess it in a month and a half. The estimated savings was about a million dollars, and the end result was a more reliable archive, proving that dependable, flexible data storage is not only a better way to do science but can also save a great deal of time and money.
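The aggregation idea above can be sketched with Python's tarfile module as a stand-in for the storage-side tooling, which the source does not name: many small files are packed into one archive file, and any single member can be read back without unpacking the rest.

```python
import io
import tarfile

def make_aggregate(members, archive_path):
    """Pack many small in-memory files into one archive file, so the
    storage system sees a single large file instead of thousands of
    tiny ones. 'members' maps member name -> bytes."""
    with tarfile.open(archive_path, "w") as tar:
        for name, data in members.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

def read_member(archive_path, name):
    """Retrieve one member without extracting the whole aggregate."""
    with tarfile.open(archive_path) as tar:
        return tar.extractfile(name).read()
```

A caller can fetch the whole aggregate for bulk processing, or call `read_member` for a single trace, mirroring the two access modes the archive supports.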