Abstract
Delivering complex software across a worldwide distributed system is a major challenge in high-throughput scientific computing. The problem arises at different scales for many scientific communities that use grids, clouds, and distributed clusters to satisfy their computing needs. For high-energy physics (HEP) collaborations, which rely on hundreds of thousands of cores spread around the world to process large amounts of data, the challenge is particularly acute. To serve the needs of the HEP community, several iterations were made to create a scalable, user-level filesystem that delivers software worldwide on a daily basis. The implementation was designed in 2006 to serve the needs of a single experiment running on thousands of machines. Since then, the idea has evolved into a production, global-scale filesystem serving multiple science communities on hundreds of thousands of machines around the world.