Homogenizing OSG and XSEDE: Providing Access to XSEDE Allocations through OSG Infrastructure

PEARC '18 conference proceedings · Research article
DOI: 10.1145/3219104.3219157
Published: 22 July 2018

Abstract

We present a system that allows individual researchers and virtual organizations (VOs) to access allocations on Stampede2 and Bridges through the Open Science Grid (OSG), a national grid infrastructure for running high throughput computing (HTC) tasks. Using this system, VOs and researchers can run larger workflows than is possible with OSG resources alone. The system lets a VO or user run on XSEDE resources, charged against their own allocation, with the same framework they already use for OSG resources. It consists of two parts: a compute element (CE) that routes workloads to the appropriate user accounts and allocations on XSEDE resources, and simulated access to the CernVM File System (CVMFS) servers that OSG and the VOs use to distribute software and data. As a result, jobs submitted through this system run in a homogeneous environment regardless of whether they execute on XSEDE HPC resources (such as Stampede2 and Bridges) or on OSG.
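
The abstract describes the system only at a high level and the page contains no code, but the user-facing half is easy to picture. As a rough, hypothetical sketch (not taken from the paper), the Python script below writes an HTCondor submit description that tags a job with a project name so that a CE-side route could map it onto the corresponding XSEDE account and allocation, then hands the file to condor_submit. The executable, file names, project ID, and resource requests are invented placeholders, and the script assumes an OSG-style submit host with the HTCondor command-line tools installed.

    #!/usr/bin/env python3
    """Illustrative sketch only (not from the paper): write an HTCondor submit
    description whose project tag names the allocation a CE-side route could
    charge the job against, then submit it with condor_submit."""

    import subprocess
    import textwrap

    # All values below are placeholders invented for illustration.
    submit_description = textwrap.dedent("""\
        executable     = analyze.sh
        arguments      = input.dat
        output         = job.$(Cluster).$(Process).out
        error          = job.$(Cluster).$(Process).err
        log            = job.log

        request_cpus   = 1
        request_memory = 2 GB

        # Project tag; a CE-side route would map this to the corresponding
        # account and allocation on the XSEDE resource.
        +ProjectName   = "TG-EXAMPLE123"

        queue 1
        """)

    with open("xsede_job.sub", "w") as handle:
        handle.write(submit_description)

    # Submit from the OSG submit host with the standard HTCondor CLI.
    subprocess.run(["condor_submit", "xsede_job.sub"], check=True)

The CE-side mapping of the project tag to a specific XSEDE account, and the simulated CVMFS access described in the abstract, are site configuration rather than user code, so they are not sketched here.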


Cited By

  • Supporting High-Performance and High-Throughput Computing for Experimental Science. Computing and Software for Big Science 3(1), 2019. https://doi.org/10.1007/s41781-019-0022-7 (online 8 February 2019)


    Published In

    PEARC '18: Proceedings of the Practice and Experience on Advanced Research Computing: Seamless Creativity
    July 2018
    652 pages
    ISBN:9781450364461
    DOI:10.1145/3219104
    Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of the United States government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. ATLAS
    2. Bridges
    3. CMS
    4. CVMFS
    5. OSG
    6. PSC
    7. Stampede2
    8. TACC
    9. XSEDE
    10. distributed data access

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    PEARC '18

    Acceptance Rates

    PEARC '18 Paper Acceptance Rate: 79 of 123 submissions, 64%
    Overall Acceptance Rate: 133 of 202 submissions, 66%
