

Building the SLATE Platform

Published: 22 July 2018

Abstract

We describe progress on building the SLATE (Services Layer at the Edge) platform. The high level goal of SLATE is to facilitate creation of multi-institutional science computing systems by augmenting the canonical Science DMZ pattern with a generic, "programmable", secure and trusted underlayment platform. This platform permits hosting of advanced container-centric services needed for higher-level capabilities such as data transfer nodes, software and data caches, workflow services and science gateway components. SLATE uses best-of-breed data center virtualization and containerization components, and where available, software defined networking, to enable distributed automation of deployment and service lifecycle management tasks by domain experts. As such it will simplify creation of scalable platforms that connect research teams, institutions and resources to accelerate science while reducing operational costs and development cycle times.
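The architecture sketched above rests on a simple mechanism: each edge capability (a data cache, a transfer node, a gateway component) is packaged as a container and handed to a cluster orchestrator at the hosting site, which then owns its lifecycle. As a purely illustrative sketch, and not the project's actual tooling or application catalog, the fragment below uses the official Kubernetes Python client to declare and submit such a containerized cache as a Deployment; the namespace, labels, and image are assumptions made up for the example.

```python
# Illustrative sketch only: deploying a containerized edge service onto a
# Kubernetes cluster with the official Python client. The namespace, labels,
# and image below are assumptions for the example, not SLATE catalog entries.
from kubernetes import client, config


def deploy_edge_cache(namespace: str = "slate-demo") -> None:
    # Load cluster credentials from the local kubeconfig (cluster access assumed).
    config.load_kube_config()

    cache = client.V1Container(
        name="edge-cache",
        image="registry.example.org/squid-cache:latest",  # hypothetical image
        ports=[client.V1ContainerPort(container_port=3128)],
    )

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="edge-cache"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "edge-cache"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "edge-cache"}),
                spec=client.V1PodSpec(containers=[cache]),
            ),
        ),
    )

    # Submit the declaration; the orchestrator then owns restarts and upgrades.
    client.AppsV1Api().create_namespaced_deployment(namespace=namespace, body=deployment)


if __name__ == "__main__":
    deploy_edge_cache()
```

The point of the example is the declarative shape of the request: once the Deployment is accepted, the orchestrator restarts, reschedules, and upgrades the service without operator intervention, which is the property a service layer of the kind described here relies on to keep remote sites low-touch.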


Published In

PEARC '18: Proceedings of the Practice and Experience on Advanced Research Computing: Seamless Creativity
July 2018
652 pages
ISBN:9781450364461
DOI:10.1145/3219104
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 22 July 2018


Author Tags

  1. Containerization
  2. Distributed computing
  3. Edge computing

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

PEARC '18

Acceptance Rates

PEARC '18 Paper Acceptance Rate 79 of 123 submissions, 64%;
Overall Acceptance Rate 133 of 202 submissions, 66%

Article Metrics

  • Downloads (last 12 months): 13
  • Downloads (last 6 weeks): 1
Reflects downloads up to 19 Nov 2024

Cited By

  • (2024) AggieGrid: from idle PCs to a distributed High-Throughput Computing system. Practice and Experience in Advanced Research Computing 2024: Human Powered Computing, 1-4. DOI: 10.1145/3626203.3670567. Online publication date: 17-Jul-2024
  • (2024) The XENONnT dark matter experiment. The European Physical Journal C 84:8. DOI: 10.1140/epjc/s10052-024-12982-5. Online publication date: 7-Aug-2024
  • (2024) ClusterSlice: A Zero-Touch Deployment Platform for the Edge Cloud Continuum. 2024 27th Conference on Innovation in Clouds, Internet and Networks (ICIN), 100-102. DOI: 10.1109/ICIN60470.2024.10494418. Online publication date: 11-Mar-2024
  • (2024) Experiences in deploying in-network data caches. EPJ Web of Conferences 295, 07018. DOI: 10.1051/epjconf/202429507018. Online publication date: 6-May-2024
  • (2024) Clusterslice: Slicing resources for zero-touch Kubernetes-based experimentation. Future Generation Computer Systems 161, 1-10. DOI: 10.1016/j.future.2024.06.038. Online publication date: Dec-2024
  • (2022) Early Experiences with Tight Integration of Kubernetes in an HPC Environment. Practice and Experience in Advanced Research Computing 2022: Revolutionary: Computing, Connections, You, 1-4. DOI: 10.1145/3491418.3535150. Online publication date: 8-Jul-2022
  • (2022) Buzzard: Georgia Tech's Foray into the Open Science Grid. Practice and Experience in Advanced Research Computing 2022: Revolutionary: Computing, Connections, You, 1-5. DOI: 10.1145/3491418.3535135. Online publication date: 8-Jul-2022
  • (2021) Doing more with less: Growth, improvements, and management of NMSU's computing capabilities. Practice and Experience in Advanced Research Computing 2021: Evolution Across All Dimensions, 1-4. DOI: 10.1145/3437359.3465610. Online publication date: 17-Jul-2021
  • (2020) SLATE: Monitoring Distributed Kubernetes Clusters. Practice and Experience in Advanced Research Computing 2020: Catch the Wave, 19-25. DOI: 10.1145/3311790.3401777. Online publication date: 26-Jul-2020
  • (2020) Applying OSiRIS NMAL to Network Slices on SLATE. EPJ Web of Conferences 245, 07055. DOI: 10.1051/epjconf/202024507055. Online publication date: 16-Nov-2020
