Abstract
The University of Victoria (UVic) operates an Infrastructure-as-a-Service scientific cloud for Canadian researchers, and a WLCG Tier 2 grid site for the ATLAS experiment at CERN. At first, these were two entirely separate systems, but over time we have taken steps to migrate the Tier 2 grid services to the cloud. This process has been significantly facilitated by basing our approach on Kubernetes, a versatile, robust, and widely adopted automation platform for orchestrating and managing containerized applications. Previous work exploited the batch capabilities of Kubernetes to run the computing jobs of the UVic ATLAS Tier 2, and to replace the conventional grid Computing Elements, by interfacing with the Harvester workload management system of the ATLAS experiment. However, the required functionality of a Tier 2 site encompasses more than just batch computing. Likewise, the capabilities of Kubernetes extend far beyond running batch jobs, and include, for example, scheduling recurring tasks and hosting long-running, externally accessible services in a resilient way. We are now undertaking the more complex and challenging endeavour of adapting and migrating all remaining functions of the Tier 2 site, such as APEL accounting and Squid caching proxies, and in particular the grid storage element, to cloud-native deployments on Kubernetes. We aim to enable fully comprehensive deployment of a complete ATLAS Tier 2 site on a Kubernetes cluster via Helm charts, which will benefit the community by providing a streamlined and replicable way to install and configure an ATLAS site. We also describe our experience running a high-performance self-managed Kubernetes ATLAS Tier 2 cluster at the scale of 8,000 CPU cores over the last two years, and compare it with the conventional setup of grid services.
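As a minimal sketch of the Helm-based deployment workflow the abstract describes, installing a site could look like the following; the repository URL, chart and release names, and values shown are hypothetical placeholders for illustration, not the published chart:

    # Add the (hypothetical) chart repository and install the Tier 2 chart.
    helm repo add atlas-t2 https://example.org/charts
    helm install uvic-t2 atlas-t2/atlas-tier2 \
        --namespace tier2 --create-namespace \
        --values site-values.yaml   # site-specific configuration

    # site-values.yaml (hypothetical values file): per-site settings such as
    # the Squid proxy replica count and the APEL accounting job schedule.
    squid:
      replicas: 2
    apel:
      schedule: "0 4 * * *"   # standard cron syntax: daily at 04:00

In such a layout, long-running services like Squid would map to Kubernetes Deployments, recurring tasks like APEL publishing to CronJobs, and site-specific configuration would be confined to the values file, which is what makes the installation replicable across sites.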