Addressing Fault Tolerance for Staging Based Scientific Workflows
  • Author:
  • Shaohua Duan
  • Advisor:
  • Manish Parashar
  • Committee Members:
  • Santosh Nagarakatte
  • Sudarsun Kannan
  • George Bosilca
Publisher:
  • Rutgers, The State University of New Jersey, School of Graduate Studies
ISBN: 979-8-6624-1728-1
Order Number: AAI27837894
Abstract

In-situ scientific workflows, i.e., workflows that execute entirely on the HPC system, have emerged as an attractive approach to address data-related challenges by moving computation closer to the data, and staging-based frameworks have been used effectively to support in-situ workflows at scale. However, running in-situ scientific workflows on extreme-scale computing systems presents fault tolerance challenges that significantly affect the correctness and performance of workflows. First, in-situ scientific workflows require sharing and moving data between coupled applications through data staging. As data volumes and generation rates keep growing, traditional data resilience approaches such as n-way replication and erasure coding become cost prohibitive, and data staging requires a more scalable and efficient approach to data resilience. Second, increasing scale is also expected to result in a higher rate of silent data corruption errors, which will impact both the correctness and performance of applications. Moreover, this impact is amplified for in-situ workflows due to the dataflow between the component applications of the workflow. Third, since the coupled applications in a workflow frequently interact and exchange large amounts of data, simply applying state-of-the-art fault tolerance techniques such as checkpoint/restart to individual application components cannot guarantee data consistency of the workflow after failure recovery. Furthermore, naively applying these fault tolerance techniques to the entire workflow limits the diversity of resilience approaches across application components and ultimately incurs significant latency, storage overhead, and performance degradation. This thesis addresses these challenges related to data resilience and fault tolerance for in-situ scientific workflows, and makes the following contributions.
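To make the cost trade-off behind the first challenge concrete, the following sketch (illustrative only, not code from the thesis) compares the storage overhead of n-way replication against a Reed-Solomon-style (k, m) erasure code:

```python
# Illustrative comparison: storage overhead of n-way replication
# vs. (k, m) erasure coding (k data fragments + m parity fragments).

def replication_overhead(n: int) -> float:
    """n-way replication stores n full copies of each data object."""
    return float(n)

def erasure_overhead(k: int, m: int) -> float:
    """(k, m) erasure coding stores (k + m) / k times the original bytes
    and tolerates the loss of any m fragments."""
    return (k + m) / k

# 3-way replication keeps 3x the data; RS(8, 2) keeps only 1.25x
# while still tolerating two simultaneous fragment losses.
print(replication_overhead(3))  # 3.0
print(erasure_overhead(8, 2))   # 1.25
```

The gap widens as data volumes grow, which is why pure replication becomes cost prohibitive, while erasure coding trades that storage for encode/decode compute on the critical path.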
This thesis first presents CoREC, a scalable resilient in-memory data staging runtime for large-scale in-situ workflows. CoREC uses a novel hybrid approach that combines dynamic replication with erasure coding based on data access patterns. CoREC also provides multilevel data resilience to satisfy different fault tolerance requirements. Furthermore, CoREC introduces optimizations for load balancing and conflict avoiding encoding, and a low overhead, lazy data recovery scheme. Then, this thesis addresses silent error detection for extreme scale in-situ workflows, and presents a staging based error detection approach which leverages idle computation resource in data staging to enable timely detection and recovery from silent data corruption. This approach can effectively reduce the propagation of corrupted data and end-to-end workflow execution time in the presence of silent errors. Finally, this thesis addresses fail-stop failures for extreme scale in-situ scientific workflows, and presents a loose coupled checkpoint/restart with data logging framework for in-situ workflows. This proposed approach introduces a data logging mechanism in data staging which is composed by the queue based algorithm and user interface to provide a scalable and flexible fault tolerance scheme for in-situ workflows while still maintaining the data consistency and low resiliency cost. The research concepts and software prototypes have been evaluated using synthetic and real application workflows on production HPC systems.
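The core idea of CoREC's hybrid approach — replicate hot, frequently accessed data for fast recovery and erasure-code cold data for low storage overhead — can be sketched as follows. All class and method names here are hypothetical illustrations of the technique, not CoREC's actual API, and the hot/cold threshold is an assumed tuning knob:

```python
# Hypothetical sketch of access-pattern-driven scheme selection:
# hot (frequently read) objects are replicated; cold objects are
# erasure-coded. Names and threshold are illustrative assumptions.

from collections import Counter

class HybridResilience:
    def __init__(self, hot_threshold: int = 4):
        self.access_counts = Counter()   # reads observed per object
        self.hot_threshold = hot_threshold  # assumed tuning knob

    def record_access(self, obj_id: str) -> None:
        self.access_counts[obj_id] += 1

    def scheme_for(self, obj_id: str) -> str:
        # Hot data: replication gives fast reads and cheap recovery.
        # Cold data: erasure coding minimizes memory overhead.
        if self.access_counts[obj_id] >= self.hot_threshold:
            return "replication"
        return "erasure_coding"

r = HybridResilience()
for _ in range(5):
    r.record_access("temperature_field")   # hot: read every timestep
r.record_access("checkpoint_blob")         # cold: rarely read back
print(r.scheme_for("temperature_field"))   # replication
print(r.scheme_for("checkpoint_blob"))     # erasure_coding
```

A real staging runtime would additionally re-evaluate the classification dynamically as access patterns shift, which is what makes the replication in CoREC's scheme "dynamic."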

Contributors
  • Rutgers University–New Brunswick
  • Rutgers University–New Brunswick
  • Rutgers University–New Brunswick
  • The University of Tennessee, Knoxville