Addressing Fault Tolerance for Staging Based Scientific Workflows
In-situ scientific workflows, i.e., executing the entire application workflow on the HPC system, have emerged as an attractive approach to data-related challenges because they move computation closer to the data, and staging-based frameworks have been used effectively to support in-situ workflows at scale. However, running in-situ scientific workflows on extreme-scale computing systems presents fault tolerance challenges that significantly affect the correctness and performance of the workflows. First, in-situ scientific workflows require sharing and moving data between coupled applications through data staging. As data volumes and generation rates keep growing, traditional data resilience approaches such as n-way replication and erasure codes become cost prohibitive, and data staging requires a more scalable and efficient approach to data resilience. Second, increasing scale is also expected to increase the rate of silent data corruption errors, which impact both the correctness and performance of applications; this impact is amplified for in-situ workflows because of the dataflow between the component applications of the workflow. Third, since the coupled applications in a workflow frequently interact and exchange large amounts of data, simply applying state-of-the-art fault tolerance techniques such as checkpoint/restart to individual application components cannot guarantee the data consistency of the workflow after failure recovery. Furthermore, naively applying these techniques to the entire workflow limits the diversity of resilience approaches available to application components and ultimately incurs significant latency, storage overhead, and performance degradation.

This thesis addresses these data resilience and fault tolerance challenges for in-situ scientific workflows and makes the following contributions. It first presents CoREC, a scalable resilient in-memory data staging runtime for large-scale in-situ workflows. CoREC uses a novel hybrid approach that combines dynamic replication with erasure coding based on data access patterns, provides multilevel data resilience to satisfy different fault tolerance requirements, and introduces optimizations for load balancing and conflict-avoiding encoding as well as a low-overhead, lazy data recovery scheme. The thesis then addresses silent error detection for extreme-scale in-situ workflows, presenting a staging-based error detection approach that leverages idle computational resources in data staging to enable timely detection of and recovery from silent data corruption; this approach effectively reduces the propagation of corrupted data and the end-to-end workflow execution time in the presence of silent errors. Finally, the thesis addresses fail-stop failures for extreme-scale in-situ scientific workflows, presenting a loosely coupled checkpoint/restart framework with data logging. The proposed approach introduces a data logging mechanism in data staging, composed of a queue-based algorithm and a user interface, to provide a scalable and flexible fault tolerance scheme for in-situ workflows while maintaining data consistency at low resiliency cost. The research concepts and software prototypes have been evaluated using synthetic and real application workflows on production HPC systems.
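The hybrid resilience policy at the core of CoREC can be illustrated with a short sketch. The code below is not taken from the CoREC implementation; it is a minimal illustration, under assumed names and thresholds (DataObject, HOT_READ_THRESHOLD, the shard counts), of how a staging runtime might select n-way replication for frequently accessed objects (fast recovery, higher storage cost) and erasure coding for rarely accessed ones (lower storage overhead, costlier reconstruction).

```python
# Minimal sketch (not the CoREC implementation): selecting a resilience
# scheme per staged data object based on its observed access pattern.
# All names, thresholds, and parameters here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DataObject:
    key: str
    size_bytes: int
    reads_per_window: int   # accesses observed in the last monitoring window

REPLICATION_FACTOR = 3      # n-way replication for "hot" objects
EC_DATA_SHARDS = 6          # erasure coding layout for "cold" objects
EC_PARITY_SHARDS = 3
HOT_READ_THRESHOLD = 10     # assumed cutoff between hot and cold data

def choose_resilience_scheme(obj: DataObject) -> dict:
    """Pick replication for frequently accessed objects (fast recovery,
    3x storage) and erasure coding otherwise (1.5x storage with this
    layout, but recovery requires decoding across surviving shards)."""
    if obj.reads_per_window >= HOT_READ_THRESHOLD:
        return {"scheme": "replication",
                "copies": REPLICATION_FACTOR,
                "storage_overhead": float(REPLICATION_FACTOR)}
    return {"scheme": "erasure_coding",
            "data_shards": EC_DATA_SHARDS,
            "parity_shards": EC_PARITY_SHARDS,
            "storage_overhead":
                (EC_DATA_SHARDS + EC_PARITY_SHARDS) / EC_DATA_SHARDS}

if __name__ == "__main__":
    hot = DataObject("field/temperature", 1 << 30, reads_per_window=42)
    cold = DataObject("checkpoint/step-900", 4 << 30, reads_per_window=1)
    print(choose_resilience_scheme(hot))   # -> replication, 3.0x storage
    print(choose_resilience_scheme(cold))  # -> erasure coding, 1.5x storage
```

A dynamic policy of this kind can re-evaluate objects as access patterns shift, which is what distinguishes it from statically applying a single replication or erasure-coding scheme to all staged data.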
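The data logging mechanism that underpins the loosely coupled checkpoint/restart approach can likewise be sketched. The following is a hypothetical illustration, not the thesis framework: it assumes a staging service that appends forwarded data objects to a bounded in-memory log so that a restarted downstream consumer can replay them from its last checkpointed position instead of forcing the upstream producer to roll back.

```python
# Minimal sketch (not the thesis framework): a queue-based data log in a
# staging service. Names, the bounded capacity, and the truncation policy
# are illustrative assumptions.

from collections import deque

class StagingDataLog:
    def __init__(self, capacity: int = 1024):
        self.capacity = capacity
        self.log = deque()          # (sequence_no, key, payload) entries
        self.next_seq = 0

    def append(self, key: str, payload: bytes) -> int:
        """Log a staged data object; returns its sequence number."""
        if len(self.log) >= self.capacity:
            self.log.popleft()      # evict oldest entry past capacity
        seq = self.next_seq
        self.log.append((seq, key, payload))
        self.next_seq += 1
        return seq

    def truncate(self, upto_seq: int) -> None:
        """Discard entries the consumer has durably checkpointed."""
        while self.log and self.log[0][0] <= upto_seq:
            self.log.popleft()

    def replay(self, from_seq: int):
        """Yield logged entries a restarted consumer still needs,
        in the original send order."""
        for seq, key, payload in self.log:
            if seq > from_seq:
                yield seq, key, payload
```

In this sketch, a consumer that restarts from a checkpoint taken at sequence s calls replay(s) to recover the staged data it missed; decoupling producer and consumer recovery in this way is what allows each workflow component to checkpoint independently while the workflow as a whole stays consistent.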