Lever: towards low-latency batched stream processing by pre-scheduling

Published: 24 September 2017

Abstract

With streaming big data now involved in many applications (e.g., stock market data, sensor data, social network data), quickly mining and analyzing such data is becoming increasingly important. To provide fault tolerance and efficient stream processing at scale, recent stream processing frameworks have proposed to adapt batch processing systems, such as MapReduce and Spark, to handle streaming data by dividing the streams into micro-batches and treating the workload as a continuous series of small jobs [1].
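For concreteness, the following is a minimal sketch of the micro-batch ("discretized streams") model of [1], written against the Spark Streaming API that this work builds on; the socket source, word-count logic, and 1-second batch interval are illustrative choices, not details taken from the paper.

    // Minimal micro-batch streaming sketch: every 1-second window of input
    // becomes one small Spark job (illustrative example, not from the paper).
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object MicroBatchWordCount {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("MicroBatchWordCount")
        // The batch interval determines how the stream is discretized.
        val ssc = new StreamingContext(conf, Seconds(1))

        val lines  = ssc.socketTextStream("localhost", 9999)
        val counts = lines.flatMap(_.split(" ")).map((_, 1L)).reduceByKey(_ + _)
        counts.print()

        ssc.start()
        ssc.awaitTermination()
      }
    }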
The fundamental challenge in building a batched stream processing system is to minimize the processing latency of each micro-batch. In this paper, we focus on the straggler problem, where a subset of workers lag behind and significantly prolong the job completion time. The straggler problem is a well-known, critical problem in parallel processing systems, and compared to large batch processing, stragglers in micro-batch processing are more severe and harder to tackle.
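To see why a single straggler matters so much at micro-batch scale, consider the small illustration below: a micro-batch completes only when its slowest task does, so one slow task can dominate a batch whose other tasks finish in a few hundred milliseconds. The task counts and runtimes are hypothetical, chosen only to make the effect visible.

    // Hypothetical illustration (not from the paper): a micro-batch finishes
    // only when its slowest task finishes, so one straggler sets the latency.
    object StragglerEffect {
      def batchLatencyMs(taskTimesMs: Seq[Long]): Long = taskTimesMs.max

      def main(args: Array[String]): Unit = {
        val normalTasks   = Seq.fill(31)(200L)   // 31 tasks at ~200 ms each
        val withStraggler = normalTasks :+ 2000L // one task is 10x slower
        println(s"no straggler:   ${batchLatencyMs(normalTasks)} ms")   // 200 ms
        println(s"with straggler: ${batchLatencyMs(withStraggler)} ms") // 2000 ms
      }
    }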
We argue that the problem with applying existing straggler mitigation solutions to micro-batch processing is that they detect (or predict) and re-schedule stragglers too late in the data handling pipeline. The re-scheduling actions are carried out during task execution and therefore inevitably lengthen the processing time of the micro-batches. Furthermore, because the data have already been dispatched, re-scheduling inherently incurs expensive data relocation. This overhead becomes significant in micro-batch processing because each micro-batch has only a short processing time. We refer to such methods as post-scheduling techniques.
To address this problem, we propose a new pre-scheduling framework, called Lever, which predicts stragglers and makes timely scheduling decisions to minimize processing latency. As shown in Figure 1, Lever periodically collects and analyzes the historical job profiles of recurring micro-batch jobs. Based on this information, Lever pre-schedules the data in three main steps: identifying potential stragglers, evaluating node capacity, and choosing suitable helpers. More importantly, Lever makes its re-scheduling decisions before the batching module dispatches the data. Because the scheduling is done while the data are being batched, it does not add to the processing time of the micro-batch.
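The sketch below is not Lever's actual implementation; it is a simplified rendering of the three pre-scheduling steps named above, applied before any data are dispatched. The NodeProfile fields, the 1.5x-median straggler threshold, and the load-based helper ranking are all hypothetical stand-ins.

    // Conceptual pre-scheduling sketch driven by historical job profiles.
    // All names and thresholds are hypothetical, not Lever's internals.
    object PreScheduleSketch {
      case class NodeProfile(node: String, avgTaskMs: Double, pendingLoad: Double)

      // Returns a mapping from each predicted straggler node to a helper node
      // that should absorb part of its input for the next micro-batch.
      def preSchedule(profiles: Seq[NodeProfile]): Map[String, String] = {
        val medianMs = profiles.map(_.avgTaskMs).sorted.apply(profiles.size / 2)

        // Step 1: identify potential stragglers from recent task runtimes.
        val stragglers = profiles.filter(_.avgTaskMs > 1.5 * medianMs)

        // Step 2: evaluate the remaining nodes' capacity (least loaded first).
        val helpers = profiles.diff(stragglers).sortBy(_.pendingLoad)

        // Step 3: pair each straggler with a suitable helper before dispatch.
        stragglers.zip(helpers).map { case (s, h) => s.node -> h.node }.toMap
      }

      def main(args: Array[String]): Unit = {
        val profiles = Seq(
          NodeProfile("n1", 210, 0.4), NodeProfile("n2", 650, 0.9),
          NodeProfile("n3", 190, 0.2), NodeProfile("n4", 230, 0.5))
        println(preSchedule(profiles)) // e.g. Map(n2 -> n3)
      }
    }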
We implemented Lever in Spark Streaming and have contributed it to the open source community as an extension of Apache Spark Streaming. To the best of our knowledge, this is the first work specifically addressing the straggler problem in continuous micro-batch processing. We conducted various experiments to validate the effectiveness of Lever. The experimental results show that Lever reduces job completion time by 30.72% to 42.19% and significantly outperforms traditional techniques.

Reference

[1]
M. Zaharia, T. Das, H. Li, T. Hunter, S. Shenker, and I. Stoica. Discretized Streams: Fault-Tolerant Streaming Computation at Scale. In SOSP '13.

Published In

SoCC '17: Proceedings of the 2017 Symposium on Cloud Computing
September 2017
672 pages
ISBN: 9781450350280
DOI: 10.1145/3127479
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. recurring jobs
  2. scheduling
  3. straggler
  4. stream processing

Qualifiers

  • Abstract

Funding Sources

  • National Key Research and Development Program
  • National Science Foundation of China
  • 863 Hi-Tech Research and Development Program

Conference

SoCC '17: ACM Symposium on Cloud Computing
September 24 - 27, 2017
Santa Clara, California

Acceptance Rates

Overall Acceptance Rate 169 of 722 submissions, 23%

Cited By

  • (2022) zStream: towards a low latency micro-batch streaming system. Cluster Computing 26(5), 2773-2787. DOI: 10.1007/s10586-022-03758-1. Online publication date: 14-Oct-2022.
  • (2021) A Multi-Graph Attributed Reinforcement Learning based Optimization Algorithm for Large-scale Hybrid Flow Shop Scheduling Problem. Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 3441-3451. DOI: 10.1145/3447548.3467135. Online publication date: 14-Aug-2021.
  • (2020) Tails in the cloud: a survey and taxonomy of straggler management within large-scale cloud data centres. The Journal of Supercomputing 76(12), 10050-10089. DOI: 10.1007/s11227-020-03241-x. Online publication date: 12-Mar-2020.
  • (2019) A review on big data real-time stream processing and its scheduling techniques. International Journal of Parallel, Emergent and Distributed Systems 35(5), 571-601. DOI: 10.1080/17445760.2019.1585848. Online publication date: 1-Mar-2019.
  • (2018) Radar: Reducing Tail Latencies for Batched Stream Processing with Blank Scheduling. 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), 797-804. DOI: 10.1109/HPCC/SmartCity/DSS.2018.00134. Online publication date: Jun-2018.
