Abstract
Hadoop and Spark are widely used distributed processing frameworks for large-scale data processing in an efficient and fault-tolerant manner on private or public clouds. These big-data processing systems are extensively used by many companies, e.g., Google, Facebook, and Amazon, for solving a large class of problems, e.g., search, clustering, log analysis, different types of join operations, matrix multiplication, pattern matching, and social network analysis. However, all these popular systems share a major drawback: they are designed for locally distributed computation, which prevents them from processing geographically distributed data. The increasing amount of geographically distributed massive data is pushing industry and academia to rethink current big-data processing systems. Novel frameworks, going beyond the state-of-the-art architectures and technologies of current systems, are expected to process geographically distributed data at their locations, without moving entire raw datasets to a single location. In this paper, we investigate and discuss challenges and requirements in designing geographically distributed data processing frameworks and protocols. We classify and study geo-distributed frameworks, models, and algorithms for batch processing (MapReduce-based systems), stream processing (Spark-based systems), and SQL-style processing, along with their overhead issues.
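The core idea the survey examines, computing on data where it resides and exchanging only compact intermediate results across sites, can be illustrated with a minimal sketch. The plain-Python example below is illustrative only: the site names, log records, and helper functions (`local_aggregate`, `global_merge`) are hypothetical stand-ins for the per-site worker and coordinator roles that geo-distributed MapReduce-style frameworks implement; it is not the API of any framework discussed in the paper.

```python
from collections import Counter

# Hypothetical per-site raw logs; in a real geo-distributed deployment each
# dataset would live in a different data center and never leave it.
SITE_DATA = {
    "us-east": ["error disk", "error net", "ok"],
    "eu-west": ["ok", "error disk", "ok"],
    "ap-south": ["error net", "ok", "ok"],
}

def local_aggregate(records):
    """Runs inside one site (the 'map + local reduce' step): produces a
    compact partial aggregate instead of shipping raw records."""
    counts = Counter()
    for record in records:
        for token in record.split():
            counts[token] += 1
    return counts

def global_merge(partials):
    """Runs at the coordinating site: merges small partial aggregates, so
    only aggregates, not raw datasets, cross the wide-area network."""
    total = Counter()
    for partial in partials:
        total.update(partial)  # Counter.update adds counts together
    return total

if __name__ == "__main__":
    partials = [local_aggregate(records) for records in SITE_DATA.values()]
    print(global_merge(partials))
    # Counter({'ok': 5, 'error': 4, 'disk': 2, 'net': 2})
```

The design point this pattern captures is the one the abstract highlights: wide-area bandwidth is the scarce resource, so each site reduces its data locally and only the (much smaller) partial aggregates are moved.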
| Original language | English (US) |
|---|---|
| Article number | 7968343 |
| Pages (from-to) | 60-80 |
| Number of pages | 21 |
| Journal | IEEE Transactions on Big Data |
| Volume | 5 |
| Issue number | 1 |
| DOIs | |
| State | Published - Mar 1 2019 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Information Systems
- Information Systems and Management
Keywords
- HDFS Federation
- Hadoop
- MapReduce
- Spark
- YARN
- cloud computing
- geographically distributed data