The Impact of Columnar File Formats on SQL-on-Hadoop Engine Performance: A Study on ORC and Parquet*

Columnar file formats provide an efficient way to store data to be queried by SQL-on-Hadoop engines. Related works consider the performance of the processing engine and the file format together, which makes it impossible to predict their individual impact. In this work, we propose an alternative approach: by executing each file format on the same processing engine, we compare the different file formats as well as their different parameter settings. We apply our strategy to two processing engines, Hive and SparkSQL, and evaluate the performance of two columnar file formats, ORC and Parquet. We use BigBench (TPCx-BB), a standardized application-level benchmark for Big Data scenarios. Our experiments confirm that the file format selection and its configuration significantly affect the overall performance. We show that ORC generally performs better on Hive, whereas Parquet achieves the best performance with SparkSQL. Using ZLIB compression brings up to 60.2% improvement with ORC, while Parquet achieves up to 7% improvement with Snappy. Exceptions are the queries involving text processing, which do not benefit from using any compression.

* Published in the journal Concurrency and Computation: Practice and Experience, 2019.

About the author: Senior Researcher and Lab CTO at the Frankfurt Big Data Lab, working on Big Data benchmarking and performance optimizations, complex distributed software systems (Hadoop, Spark, etc.), and the storage and processing of data-intensive applications.

Row-oriented vs. columnar storage:
- In a row-oriented layout, all rows need to be read to select a subset of columns, and data compression or encoding is inefficient because values of different data types are stored next to each other.
- In a columnar layout, it is efficient to scan only a subset of columns, and data encoding and compression algorithms can take advantage of the homogeneous data within each column.

Columnar file formats and SQL-on-Hadoop engines:
- ORC and Parquet are open-source, general-purpose columnar file formats. They take advantage of data encoding and compression strategies and can be used with, or integrated into, any data processing framework or engine.
- SQL-on-Hadoop engines efficiently query data stored in columnar file formats (typically in HDFS). They provide a SQL-like dialect (Hive's is called HiveQL) to work with structured data and offer a high-level abstraction on top of a processing engine (like MapReduce).
- Each engine pairs with a default file format: Parquet is the first choice for SparkSQL and Impala.

Contrary to other studies, we compared the ORC and Parquet file formats by executing each file format on the same processing engine. We investigate how the overall performance of an engine (Hive or SparkSQL) changes when the columnar file format type is changed or a different columnar file format configuration is used.

The general approach consists of 4 phases:
1. Install and configure all hardware and software components (operating system, network, programming frameworks, ...).
2. Install and configure the Big Data benchmark (scale factor, query type, workload type, ...).
3. Generate the test data, typically using a data generator.
4. Configure the Big Data platform under test to address the benchmark scenario.

Each experiment is repeated 3 or more times to ensure the representativeness of the results. Before repeating an experiment, make sure:
- there are no cache effects or undesired influences between the repeated runs;
- to reset (generate new) the test data for each test run;
- to clear the caches to assure a consistent state.

Measure latency (execution time) as an average of the 3 or more executions, and measure the standard deviation between the executions to check the stability of the results.

Evaluation and validation of the benchmark results covers the benchmark metrics (typically execution time and throughput) and resource utilization statistics (CPU, network, I/O and memory), with graphical representations (charts and diagrams) used for further analysis.

Hardware configuration: a 4-node cluster running the Cloudera Hadoop Distribution (CDH 5.11).

Reference: Todor Ivanov and Max-Georg Beer, Evaluating Hive and Spark SQL with BigBench, Frankfurt Big Data Lab, Technical Report No. 2015-2.
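The row-vs-columnar contrast above can be demonstrated with a minimal Python sketch (not from the paper): the same synthetic table is serialized row-by-row and column-by-column, and both byte streams are compressed with zlib as a stand-in for the formats' ZLIB codecs. The table schema and value ranges are illustrative assumptions; the point is that the columnar layout groups homogeneous values (e.g. a low-cardinality string column), which the compressor exploits.

```python
import random
import zlib

random.seed(0)
# hypothetical table: (id: int, category: low-cardinality str, price: float)
rows = [(i, random.choice(["books", "games", "music"]),
         round(random.uniform(1, 100), 2))
        for i in range(10_000)]

# row-oriented serialization: values of mixed types interleaved per record
row_bytes = b"".join(f"{i}|{c}|{p}\n".encode() for i, c, p in rows)

# column-oriented serialization: each column stored contiguously
cols = list(zip(*rows))
col_bytes = b"".join(
    b"".join(f"{v}\n".encode() for v in col) for col in cols
)

# same total content, but the long homogeneous runs in the columnar
# layout (repeated category strings, sequential ids) compress better
row_compressed = len(zlib.compress(row_bytes))
col_compressed = len(zlib.compress(col_bytes))
print(f"row layout: {row_compressed} bytes, columnar: {col_compressed} bytes")
```

Real ORC and Parquet files go further than this sketch (per-column encodings such as dictionary and run-length encoding before the general-purpose codec), but the layout effect is the same.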
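The repeat-and-average methodology (3 or more executions, mean latency plus standard deviation) can be sketched as a small Python harness. This is not the authors' benchmark driver; the workload, repetition count, and function names are placeholders for illustration.

```python
import statistics
import time

def measure(workload, repetitions=3):
    """Run `workload` several times and report mean latency and the
    standard deviation across runs, mirroring the repeat-3-or-more
    methodology described above."""
    latencies = []
    for _ in range(repetitions):
        # in the real setup, caches would be cleared and test data
        # regenerated between runs to avoid cross-run influences
        start = time.perf_counter()
        workload()
        latencies.append(time.perf_counter() - start)
    mean = statistics.mean(latencies)
    # a high standard deviation signals the runs are not representative
    # and the experiment should be investigated or repeated
    stdev = statistics.stdev(latencies)
    return mean, stdev

# toy stand-in for a benchmark query
mean, stdev = measure(lambda: sum(range(100_000)))
print(f"latency: {mean:.6f}s, stdev: {stdev:.6f}s")
```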