Managing Variant Calling Files the Big Data Way
Big Data technologies have been seen as a remedy for the efficient management of ever-increasing volumes of genomic data.
In this paper, we investigate the use of Apache Spark to store and process Variant Calling Files (VCF) on a Hadoop cluster.
We demonstrate Tomatula, a software tool for converting VCF files to Apache Parquet storage format, and an application to query variant calling datasets.
We evaluate how the wall time (i.e. the time until the query answer is returned to the user) scales out on a Hadoop cluster storing VCF files, either in the original flat-file format or in the Apache Parquet columnar storage format. Apache Parquet can compress the VCF data by around a factor of 10, and supports easier querying of VCF files as it exposes the field structure.
We discuss advantages and disadvantages in terms of storage capacity and querying performance with both flat VCF files and Apache Parquet using an open plant breeding dataset.
We conclude that Apache Parquet offers benefits for reducing storage size and wall time, and scales out with larger datasets.
A. Boufea, R. Finkers, M. van Kaauwen, M. Kramer, I. N. Athanasiadis, Managing Variant Calling Files the Big Data Way, Proc. 4th IEEE/ACM International Conference on Big Data Computing, Applications and Technologies (BDCAT'17), 2017, ACM, doi:10.1145/3148055.3148060.
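To give an intuition for why a columnar layout such as Apache Parquet makes VCF querying easier, here is a minimal, self-contained Python sketch (not code from the paper, and with hypothetical records): it parses a tiny tab-separated VCF fragment into per-field columns, so that a query only touches the columns it needs instead of scanning whole flat-file rows.

```python
# Illustrative sketch only: a toy columnar layout for VCF records,
# mimicking the idea behind Apache Parquet. The records below are
# hypothetical, not taken from the dataset used in the paper.

VCF_LINES = """\
#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO
chr01\t1012\t.\tA\tG\t57\tPASS\tDP=20
chr01\t2044\t.\tT\tC\t12\tq10\tDP=7
chr02\t330\t.\tG\tA\t88\tPASS\tDP=31
""".splitlines()

def vcf_to_columns(lines):
    """Turn tab-separated VCF body lines into a dict of columns."""
    header = lines[0].lstrip("#").split("\t")
    columns = {name: [] for name in header}
    for line in lines[1:]:
        for name, value in zip(header, line.split("\t")):
            columns[name].append(value)
    return columns

cols = vcf_to_columns(VCF_LINES)

# Column-oriented query: positions of PASS variants on chr01.
# Only the CHROM, POS and FILTER columns are read.
hits = [pos for chrom, pos, flt
        in zip(cols["CHROM"], cols["POS"], cols["FILTER"])
        if chrom == "chr01" and flt == "PASS"]
print(hits)  # → ['1012']
```

In the paper itself this role is played by Apache Parquet on a Hadoop cluster, queried through Apache Spark; the sketch only shows why exposing the field structure as columns helps.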