Article

Comparison of Read Mapping and Variant Calling Tools for the Analysis of Plant NGS Data

by Hanna Marie Schilbert 1,*, Andreas Rempel 1,2 and Boas Pucker 1,3
1 Genetics and Genomics of Plants, CeBiTec and Faculty of Biology, Bielefeld University, 33615 Bielefeld, Germany
2 Graduate School DILS, Bielefeld Institute for Bioinformatics Infrastructure (BIBI), Faculty of Technology, Bielefeld University, 33615 Bielefeld, Germany
3 Molecular Genetics and Physiology of Plants, Faculty of Biology and Biotechnology, Ruhr-University Bochum, 44801 Bochum, Germany
* Author to whom correspondence should be addressed.
Plants 2020, 9(4), 439; https://doi.org/10.3390/plants9040439
Submission received: 15 March 2020 / Revised: 28 March 2020 / Accepted: 30 March 2020 / Published: 2 April 2020
(This article belongs to the Section Plant Genetics, Genomics and Biotechnology)

Abstract

High-throughput sequencing technologies have developed rapidly in recent years and have become an essential tool in plant sciences. However, the analysis of genomic data remains challenging and relies mostly on the performance of automatic pipelines. Frequently applied pipelines involve the alignment of sequence reads against a reference sequence and the identification of sequence variants. Since most benchmarking studies of bioinformatics tools for this purpose have been conducted on human datasets, such studies are lacking in plant sciences. In this study, we evaluated the performance of 50 different variant calling pipelines, including five read mappers and ten variant callers, on six real plant datasets of the model organism Arabidopsis thaliana. Sets of variants were evaluated based on various parameters including sensitivity and specificity. We found that all investigated tools are suitable for the analysis of NGS data in plant research. When looking at different performance metrics, BWA-MEM and Novoalign were the best mappers, and GATK returned the best results in the variant calling step.

1. Introduction

As the basis of biological properties and heredity, the genome of a species is a valuable resource for numerous studies. However, there are subtle differences between individuals of the same species, which are of academic and economic interest as they determine phenotypic differences. Dropping sequencing costs have boosted high-throughput sequencing projects, thus facilitating the analysis of this genetic diversity. Variation within the A. thaliana population was studied in the 1001 genomes project [1]. As the number of high-quality reference genome sequences rises continuously, the number of re-sequencing projects increases as well [2]. Pangenome projects for various species focus on genome evolution [3,4,5], while mapping-by-sequencing projects target agronomically important traits of crops [6,7,8,9].
An accurate and comprehensive identification of sequence variants between a sample and the reference sequence is the major challenge in many re-sequencing projects [10]. The large amount and diverse nature of NGS data types (as reviewed in [11]), the diversity of bioinformatics algorithms, and the quality of the reference genome sequence render the choice of the best approach challenging.
Variant calling pipelines often start with (I) the preprocessing of sequence reads, followed by (II) the alignment (mapping) of these reads to a reference sequence. Finally, the (III) identification (calling) of sequence variants is performed based on alignments. Each of these three steps can be carried out by various alternative programs using different algorithms, which influence the accuracy and sensitivity of the resulting variant set.
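As an illustration of this three-step structure, a minimal pipeline could be orchestrated as sketched below. This is only a conceptual sketch: it assumes that Trimmomatic, BWA-MEM, SAMtools, and BCFtools are installed and available on the PATH, and all file names and parameters are placeholders rather than the commands used in this study.

```python
import subprocess

# Illustrative sketch of the three stages; tool choices, parameters, and file
# names are placeholders, not the exact commands used in this study.
REF = "reference.fasta"

def run(cmd: str) -> None:
    """Run a shell command and abort on failure."""
    subprocess.run(cmd, shell=True, check=True)

# (I) read preprocessing: light quality trimming of one paired-end dataset
run("trimmomatic PE reads_1.fastq.gz reads_2.fastq.gz "
    "trimmed_1P.fq.gz trimmed_1U.fq.gz trimmed_2P.fq.gz trimmed_2U.fq.gz "
    "SLIDINGWINDOW:4:15 MINLEN:50")

# (II) read mapping against the reference sequence and coordinate sorting
run(f"bwa index {REF}")
run(f"bwa mem {REF} trimmed_1P.fq.gz trimmed_2P.fq.gz | samtools sort -o mapped.bam -")
run("samtools index mapped.bam")

# (III) variant calling (duplicate removal and variant filtering omitted for brevity)
run(f"bcftools mpileup -f {REF} mapped.bam | bcftools call -mv -Oz -o variants.vcf.gz")
```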
First, read processing may be required if parts of the reads are of low quality. Some downstream tools require sequence reads with quality scores in a specific encoding, namely Phred+33 or Phred+64, and some read processing tools can convert between these encodings. Popular read processing tools are FastQC [12], htSeqTools [13], NGSQC [14], SAMStat [15], and Trimmomatic [16].
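Since both encodings store the same Phred scores and differ only in their ASCII offset (64 versus 33), the conversion itself amounts to a simple character shift, as the following illustrative sketch shows:

```python
def phred64_to_phred33(qual: str) -> str:
    """Convert a FASTQ quality string from Phred+64 to Phred+33 encoding.

    Both encodings store the same Phred scores; only the ASCII offset differs
    (64 vs. 33), so every character is shifted down by 31.
    """
    return "".join(chr(ord(c) - 31) for c in qual)

# Example: 'h' (Q40 in Phred+64) becomes 'I' (Q40 in Phred+33).
assert phred64_to_phred33("hhhh") == "IIII"
```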
As the read mapping determines the quality of the alignment, it is arguably the most important step [10]. Sequence reads are aligned to a suitable, but not necessarily the correct, position in the genome sequence. Often, there is a trade-off between mapping speed and the quality of the resulting alignment [17]. Numerous mappers are available, which utilize different algorithms and criteria to generate alignments [10,18]. Consequently, the choice of tool and parameters can have a large influence on the outcome of the mapping [19,20]. Reads originating from PCR duplicates should be removed from the mapping prior to the variant calling to improve the reliability of the results [20]. Moreover, the quality of the reference genome sequence plays an important role for the performance of the mapper. Particular challenges are low-complexity sequences, repetitive regions, collapsed sequence copies, contaminations, and gaps in the reference genome sequence [10]. Frequently applied read mappers are Bowtie2 [21], BWA-MEM [22], CLC Genomics Workbench (Qiagen), GEM3 [23], Novoalign [24], and SOAP2 [25]. While most of these tools are freely available for academic use as command line versions, CLC Genomics Workbench is a proprietary software suite for genomics with a graphical user interface. Detailed characteristics and algorithms of each mapper have been described elsewhere [18,20,26,27].
Finally, genomic variants like single nucleotide variants (SNVs) or small insertions/deletions (InDels) can be inferred by variant callers based on the sequence read alignment. Popular variant callers like SAMtools/BCFtools [28], CLC Genomics Workbench (Qiagen), FreeBayes [29], GATK [30,31,32,33], LoFreq [34], SNVer [35], VarDict [36], and VarScan [37] use a variety of different approaches to call variants. Consequently, the resulting variant sets differ depending on the employed methods (e.g., Bayesian models), which come with strengths and weaknesses concerning the identification of specific variant types [10,38]. Several factors contribute to a high accuracy of variant calls: (I) a high read coverage at the variant position, so that SNVs are supported by several overlapping reads [39], (II) a careful study design [20], and (III) joint variant calling for multiple samples to allow mutual support of genotypes [40].
The initial set of putative sequence variants is usually filtered to remove unreliable variant calls. Possible reasons for the identification of such variants in the first place are incorrect alignments, sequencing errors, or low quality scores [10]. Read depth, mapping quality, and bias in the alignment to both strands (strand bias) are central criteria used in the filtering step. While this filtering step aims to reduce the number of false positives, it simultaneously increases the number of false negatives. The best trade-off between sensitivity and specificity depends on the purpose of the respective study.
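As a sketch of such hard filtering with purely illustrative thresholds, and assuming the caller writes DP and MQ INFO fields as most tools do, a raw VCF could be reduced as follows:

```python
def passes_filters(vcf_line: str, min_qual=30.0, min_depth=10, min_mq=40.0) -> bool:
    """Apply simple hard filters to one VCF data line (thresholds are illustrative).

    DP (read depth) and MQ (mapping quality) are assumed to be present as INFO
    fields; a strand bias annotation could be filtered in the same way.
    """
    fields = vcf_line.rstrip("\n").split("\t")
    qual = float(fields[5]) if fields[5] != "." else 0.0
    info = dict(kv.split("=", 1) for kv in fields[7].split(";") if "=" in kv)
    depth = int(info.get("DP", 0))
    mapping_quality = float(info.get("MQ", 0.0))
    return qual >= min_qual and depth >= min_depth and mapping_quality >= min_mq

with open("raw_variants.vcf") as vcf, open("filtered_variants.vcf", "w") as out:
    for line in vcf:
        if line.startswith("#") or passes_filters(line):
            out.write(line)
```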
Many underlying algorithms of variant calling pipelines were developed for the analysis of variants in the human genome, e.g., to investigate genetic disorders or to study tumor samples [20,41,42,43,44]. Although the applications in biomedical research and plant sciences differ substantially, plant scientists have largely followed benchmarking studies derived from research on human samples, assuming similar performances. However, many plant genomes pose unique challenges for variant calling, namely high amounts of repetitive sequences [45], large structural variations [46], and a broad range of heterozygosity and polyploidy levels [47]. This diversity of plant genomes, together with substantial differences in nucleotide composition, makes a dedicated benchmarking study on plant datasets necessary; yet no comprehensive benchmarking of read mapping and variant calling tools for plant genome sequences has been described in the literature. A recent study compared the performance of BWA-MEM [22], SOAP2 [25], and Bowtie2 [21] with the two variant callers GATK [30,31,32,33] and SAMtools/BCFtools [28] on simulated and real tomato datasets [48]. To expand the sparse knowledge about the performance of other read mapping and variant calling tools on plant data, we set out to perform a systematic comparison. Due to the availability of excellent genomic resources, we selected the well-established plant model organism Arabidopsis thaliana for our study. While A. thaliana is not a perfect representative of all plants, its genome shows the characteristically high AT content of plant genomes. Despite limitations in heterochromatic and centromeric regions [49], many plant repeats are resolved in the high-quality genome sequence of A. thaliana. Our study evaluated the performance of 50 variant calling pipelines, i.e., combinations of five read mappers (Bowtie2, BWA-MEM, CLC-mapper, GEM3, Novoalign) and ten variant callers from eight different tools (SAMtools/BCFtools, CLC-caller, FreeBayes, GATK v3.8/v4.0/v4.1, LoFreq, SNVer, VarDict, VarScan) that are frequently applied in re-sequencing studies. Many combinations performed almost equally well on numerous datasets of the plant model organism A. thaliana. Illumina sequence reads were used for the detection of variants and provide the foundation for the comparison of these pipelines. Independent PacBio long reads were harnessed for the validation of identified variants based on an orthogonal sequencing technology.

2. Results

2.1. General Statistics of the Read Mapping

Six Illumina paired-end sequence read datasets [50,51] from A. thaliana Nd-1 and one control sample of Col-0 [50] were processed using all combinations of five read mappers and ten variant callers (eight different tools, including three versions of GATK) to evaluate the mapping percentage as well as the precision, sensitivity, and specificity of each variant calling pipeline. These combinations (7 datasets × 5 mappers × 10 callers) resulted in 350 variant call sets generated in this study. Overall, the sequence read quality of the processed datasets was high, with average Phred scores ranging from 35 to 38 (Table S1).
We observed only minor differences between the different sequence read datasets with respect to the percentage of properly aligned read pairs (Figure S1). In general, a higher proportion of the 2 × 300 nt paired-end reads was mapped, ranging from 94.8% to 99.5%, while the 2 × 250 nt and the 2 × 100 nt paired-end reads resulted in mapping proportions ranging from 92.7% to 99.6% and 90.0% to 99.1%, respectively.
The comparison of mapping performance revealed that GEM3 had the highest average percentage of aligned read pairs (99.4%), followed by Novoalign (98.8%), Bowtie2 (98.5%), BWA-MEM (98.1%), and the read mapping function within CLC Genomics Workbench (CLC-mapper) (92.9%) (Figure 1).
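The percentage of properly paired reads was derived from the flagstat output of SAMtools (Figure 1); a minimal sketch of this extraction step, assuming SAMtools is installed and the output format of recent versions, could look as follows:

```python
import re
import subprocess

def properly_paired_percentage(bam_path: str) -> float:
    """Extract the percentage of properly paired reads from 'samtools flagstat'.

    The regular expression matches the 'properly paired (97.53% : N/A)' line
    of the flagstat output; the exact format may vary between SAMtools versions.
    """
    result = subprocess.run(["samtools", "flagstat", bam_path],
                            capture_output=True, text=True, check=True)
    match = re.search(r"properly paired \(([\d.]+)%", result.stdout)
    if match is None:
        raise ValueError("no 'properly paired' percentage found in flagstat output")
    return float(match.group(1))

print(properly_paired_percentage("mapped.bam"))
```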

2.2. Initial Variant Calling and Validation Results

The initial variant calling revealed between 32,939 (Bowtie2/CLC-caller) and 1,009,163 (BWA-MEM/VarDict) unfiltered SNVs, while the number of unfiltered InDels ranged from 2,559 (BWA-MEM/VarScan) to 240,879 (GEM3/VarDict) (Table S2). For the three variant callers able to call and classify multi-nucleotide variants (MNVs), namely CLC-caller, VarDict, and FreeBayes, the number of MNVs ranged from 1,394 (Bowtie2/CLC-caller) to 168,100 (CLC-mapper/FreeBayes) (Table S2).
The quality of a variant call set can be assessed by its transition/transversion ratio, as lower-quality call sets tend to have a lower transition/transversion ratio [52]. While most variant callers showed a similar transition/transversion ratio, with medians ranging from 1.256 (LoFreq) to 1.288 (VarDict), SNVer revealed a lower median of 1.2, and FreeBayes performed worst with a median of 1.15 (Figure 2). In addition, FreeBayes showed the largest variation, ranging from 0.92 to 1.31.
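The ratio is the number of transitions (A↔G and C↔T) divided by the number of transversions (all other substitutions), as in the following minimal sketch:

```python
TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def ti_tv_ratio(snvs):
    """Compute the transition/transversion ratio for a list of (REF, ALT) SNVs."""
    transitions = sum(1 for ref, alt in snvs if (ref, alt) in TRANSITIONS)
    transversions = len(snvs) - transitions
    return transitions / transversions if transversions else float("inf")

# Toy example: two transitions and one transversion give a ratio of 2.0.
print(ti_tv_ratio([("A", "G"), ("C", "T"), ("A", "C")]))
```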
In order to analyze whether a variant caller identifies relatively more SNVs than InDels, the ratio of SNVs to SNVs plus InDels was calculated per variant caller (Figure 3). BCFtools identified the highest proportion of SNVs (median = 0.90), while VarDict and GATK 4.1 called the lowest proportion of SNVs (median = 0.824). Moreover, all GATK versions performed similarly and revealed low variance when compared to the other variant callers. Interestingly, BWA-MEM/VarScan on the SRR3340910 dataset yielded the highest SNV proportion of almost 1 (0.996).
To infer whether a variant caller is more prone to call small or large InDels, the distribution of InDel lengths was inspected (Figure 4). VarDict in particular called very large insertions (up to 981 bp) and very large deletions (up to 998 bp), which are likely to be artifacts since they were filtered out in the corresponding validated call set (Figure S2). VarScan (−93 to 134 bp), SNVer (−95 to 134 bp), CLC-caller (−95 to 156 bp), LoFreq (−109 to 168 bp), and BCFtools (−108 to 216 bp) showed a narrower range of InDel lengths.
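Both the SNV proportion and the InDel length distribution can be derived directly from the lengths of the REF and ALT alleles of a VCF record; the following sketch, restricted to bi-allelic sites for simplicity, illustrates this classification:

```python
def classify_variant(ref: str, alt: str) -> str:
    """Classify a bi-allelic variant based on the lengths of its REF and ALT alleles."""
    if len(ref) == 1 and len(alt) == 1:
        return "SNV"
    if len(ref) != len(alt):
        return "InDel"
    return "MNV"  # equal allele lengths greater than one

def indel_length(ref: str, alt: str) -> int:
    """Signed InDel length: positive for insertions, negative for deletions."""
    return len(alt) - len(ref)

variants = [("A", "G"), ("AT", "A"), ("G", "GCCT"), ("C", "T")]
snvs = [v for v in variants if classify_variant(*v) == "SNV"]
indels = [v for v in variants if classify_variant(*v) == "InDel"]
print("SNV proportion:", len(snvs) / (len(snvs) + len(indels)))  # 0.5
print("InDel lengths:", [indel_length(*v) for v in indels])      # [-1, 3]
```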
A gold standard, comprising variants that have been validated through orthogonal data, was used for benchmarking (see Methods for details). In order to compare the performance of different variant calling pipelines, we calculated the sensitivity, specificity, accuracy, precision, and F1 score (Table 1, Table S2). GATK revealed the highest accuracy in combination with most mappers; the only exception was the combination of GEM3 and VarScan, which performed better than any GATK version (Figure 5). GATK worked best on alignments produced by BWA-MEM and Novoalign, and all three evaluated GATK versions (v3.8, v4.0, and v4.1) showed an almost identical performance. In general, Novoalign reached the best (median) accuracy; the only exceptions were CLC-caller and VarScan, which performed best on alignments produced by CLC-mapper and GEM3, respectively. While Bowtie2 yielded a high specificity with most variant callers, it showed a low accuracy with all variant callers except FreeBayes and VarDict.
In general, the sensitivity of the variant calling pipelines ranged from 0.0219 (Bowtie2/CLC-caller) to 0.6038 (BWA-MEM/VarDict) and the specificity from 0.99450 (CLC-mapper/FreeBayes) to 0.999961 (Bowtie2/CLC-caller) (Figures S3 and S4). Moreover, we observed a negative correlation of −0.8 between specificity and sensitivity, indicating that a pipeline with a high sensitivity tended to show a low specificity and vice versa. Almost every variant caller, except for VarDict, showed the lowest specificity when used in combination with CLC-mapper, while in parallel these combinations had some of the highest sensitivities. VarDict reached its highest specificity, but lowest sensitivity, in combination with Bowtie2, showed an inferior specificity with GEM3, and reached its best sensitivity with BWA-MEM.
All tested GATK versions (v3.8, v4.0, and v4.1) showed a very high sensitivity and were only outperformed by VarDict on the 2 × 100 nt paired-end dataset, independent of the choice of the mapper, which reached up to 0.6038 sensitivity (BWA-MEM/VarDict, SRR2919279). However, the specificity of GATK was inferior to that of some other variant callers. Only minor differences were observed between the three evaluated GATK versions regarding both sensitivity and specificity; the choice of the mapper had a substantially higher impact than the applied GATK version.
After GATK, FreeBayes showed a good performance in terms of sensitivity and robust results across all mappers, whereas the other variant callers showed a low performance in combination with Bowtie2. CLC-caller, VarScan, and LoFreq revealed a large variation with respect to sensitivity across all mappers, while GATK and especially VarDict displayed very low variance in their results. When focusing on median sensitivity, the following combinations showed the best results: CLC-mapper/CLC-caller, GEM3/VarScan, CLC-mapper/SNVer, CLC-mapper/LoFreq, CLC-mapper/GATK v3.8, CLC-mapper/GATK v4.0, CLC-mapper/GATK v4.1, CLC-mapper/BCFtools, GEM3/FreeBayes, and BWA-MEM/VarDict. In terms of median specificity, however, all variant callers revealed the best results in combination with Bowtie2, except for FreeBayes, which worked best with Novoalign. Moreover, FreeBayes showed the lowest performance and the largest variation across all mappers (Figure S3).
Finally, the harmonic mean of precision and sensitivity, namely the F1 score, was analyzed (Figure S5). Novoalign in combination with GATK revealed the best mean performance with respect to the F1 score. Again, different GATK versions showed almost identical performance (Table 1).

3. Discussion

The major challenge in many pangenome and re-sequencing projects is the accurate and comprehensive identification of sequence variants. Due to the high diversity and complexity of plant genomes and their differences from animal (e.g., human) genomes, variant calling in plant research differs substantially from that in human and biomedical research. Most human benchmarking studies focus on calling SNVs of certain tumor cells within a highly heterogeneous cell population [20,42,43]. In contrast, plant studies usually analyze the whole set of genetically identical cells derived from one plant. However, the sequencing of pooled DNA from multiple plants aims at the identification of low-frequency SNVs. The large amount and diverse nature of NGS data types (as reviewed in [11]), the diversity of bioinformatic algorithms, and the quality of the reference genome sequences render the choice of the best approach challenging. Hence, we performed a benchmarking study to provide comparable data on the performance of combinations of frequently applied mappers and variant callers (variant calling pipelines) on plant datasets. We extend a previous report [48] by providing data on the performance of additional tools for both the mapping and the variant calling step.
To allow for a consistent comparison of baseline performance, we used default parameters for all tools as these parameters are frequently chosen in plant science applications [4,5,9]. Sequence read datasets from different sequencing platforms, with different read lengths, and different sizes were processed to ensure a realistic benchmarking of tools. Since all evaluated tools can process a real plant dataset within a day, we refrained from assessing the computational costs of the analysis. There is usually a trade-off between quality of the results and computational costs. In our experience, many plant scientists select the workflow leading to the best results independent of computational costs [51].
The first step in a variant calling pipeline is the alignment (mapping) of reads to a reference sequence. While the mapping of 2 × 250 nt paired-end reads resulted in a higher mapping percentage, the difference to the 2 × 100 nt reads is only about 10%. As different sequencing platforms were used for the data generation, per-base quality might contribute to this difference. It is expected that longer reads are aligned with higher specificity and hence improve the subsequent variant calling.
The quality of the variant call sets was assessed by the transition/transversion (ti/tv) ratio, which was previously described as a quality indicator [52]. Overall, the quality of almost all analyzed call sets was similar. A previous benchmarking study with SAMtools and GATK reported similar ti/tv ratios for all pipelines [53]. There, a filtering step increased the ti/tv ratios, indicating that the filtering increased the quality of the call sets [53]. This observation is in line with our findings, which revealed an increased ti/tv ratio for the call sets reduced to variants present in the gold standard (Figure S6, Table S3). As FreeBayes showed a substantial increase in quality through this filtering, we recommend checking the ti/tv ratio when applying FreeBayes; however, this effect might be dataset-specific.
The choice of the variant caller is crucial for the number of called SNVs, MNVs, and InDels. For example, only CLC-caller, VarDict, and FreeBayes were able to call MNVs, thus being more suitable for the identification of structural differences. Furthermore, variant caller results differ with respect to the ratio of SNVs to InDels, which should be considered depending on the specific requirements of the respective sequencing project. BCFtools called relatively more SNVs than InDels, while GATK revealed relatively more InDels. A unique property of VarDict was the detection of InDels up to almost 1 kb, which exceeds the read length. Since the accurate identification of such large variants, which are longer than the average read length, is still a challenging task [54], many of these variants are likely false positives. The reduced number of large insertions in the validated call sets of VarDict supports this assumption.
Depending on the application, a pipeline with a high sensitivity or a high specificity is desired. In terms of sensitivity, GATK in combination with CLC-mapper, Novoalign, and BWA-MEM yielded the best and most consistent results across all evaluated datasets. These results are in line with a recent study showing that GATK often outperformed SAMtools in terms of sensitivity, precision, and called raw InDels [48]. Similar results had been shown in rice, tomato, and soybean [48], indicating that GATK is also suitable for various crop species with complex genomes. A high sensitivity is essential when a high number of true positive variants increases the power of the analysis, e.g., when looking for a detrimental variant between two samples. In contrast, a high specificity is desired in mapping-by-sequencing (MBS) projects, as a high proportion of true positives keeps the signal-to-noise ratio high; here, the pipeline comprising Bowtie2 and LoFreq resulted in the highest specificity and can thus be recommended. Combining both performance metrics by analyzing the accuracy, the best results were obtained with Novoalign and GATK. The same pipeline yielded the best results regarding the harmonic mean of precision and sensitivity (F1 score). Differences observed between the three evaluated GATK versions (v3.8, v4.0, and v4.1) were negligible. However, functionalities and computational performance might differ between these versions.
In summary, this benchmarking study provides insights into the strengths and weaknesses of different variant calling pipelines when applied on plant NGS datasets. Although the performance of all evaluated tools will differ between samples depending on properties of the read datasets and the genome sequence, we hope that our findings serve as a helpful guide.

4. Materials and Methods

4.1. Sequence Read Datasets

We used paired-end Illumina reads from two different A. thaliana accessions, namely Columbia-0 (Col-0) and Niederzenz-1 (Nd-1) (Table S1). The read quality was checked via FastQC [12] (Table S1). Trimmomatic [16] was applied for light trimming and, where applicable, for conversion of the quality scores to Phred+33 encoding. These datasets cover different Illumina sequencing platforms including GAIIx, MiSeq, and HiSeq 1500. Two datasets (SRR2919288 and SRR3340911) are the paired-end portions of mate pair sequencing libraries; these samples were sequenced as 2 × 250 nt paired-end libraries.
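An illustrative Trimmomatic invocation for one paired-end dataset is sketched below; the JAR path, file names, and trimming parameters are placeholders, and the exact commands used for each dataset are listed in Table S4. The TOPHRED33 step performs the conversion to Phred+33 encoding where required.

```python
import subprocess

# Illustrative Trimmomatic call for one paired-end dataset; the path to the
# Trimmomatic JAR, the file names, and the trimming parameters are placeholders.
# TOPHRED33 converts the quality scores to Phred+33 encoding where required.
cmd = [
    "java", "-jar", "trimmomatic.jar", "PE",
    "dataset_1.fastq.gz", "dataset_2.fastq.gz",
    "trimmed_1P.fastq.gz", "trimmed_1U.fastq.gz",
    "trimmed_2P.fastq.gz", "trimmed_2U.fastq.gz",
    "LEADING:3", "TRAILING:3", "SLIDINGWINDOW:4:15", "MINLEN:50", "TOPHRED33",
]
subprocess.run(cmd, check=True)
```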

4.2. Sequence Read Mapping

We chose five popular read mappers, namely Bowtie2 v2.3.4.3 [21], BWA-MEM v0.7.17 [22], CLC Genomics Workbench v11 (Qiagen), GEM3 v3.6 [23], and Novoalign v3.09.01 [24] for this analysis. While most of these mappers are freely available for academic use, CLC is a proprietary software suite for genomic analyses. Paired-end reads were mapped against the TAIR10 reference genome sequence [55]. The executed commands for each tool can be found in Table S4. SAMtools v1.8 [28] was deployed for sorting of the BAM files. Reads originating from PCR duplicates were removed via MarkDuplicates in Picard-bf40841 [56]. Read groups and InDel qualities were assigned where required by downstream tools. While the plastome and chondrome sequences were included during the mapping step, variant caller performance was only assessed for the five chromosome sequences of the nucleome.
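A sketch of this post-processing for one mapping is given below, assuming SAMtools and Picard are installed; file names and read group values are placeholders, and the exact commands are listed in Table S4.

```python
import subprocess

def run(cmd: str) -> None:
    subprocess.run(cmd, shell=True, check=True)

# Coordinate sorting of the mapper output with SAMtools
run("samtools sort -o mapped.sorted.bam mapped.sam")

# Removal of reads originating from PCR duplicates with Picard MarkDuplicates
run("java -jar picard.jar MarkDuplicates I=mapped.sorted.bam "
    "O=mapped.dedup.bam M=duplication_metrics.txt REMOVE_DUPLICATES=true")

# Assignment of a read group, which some downstream variant callers require
run("java -jar picard.jar AddOrReplaceReadGroups I=mapped.dedup.bam "
    "O=mapped.final.bam RGID=sample1 RGLB=lib1 RGPL=ILLUMINA RGPU=unit1 RGSM=sample1")
run("samtools index mapped.final.bam")
```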

4.3. Variant Calling

All mapping results were subjected to variant calling via CLC Genomics Workbench v11 (Qiagen), FreeBayes v1.2.0 [29], Genome Analysis Toolkit (GATK) v3.8/v4.0/v4.1 HaplotypeCaller (GATK-HC) [30,31,32,33], LoFreq v2.1.3.1 [34], SAMtools v1.9 [28] in combination with BCFtools v1.9 (referred to as BCFtools in the following), SNVer [35], VarDict [36], and VarScan [37]. We evaluated three different versions of GATK in order to analyze whether the applied version has a substantial impact on the performance of the variant calling pipeline. The executed commands for each tool can be found in Table S4.
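Two representative invocations (GATK HaplotypeCaller and FreeBayes) are sketched below with placeholder file names; the exact commands for all evaluated callers are listed in Table S4.

```python
import subprocess

REF = "TAIR10.fa"          # reference genome sequence
BAM = "mapped.final.bam"   # sorted alignment with duplicates removed and read groups set

def run(cmd: str) -> None:
    subprocess.run(cmd, shell=True, check=True)

# GATK HaplotypeCaller requires a FASTA index (.fai) and a sequence dictionary (.dict)
run(f"samtools faidx {REF}")
run(f"gatk CreateSequenceDictionary -R {REF}")
run(f"gatk HaplotypeCaller -R {REF} -I {BAM} -O gatk_variants.vcf.gz")

# FreeBayes writes its VCF output to standard output
run(f"freebayes -f {REF} {BAM} > freebayes_variants.vcf")
```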

4.4. Performance Measure of Variant Calling Pipelines

The overall workflow of our benchmarking study is presented in Figure 6. We applied a previously described pipeline to validate sequence variants against the Nd-1 de novo assembly based on PacBio reads [57], which is crucial in order to assess the performance of each variant calling pipeline. This Nd-1 genome sequence assembly is of high quality due to a high PacBio read coverage of about 112-fold and additional polishing with about 120-fold coverage of accurate short reads [58]. A gold standard was generated from all validated variants by combining them into a single VCF file [59]. Afterwards, the numbers of true positives, true negatives, false positives, and false negatives were calculated based on the gold standard and the initial VCF files for each variant calling pipeline and dataset. Next, performance statistics including sensitivity, specificity, precision, accuracy, and F1 score were calculated per combination of mapper, variant caller, and dataset (Table 1).
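A sketch of this final step is given below, assuming that variants are represented as (chromosome, position, REF, ALT) tuples and that the number of assessed reference positions is known; all names and numbers are illustrative.

```python
def performance_metrics(calls: set, gold_standard: set, n_positions: int) -> dict:
    """Compute benchmarking statistics from sets of (chromosome, position, REF, ALT) tuples.

    n_positions is the total number of assessed reference positions, which is
    needed to derive the number of true negatives; all names are illustrative.
    """
    tp = len(calls & gold_standard)
    fp = len(calls - gold_standard)
    fn = len(gold_standard - calls)
    tn = n_positions - tp - fp - fn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / n_positions
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sen": sensitivity, "spe": specificity, "pre": precision,
            "acc": accuracy, "F1": f1}

calls = {("Chr1", 1024, "A", "G"), ("Chr1", 2048, "C", "T")}
gold = {("Chr1", 1024, "A", "G"), ("Chr1", 4096, "T", "A")}
print(performance_metrics(calls, gold, n_positions=119_000_000))
```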

Supplementary Materials

The following are available online at https://www.mdpi.com/2223-7747/9/4/439/s1, Table S1: Overview of analyzed datasets with SRR identifiers. Table S2: Variant calling results of the initial VCF files. Table S3: Variant calling results of the validated VCF files. Table S4: Executed commands for each variant calling pipeline. Figure S1: Ratio of mapped read pairs per dataset. Figure S2: Distribution of InDel lengths in the validated call sets per variant caller. Figure S3: Specificity of variant calling pipelines. Figure S4: Sensitivity of variant calling pipelines. Figure S5: F1 score of variant calling pipelines. Figure S6: Ratio of transitions to transversions in the validated variant call sets per variant caller.

Author Contributions

H.M.S., A.R., and B.P. conceived the project. H.M.S., A.R., and B.P. conducted data analysis. H.M.S. and B.P. wrote the manuscript. B.P. supervised the project. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We thank the Bioinformatics Resource Facility support team of the CeBiTec for great technical support. We acknowledge support for the Article Processing Charge by the Open Access Publication Fund of Bielefeld University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Weigel, D.; Mott, R. The 1001 Genomes Project for Arabidopsis thaliana. Genome Biol. 2009, 10, 107.
  2. Varshney, R.K.; Pandey, M.K.; Chitikineni, A. Plant Genetics and Molecular Biology; Springer: New York, NY, USA, 2018; ISBN 978-3-319-91313-1.
  3. Li, Y.; Zhou, G.; Ma, J.; Jiang, W.; Jin, L.; Zhang, Z.; Guo, Y.; Zhang, J.; Sui, Y.; Zheng, L.; et al. De novo assembly of soybean wild relatives for pan-genome analysis of diversity and agronomic traits. Nat. Biotechnol. 2014, 32, 1045–1052.
  4. Zhao, Q.; Feng, Q.; Lu, H.; Li, Y.; Wang, A.; Tian, Q.; Zhan, Q.; Lu, Y.; Zhang, L.; Huang, T.; et al. Pan-genome analysis highlights the extent of genomic variation in cultivated and wild rice. Nat. Genet. 2018, 50, 278–284.
  5. Song, J.-M.; Guan, Z.; Hu, J.; Guo, C.; Yang, Z.; Wang, S.; Liu, D.; Wang, B.; Lu, S.; Zhou, R.; et al. Eight high-quality genomes reveal pan-genome architecture and ecotype differentiation of Brassica napus. Nat. Plants 2020, 6, 34–45.
  6. Abe, A.; Kosugi, S.; Yoshida, K.; Natsume, S.; Takagi, H.; Kanzaki, H.; Matsumura, H.; Yoshida, K.; Mitsuoka, C.; Tamiru, M.; et al. Genome sequencing reveals agronomically important loci in rice using MutMap. Nat. Biotechnol. 2012, 30, 174–178.
  7. Liu, S.; Yeh, C.-T.; Tang, H.M.; Nettleton, D.; Schnable, P.S. Gene Mapping via Bulked Segregant RNA-Seq (BSR-Seq). PLoS ONE 2012, 7, e36406.
  8. Mascher, M.; Jost, M.; Kuon, J.-E.; Himmelbach, A.; Aßfalg, A.; Beier, S.; Scholz, U.; Graner, A.; Stein, N. Mapping-by-sequencing accelerates forward genetics in barley. Genome Biol. 2014, 15, R78.
  9. Ries, D.; Holtgräwe, D.; Viehöver, P.; Weisshaar, B. Rapid gene identification in sugar beet using deep sequencing of DNA from phenotypic pools selected from breeding panels. BMC Genom. 2016, 17, 236.
  10. Pfeifer, S.P. From next-generation resequencing reads to a high-quality variant data set. Heredity 2017, 118, 111–124.
  11. Harismendy, O.; Ng, P.C.; Strausberg, R.L.; Wang, X.; Stockwell, T.B.; Beeson, K.Y.; Schork, N.J.; Murray, S.S.; Topol, E.J.; Levy, S.; et al. Evaluation of next generation sequencing platforms for population targeted sequencing studies. Genome Biol. 2009, 10, R32.
  12. Andrews, S. FastQC: A Quality Control Tool for High Throughput Sequence Data. 2010. Available online: http://www.bioinformatics.babraham.ac.uk/projects/fastqc/ (accessed on 14 March 2020).
  13. Planet, E.; Attolini, C.S.-O.; Reina, O.; Flores, O.; Rossell, D. htSeqTools: High-throughput sequencing quality control, processing and visualization in R. Bioinformatics 2012, 28, 589–590.
  14. Dai, M.; Thompson, R.C.; Maher, C.; Contreras-Galindo, R.; Kaplan, M.H.; Markovitz, D.M.; Omenn, G.; Meng, F. NGSQC: Cross-platform quality analysis pipeline for deep sequencing data. BMC Genom. 2010, 11 (Suppl. S4), S7.
  15. Lassmann, T.; Hayashizaki, Y.; Daub, C.O. SAMStat: Monitoring biases in next generation sequencing data. Bioinformatics 2011, 27, 130–131.
  16. Bolger, A.M.; Lohse, M.; Usadel, B. Trimmomatic: A flexible trimmer for Illumina sequence data. Bioinformatics 2014, 30, 2114–2120.
  17. Rodríguez-Ezpeleta, N.; Hackenberg, M.; Aransay, A.M. Bioinformatics for High Throughput Sequencing; Springer Science & Business Media: Berlin, Germany, 2011; ISBN 978-1-4614-0782-9.
  18. Reinert, K.; Langmead, B.; Weese, D.; Evers, D.J. Alignment of Next-Generation Sequencing Reads. Annu. Rev. Genom. Hum. Genet. 2015, 16, 133–151.
  19. Shang, J.; Zhu, F.; Vongsangnak, W.; Tang, Y.; Zhang, W.; Shen, B. Evaluation and Comparison of Multiple Aligners for Next-Generation Sequencing Data Analysis. Available online: https://www.hindawi.com/journals/bmri/2014/309650/ (accessed on 22 January 2020).
  20. Pabinger, S.; Dander, A.; Fischer, M.; Snajder, R.; Sperk, M.; Efremova, M.; Krabichler, B.; Speicher, M.R.; Zschocke, J.; Trajanoski, Z. A survey of tools for variant analysis of next-generation genome sequencing data. Brief. Bioinform. 2014, 15, 256–278.
  21. Langmead, B.; Salzberg, S.L. Fast gapped-read alignment with Bowtie 2. Nat. Methods 2012, 9, 357–359.
  22. Li, H. Aligning sequence reads, clone sequences and assembly contigs with BWA-MEM. arXiv 2013, arXiv:1303.3997.
  23. Marco-Sola, S.; Sammeth, M.; Guigó, R.; Ribeca, P. The GEM mapper: Fast, accurate and versatile alignment by filtration. Nat. Methods 2012, 9, 1185–1188.
  24. Novoalign. Available online: http://novocraft.com/ (accessed on 22 January 2020).
  25. Li, R.; Yu, C.; Li, Y.; Lam, T.-W.; Yiu, S.-M.; Kristiansen, K.; Wang, J. SOAP2: An improved ultrafast tool for short read alignment. Bioinformatics 2009, 25, 1966–1967.
  26. Ruffalo, M.; LaFramboise, T.; Koyutürk, M. Comparative analysis of algorithms for next-generation sequencing read alignment. Bioinformatics 2011, 27, 2790–2796.
  27. Yu, X.; Guda, K.; Willis, J.; Veigl, M.; Wang, Z.; Markowitz, S.; Adams, M.D.; Sun, S. How do alignment programs perform on sequencing data with varying qualities and from repetitive regions? Biodata Min. 2012, 5, 6.
  28. Li, H.; Handsaker, B.; Wysoker, A.; Fennell, T.; Ruan, J.; Homer, N.; Marth, G.; Abecasis, G.; Durbin, R. The Sequence Alignment/Map format and SAMtools. Bioinformatics 2009, 25, 2078–2079.
  29. Garrison, E.; Marth, G. Haplotype-based variant detection from short-read sequencing. arXiv 2012, arXiv:1207.3907.
  30. McKenna, A.; Hanna, M.; Banks, E.; Sivachenko, A.; Cibulskis, K.; Kernytsky, A.; Garimella, K.; Altshuler, D.; Gabriel, S.; Daly, M.; et al. The Genome Analysis Toolkit: A MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010, 20, 1297–1303.
  31. DePristo, M.A.; Banks, E.; Poplin, R.; Garimella, K.V.; Maguire, J.R.; Hartl, C.; Philippakis, A.A.; del Angel, G.; Rivas, M.A.; Hanna, M.; et al. A framework for variation discovery and genotyping using next-generation DNA sequencing data. Nat. Genet. 2011, 43, 491–498.
  32. Van der Auwera, G.A.; Carneiro, M.O.; Hartl, C.; Poplin, R.; Del Angel, G.; Levy-Moonshine, A.; Jordan, T.; Shakir, K.; Roazen, D.; Thibault, J.; et al. From FastQ data to high confidence variant calls: The Genome Analysis Toolkit best practices pipeline. Curr. Protoc. Bioinform. 2013, 43, 11.10.1–11.10.33.
  33. Poplin, R.; Ruano-Rubio, V.; DePristo, M.A.; Fennell, T.J.; Carneiro, M.O.; Van der Auwera, G.A.; Kling, D.E.; Gauthier, L.D.; Levy-Moonshine, A.; Roazen, D.; et al. Scaling accurate genetic variant discovery to tens of thousands of samples. bioRxiv 2018, 201178.
  34. Wilm, A.; Aw, P.P.K.; Bertrand, D.; Yeo, G.H.T.; Ong, S.H.; Wong, C.H.; Khor, C.C.; Petric, R.; Hibberd, M.L.; Nagarajan, N. LoFreq: A sequence-quality aware, ultra-sensitive variant caller for uncovering cell-population heterogeneity from high-throughput sequencing datasets. Nucleic Acids Res. 2012, 40, 11189–11201.
  35. Wei, Z.; Wang, W.; Hu, P.; Lyon, G.J.; Hakonarson, H. SNVer: A statistical tool for variant calling in analysis of pooled or individual next-generation sequencing data. Nucleic Acids Res. 2011, 39, e132.
  36. Lai, Z.; Markovets, A.; Ahdesmaki, M.; Chapman, B.; Hofmann, O.; McEwen, R.; Johnson, J.; Dougherty, B.; Barrett, J.C.; Dry, J.R. VarDict: A novel and versatile variant caller for next-generation sequencing in cancer research. Nucleic Acids Res. 2016, 44, e108.
  37. Koboldt, D.C.; Chen, K.; Wylie, T.; Larson, D.E.; McLellan, M.D.; Mardis, E.R.; Weinstock, G.M.; Wilson, R.K.; Ding, L. VarScan: Variant detection in massively parallel sequencing of individual and pooled samples. Bioinformatics 2009, 25, 2283–2285.
  38. Pucker, B.; Schilbert, H. Genomics and Transcriptomics Advances in Plant Sciences. In Molecular Approaches in Plant Biology and Environmental Challenges; Springer: Singapore, 2019; ISBN 9789811506895.
  39. Fumagalli, M. Assessing the effect of sequencing depth and sample size in population genetics inferences. PLoS ONE 2013, 8, e79667.
  40. Nielsen, R.; Paul, J.S.; Albrechtsen, A.; Song, Y.S. Genotype and SNP calling from next-generation sequencing data. Nat. Rev. Genet. 2011, 12, 443–451.
  41. Hwang, S.; Kim, E.; Lee, I.; Marcotte, E.M. Systematic comparison of variant calling pipelines using gold standard personal exome variants. Sci. Rep. 2015, 5, 17875.
  42. Krøigård, A.B.; Thomassen, M.; Lænkholm, A.-V.; Kruse, T.A.; Larsen, M.J. Evaluation of Nine Somatic Variant Callers for Detection of Somatic Mutations in Exome and Targeted Deep Sequencing Data. PLoS ONE 2016, 11, e0151664.
  43. Bian, X.; Zhu, B.; Wang, M.; Hu, Y.; Chen, Q.; Nguyen, C.; Hicks, B.; Meerzaman, D. Comparing the performance of selected variant callers using synthetic data and genome segmentation. BMC Bioinform. 2018, 19, 429.
  44. Hwang, K.-B.; Lee, I.-H.; Li, H.; Won, D.-G.; Hernandez-Ferrer, C.; Negron, J.A.; Kong, S.W. Comparative analysis of whole-genome sequencing pipelines to minimize false negative findings. Sci. Rep. 2019, 9, 1–10.
  45. Nystedt, B.; Street, N.R.; Wetterbom, A.; Zuccolo, A.; Lin, Y.-C.; Scofield, D.G.; Vezzi, F.; Delhomme, N.; Giacomello, S.; Alexeyenko, A.; et al. The Norway spruce genome sequence and conifer genome evolution. Nature 2013, 497, 579–584.
  46. Fuentes, R.R.; Chebotarov, D.; Duitama, J.; Smith, S.; De la Hoz, J.F.D.; Mohiyuddin, M.; Wing, R.A.; McNally, K.L.; Tatarinova, T.; Grigoriev, A.; et al. Structural variants in 3000 rice genomes. Genome Res. 2019, 29, 870–880.
  47. Claros, M.G.; Bautista, R.; Guerrero-Fernández, D.; Benzerki, H.; Seoane, P.; Fernández-Pozo, N. Why Assembling Plant Genome Sequences Is So Challenging. Biology 2012, 1, 439–459.
  48. Wu, X.; Heffelfinger, C.; Zhao, H.; Dellaporta, S.L. Benchmarking variant identification tools for plant diversity discovery. BMC Genom. 2019, 20, 701.
  49. Davison, J.; Tyagi, A.; Comai, L. Large-scale polymorphism of heterochromatic repeats in the DNA of Arabidopsis thaliana. BMC Plant Biol. 2007, 7, 44.
  50. Kleinboelting, N.; Huep, G.; Appelhagen, I.; Viehoever, P.; Li, Y.; Weisshaar, B. The Structural Features of Thousands of T-DNA Insertion Sites Are Consistent with a Double-Strand Break Repair-Based Insertion Mechanism. Mol. Plant. 2015, 8, 1651–1664.
  51. Pucker, B.; Holtgräwe, D.; Rosleff Sörensen, T.; Stracke, R.; Viehöver, P.; Weisshaar, B. A De Novo Genome Sequence Assembly of the Arabidopsis thaliana Accession Niederzenz-1 Displays Presence/Absence Variation and Strong Synteny. PLoS ONE 2016, 11, e0164321.
  52. Pucker, B.; Holtgräwe, D.; Stadermann, K.B.; Frey, K.; Huettel, B.; Reinhardt, R.; Weisshaar, B. A chromosome-level sequence assembly reveals the structure of the Arabidopsis thaliana Nd-1 genome and its gene set. PLoS ONE 2019, 14, e0216233.
  53. Liu, X.; Han, S.; Wang, Z.; Gelernter, J.; Yang, B.-Z. Variant Callers for Next-Generation Sequencing Data: A Comparison Study. PLoS ONE 2013, 8, e75619.
  54. Kavak, P.; Lin, Y.-Y.; Numanagić, I.; Asghari, H.; Güngör, T.; Alkan, C.; Hach, F. Discovery and genotyping of novel sequence insertions in many sequenced individuals. Bioinformatics 2017, 33, i161–i169.
  55. Lamesch, P.; Berardini, T.Z.; Li, D.; Swarbreck, D.; Wilks, C.; Sasidharan, R.; Muller, R.; Dreher, K.; Alexander, D.L.; Garcia-Hernandez, M.; et al. The Arabidopsis Information Resource (TAIR): Improved gene annotation and new tools. Nucleic Acids Res. 2012, 40, D1202–D1210.
  56. Picard Tools. Available online: https://broadinstitute.github.io/picard/ (accessed on 22 January 2020).
  57. Baasner, J.-S.; Howard, D.; Pucker, B. Influence of neighboring small sequence variants on functional impact prediction. bioRxiv 2019, 596718.
  58. Li, H. Improving SNP discovery by base alignment quality. Bioinformatics 2011, 27, 1157–1158.
  59. Schilbert, H.; Rempel, A.; Pucker, B. Gold Standard of Nd1 vs. TAIR10 Sequence Variants; Bielefeld University: Bielefeld, Germany, 2020.
Figure 1. Ratio of mapped sequence read pairs per mapper. Sequence reads of six A. thaliana Nd-1 datasets were mapped to the Col-0 reference genome sequence TAIR10. The average ratio of aligned read pairs was calculated for Bowtie2, BWA-MEM, the mapping function in CLC Genomics Workbench (CLC), GEM3, and Novoalign based on all six datasets through the flagstats function of SAMtools. The width of the violin plots is proportional to the density of the data points. The boxplots inside the violin plots indicate quantiles and the white dot indicates the median.
Figure 2. Ratio of transitions/transversions in the variant call sets per variant caller. Call set quality was assessed by analyzing the transition/transversion ratio. The orange line represents the median, the green triangle represents the mean.
Figure 3. Ratio of SNVs to SNVs and InDels per variant caller. Performance of each variant caller was assessed based on 30 mappings of A. thaliana Nd-1 reads against the Col-0 reference genome sequence TAIR10. The proportion of SNVs in the results of each applied variant caller was analyzed. MNVs were excluded because not all variant callers identified MNVs. The orange line represents the median, the green triangle the mean.
Figure 4. Distribution of InDel lengths per variant caller. Performance of variant callers was assessed based on 30 mappings of A. thaliana Nd-1 reads against the Col-0 reference genome sequence TAIR10. The length distribution of all InDels identified by each applied variant caller was analyzed. The orange line represents the median, the green triangle represents the mean.
Figure 5. Accuracy of variant calling pipelines. The accuracy for each variant calling pipeline is shown with mean (dashed line) and median (straight line) calculated based on the results of the six analyzed datasets.
Figure 6. Workflow for the performance analysis of variant calling pipelines. First, reads within supplied FASTQ files were mapped against the TAIR10 A. thaliana reference genome sequence. Next, variants were called and saved in VCF files. All variants were subjected to a previously described validation process based on the Nd-1 genome sequence [58]. A gold standard was generated based on all validated variants. The initial variants called by each combination of mapper and variant caller were evaluated by analyzing whether they are present or absent in the gold standard. The numbers of SNVs, MNVs, and InDels were retrieved from the validated and from the initial VCF files (Tables S2 and S3). Next, true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN) were calculated for SNVs, MNVs, and InDels identified by each combination of mapper and variant caller. Finally, performance statistics, such as F1 score, sensitivity, specificity, precision, and accuracy were calculated.
Table 1. Performance statistics of variant calling pipelines. For each combination of read mapper (rows) and variant caller (columns), the sensitivity (sen), specificity (spe), precision (pre), and accuracy (acc) are given in percent; the F1 score is given as a fraction.

| Mapper | Metric | BCFtools | CLC | FreeBayes | GATK v3.8 | GATK v4.0 | GATK v4.1 | LoFreq | SNVer | VarDict | VarScan |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Bowtie2 | sen | 26.701 | 7.979 | 43.729 | 33.836 | 33.819 | 33.869 | 12.282 | 21.844 | 37.343 | 13.910 |
| Bowtie2 | spe | 99.928 | 99.987 | 99.566 | 99.937 | 99.937 | 99.936 | 99.985 | 99.969 | 99.884 | 99.979 |
| Bowtie2 | pre | 81.800 | 88.244 | 53.735 | 87.286 | 87.342 | 87.035 | 91.607 | 90.107 | 79.803 | 89.530 |
| Bowtie2 | acc | 99.030 | 98.828 | 98.939 | 99.104 | 99.104 | 99.104 | 98.878 | 98.983 | 99.120 | 98.894 |
| Bowtie2 | F1 | 0.403 | 0.145 | 0.47 | 0.488 | 0.488 | 0.488 | 0.214 | 0.352 | 0.512 | 0.239 |
| BWA-MEM | sen | 43.186 | 34.992 | 43.053 | 47.130 | 47.099 | 47.136 | 38.874 | 41.435 | 40.302 | 37.506 |
| BWA-MEM | spe | 99.806 | 99.908 | 99.548 | 99.842 | 99.842 | 99.839 | 99.892 | 99.868 | 99.852 | 99.907 |
| BWA-MEM | pre | 73.182 | 81.892 | 51.289 | 79.129 | 79.083 | 78.786 | 81.122 | 79.521 | 76.686 | 83.535 |
| BWA-MEM | acc | 99.120 | 99.099 | 98.916 | 99.179 | 99.179 | 99.178 | 99.122 | 99.133 | 99.124 | 99.120 |
| BWA-MEM | F1 | 0.543 | 0.492 | 0.459 | 0.591 | 0.59 | 0.59 | 0.528 | 0.546 | 0.531 | 0.518 |
| CLC | sen | 47.682 | 36.382 | 41.564 | 49.237 | 49.357 | 49.413 | 41.730 | 45.352 | 39.508 | 41.349 |
| CLC | spe | 99.695 | 99.851 | 99.517 | 99.817 | 99.815 | 99.812 | 99.821 | 99.759 | 99.857 | 99.847 |
| CLC | pre | 65.903 | 73.878 | 48.854 | 77.341 | 77.183 | 76.901 | 73.611 | 70.493 | 76.995 | 76.971 |
| CLC | acc | 99.064 | 99.062 | 98.882 | 99.175 | 99.175 | 99.174 | 99.066 | 99.060 | 99.112 | 99.100 |
| CLC | F1 | 0.549 | 0.491 | 0.441 | 0.601 | 0.602 | 0.602 | 0.536 | 0.549 | 0.525 | 0.539 |
| GEM3 | sen | 35.488 | 32.826 | 44.104 | 40.895 | 40.890 | 40.963 | 33.867 | 34.710 | 39.131 | 41.502 |
| GEM3 | spe | 99.887 | 99.932 | 99.523 | 99.902 | 99.902 | 99.900 | 99.941 | 99.925 | 99.847 | 99.904 |
| GEM3 | pre | 79.533 | 85.457 | 51.481 | 84.189 | 84.221 | 83.863 | 87.657 | 85.573 | 75.648 | 84.315 |
| GEM3 | acc | 99.092 | 99.095 | 98.922 | 99.157 | 99.157 | 99.156 | 99.107 | 99.097 | 99.100 | 99.169 |
| GEM3 | F1 | 0.49 | 0.475 | 0.464 | 0.55 | 0.551 | 0.55 | 0.489 | 0.494 | 0.519 | 0.557 |
| Novoalign | sen | 44.160 | 33.858 | 42.986 | 48.599 | 48.575 | 48.620 | 38.892 | 41.436 | 39.863 | 38.170 |
| Novoalign | spe | 99.820 | 99.922 | 99.610 | 99.846 | 99.845 | 99.842 | 99.905 | 99.885 | 99.860 | 99.922 |
| Novoalign | pre | 75.260 | 84.222 | 54.799 | 80.019 | 79.951 | 79.643 | 83.510 | 81.968 | 77.296 | 85.862 |
| Novoalign | acc | 99.147 | 99.097 | 98.966 | 99.202 | 99.202 | 99.200 | 99.137 | 99.149 | 99.127 | 99.144 |
| Novoalign | F1 | 0.556 | 0.483 | 0.472 | 0.605 | 0.604 | 0.604 | 0.532 | 0.551 | 0.529 | 0.529 |
