Published paper: NGS and antibiotic resistance
AMR Control just released (some of) the articles of their 2019-20 issue, and among the papers hot off the press is one that I have co-authored with Etienne Ruppé, Yannick Charretier and Jacques Schrenzel on how next-generation sequencing can be used to address antibiotic resistance problems (1).
The paper contains a brief overview of next-generation sequencing platforms and tools, the resources that can be used to detect and quantify resistance from sequencing data, and descriptions of applications in clinical genomics, clinical/human metagenomics as well as in environmental settings (the latter being the part where I contributed the most). Compared to much of the writing on antibiotic resistance and sequencing applications, I think this paper is pretty easily accessible to a general audience.
I first met Etienne at the JRC workshops on how next-generation sequencing could be implemented in the EU’s Coordinated Action Plan against Antimicrobial Resistance (2,3), and it seems quite fitting that we have now ended up writing a paper on such implementations together.
- Ruppé E, Bengtsson-Palme J, Charretier Y, Schrenzel J: How next-generation sequencing can address the antimicrobial resistance challenge. AMR Control, 2019-20, 60-65 (2019). [Paper link]
- Angers A, Petrillo M, Patak A, Querci M, Van den Eede G: The Role and Implementation of Next-Generation Sequencing Technologies in the Coordinated Action Plan against Antimicrobial Resistance. JRC Conference and Workshop Report, EUR 28619 (2017). doi: 10.2760/745099 [Link]
- Angers-Loustau A, Petrillo M, Bengtsson-Palme J, Berendonk T, Blais B, Chan KG, Coque TM, Hammer P, Heß S, Kagkli DM, Krumbiegel C, Lanza VF, Madec J-Y, Naas T, O’Grady J, Paracchini V, Rossen JWA, Ruppé E, Vamathevan J, Venturi V, Van den Eede G: The challenges of designing a benchmark strategy for bioinformatics pipelines in the identification of antimicrobial resistance determinants using next generation sequencing technologies. F1000Research, 7, 459 (2018). doi: 10.12688/f1000research.14509.2 [Paper link]
Published paper: benchmarking resistance gene identification
Since F1000Research uses a somewhat different publication scheme than most journals, I still haven’t understood if this paper is formally published after peer review, but I am starting to assume it is. There have been very few changes since the last version, so I will be lazy and basically repost what I wrote in April when the first version (the “preprint”) was posted online. The paper (1) is the result of a workshop arranged by the JRC in Italy in 2017. It describes various challenges arising from the process of designing a benchmark strategy for bioinformatics pipelines in the identification of antimicrobial resistance genes in next generation sequencing data.
The paper discusses issues about the benchmarking datasets used, testing samples, evaluation criteria for the performance of different tools, and how the benchmarking dataset should be created and distributed. Specifically, we address the following questions:
- How should a benchmark strategy handle the current and expanding universe of NGS platforms?
- What should be the quality profile (in terms of read length, error rate, etc.) of in silico reference materials?
- Should different sets of reference materials be produced for each platform? In that case, how to ensure no bias is introduced in the process?
- Should in silico reference material be composed of the output of real experiments, or simulated read sets? If a combination is used, what is the optimal ratio?
- How is it possible to ensure that the simulated output has been simulated “correctly”?
- For real experiment datasets, how to avoid the presence of sensitive information?
- Regarding the quality metrics in the benchmark datasets (e.g. error rate, read quality), should these values be fixed for all datasets, or fall within specific ranges? How wide can/should these ranges be?
- How should the benchmark manage the different mechanisms by which bacteria acquire resistance?
- What is the set of resistance genes/mechanisms that need to be included in the benchmark? How should this set be agreed upon?
- Should datasets representing different sample types (e.g. isolated clones, environmental samples) be included in the same benchmark?
- Is a correct representation of different bacterial species (host genomes) important?
- How can the “true” value of the samples, against which the pipelines will be evaluated, be guaranteed?
- What is needed to demonstrate that the original sample has been correctly characterised, in case real experiments are used?
- How should the target performance thresholds (e.g. specificity, sensitivity, accuracy) for the benchmark suite be set?
- What is the impact of these performance thresholds on the required size of the sample set?
- How can the benchmark stay relevant when new resistance mechanisms are regularly characterized?
- How is the continued quality of the benchmark dataset ensured?
- Who should generate the benchmark resource?
- How can the benchmark resource be efficiently shared?
Of course, we have not answered all these questions, but I think we have arrived at a decent description of the problems, which we see as an important foundation for solving these issues and implementing a benchmarking standard. Some of these issues were tackled in our review paper from last year on using metagenomics to study resistance genes in microbial communities (2). The paper also somewhat connects to the database curation paper we published in 2016 (3), although this time the strategies deal with the testing datasets rather than the actual databases. The paper is the first outcome of the workshop arranged by the JRC on “Next-generation sequencing technologies and antimicrobial resistance” held October 4-5, 2017 in Ispra, Italy. You can find the paper here (it’s open access).
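To make the evaluation-criteria questions a bit more concrete, here is a minimal sketch of how performance metrics like sensitivity, specificity and accuracy could be computed for a resistance gene detection pipeline against a known truth set. The gene names and sets below are purely illustrative assumptions, not data from the paper.

```python
# Hypothetical sketch: scoring a pipeline's detected resistance genes
# against a known truth set. All gene sets here are made-up examples.

def confusion_counts(detected, truth, universe):
    """Count TP/FP/FN/TN over a fixed universe of candidate genes."""
    tp = len(detected & truth)              # correctly detected
    fp = len(detected - truth)              # falsely detected
    fn = len(truth - detected)              # missed
    tn = len(universe - detected - truth)   # correctly rejected
    return tp, fp, fn, tn

def metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)            # fraction of true genes found
    specificity = tn / (tn + fp)            # fraction of absent genes rejected
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

universe = {"blaTEM-1", "blaCTX-M-15", "sul1", "tetA", "vanA", "mecA"}
truth = {"blaTEM-1", "sul1", "tetA"}        # genes truly in the sample
detected = {"blaTEM-1", "sul1", "vanA"}     # genes the pipeline reported

tp, fp, fn, tn = confusion_counts(detected, truth, universe)
sens, spec, acc = metrics(tp, fp, fn, tn)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")
```

Even this toy example shows why the questions above matter: the numbers depend entirely on how the truth set and the universe of candidate genes are defined, which is exactly what a benchmark standard would need to pin down.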
On another note, the new paper describing the UNITE database (4) has now got a formal issue assigned to it, as has the paper on tandem repeat barcoding in fungi published in Molecular Ecology Resources last year (5).
References and notes
- Angers-Loustau A, Petrillo M, Bengtsson-Palme J, Berendonk T, Blais B, Chan KG, Coque TM, Hammer P, Heß S, Kagkli DM, Krumbiegel C, Lanza VF, Madec J-Y, Naas T, O’Grady J, Paracchini V, Rossen JWA, Ruppé E, Vamathevan J, Venturi V, Van den Eede G: The challenges of designing a benchmark strategy for bioinformatics pipelines in the identification of antimicrobial resistance determinants using next generation sequencing technologies. F1000Research, 7, 459 (2018). doi: 10.12688/f1000research.14509.1
- Bengtsson-Palme J, Larsson DGJ, Kristiansson E: Using metagenomics to investigate human and environmental resistomes. Journal of Antimicrobial Chemotherapy, 72, 2690–2703 (2017). doi: 10.1093/jac/dkx199
- Bengtsson-Palme J, Boulund F, Edström R, Feizi A, Johnning A, Jonsson VA, Karlsson FH, Pal C, Pereira MB, Rehammar A, Sánchez J, Sanli K, Thorell K: Strategies to improve usability and preserve accuracy in biological sequence databases. Proteomics, 16, 18, 2454–2460 (2016). doi: 10.1002/pmic.201600034
- Nilsson RH, Larsson K-H, Taylor AFS, Bengtsson-Palme J, Jeppesen TS, Schigel D, Kennedy P, Picard K, Glöckner FO, Tedersoo L, Saar I, Kõljalg U, Abarenkov K: The UNITE database for molecular identification of fungi: handling dark taxa and parallel taxonomic classifications. Nucleic Acids Research, 47, D1, D259–D264 (2019). doi: 10.1093/nar/gky1022
- Wurzbacher C, Larsson E, Bengtsson-Palme J, Van den Wyngaert S, Svantesson S, Kristiansson E, Kagami M, Nilsson RH: Introducing ribosomal tandem repeat barcoding for fungi. Molecular Ecology Resources, 19, 1, 118–127 (2019). doi: 10.1111/1755-0998.12944
New preprint: benchmarking resistance gene identification
This weekend, F1000Research put online the non-peer-reviewed version of the paper resulting from a workshop arranged by the JRC in Italy last year (1). (I will refer to this as a preprint, but at F1000Research the line is quite blurry between preprint and published paper.) The paper describes various challenges arising from the process of designing a benchmark strategy for bioinformatics pipelines (2) in the identification of antimicrobial resistance genes in next generation sequencing data.
The paper discusses issues about the benchmarking datasets used, testing samples, evaluation criteria for the performance of different tools, and how the benchmarking dataset should be created and distributed. Specifically, we address the following questions:
- How should a benchmark strategy handle the current and expanding universe of NGS platforms?
- What should be the quality profile (in terms of read length, error rate, etc.) of in silico reference materials?
- Should different sets of reference materials be produced for each platform? In that case, how to ensure no bias is introduced in the process?
- Should in silico reference material be composed of the output of real experiments, or simulated read sets? If a combination is used, what is the optimal ratio?
- How is it possible to ensure that the simulated output has been simulated “correctly”?
- For real experiment datasets, how to avoid the presence of sensitive information?
- Regarding the quality metrics in the benchmark datasets (e.g. error rate, read quality), should these values be fixed for all datasets, or fall within specific ranges? How wide can/should these ranges be?
- How should the benchmark manage the different mechanisms by which bacteria acquire resistance?
- What is the set of resistance genes/mechanisms that need to be included in the benchmark? How should this set be agreed upon?
- Should datasets representing different sample types (e.g. isolated clones, environmental samples) be included in the same benchmark?
- Is a correct representation of different bacterial species (host genomes) important?
- How can the “true” value of the samples, against which the pipelines will be evaluated, be guaranteed?
- What is needed to demonstrate that the original sample has been correctly characterised, in case real experiments are used?
- How should the target performance thresholds (e.g. specificity, sensitivity, accuracy) for the benchmark suite be set?
- What is the impact of these performance thresholds on the required size of the sample set?
- How can the benchmark stay relevant when new resistance mechanisms are regularly characterized?
- How is the continued quality of the benchmark dataset ensured?
- Who should generate the benchmark resource?
- How can the benchmark resource be efficiently shared?
Of course, we have not answered all these questions, but I think we have arrived at a decent description of the problems, which we see as an important foundation for solving these issues and implementing a benchmarking standard. Some of these issues were tackled in our review paper from last year on using metagenomics to study resistance genes in microbial communities (3). The paper also somewhat connects to the database curation paper we published in 2016 (4), although this time the strategies deal with the testing datasets rather than the actual databases. The paper is the first outcome of the workshop arranged by the JRC on “Next-generation sequencing technologies and antimicrobial resistance” held October 4-5 last year in Ispra, Italy. You can find the paper here (it’s open access).
References and notes
- Angers-Loustau A, Petrillo M, Bengtsson-Palme J, Berendonk T, Blais B, Chan KG, Coque TM, Hammer P, Heß S, Kagkli DM, Krumbiegel C, Lanza VF, Madec J-Y, Naas T, O’Grady J, Paracchini V, Rossen JWA, Ruppé E, Vamathevan J, Venturi V, Van den Eede G: The challenges of designing a benchmark strategy for bioinformatics pipelines in the identification of antimicrobial resistance determinants using next generation sequencing technologies. F1000Research, 7, 459 (2018). doi: 10.12688/f1000research.14509.1
- You may remember that I hate the term “pipeline” for bioinformatics protocols. I would have preferred if they were called workflows or something similar, but the term “pipeline” has taken hold, and I guess this is a battle I have essentially lost. Bioinformatics workflows will be known as pipelines, for better or worse.
- Bengtsson-Palme J, Larsson DGJ, Kristiansson E: Using metagenomics to investigate human and environmental resistomes. Journal of Antimicrobial Chemotherapy, 72, 2690–2703 (2017). doi: 10.1093/jac/dkx199
- Bengtsson-Palme J, Boulund F, Edström R, Feizi A, Johnning A, Jonsson VA, Karlsson FH, Pal C, Pereira MB, Rehammar A, Sánchez J, Sanli K, Thorell K: Strategies to improve usability and preserve accuracy in biological sequence databases. Proteomics, 16, 18, 2454–2460 (2016). doi: 10.1002/pmic.201600034
Report on JRC AMR workshop
In March, I attended a workshop on the role of NGS technologies in the coordinated action plan against antimicrobial resistance, organised by the JRC in Italy. Together with 14 other experts, I was invited to discuss where and how sequencing can be used to investigate and manage antibiotic resistance. The report from the workshop has just recently been published and is available here. There will be follow-up activities from this workshop, which I hope to be able to participate in, since this is an important and very interesting pet topic of mine.
Reference