Metabarcoding allows researchers to quickly and easily identify the species present at a given location and determine their number, on the basis of environmental DNA (that is, DNA released into, for example, the water of a particular lake).
In Japan, this method has been used successfully to detect the number of species present at specific locations in the sea by sampling as little as a bucket of water. Species monitoring is part of the effort to conserve biological resources and maintain their economic value, and metabarcoding can serve as a less labour-intensive and more cost-effective tool for marine biodiversity surveys.
The new study reports on the research team’s development of the first DNA primers for metabarcoding of brittle stars.
Brittle stars are the most species-rich group in the phylum Echinodermata (approximately 2,100 species), making them promising indicator organisms for environmental DNA metabarcoding. These marine invertebrates are thought to release abundant environmental DNA thanks to their size, large populations, and presence in a variety of seafloor habitats.
To determine the origin of DNA sequences obtained from samples and used for metabarcoding, Okanishi’s team constructed a database of reference DNA sequences based on specimens identified to 60 brittle star species from Sagami Bay.
Until now, metabarcoding had not been used for organisms with limited mobility such as brittle stars, because many of the available reference DNA sequences were misidentified or not identified to species. The new database will aid further research and application of the technology.
Original source: Okanishi M, Kohtsuka H, Wu Q, Shinji J, Shibata N, Tamada T, Nakano T, Minamoto T (2023) Development of two new sets of PCR primers for eDNA metabarcoding of brittle stars (Echinodermata, Ophiuroidea). Metabarcoding and Metagenomics 7: e94298. https://doi.org/10.3897/mbmg.7.94298
Expert contact: Masanori Okanishi, Assistant Professor, Hiroshima Shudo University. E-mail: email@example.com
Apart from coordinating the Horizon 2020-funded project BiCIKL, scholarly publisher and technology provider Pensoft has been the engine behind what is likely the first production-stage semantic system to run on top of a reasonably sized biodiversity knowledge graph.
OpenBiodiv is a biodiversity database containing knowledge extracted from scientific literature, built as an Open Biodiversity Knowledge Management System.
As of February 2023, OpenBiodiv contains 36,308 processed articles; 69,596 taxon treatments; 1,131 institutions; 460,475 taxon names; 87,876 sequences; 247,023 bibliographic references; 341,594 author names; and 2,770,357 article sections and subsections.
In fact, OpenBiodiv is a whole ecosystem of tools and services that extract biodiversity data from the text of articles published in data-minable XML format – as in the journals published by Pensoft (e.g. ZooKeys, PhytoKeys, MycoKeys, Biodiversity Data Journal) – and from taxonomic treatments available via Plazi's specialised extraction workflow, and turn them into Linked Open Data.
“The basics of what was to become the OpenBiodiv database began to come together back in 2015 within the EU-funded BIG4 PhD project of Victor Senderov, later succeeded by another PhD project by Mariya Dimitrova within IGNITE. It was during those two projects that the backend Ontology-O, the first versions of RDF converters and the basic website functionalities were created,”
By the time OpenBiodiv became one of the nine research infrastructures within BiCIKL tasked with providing virtual access to open FAIR data, tools and services, it had already evolved into an RDF-based biodiversity knowledge graph, equipped with a fully automated extraction and indexing workflow and user apps.
Currently, Pensoft is working at full speed on new user apps for OpenBiodiv, continuously bringing into play invaluable feedback and recommendations from end-users and partners at BiCIKL.
As a result, OpenBiodiv is already capable of answering open-ended queries based on the available data. To do this, OpenBiodiv discovers ‘hidden’ links between data classes, i.e. taxon names, taxon treatments, specimens, sequences, persons/authors and collections/institutions.
Thus, the system generates new knowledge about taxa, scientific articles and their subsections, the examined materials and their metadata, localities and sequences, amongst others. Additionally, it is able to return information with a relevant visual representation about any one or a combination of those major data classes within a certain scope and semantic context.
Users can explore the database by typing any term (even a misspelt one!) into the search engine available from the OpenBiodiv homepage; by integrating with its Application Programming Interface (API); or by running SPARQL queries.
On the OpenBiodiv website, there is also a list of predefined SPARQL queries, which is continuously being expanded.
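As a rough illustration of the SPARQL route, the sketch below builds a query that looks up resources by taxon name and posts it to an endpoint. The endpoint URL and the use of plain `rdfs:label` matching are illustrative assumptions, not the documented OpenBiodiv schema; consult the predefined queries on the website for the real data model.

```python
# Minimal sketch of querying a biodiversity knowledge graph via SPARQL.
# The endpoint URL and property used below are assumptions for illustration.
from urllib import parse, request

ENDPOINT = "https://graph.openbiodiv.net/repositories/OpenBiodiv"  # assumed

def build_taxon_query(name: str, limit: int = 10) -> str:
    """Build a SPARQL query finding resources whose label contains a taxon name."""
    return f"""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?subject ?label WHERE {{
        ?subject rdfs:label ?label .
        FILTER(CONTAINS(LCASE(STR(?label)), LCASE("{name}")))
    }} LIMIT {limit}
    """

def run_query(query: str) -> bytes:
    """POST the query to the endpoint and return the raw JSON result bytes."""
    data = parse.urlencode({"query": query}).encode()
    req = request.Request(
        ENDPOINT, data=data,
        headers={"Accept": "application/sparql-results+json"})
    with request.urlopen(req) as resp:
        return resp.read()
```

In practice one would start from the predefined queries listed on the site and adapt them, rather than guessing property paths as done here.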
“OpenBiodiv is an ambitious project of ours, and it’s surely one close to Pensoft’s heart, given our decades-long dedication to biodiversity science and knowledge sharing. Our previous fruitful partnerships with Plazi, BIG4 and IGNITE, as well as the current exciting and inspirational network of BiCIKL are wonderful examples of how far we can go with the right collaborators,”
In support of transparency and openness in scholarly publishing and academia, the scientific publisher and technology provider Pensoft joined the Journal Comparison Service (JCS) initiative by cOAlition S, an alliance of national funders and charitable bodies working to increase the volume of free-to-read research.
As a result, all journals published by Pensoft – each using the publisher’s self-developed ARPHA Platform – provide extensive and transparent information about their costs and services in line with the Plan S principles.
The JCS was launched to aid libraries and library consortia – the ones negotiating and participating in Open Access agreements with publishers – by providing them with everything they need to know in order to determine whether the prices charged by a certain journal are fair and corresponding to the quality of the service.
According to cOAlition S, an increasing number of libraries and library consortia from Europe, Africa, North America, and Australia have registered with the JCS over the past year since the launch of the portal in September 2021.
While access to the JCS is only open to librarians, individual researchers may also make use of the data provided by the participating publishers and their journals.
This is possible through an integration with the Journal Checker Tool, where researchers simply enter the name of the journal of interest, their funder and affiliation (if applicable) to check whether the scholarly outlet complies with the Open Access policy of the author's funder. A full list of all academic titles that provide data to the JCS is also publicly available. Being on the list means that a journal and its publisher not only support cOAlition S, but also demonstrate that they stand for openness and transparency in scholarly publishing.
“We are delighted that Pensoft, along with a number of other publishers, have shared their price and service data through the Journal Comparison Service. Not only are such publishers demonstrating their commitment to open business models and cultures but are also helping to build understanding and trust within the research community.”
said Robert Kiley, Head of Strategy at cOAlition S.
About cOAlition S:
On 4 September 2018, a group of national research funding organisations, with the support of the European Commission and the European Research Council (ERC), announced the launch of cOAlition S, an initiative to make full and immediate Open Access to research publications a reality. It is built around Plan S, which consists of one target and 10 principles. Read more on the cOAlition S website.
About Plan S:
Plan S is an initiative for Open Access publishing that was launched in September 2018. The plan is supported by cOAlition S, an international consortium of research funding and performing organisations. Plan S requires that, from 2021, scientific publications that result from research funded by public grants must be published in compliant Open Access journals or platforms. Read more on the cOAlition S website.
The Horizon 2020-funded project BiCIKL has reached its halfway stage, and the partners gathered in Plovdiv (Bulgaria) from 22 to 25 October for the Second General Assembly, organised by Pensoft.
The BiCIKL project will launch a new European community of key research infrastructures, researchers, citizen scientists and other stakeholders in the biodiversity and life sciences based on open science practices through access to data, tools and services.
BiCIKL’s goal is to create a centralised place to connect all key biodiversity data by interlinking 15 research infrastructures and their databases. The 3-year European Commission-supported initiative kicked off in 2021 and involves 14 key natural history institutions from 10 European countries.
BiCIKL is keeping pace as expected with 16 out of the 48 final deliverables already submitted, another 9 currently in progress/under review and due in a few days. Meanwhile, 21 out of the 48 milestones have been successfully achieved.
The hybrid format of the meeting enabled a wider range of participants to join, which resulted in robust discussions on the next steps of the project, such as the implementation of additional technical features of the FAIR Data Place (FAIR standing for Findable, Accessible, Interoperable and Reusable).
These data include biodiversity information such as detailed images, DNA, physiology and past studies concerning a specific species and its 'relatives', to name a few. The issue is that these types of biodiversity data have so far been scattered across various databases, which in turn have lacked meaningful and efficient interconnection.
Additionally, the FAIR Data Place, developed within the BiCIKL project, is to give researchers access to plenty of training modules to guide them through the different services.
Halfway through the duration of BiCIKL, the project is at a turning point, where crucial discussions between the partners are playing a central role in the refinement of the FAIR Data Place design. Most importantly, they are tasked with ensuring that their technologies work efficiently with each other, in order to seamlessly exchange, update and share the biodiversity data every one of them is collecting and taking care of.
By Year 3 of the BiCIKL project, the partners agree, when those infrastructures and databases become efficiently interconnected, scientists studying the Earth's biodiversity across the world will be in a much better position to build on existing research and improve the way, and the pace at which, nature is explored and understood. At the end of the day, knowledge is the stepping stone for the preservation of biodiversity and of humankind itself.
“Needless to say, it’s an honour and a pleasure to be the coordinator of such an amazing team spanning as many as 14 partnering natural history and biodiversity research institutions from across Europe, but also involving many global long-year collaborators and their infrastructures, such as Wikidata, GBIF, TDWG, Catalogue of Life to name a few,”
said BiCIKL’s project coordinator Prof. Lyubomir Penev, CEO and founder of Pensoft.
“The point is: do we want an integrated structure or do we prefer federated structures? What are the pros and cons of the two options? It’s essential to keep the community united and allied because we can’t afford any information loss and the stakeholders should feel at home with the Project and the Biodiversity Knowledge Hub.”
Joe Miller, Executive Secretary and Director at GBIF, commented:
“We are a brand new community, and we are in the middle of the growth process. We would like to already have answers, but it’s good to have this kind of robust discussion to build on a good basis. We must find the best solution to have linkages between infrastructures and be able to maintain them in the future because the Biodiversity Knowledge Hub is the location to gather the community around best practices, data and guidelines on how to use the BiCIKL services… In order to engage even more partners to fill the eventual gaps in our knowledge.”
“In an era of biodiversity change and loss, leveraging scientific data fully will allow the world to catalogue what we have now, to track and understand how things are changing and to build the tools that we will use to conserve or remediate. The challenge is that the data come from many streams – molecular biology, taxonomy, natural history collections, biodiversity observation – that need to be connected and intersected to allow scientists and others to ask real questions about the data. In its first year, BiCIKL has made some key advances to rise to this challenge,”
“As a partner, we, at the Biodiversity Information Standards – TDWG, are very enthusiastic that our standards are implemented in BiCIKL and serve to link biodiversity data. We know that joining forces and working together is crucial to building efficient infrastructures and sharing knowledge.”
The project will continue with the first Round Table of experts in December and, at the beginning of next year, the publication of the projects that participated in the Open Call and will be funded.
By the time authors – who have acknowledged third-party financial support in research papers submitted to a journal using the Pensoft-developed publishing platform ARPHA – open their inboxes to the congratulatory message that their work has just been published and made available to the world, a similar notification will have also reached their research funder.
This automated workflow is already in effect at all journals (co-)published by Pensoft and those published under their own imprint on the ARPHA Platform, as a result of the new partnership with the OA Switchboard: a community-driven initiative with the mission to serve as a central information exchange hub between stakeholders about open access publications, while making things simpler for everyone involved.
All the submitting author needs to do to ensure that their research funder receives a notification about the publication is to select the supporting agency or the scientific project (e.g. a project supported by Horizon Europe) in the manuscript submission form, using a handy drop-down menu. In either case, the message will be sent to the funding body as soon as the paper is published in the respective journal.
“At Pensoft, we are delighted to announce our integration with the OA Switchboard, as this workflow is yet another excellent practice in scholarly publishing that supports transparency in research. Needless to say, funding and financing are cornerstones in scientific work and scholarship, so it is equally important to ensure funding bodies are provided with full, prompt and convenient reports about their own input.”
comments Prof Lyubomir Penev, CEO and founder of Pensoft and ARPHA.
“Research funders are one of the three key stakeholder groups in OA Switchboard and are represented in our founding partners. They seek support in demonstrating the extent and impact of their research funding and delivering on their commitment to OA. It is great to see Pensoft has started their integration with OA Switchboard with a focus on this specific group, fulfilling an important need,”
adds Yvonne Campfens, Executive Director of the OA Switchboard.
About the OA Switchboard:
A global not-for-profit and independent intermediary established in 2020, the OA Switchboard provides a central hub for research funders, institutions and publishers to exchange OA-related publication-level information. Connecting parties and systems, and streamlining communication and the neutral exchange of metadata, the OA Switchboard provides direct, indirect and community benefits: simplicity and transparency, collaboration and interoperability, and efficiency and cost-effectiveness.
Pensoft is an independent academic publishing company, well known worldwide for its novel cutting-edge publishing tools, workflows and methods for text and data publishing of journals, books and conference materials.
All journals (co-)published by Pensoft are hosted on Pensoft’s full-featured ARPHA Publishing Platform and published in a way that ensures their content is as FAIR as possible, meaning that it is effortlessly readable, discoverable, harvestable, citable and reusable by both humans and machines.
Microbes growing on flowers have adverse effects on their fruit yields. This is why plants are quick to shed their flowers, reveals a new study involving both field experiments and plant microbiome analyses.
Scientifically speaking, flowers are the reproductive structures of a plant. Unlike mammals, though, perennial plants develop them de novo every season and retain them only for as long as needed.
While a few earlier studies have already looked into the variation in flower lifespan among species, they were mainly concerned with the tradeoff between plants spending energy on producing and maintaining their flowers, and the benefit they would achieve from retaining their reproductive organs.
In the present study, however, the team found another perspective from which to look at the phenomenon: why do plants invest their energy – even if the 'cost' is minimal – in producing fragile flowers that wither in a matter of days, rather than investing a bit more to produce far more durable ones, thereby increasing their reproductive success?
Flowers provide various habitats for microbes. They attract pollinators by secreting nectar, which is rich in sugars and often contains other nutrients, such as amino acids and lipids. The stigma is a germination bed for pollen grains, connected to a growth chamber for pollen tubes; it maintains the humidity and nutrients necessary for pollen tube growth. Not surprisingly, the abundance of microbes on a flower increases over time after it opens.
Before drawing any conclusions, the scientists set out to conduct field experiments to see what microbial communities would appear on flowers if their longevity was prolonged.
To do this, they took microbes from old flowers of wild ginger (Alpinia japonica) – a species found in Japan and blooming in the early summer when the hot and humid weather in the country is ideal for microbial growth. Then, they transferred the microbes to other wild ginger plants, whose flowers had just opened.
In line with their initial hypothesis, the research team noted that the plants produced significantly fewer fruits, yet there were no visible symptoms on the flowers or fruits to suggest disease. However, an analysis of the plants' microbiomes revealed several groups of bacteria whose abundance increased with time. As these bacteria can also be found on the buds of untreated flowers, they are categorised as 'residents' of the plant.
“So far, flower characteristics have mostly been studied in the context of their interactions with pollinators. Recent studies have raised the question whether we have overlooked the roles of microbes in the studies of floral characteristics.
For example, flower volatiles – which are often regarded as a primary pollinator attractant – can also function to suppress antagonistic microbes. The impacts of microbes on plant reproductive ecology may be more deeply embedded in the evolution of angiosperms than we have considered,”
Jiménez Elvira N, Ushio M, Sakai S (2022) Are microbes growing on flowers evil? Effects of old flower microbes on fruit set in a wild ginger with one-day flowers, Alpinia japonica (Zingiberaceae). Metabarcoding and Metagenomics 6: e84331. https://doi.org/10.3897/mbmg.6.84331
Follow the Metabarcoding and Metagenomics (MBMG) journal on Twitter and Facebook (@MBMGJournal).
Researchers developed a method to determine which amphibians inhabit a specific area. The new technique will resolve some of the issues with conventional methods, such as capture and observational surveys.
An international research group with members from seven institutions has developed a method to determine which amphibians (frogs, newts and salamanders) inhabit a specific area. Their work was published in the open-access, peer-reviewed journal Metabarcoding and Metagenomics (MBMG).
To do so, the scientists amplified and analysed extra-organismal DNA (also known as environmental DNA or eDNA) found in the water. This DNA ends up in the water after being expelled from the amphibian’s body along with mucus and excrement.
The newly developed technique will resolve some of the issues with conventional methods, such as capture and observational surveys, which require a specialist surveyor who can visually identify species. Conventional surveys are also prone to discrepancies due to environmental factors, such as climate and season.
The researchers hope that the new method will revolutionise species monitoring, as it will enable anyone to easily monitor the amphibians that inhabit an area by collecting water samples.
While monitoring in general is crucial for conserving natural ecosystems, surveying amphibians is an even more pressing task, given the pace at which their populations are declining.
Among the major obstacles to amphibian monitoring, however, are the facts that amphibians are nocturnal; that their young (e.g. tadpoles) and adults live in different habitats; and that specialist knowledge is required to capture individuals and identify their species. These issues make it particularly difficult to survey amphibians accurately in a standardised way, and the results of individual efforts often contradict each other.
First, the researchers designed multiple methods for analysing the eDNA of amphibians and evaluated their performance to identify the most effective one. Next, they conducted parallel monitoring at 122 sites in 10 farmlands across Japan, using the newly developed eDNA analysis alongside the conventional methods (i.e. capture surveys using a net and observation surveys).
As a result, the newly developed method was able to detect all three orders of amphibians: Caudata (the newts and salamanders), Anura (the frogs), and Gymnophiona (the caecilians).
Amphibian biodiversity is continuing to decline worldwide and collecting basic information about their habitats and other aspects via monitoring is vital for conservation efforts. Traditional methods of monitoring amphibians include visual and auditory observations, and capture surveys.
However, amphibians tend to be small, and many are nocturnal. The success of surveys varies greatly depending on climate and season, and specialist knowledge is required to identify species. Consequently, it is difficult to monitor a wide area and assess habitats. The last decade has seen significant development of environmental DNA analysis techniques, which can be used to investigate the distribution of a species by analysing the DNA it releases into the environment along with excrement, mucus and other bodily fluids.
The fundamentals of this technique involve collecting water from the survey site and analysing the eDNA contained in it to find out which species inhabit the area. In recent years, the technique has gained attention as a supplement to conventional monitoring methods. Standardised methods of analysis have already been established for other taxa, especially fishes, and diversity monitoring using eDNA is becoming commonplace.
However, eDNA monitoring of amphibians is still at the development stage. One reason for this is that the proposed eDNA analysis method must be suitable for the target species or taxonomic group, and there are still issues with developing and implementing a comprehensive method for detecting amphibians. If such a method could be developed, this would make it possible for monitoring to be conducted even by people who do not have the specialised knowledge to identify species nor surveying experience.
Follow Metabarcoding and Metagenomics (MBMG) journal on Twitter and Facebook.
Sakata MK, Kawata MU, Kurabayashi A, Kurita T, Nakamura M, Shirako T, Kakehashi R, Nishikawa K, Hossman MY, Nishijima T, Kabamoto J, Miya M, Minamoto T (2022) Development and evaluation of PCR primers for environmental DNA (eDNA) metabarcoding of Amphibia. Metabarcoding and Metagenomics 6: e76534. https://doi.org/10.3897/mbmg.6.76534
Between 2016 and 2021, over 500 researchers collaborated within the DNAqua-Net international network, funded by the European Union’s European Cooperation in Science and Technology programme (COST), with the goal to develop and advance biodiversity assessment methods based on analysis of DNA obtained from the environment (e.g. river water) or from unsorted collections of organisms.
Such innovative methods are a real game changer for large-scale biodiversity assessment and ecological monitoring, as collecting environmental samples and sending them to the lab for analysis is much cheaper, faster and less invasive than capturing and examining live organisms. However, large-scale adoption has been hindered by a lack of standardisation and official guidance.
Recognising the urgent need to scale up ecological monitoring as we respond to the biodiversity and climate crises, the DNAqua-Net team published a guidance document for the implementation of DNA-based biomonitoring tools.
The guide considers four different types of samples: water, sediments, invertebrate collections and diatoms, and two primary analysis types: single species detection via qPCR and similar targeted methods; and assessment of biological communities via DNA metabarcoding. At each stage of the field and laboratory process the guide sets out the scientific consensus, as well as the choices that need to be made and the trade-offs they entail. In particular, the guide considers how the choices may be influenced by common practical constraints such as logistics, time and budget. Available in an Advanced Book format, the guidelines will be updated as the technology continues to evolve.
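Of the two analysis types the guide covers, targeted qPCR detection is conceptually the simpler: a species is called present when enough replicate reactions amplify its marker. The toy sketch below illustrates that decision rule; the cycle-threshold cutoff and replicate count are illustrative assumptions, not values recommended by the guide.

```python
# Toy illustration of a targeted qPCR detection rule: call a species
# "detected" at a site when at least `min_positive` of the replicate
# reactions amplify below a cycle-threshold (Ct) cutoff.
# Thresholds here are made-up examples, not guidance values.

def qpcr_detected(ct_values, ct_cutoff=40.0, min_positive=2):
    """ct_values: one Ct per replicate; None means no amplification."""
    positives = sum(
        1 for ct in ct_values if ct is not None and ct < ct_cutoff)
    return positives >= min_positive

# Three field replicates, two of which amplified early:
print(qpcr_detected([33.1, 36.4, None]))  # True
# Only one late, borderline amplification:
print(qpcr_detected([None, None, 41.2]))  # False
```

The trade-off the guide discusses maps directly onto these parameters: stricter cutoffs reduce false positives but cost sensitivity, and more replicates cost time and budget.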
“The urgency of addressing the twin biodiversity and climate crises means that we need to accelerate the adoption of new technologies that can provide data and insights at large scales. In doing so, we walk a tricky line to agree on sufficiently standardised methods that can be usefully applied as soon as they add value, while still continuing to develop them further and innovate within the field. It was a daunting task to seek consensus from several hundred scientists working in a fast-moving field, but we found that our technology is based on a strong foundation of knowledge and there was a high level of agreement on the core principles – even if the details vary and different users make different choices depending on their environmental, financial or logistical constraints.”
Looking back on the last four years that culminated in the publication of a “living” research publication, Prof. Dr. Kristy Deiner says:
“The document took many twists and turns through more than ten versions and passionate discussions across many workshops and late night drinks. All in the days when we could linger at conferences without fear of the pandemic weighing on us. As we worked to find consensus, one thing was clear: we had a lot to say and a standard review paper was not going to cut it. With the knowledge and experience gathered across the DNAqua-Net, it made sense to not limit this flow of information, but rather to try and tackle it head on and use it to address the many questions we’ve all struggled with while developing DNA-based biodiversity survey methods.”
Now that the document – or at least its first version – is publicly available, the researchers are already planning for the next steps and challenges.
“The bottom line is we’ve come a long way in the last ten years. We have a buffet of methods for which many produce accurate, reliable and actionable data to the aid of biodiversity monitoring and conservation. While there is still much work to be done, the many unanswered questions are because the uptake is so broad. With this broad uptake comes novel challenges, but also new insights and a diversity of minds with new ideas to address them. As said this is planned to be a living document and we welcome continued inputs no matter how great or small,” says Deiner.
Dr. Micaela Hellström recalls:
“The book evolved over the four years of COST Action DNAqua-Net which made it possible for the many scientists and stakeholders involved to collaborate and exchange knowledge on an unprecedented scale. Our whole team is well aware of the urgent need to monitor biodiversity loss and to provide accurate species distribution information on large scales, to protect the species that are left. This was a strong driving force for all of us involved in the production of this document. We need consensus on how to coherently collect biodiversity data to fully understand changes in nature.”
“It was a great and intense experience to be a part of the five-person core writing team. In the months prior to submitting the document, we spent countless hours, weekends and late nights researching the field, communicating with researchers and stakeholders, and joining vivid Zoom discussions. As a result, the present book provides solid guidance on multiple eDNA monitoring methods that are – or will soon become – available as the field moves forward.”
The DNAqua-Net team invites fellow researchers and practitioners to provide their feedback and personal contributions using the contacts below.
Bruce K, Blackman R, Bourlat SJ, Hellström AM, Bakker J, Bista I, Bohmann K, Bouchez A, Brys R, Clark K, Elbrecht V, Fazi S, Fonseca V, Hänfling B, Leese F, Mächler E, Mahon AR, Meissner K, Panksep K, Pawlowski J, Schmidt Yáñez P, Seymour M, Thalinger B, Valentini A, Woodcock P, Traugott M, Vasselon V, Deiner K (2021) A practical guide to DNA-based methods for biodiversity assessment. Advanced Books. https://doi.org/10.3897/ab.e68634
Revolutionary environmental DNA analysis holds great potential for the future of biodiversity monitoring, concludes a new study
In times of exacerbating biodiversity loss, reliable data on species occurrence are essential, in order for prompt and adequate conservation actions to be initiated. This is especially true for freshwater ecosystems, which are particularly vulnerable and threatened by anthropogenic impacts. Their ecological status has already been highlighted as a top priority by multiple national and international directives, such as the European Water Framework Directive.
However, traditional monitoring methods, such as electrofishing, trapping, or observation-based assessments – the current status quo in fish monitoring – are often time-consuming and costly. As a result, over the last decade scientists have increasingly agreed that a more comprehensive and holistic method is needed to assess freshwater biodiversity.
Meanwhile, recent studies have continuously been demonstrating that eDNA metabarcoding analyses, where DNA traces found in the water are used to identify what organisms live there, is an efficient method to capture aquatic biodiversity in a fast, reliable, non-invasive and relatively low-cost manner. In such metabarcoding studies, scientists sample, collect and sequence DNA, so that they can compare it with existing databases and identify the source organisms.
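The comparison step can be pictured with a toy sketch: each sequence read is matched against a reference database and assigned to the closest species above an identity threshold. Real pipelines use dedicated tools (e.g. BLAST) and curated databases; the species, sequences and threshold below are invented for illustration.

```python
# Toy sketch of the metabarcoding assignment step: compare a read against
# a reference database and report the best match above an identity cutoff.
# Sequences and the 0.9 threshold are illustrative, not real marker data.

REFERENCE_DB = {
    "Cyprinus carpio":   "ACGTACGTGGTT",
    "Salmo trutta":      "ACGTTCGTGCTT",
    "Anguilla anguilla": "TTGTACCAGGTT",
}

def identity(a: str, b: str) -> float:
    """Fraction of matching positions between equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def assign(read: str, min_identity: float = 0.9):
    """Return the best-matching species, or None if below the threshold."""
    best = max(REFERENCE_DB, key=lambda sp: identity(read, REFERENCE_DB[sp]))
    return best if identity(read, REFERENCE_DB[best]) >= min_identity else None

print(assign("ACGTACGTGGTT"))  # exact match -> Cyprinus carpio
print(assign("GGGGGGGGGGGG"))  # no close reference -> None
```

The quality of such assignments depends entirely on the completeness and accuracy of the reference database, which is exactly why curated reference libraries (like the brittle star database described earlier in this collection) matter.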
Furthermore, because eDNA metabarcoding assessments use water samples, often taken from streams at the lowest point of a catchment, a single sample usually contains not only traces of specimens that came into direct contact with the water, for example by swimming or drinking, but also traces of terrestrial species washed in by rainfall, snowmelt, groundwater, etc.
In standard fish eDNA metabarcoding assessments, these ‘bycatch data’ are typically left aside. Yet, from the viewpoint of a more holistic biodiversity monitoring, they hold immense potential to also detect the presence of terrestrial and semi-terrestrial species in the catchment.
In fact, it took only one day for the team, led by Till-Hendrik Macher, PhD student in the German Federal Environmental Agency-funded GeDNA project, to collect the samples. Using metabarcoding to analyse the DNA from the samples, the researchers identified as much as 50% of the fish species, 22% of the mammal species, and 7.4% of the breeding bird species in the region.
However, the team also concluded that while it would normally take only 10 litres of water to assess the aquatic and semi-terrestrial fauna, terrestrial species required significantly more sampling.
Unlocking data from the increasingly available fish eDNA metabarcoding information enables synergies among terrestrial and aquatic biodiversity monitoring programs, adding further important information on species diversity in space and time.
Macher T-H, Schütz R, Arle J, Beermann AJ, Koschorreck J, Leese F (2021) Beyond fish eDNA metabarcoding: Field replicates disproportionately improve the detection of stream associated vertebrate species. Metabarcoding and Metagenomics 5: e66557. https://doi.org/10.3897/mbmg.5.66557
Recent study conducted at a UK fishery farm provides new evidence that DNA from water samples can accurately determine fish abundance and biomass
Organisms release DNA into their surroundings through metabolic waste, sloughed skin cells or gametes, and this genetic material is referred to as environmental DNA (eDNA).
As eDNA can be collected directly from water, soil or air, and analysed using molecular tools with no need to capture the organisms themselves, this genetic information can be used to survey biodiversity in bulk. For instance, the presence of many fish species can be identified simultaneously by sampling and sequencing eDNA from water, while avoiding harmful capture methods, such as netting, trapping or electrofishing, currently used for fish monitoring.
While the eDNA approach has already been applied in a number of studies of fish diversity in different types of aquatic habitats (rivers, lakes and marine systems), its efficiency in quantifying species abundance (number of individuals per species) is yet to be determined. Even though previous studies, conducted in controlled aquatic systems such as aquaria, experimental tanks and artificial ponds, have reported a positive correlation between the DNA quantity found in the water and species abundance, it remains unclear how the results would fare in natural environments.
However, a research team from the University of Hull, together with the Environment Agency (United Kingdom), took the rare opportunity to use an invasive species eradication programme carried out at a UK fishery farm as the ultimate case study to evaluate the success rate of eDNA sampling in identifying species abundance in natural aquatic habitats. Their findings were published in the open-access, peer-reviewed journal Metabarcoding and Metagenomics.
“Investigating the quantitative power of eDNA in natural aquatic habitats is difficult, as there is no way to ascertain the real species abundance and biomass (weight) in aquatic systems, unless catching all target organisms out of water and counting/measuring them all,”
explains Cristina Di Muri, PhD student at the University of Hull.
During the eradication, the original fish ponds were drained and all fish, except the problematic invasive species, the topmouth gudgeon, were placed in a new pond, while the original ponds were treated with a piscicide to remove the invasive fish. After the eradication, the fish were returned to their original ponds. In the meantime, all individuals were counted, identified and weighed by experts, allowing for the precise estimation of fish abundance and biomass.
“We then carried out our water sampling and ran genetic analysis to assess the diversity and abundance of fish genetic sequences, and compared the results with the manually collected data. We found strong positive correlations between the amount of fish eDNA and the actual fish species biomass and abundance, demonstrating the existence of a strong association between the amount of fish DNA sequences in water and the actual fish abundance in natural aquatic environments,”
reports Di Muri.
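The kind of correlation the study describes can be sketched in a few lines of code. The numbers below are invented for illustration (they are not the study's data); the sketch simply computes the Pearson correlation coefficient between per-species eDNA read counts and measured biomass.

```python
# Minimal sketch: Pearson correlation between per-species eDNA read
# counts and fish biomass. All numbers are invented for illustration.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-species values: read counts vs. biomass (kg)
read_counts = [12000, 8500, 3100, 450, 60]
biomass_kg = [852.0, 610.0, 240.0, 35.0, 0.7]

r = pearson(read_counts, biomass_kg)
print(f"Pearson r = {r:.2f}")
# r close to 1 would indicate the strong positive correlation
# the study reports between read counts and biomass
```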
The scientists successfully identified all fish species in the ponds, from the most abundant (293 carp with a total weight of 852 kg) to the least abundant (a single chub weighing 0.7 kg), indicating the high accuracy of the non-invasive approach.
“Furthermore, we used different methods of eDNA capture and eDNA storage, and found that results of the genetic analysis were comparable across different eDNA approaches. This consistency allows for a certain flexibility of eDNA protocols, which is fundamental to maintain results comparable across studies and, at the same time, choose the most suitable strategy, based on location surveyed or resources available,”
elaborates Di Muri.
“The opportunity of using eDNA analysis to accurately assess species diversity and abundance in natural environments will drive a step change in future species monitoring programmes, as this non-invasive, flexible tool is adaptable to all aquatic environments and it allows quantitative biodiversity surveillance without hampering the organisms’ welfare.”
Di Muri C, Lawson Handley L, Bean CW, Li J, Peirson G, Sellers GS, Walsh K, Watson HV, Winfield IJ, Hänfling B (2020) Read counts from environmental DNA (eDNA) metabarcoding reflect fish abundance and biomass in drained ponds. Metabarcoding and Metagenomics 4: e56959. https://doi.org/10.3897/mbmg.4.56959