Centrally managed collections & peer review flexibility at RIO

RIO has updated its approach to article collections, evolving into a “project-driven knowledge hub” where a project coordinator, institution or conference organiser can create and centrally manage a collection under their own logo.

In 2015, Research Ideas and Outcomes (RIO) was launched to streamline the dissemination of scientific knowledge throughout the research process – a process recognised to begin with the inception of a research idea, continue through the submission of a grant proposal and then, for example, data and software management plans and mid-stage project reports, and conclude with the well-known research and review paper.


To truly expedite and facilitate access to scientific knowledge, the hurdles to engaging with the process need to be minimized for readers, authors, reviewers and editors alike. RIO aims to lay the groundwork for constructive scientific feedback and dialogue that lead to the elaboration and refinement of research work from its earliest stages.

Recently, RIO published its 300th article – describing software for analyzing time-series data from a microclimate research site in the Alps – and on that occasion, the RIO team wrote an editorial summarizing how the articles published in RIO so far facilitate engagement with the respective research processes. One of the observations was that, while providing access to the various stages of the research cycle is necessary for meaningful engagement, the various outcomes also need to be packaged together, so that each published outcome appears in a more complete context.

Read the new editorial celebrating RIO’s 5th anniversary and looking back on 300 publications. 

RIO introduced updates to its article collection approach to evolve into a “project-driven knowledge hub”, where a project coordinator, research institution or conference organiser can create and centrally manage a collection under their own logo, so that authors can contribute much more easily. Further, research outputs published elsewhere – including preprints – can also be added, so that the collection displays each piece of the ‘puzzle’ in its context. In this case, the metadata of the paper, i.e. title, authors and publication date, are displayed in the article list within the collection and link to the original source.

Apart from accommodating the whole diversity of research outcomes, whether published in RIO or elsewhere, what particularly appeals to projects, conferences and institutions is the simplicity of opening and managing a self-branded collection at RIO. All they need to do is pay a one-time fee covering the setup and maintenance of the collection; an option with an unlimited number of publications is also available. Authors can then add their work – subject to approval by the collection’s editor and the journal’s editorial office – by starting a new manuscript at RIO and assigning it to an existing collection; by pasting the DOI of a publication available elsewhere; or by posting an author-formatted PDF document to ARPHA Preprints, exactly as it was submitted to the external evaluator (e.g. a funding agency). In the latter two cases, the authors are charged nothing, in order to support greater transparency and context within the research process.

Buttons on RIO Journal’s homepage allow users to create a new collection or add a document to an existing collection by either submitting a new manuscript via RIO Journal or pasting a DOI link of a publication from elsewhere, thus allowing for the collection to link to the original source and display the article’s metadata, i.e. title, authors and publication date.
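For the technically curious, the metadata lookup behind the “paste a DOI” option can be approximated with the public CrossRef REST API, which serves exactly these fields for registered DOIs. The sketch below is our own illustration, not RIO’s actual implementation; the fetch_doi_metadata helper and the placeholder DOI are hypothetical.

# Illustrative sketch only: resolving a pasted DOI to the metadata shown in a
# collection's article list (title, authors, publication date) via the public
# CrossRef REST API. This is not RIO's internal code.
import requests

def fetch_doi_metadata(doi: str) -> dict:
    """Fetch display metadata for a DOI from api.crossref.org."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    work = resp.json()["message"]
    authors = [
        f"{a.get('given', '')} {a.get('family', '')}".strip()
        for a in work.get("author", [])
    ]
    # CrossRef stores the publication date as nested "date-parts", e.g. [[2020, 2, 4]]
    date_parts = work.get("issued", {}).get("date-parts", [[]])[0]
    return {
        "title": work.get("title", [""])[0],
        "authors": authors,
        "published": "-".join(str(p) for p in date_parts),
        "link": f"https://doi.org/{doi}",  # the collection links back to the source
    }

# Example (placeholder DOI): print(fetch_doi_metadata("10.1234/example-doi"))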

Find more information about how to edit a collection at RIO and the associated benefits and responsibilities on RIO’s website.

We have also revised RIO’s peer review policy and workflow, which are now further clarified and tailored to the specifics of each type of research outcome.

Having moved to entirely author-initiated peer review – where the system automatically invites reviewers suggested by the author upon submission of a paper – RIO has also clearly defined which article types are subject to mandatory pre-publication peer review (see the full list). For article types where review is not mandatory, RIO no longer prompts the invitation of reviewers. Within their collections, owners and guest editors can decide on the peer review mode, guided by RIO’s existing policies.

While pre-publication peer review is not always mandatory, all papers are subject to editorial evaluation and also remain available in perpetuity for post-publication review. In both cases, reviews are public and disclose the name of their author by default. In turn, RIO registers each review with its own DOI via CrossRef, in order to recognise this valuable input and let reviewers easily refer to their contributions.

Both pre- and post-publication reviews at RIO are openly published alongside the paper and bear their own DOI. All papers in RIO remain available for post-publication review in perpetuity (see example).

For article types where peer review is mandatory (e.g. Research Idea, Review article, Research Article, Data Paper), authors are requested to suggest a minimum of three suitable reviewers upon submission of the paper, who are then automatically invited by the system. While significantly expediting the editorial work on a manuscript, this practice does not compromise the quality of peer review, since the editor still oversees the process and can invite additional reviewers at any time, if necessary.

For article types where peer review is not mandatory (e.g. Grant Proposal, Data Management Plan, Project Report and various conference materials), all an author needs to do is provide a statement about the review status of their paper, which is made public alongside the article. Given that such papers have usually already been scrutinised by a legitimate authority (e.g. a funding agency or conference committee), it makes sense not to withhold their publication and duplicate academic effort.

By the time it is submitted to RIO, a Grant Proposal like this one has often already been assessed by a legitimate funder, so it makes sense not to repeat the process at RIO and thereby slow down its public dissemination.

Additionally, where the article type of a manuscript requires pre-publication review, RIO encourages authors to tick a checkbox during submission and post their pre-review manuscript as a preprint on ARPHA Preprints, subject to a quick editorial screening that takes only a few days.

***

Follow RIO Journal on Twitter, Facebook and LinkedIn.

***

Further reading:

Data checking for biodiversity collections and other biodiversity data compilers from Pensoft

Guest blog post by Dr Robert Mesibov

Proofreading the text of scientific papers isn’t hard, although it can be tedious. Are all the words spelled correctly? Is all the punctuation correct and in the right place? Is the writing clear and concise, with correct grammar? Are all the cited references listed in the References section, and vice-versa? Are the figure and table citations correct?

Proofreading of text is usually done first by the reviewers, and then finished by the editors and copy editors employed by scientific publishers. A similar kind of proofreading is also done with the small tables of data found in scientific papers, mainly by reviewers familiar with the management and analysis of the data concerned.

But what about proofreading the big volumes of data that are common in biodiversity informatics? Tables with tens or hundreds of thousands of rows and dozens of columns? Who does the proofreading?

Sadly, the answer is usually “No one”. Proofreading large amounts of data isn’t easy and requires special skills and digital tools. The people who compile biodiversity data often lack the skills, the software or the time to properly check what they’ve compiled.

The result is that a great deal of the data made available through biodiversity projects like GBIF is — to be charitable — “messy”. Biodiversity data often needs a lot of patient cleaning by end-users before it’s ready for analysis. To assist end-users, GBIF and other aggregators attach “flags” to each record in the database where an automated check has found a problem. These checks find the most obvious problems amongst the many possible data compilation errors. End-users often have much more work to do after the flags have been dealt with.
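As an aside for the technically minded, these flags are exposed programmatically: GBIF’s public occurrence API returns an “issues” array with each record. A minimal sketch (with a placeholder occurrence key) might look like this:

# Minimal sketch: listing the quality flags ("issues") that GBIF attaches to
# an occurrence record. The occurrence key below is a placeholder.
import requests

def gbif_issues(occurrence_key: int) -> list:
    resp = requests.get(
        f"https://api.gbif.org/v1/occurrence/{occurrence_key}", timeout=10
    )
    resp.raise_for_status()
    return resp.json().get("issues", [])

# for issue in gbif_issues(123456789):  # placeholder key
#     print(issue)                      # e.g. COORDINATE_ROUNDED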

In 2017, Pensoft employed a data specialist to proofread the online datasets that are referenced in manuscripts submitted to Pensoft’s journals as data papers. The results of the data-checking are sent to the data paper’s authors, who then edit the datasets. This process has substantially improved many datasets (including those already made available through GBIF) and made them more suitable for digital re-use. At blog publication time, more than 200 datasets have been checked in this way.

Note that a Pensoft data audit does not check the accuracy of the data, for example, whether the authority for a species name is correct, or whether the latitude/longitude for a collecting locality agrees with the verbal description of that locality. For a more or less complete list of what does get checked, see the Data checklist at the bottom of this blog post. These checks are aimed at ensuring that datasets are correctly organised, consistently formatted and easy to move from one digital application to another. The next reader of a digital dataset is likely to be a computer program, not a human. It is essential that the data are structured and formatted, so that they are easily processed by that program and by other programs in the pipeline between the data compiler and the next human user of the data.

Pensoft’s data-checking workflow was previously offered only to authors of data paper manuscripts. It is now available to data compilers generally, with three levels of service:

  • Basic: the compiler gets a detailed report on what needs fixing
  • Standard: minor problems are fixed in the dataset and reported
  • Premium: all detected problems are fixed in collaboration with the data compiler and a report is provided

Because datasets vary so much in size and content, it is not possible to set a price in advance for basic, standard and premium data-checking. To get a quote for a dataset, send an email with a small sample of the data to publishing@pensoft.net.


Data checklist

Minor problems:

  • dataset not UTF-8 encoded
  • blank or broken records
  • characters other than letters, numbers, punctuation and plain whitespace
  • more than one version of the same character (the simplest or most correct one is retained)
  • unnecessary whitespace
  • Windows carriage returns (retained if required)
  • encoding errors (e.g. “Dum?ril” instead of “Duméril”)
  • missing data with a variety of representations (blank, “-”, “NA”, “?” etc.)

Major problems:

  • unintended shifts of data items between fields
  • incorrect or inconsistent formatting of data items (e.g. dates)
  • different representations of the same data item (pseudo-duplication)
  • for Darwin Core datasets, incorrect use of Darwin Core fields
  • data items that are invalid or inappropriate for a field
  • data items that should be split between fields
  • data items referring to unexplained entities (e.g. “habitat is type A”)
  • truncated data items
  • disagreements between fields within a record
  • missing, but expected, data items
  • incorrectly associated data items (e.g. two country codes for the same country)
  • duplicate records, or partial duplicate records where not needed
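To make a few of the “minor problem” checks concrete, here is a small illustrative sketch written from the checklist above. It is not Pensoft’s actual audit tooling, and a real audit covers far more cases; it simply scans a tab-separated file for invalid UTF-8, blank records, stray whitespace, Windows carriage returns and missing-data placeholders.

# Illustrative sketch of a few "minor problem" checks from the checklist.
MISSING_TOKENS = {"-", "NA", "N/A", "?", "null"}

def check_minor_problems(path: str) -> list:
    """Report a few of the checklist's 'minor problems' for a TSV file."""
    findings = []
    with open(path, "rb") as fh:
        raw = fh.read()
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        return ["dataset is not valid UTF-8"]
    if "\r" in text:
        findings.append("Windows carriage returns present")
    for lineno, line in enumerate(text.splitlines(), start=1):
        fields = line.split("\t")
        if all(not f.strip() for f in fields):
            findings.append(f"line {lineno}: blank record")
            continue
        for col, field in enumerate(fields, start=1):
            if field != field.strip() or "  " in field:
                findings.append(f"line {lineno}, column {col}: unnecessary whitespace")
            if field.strip() in MISSING_TOKENS:
                findings.append(
                    f"line {lineno}, column {col}: missing-data placeholder {field.strip()!r}"
                )
    return findings

# Example: for f in check_minor_problems("dataset.tsv"): print(f)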

For details of the methods used, see the author’s online resources.

***

Find more about Pensoft’s data audit workflow for data papers submitted to Pensoft journals on Pensoft’s blog.

Vegetation Classification and Survey (VCS), the new journal of the International Association for Vegetation Science

The journal launches with a major editorial, with several diverse, high-quality papers to follow over the coming months

In summer 2019, IAVS decided to start a new, third association-owned journal, Vegetation Classification and Survey (VCS), alongside the Journal of Vegetation Science (JVS) and Applied Vegetation Science (AVS).

Vegetation Classification and Survey (VCS) is an international, peer-reviewed journal of plant community ecology published on behalf of the International Association for Vegetation Science (IAVS) together with its sister journals, Journal of Vegetation Science (JVS) and Applied Vegetation Science (AVS). It is devoted to vegetation survey and classification at any organizational and spatial scale and without restriction to certain methodological approaches.

The journal publishes original papers that develop new vegetation typologies as well as applied studies that use such typologies, for example, in vegetation mapping, ecosystem modelling, nature conservation, land use management or monitoring. Particularly encouraged are methodological studies that design and compare tools for vegetation classification and mapping, such as algorithms, databases and nomenclatural principles. Papers dealing with conceptual and theoretical bases of vegetation survey and classification are also welcome. While large-scale studies are preferred, regional studies will be considered when filling important knowledge gaps or presenting new methods. VCS also contains Permanent Collections on “Ecoinformatics” and “Phytosociological Nomenclature”.

VCS is published by the innovative publisher Pensoft as a gold open access journal. Thanks to support from IAVS, we can offer particularly attractive article processing charges (APCs) for submissions during the first two years. Moreover, there are significant reductions for IAVS members, members of the Editorial Team and authors from low-income countries or with other financial constraints (learn more about APCs here).

Article submissions are welcomed at: https://vcs.pensoft.net/

Post by Jürgen Dengler, Idoia Biurrun, Florian Jansen & Wolfgang Willner, originally published on the Vegetation Science Blog, the official blog of the IAVS journals.

###

Follow Vegetation Classification and Survey on Twitter and Facebook.

Viticulture Data Journal: Non-conventional papers foster Open Science & sustainability

Non-conventional, yet pivotal research results: data, models, methods, software, data analytics pipelines and visualisation methods, related to the field of viticulture, find a place in a newly launched, open-access and peer-reviewed Viticulture Data Journal.

The newly launched Viticulture Data Journal (VDJ) went live with the publication of an introductory editorial and a data paper.

The publishing venue is one of the fruits of the collaboration between the scholarly publisher and technology provider Pensoft, its self-developed ARPHA Platform and the EU project AGINFRA+, whose mission is to provide a sustainable channel and data infrastructure for cooperating, but not yet fully connected, user communities working within the agricultural and food sciences.

The new journal brings together a wide range of topics related to the field of viticulture: from genetic research and the food safety of viticultural products to the adaptation of grapevine varieties to climate change. Amongst these are:

  • Phenotyping and genotyping
  • Vine growth and development
  • Vine ecophysiology
  • Berry yield and composition
  • Genetic resources and breeding
  • Vine adaptation to climate change, abiotic and biotic stress
  • Vine propagation
  • Rootstock and clonal evaluation
  • Effects of field practices (pruning, fertilization etc.) on vine growth and quality
  • Sustainable viticulture and environmental impact
  • Ampelography
  • Plant pathology, diseases and pests of grapevine
  • Microbiology and microbiological risk assessment
  • Food safety related to table grapes, raisins, wine, etc.

With the help of the ARPHA Platform’s signature writing tool, authors can use a set of pre-defined, yet flexible manuscript templates: Data Paper, Methods, Emerging Techniques, Applied Study, Software Description, R Package and Commentary. Furthermore, thanks to the tool’s advanced collaborative virtual environment, authors, reviewers, editors and other invited contributors all enjoy the convenience of working within the same consolidated online file, all the way from authoring and peer review to copy editing and publication.

“The Viticulture Data Journal was created to respond to the major technological and sociological changes that have influenced the entire process of scholarly communication towards Open Science,”

explain the editors.

“The act of scientific publishing is actually the moment when the long effort of researchers comes to light and can be assessed and used by other researchers and the wider public. Therefore, it is little wonder that the main arena of transition from Open Access to Open Science was actually the field of academic publishing,”

they add.

***

The first research publication made available in VDJ is a data paper by a research team from the Agricultural University of Athens: Dr Katerina Biniari, Ioannis Daskalakis, Despoina Bouza and Dr Maritina Stavrakaki. In their study, they assess and compare the qualitative and quantitative characters of the grape cultivars ‘Mavrodafni’ and ‘Renio’, grown in different regions of the Protected Designation of Origin Mavrodafni Patras (Greece). The associated dataset, containing the mechanical properties, polyphenolic content and antioxidant capacity of skin extracts and must of berries of the two cultivars, is available to download as supplementary material from the article.

***

During the AGINFRA+ project, ARPHA was extended so that it can be used from the AGINFRA+ Virtual Research Environment (VRE), allowing authors to use the VRE as an additional gateway to the ARPHA Writing Tool (AWT) and the journal, and to benefit from the integration of the AWT with several other services offered by the AGINFRA+ platform. The AGINFRA+ platform has been designed as a gateway providing online access, through a one-stop endpoint, to services aimed at integrating the traditional narrative of research articles with their underlying data, software code and workflows.

***

Viticulture Data Journal is indexed by Altmetric, CrossRef, Dimensions, EBSCOhost, Google Scholar, Mendeley, Microsoft Academic, Naviga (Suweco), OCLC WorldCat, OpenAIRE, OpenCitations, ReadCube, Ulrichsweb™ and Unpaywall; and archived at CLOCKSS and Zenodo.

***

Follow Viticulture Data Journal on Twitter and Facebook.

Strategic collaboration agreement signed between ScienceOpen and Pensoft

The research discovery platform ScienceOpen and Pensoft Publishers have entered into a strategic collaboration partnership with the aim of strengthening the companies’ identities as the leaders of innovative content dissemination.

The new cooperation will focus on unified indexation, the integration of Pensoft’s ARPHA Platform content into ScienceOpen and the utilization of novel streams of scientific communication for the published materials.

Pensoft is an independent academic publishing company, well known worldwide for bringing novelty through its cutting-edge publishing tools and for its commitment to open access practices. In 2013, Pensoft launched the first ever end-to-end, XML-based authoring, reviewing and publishing workflow, since upgraded to the ARPHA Publishing Platform. As of today, ARPHA hosts over 50 open access, peer-reviewed scholarly journals: the whole Pensoft portfolio, in addition to titles owned by learned societies, university presses and research institutions.

As part of the strategic collaboration, all Pensoft content and journals hosted on ARPHA are indexed in ScienceOpen’s research and discovery environment, which puts them into the thematic context of over 60 million articles and books. In addition, thousands of articles across more than 20 journals have been integrated into a “Pensoft Biodiversity” Collection. Combined this way, the content benefits from the special infrastructure of ScienceOpen Collections, which supports thematic groups of articles and books equipped with a unique landing page, a built-in search engine and an overview of the featured content. Collections can be reviewed, recommended and shared by users, which facilitates academic debate and increases the discoverability of the research.

The Pensoft Biodiversity collection is available from: https://www.scienceopen.com/collection/PensoftBiodiversity

“It is certainly great news and a much-anticipated milestone for Pensoft, ARPHA and our long-standing partners and supporters from ScienceOpen to have brought our collaboration to a new level by indexing all ARPHA-hosted content at ScienceOpen,” comments Pensoft’s and ARPHA’s CEO and founder Prof. Lyubomir Penev. “Most of all, the integration between ARPHA and ScienceOpen at an infrastructural level means that we will be able to offer this incredible service and increased visibility to newly joining journals right away. On the other hand, by streaming fresh and valuable publicly accessible content to the ScienceOpen database, these journals will be further adding to the growth of science in the open.”

Stephanie Dawson, CEO of ScienceOpen says, “I am particularly excited to add new high-quality, open access biodiversity content from Pensoft Publishers to the ScienceOpen discovery environment as we have a very active community of researchers on ScienceOpen creating and sharing Collections in this field. We are looking forward to working with Pensoft’s innovative journals to support their open science goals.”

The collaboration reflects not only the commitment of both Pensoft and ScienceOpen to new methods of knowledge dissemination, but also the joint mission to champion open science through innovation. The two companies will cooperate at a strategic level in order to increase the international outreach of their content and services, and to make them even more accessible to the broad community.

###

About ScienceOpen:

From promotional collections to Open Access hosting and full publishing packages, ScienceOpen provides next-generation services to academic publishers embedded in an interactive discovery platform. ScienceOpen was founded in 2013 in Berlin and Boston by Alexander Grossmann and Tibor Tscheke to accelerate research communication.

How to import occurrence records into manuscripts from GBIF, BOLD, iDigBio and PlutoF

On October 20, 2015, we published a blog post about the novel functionality in ARPHA that allows streamlined import of specimen or occurrence records into taxonomic manuscripts.

Recently, this process was also documented in the “Tips and Tricks” section of the ARPHA authoring tool. Building on our earlier post, we will now go through the individual workflows and highlight the features that have been added since then.

Repositories and data indexing platforms such as GBIF, BOLD Systems, iDigBio and PlutoF hold, among other types of data, specimen or occurrence records. It is now possible to import specimen or occurrence records from these platforms directly into ARPHA taxonomic manuscripts [see Fig. 1]. For the rest of this post, we’ll refer to specimen or occurrence records simply as occurrence records.

[Fig. 1] Workflow for directly importing occurrence records into a taxonomic manuscript.
Until now, when users of the ARPHA Writing Tool wanted to include occurrence records as materials in a manuscript, they had to either format the occurrences as an Excel sheet uploaded to the Biodiversity Data Journal, or enter the data manually. While the “upload from Excel” approach significantly simplifies the import of materials, it still requires a transposition step – data stored in a database needs to be reformatted into the specific Excel format. With the new import feature, occurrence data stored at GBIF, BOLD Systems, iDigBio or PlutoF can be inserted directly into the manuscript by simply entering a relevant record identifier.

The functionality shows up when one creates a new “Taxon treatment” in a taxonomic manuscript in the ARPHA Writing Tool. To import records, the author needs to:

  1. Locate an occurrence record or records in one of the supported data portals;
  2. Note the ID(s) of the records that ought to be imported into the manuscript (see Tips and Tricks for screenshots);
  3. Enter the ID(s) of the occurrence record(s) in a form that is to be seen in the “Materials” section of the species treatment;
  4. Select a particular database from the list, and simply click ‘Add’ to import the occurrence directly into the manuscript.

In the case of BOLD Systems, the author may also enter a Barcode Identification Number (BIN; more on BINs below), which pulls all occurrences in the corresponding BIN.

We will illustrate this workflow by creating a fictitious treatment of the red moss Sphagnum capillifolium in a test manuscript. We have started a taxonomic manuscript in ARPHA and know that occurrence records belonging to S. capillifolium can be found at iDigBio. What we need to do is locate the ID of the occurrence record on the iDigBio webpage. In the case of iDigBio, the ARPHA system supports import via a Universally Unique Identifier (UUID). We have already created a treatment for S. capillifolium and clicked on the pencil icon to edit materials [Fig. 2].

[Fig. 2] Edit materials
In this example, we type or paste the UUID (b9ff7774-4a5d-47af-a2ea-bdf3ecc78885), select iDigBio as the source and click ‘Add’. This pulls the occurrence record for S. capillifolium from iDigBio and inserts it as a material in the current paper [Fig. 3].

[Fig. 3] Materials after they have been imported
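Conceptually, the lookup behind this step amounts to a single call to iDigBio’s public search API, which returns the record together with its raw Darwin Core fields. The sketch below fetches the same UUID; how ARPHA actually maps the response onto the Materials section is internal to ARPHA and assumed here.

# Illustrative sketch: fetching the S. capillifolium record from iDigBio's
# public API by UUID. Mapping the response into a manuscript is ARPHA's
# internal business and is not shown.
import requests

UUID = "b9ff7774-4a5d-47af-a2ea-bdf3ecc78885"  # the record used above

resp = requests.get(f"https://search.idigbio.org/v2/view/records/{UUID}", timeout=10)
resp.raise_for_status()
record = resp.json()

# "data" holds the provider's raw Darwin Core fields, e.g.:
for field in ("dwc:scientificName", "dwc:country", "dwc:recordedBy"):
    print(field, "->", record.get("data", {}).get(field))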
This workflow can be used for a number of purposes. An interesting future application is the rapid re-description of species, but even more exciting is the description of new species from BINs. BINs (Barcode Identification Numbers) delimit Operational Taxonomic Units (OTUs), created algorithmically at BOLD Systems. If a taxonomist decides that an OTU is indeed a new species, he or she can import all the type information associated with that OTU for the purposes of describing it as a new species.
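Pulling every record in a BIN can likewise be approximated against BOLD’s public API. The sketch below assumes the v4 “API_Public/specimen” endpoint and its “bin” parameter, as described in BOLD’s API documentation; the BIN URI is a placeholder.

# Hedged sketch: retrieving every specimen record in a BOLD BIN as TSV via
# the public API. Endpoint and parameter names follow BOLD's v4 API docs as
# we understand them; the BIN URI below is a placeholder.
import csv
import io
import requests

def specimens_in_bin(bin_uri: str) -> list:
    resp = requests.get(
        "http://www.boldsystems.org/index.php/API_Public/specimen",
        params={"bin": bin_uri, "format": "tsv"},
        timeout=30,
    )
    resp.raise_for_status()
    return list(csv.DictReader(io.StringIO(resp.text), delimiter="\t"))

# records = specimens_in_bin("BOLD:AAA0001")  # placeholder BIN URI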

By not having to retype or copy/paste species occurrence records, authors save a lot of effort. Moreover, the records are automatically imported in a structured Darwin Core format, which can easily be extracted from the article text as structured data by anyone who needs it for reuse.

Another important aspect of the workflow is that it will serve as a platform for peer review, publication and curation of raw data – that is, of unpublished individual data records coming from collections or observations stored at GBIF, BOLD, iDigBio and PlutoF. Taxonomists are used to publishing only records of specimens they or their co-authors have personally studied. In a sense, the workflow will serve as a “cleaning filter” for the portions of data that pass through the publishing process. Thereafter, the published records can be used to curate the raw data at collections, e.g. to correct identifications, assign newly described species names to specimens belonging to the respective BIN, and so on.

 

Additional Information:

The work has been partially supported by the EC-FP7 EU BON project (ENV 308454, Building the European Biodiversity Observation Network) and the ITN Horizon 2020 project BIG4 (Biosystematics, informatics and genomics of the big 4 insect groups: training tomorrow’s researchers and entrepreneurs), under Marie Skłodowska-Curie grant agreement No. 642241.