The dynamic open-science project collection of BiCIKL, titled “Towards interlinked FAIR biodiversity knowledge: The BiCIKL perspective” (doi: 10.3897/rio.coll.105), continues to grow as the project progresses into its third year and its results steadily accumulate.
Following the publication of three important BiCIKL deliverables – the project’s Data Management Plan, its Visual identity package, and a report describing the newly built workflow and tools for data extraction, conversion and indexing, as well as the user applications from OpenBiodiv – there are currently 30 research outcomes in the BiCIKL collection that have been shared publicly, rather than merely submitted to the European Commission.
Shortly after the BiCIKL project started in 2021, a project-branded collection was launched in the open-science scholarly journal Research Ideas and Outcomes (RIO). There, the partners have been publishing – and thus preserving – conclusive research papers, as well as early and interim scientific outputs.
The publications so far also include the BiCIKL grant proposal, which earned the support of the European Commission in 2021; conference abstracts, submitted by the partners to two consecutive TDWG conferences; a project report that summarises recommendations on interoperability among infrastructures, as concluded from a hackathon organised by BiCIKL; and two Guidelines papers, aiming to trigger a culture change in the way data is shared, used and reused in the biodiversity field.
At the time of writing, the top three most read papers in the BiCIKL collection are rounded out by the grant proposal and the second Guidelines paper, in which the partners – drawing on their extensive and versatile experience – present recommendations on the use of annotations and persistent identifiers in taxonomy and biodiversity publishing.
What one might find quite odd when browsing the BiCIKL collection is that each publication is marked with its own publication source, even though all contributions are clearly already accessible from RIO Journal.
This is because one of RIO’s unique features allows consortia to use their project collection as a one-stop access point for all scientific results, regardless of publication venue, by linking to the original source via metadata. Additionally, projects may also upload their documents in their original format and layout, thanks to the integration between RIO and ARPHA Preprints. This is in fact how BiCIKL chose to share its latest deliverables, using the very same files submitted to the Commission.
“In line with the mission of BiCIKL and our consortium’s dedication to FAIRness in science, we wanted to keep our project’s progress and results fully transparent and easily accessible and reusable to anyone, anywhere,”
explains Prof Lyubomir Penev, BiCIKL’s Project Coordinator and founder and CEO of Pensoft.
“This is why we opted to collate the outcomes of BiCIKL in one place – starting from the grant proposal itself, and then progressively adding workshop reports, recommendations, research papers and what not. By the time BiCIKL concludes, not only will we be ready to refer back to any step along the way that we have just walked together, but also rest assured that what we have achieved and learnt remains at the fingertips of those we have done it for and those who come after them,” he adds.
Recent years have seen perceptions of a “reproducibility crisis” grow in various disciplines. Scientists see poor levels of reproducibility as a severe threat to scientific self-correction, the efficiency of research processes, and societal trust in research results.
Now, a newly launched Horizon Europe-funded project, TIER2, brings together 10 major European organisations and proponents of open science to dig deeper into the issues surrounding reproducibility in research, with the aim of improving practices and policies across diverse scientific fields.
In its capacity as an experienced science communicator and open-science publisher, Pensoft is joining this promising project on its mission within its acronym: Trust, Integrity, and Efficiency in Research, through next-level Reproducibility (TIER2).
TIER2’s interdisciplinary, expert project team will use co-creative methods to work with researchers in social, life and computer sciences, research funders, and publishers to further understand and address the causes of poor reproducibility.
The project will produce and test new tools, connect initiatives, engage communities, and test novel interventions to increase reuse and overall quality of research results.
“It is very exciting to take part in such significant work for the benefit of scientific rigor and integrity. As an open-access publisher, the goals of Pensoft and TIER2 are very much aligned – increasing the trust and efficiency of the research apparatus on a large scale. We are looking forward to collaborating on this mutual goal,”
said Teodor Metodiev, TIER2 Principal Investigator for Pensoft.
TIER2 launched in early January 2023 and will run until December 2025 with the support of EUR 2 million in funding, provided by the European Union’s Horizon Europe programme and UK Research and Innovation.
TIER2 will study reproducibility in diverse contexts by selecting three broad research areas (i.e. social, life and computer sciences) and two cross-disciplinary stakeholder groups (i.e. research publishers and funders). Reaching different contexts will allow the project team to systematically investigate the causes and implications of the lack of reproducibility across the research spectrum. Together with curated co-creation communities of these groups, the project will design, implement, and assess systematic interventions – addressing critical levers of change (tools, skills, communities, incentives, and policies) in the process.
In 3 years’ time, TIER2-led activities will have significantly boosted knowledge on reproducibility, created valuable tools, engaged communities, and implemented interventions and policies across science. As a result, the reuse of resources and the quality of research results in the European research landscape and beyond will be improved and increased, and so will trust, integrity, and efficiency in research overall.
The TIER2 project website – including its design and software development – is itself one of Pensoft’s communication contributions to the project.
Stay up to date with the project’s activities and progress on Twitter: @TIER2Project.
The interdisciplinary TIER2 consortium comprises ten members from universities and research centers across Europe to bring together a range of expertise spanning open science, research integrity, AI, data analytics, policy research, science infrastructures, stakeholder engagement, and core knowledge in social, life, and computational sciences. They share a long history of successful cooperation and have extensive experience in completed EU projects, especially in the fields of Open Science, Research Integrity, and Science Policy.
In support of transparency and openness in scholarly publishing and academia, the scientific publisher and technology provider Pensoft joined the Journal Comparison Service (JCS) initiative by cOAlition S, an alliance of national funders and charitable bodies working to increase the volume of free-to-read research.
As a result, all journals published by Pensoft – each using the publisher’s self-developed ARPHA Platform – provide extensive and transparent information about their costs and services in line with the Plan S principles.
The JCS was launched to aid libraries and library consortia – those negotiating and participating in Open Access agreements with publishers – by providing them with the information they need to determine whether the prices charged by a journal are fair and commensurate with the quality of its services.
According to cOAlition S, an increasing number of libraries and library consortia from Europe, Africa, North America, and Australia have registered with the JCS over the past year since the launch of the portal in September 2021.
While access to the JCS is only open to librarians, individual researchers may also make use of the data provided by the participating publishers and their journals.
This is possible through an integration with the Journal Checker Tool, where researchers can simply enter the name of the journal of interest, their funder and affiliation (if applicable) to check whether the scholarly outlet complies with the Open Access policy of the author’s funder. A full list of all academic titles that provide data to the JCS is also publicly available. Being on the list means that a journal and its publisher not only support cOAlition S, but also demonstrate that they stand for openness and transparency in scholarly publishing.
“We are delighted that Pensoft, along with a number of other publishers, have shared their price and service data through the Journal Comparison Service. Not only are such publishers demonstrating their commitment to open business models and cultures but are also helping to build understanding and trust within the research community,”
said Robert Kiley, Head of Strategy at cOAlition S.
About cOAlition S:
On 4 September 2018, a group of national research funding organisations, with the support of the European Commission and the European Research Council (ERC), announced the launch of cOAlition S, an initiative to make full and immediate Open Access to research publications a reality. It is built around Plan S, which consists of one target and 10 principles. Read more on the cOAlition S website.
About Plan S:
Plan S is an initiative for Open Access publishing that was launched in September 2018. The plan is supported by cOAlition S, an international consortium of research funding and performing organisations. Plan S requires that, from 2021, scientific publications that result from research funded by public grants must be published in compliant Open Access journals or platforms. Read more on the cOAlition S website.
Since its early days, RIO has enjoyed quite positive reactions from the open-minded academic community for its innovative approach to Open Science in practice: it provides a niche that had long been missing, namely the publication of early, intermediate and generally unconventional research outcomes from all around the research cycle (e.g. grant proposals, data management plans, project deliverables, reports, policy briefs, conference materials) in a cross-disciplinary scientific journal. In fact, several months after its launch, in 2016, the journal was acknowledged with the SPARC Innovator Award.
‘Alternative’ research publications
In times when posting a preprint was seen as a novel and rather bold practice across many fields, RIO facilitated much deeper dives into the research process, in order to unveil scientific knowledge and the process by which it is gathered, well before any final conclusions are drawn. Long story short, to date, RIO has published 33 Research Ideas, 78 Grant Proposals, 16 Data Management Plans, 33 Workshop Reports and 5 PhD Project Plans, in addition to plenty of other early, interim and final non-traditional research outcomes, as well as conventional articles. Over time, RIO has kept expanding its list of article types, with a few more expected in the near future.
What’s more, over the years, we’ve already observed how papers published in RIO successfully followed up on the continuity of the research process. For example, the Grant Proposal for the “Exploring the opportunities and challenges of implementing open research strategies within development institutions” project, funded by the International Development Research Centre (IDRC), was followed by the project’s Data Management Plan a year later.
Five years later, the figures reflecting the usage of and engagement with the content published in RIO clearly support the value of non-final and unconventional academic publications. For instance, the Grant Proposal for the COST Action DNAqua-Net, a still ongoing project dedicated to the development of novel genetic tools for bioassessment and monitoring of aquatic ecosystems, is the most viewed article in RIO’s publication record to date. In the category of sub-article elements, whose usage is also tracked at the journal, the most viewed figure belongs to a Project Report and illustrates sample code meant to be used in future neuroimaging studies. Similarly, the most viewed table ever published in RIO is part of a Workshop Report that summarises ASAPbio’s third workshop, dedicated to the technical aspects of services promoting preprints in the biomedical and other life science communities.
Response to societal challenges
A unique and defining feature of RIO since the very beginning has been its pronounced engagement with the Sustainable Development Goals (SDGs), formulated by the United Nations right around the time of RIO’s launch. In order to highlight the societal impact of published research, RIO lets authors map their articles to the SDGs relevant to their paper. Once published, the article displays the associated badge(s) next to its title. Readers of the journal can even search RIO’s content by SDG, in the same way they would filter articles by subject, publication type, date or funding agency. Next on the list for RIO is to add another level of granularity to the SDG mapping; the practice has already been piloted by mapping relevant RIO articles to the ten targets under SDG 14 (Life below water).
Taking transparency, responsibility and collaboration in academia and scholarly publishing up another notch, RIO requires reviews to be publicly available. In addition, the journal supports post-publication reviews, where peers are free to post their review at any time. In turn, RIO registers each review with its own DOI via CrossRef, in order to recognise the valuable input and let reviewers easily refer to their contributions. A fine example is a Review Article exploring the biodiversity-related issues and challenges across Southeast Asia, which currently has a total of three public peer reviews, one of which was provided two years after the publication of the paper.
Public, transparent and perpetual peer review, pre- and/or post-publication
What’s more striking about peer review at RIO, however, is that it is not always mandatory. Given that the journal publishes many article types that have already been scrutinised by a legitimate authority – for instance, Grant Proposals that have previously been evaluated by a funder or defended PhD Theses – it only makes sense to avoid withholding these publications and duplicating associated evaluation efforts. On such occasions, all an author needs to do is provide a statement about the review status of their paper, which will be made public alongside the article.
On the other hand, where the article type of a manuscript requires pre-publication review, RIO encourages authors – in order to avoid potential delays caused by the review process and editorial decisions – to post their pre-review manuscript as a preprint on the recently launched ARPHA Preprints platform, subject to a quick editorial screening, which only takes a few days.
Further, RIO has now abandoned the practice of burdening the journal’s editors with the time-consuming task of finding reviewers, and instead requires the submitting author to suggest suitable reviewers upon submission, who are then immediately and automatically invited by the system. While significantly expediting the editorial work on a manuscript, this practice doesn’t compromise the quality of peer review, since the reviews are made public, while the final decision about the acceptance of the paper lies with the editor, who also oversees the process and is able to intervene and invite additional reviewers at any time, if necessary.
Project-driven knowledge hub
The most significant novelty at RIO, however, is perhaps the newly assumed role of the journal as “a project-driven knowledge hub“, targeting specifically the needs of research projects, conference organisers and institutions. For them, RIO provides a one-stop source for the outputs of their scientists, in order to comply with the requirements of their funders or management, or simply to facilitate the discoverability, reusability and citability of their academic outputs and to highlight their interconnectedness.
Unlike typical permanent article collections, already widely used in scholarly publishing, with RIO, collection owners can take advantage of the unique opportunity to add a wide range of research outputs, including those published elsewhere, in order to provide even greater context within their project- or institution-branded article collection (see the Horizon 2020 project Path2Integrity’s collection as an example).
For example, a project coordinator could open a collection under the brand of the project, and start by publishing the Grant Proposal, followed shortly by Data and Software Management Plans and Workshop Reports. Thus, even at this early point in the project’s development, the funder – and with them everyone else – would already have strong evidence of the project’s dedication to transparency and active science communication. Later on, the project’s participants would all be able to easily add to the collection, either by submitting their diverse research outputs straight to RIO, to be accepted by the collection’s lead editor, or by providing metadata and a link to a publication elsewhere, even a preprint. If the document is published outside of RIO, its metadata – i.e. author names and affiliations, article title and publication date – show up in the collection, while a click on the item leads to the original publication. As the project progresses, the team behind it could add more and more outputs (e.g. Project Reports, Guidelines and Policy Briefs), continuously updating the public and the relevant stakeholders about the development of their work. Eventually, the collection will provide a comprehensive and fully transparent record of the project from start to finish.
In 2015, Research Ideas and Outcomes (RIO) was launched to streamline the dissemination of scientific knowledge throughout the research process, which is recognised to begin with the inception of a research idea, followed by the submission of a grant proposal, progressing through, for example, data and software management plans and mid-stage project reports, before concluding with the well-known research and review paper.
In order to really expedite and facilitate access to scientific knowledge, the hurdles for engagement with the process need to be minimized for readers, authors, reviewers and editors alike. RIO aims to lay the groundwork for constructive scientific feedback and dialogue that would then lead to the elaboration and refinement of the research work well in its early stage.
Recently, RIO published its 300th article – describing software for analysing time series data from a microclimate research site in the Alps – and on that occasion, the RIO team wrote an editorial summarising how the articles published in RIO so far facilitate engagement with the respective research processes. One of the observations in this regard was that, while providing access to the various stages of the research cycle is necessary for meaningful engagement, the various outcomes also need to be packaged together, so as to provide a more complete context for each individual published outcome.
Read the new editorial celebrating RIO’s 5th anniversary and looking back on 300 publications.
RIO introduced updates to its article collection approach to evolve into a “project-driven knowledge hub”, where a project coordinator, research institution or conference organiser can create and centrally manage a collection under their own logo, so that authors can much more easily contribute. Further, research outputs published elsewhere – including preprints – are also allowed, so that the collection displays each part of the ‘puzzle’ within its context. In this case, the metadata of the paper, i.e. title, authors and publication date, are displayed in the article list within the collection, and link to the original source.
Apart from allowing the inclusion of the whole diversity of research outcomes published in RIO or elsewhere, what particularly appeals to projects, conferences and institutions is the simplicity of opening and managing a self-branded collection at RIO. All they need to do is pay a one-time fee covering the setup and maintenance of the collection; an option with an unlimited number of publications is also available. Then, authors can add their work – subject to approval by the collection’s editor and the journal’s editorial office – by either starting a new manuscript at RIO and assigning it to an existing collection; pasting the DOI of a publication available elsewhere; or posting an author-formatted PDF document to ARPHA Preprints, as it was submitted to the external evaluator (e.g. a funding agency). In the latter two cases, the authors are charged nothing, in order to support greater transparency and contextuality within the research process.
Find more information about how to edit a collection at RIO and the associated benefits and responsibilities on RIO’s website.
Another thing we have revised at RIO is the peer review policy and workflow, which are now further clarified and tailored to the specificity of each type of research outcome.
Having moved to entirely author-initiated peer review, where the system automatically invites reviewers suggested by the author upon submission of a paper, RIO has also clearly defined which article types are subject to mandatory pre-publication peer review or not (see the full list). In the latter case, RIO no longer prompts the invitation of reviewers. Within their collections, owners and guest editors can decide on the peer review mode, guided by RIO’s existing policies.
While pre-publication peer review is not always mandatory, all papers are subject to editorial evaluation and also remain available in perpetuity for post-publication review. In both cases, reviews are public and disclose the name of their author by default. In turn, RIO registers each review with its own DOI via CrossRef, in order to recognise the valuable input and let reviewers easily refer to their contributions.
For article types where peer review is mandatory (e.g. Research Idea, Review Article, Research Article, Data Paper), authors are requested to suggest a minimum of three suitable reviewers upon submission of the paper, who are then automatically invited by the system. While significantly expediting the editorial work on a manuscript, this practice doesn’t compromise the quality of peer review, since the editor still oversees the process and is able to invite additional reviewers at any time, if necessary.
For article types where peer review is not mandatory (e.g. Grant Proposal, Data Management Plan, Project Report and various conference materials), all an author needs to do is provide a statement about the review status of their paper, which is then made public alongside the article. Given that such papers have usually already been scrutinised by a legitimate authority (e.g. a funding agency or conference committee), it only makes sense not to withhold their publication or duplicate evaluation efforts.
Additionally, where the article type of a manuscript requires pre-publication review, RIO encourages the authors to click a checkbox during the submission and post their pre-review manuscript as a preprint on ARPHA Preprints, subject to a quick editorial screening, which would only take a few days.
Looking at today’s ravaging COVID-19 (coronavirus) pandemic – which, at the time of writing, has spread to over 220 countries and territories – with its continuously rising death toll and widespread fear, it may feel from the outside like scientists and decision-makers are scratching their heads more than ever in the face of the unknown. In reality, however, we are witnessing an unprecedented global community gradually waking up to the only possible solution: collaboration.
On one hand, we have nationwide collective actions, including cancelled travel plans and mass gatherings; social distancing; and lockdowns, that have already proved successful at changing what the World Health Organisation (WHO) has determined as “the course of a rapidly escalating and deadly epidemic” in Hong Kong, Singapore and China. On the other hand, we have the world’s best scientists and laboratories all steering their expertise and resources towards the better understanding of the virus and, ultimately, developing a vaccine for mass production as quickly as possible.
While there is little doubt that the best specialists in the world will eventually develop an effective vaccine – just as they did following the Western African Ebola virus epidemic (2013–2016) and on several similar occasions in the years before – the question at hand is rather when this will happen and how many human lives it will cost.
Again, it all comes down to collective effort. It only makes sense that if research teams and labs around the globe join their efforts and expertise, thereby avoiding duplicate work, their endeavours will bear fruit sooner rather than later. Just as employees across the world have been demonstrating their ability to perform their day-to-day tasks and responsibilities from the safety of their homes as efficiently as they would from their conventional offices, in today’s high-tech, online-friendly reality, scientists should no longer be restricted by physical and geographical barriers either.
“Observations, prevention and impact of COVID-19”: Special Collection in RIO Journal
To inspire and facilitate collaboration across the world, the SPARC-recognised Open Science innovator Research Ideas and Outcomes (RIO Journal) decided to bring together scientific findings in a collection of publications that is easy to discover, read, cite and build on.
Furthermore, due to its revolutionary approach to publishing, where early and brief research outcomes (i.e. ideas, raw data, software descriptions, posters, presentations, case studies and many others) are all considered precious scientific gems deserving formal publication in a renowned academic journal, RIO places a special focus on these contributions.
In recognition of the emergency of the current situation, accepted manuscripts dealing with research relevant to the COVID-19 pandemic – across disciplines, including medicine, ethics, politics and economics, and at a local, regional, national or international scale – and meant to encourage crucial discussions will be published free of charge. Submissions focused on the long-term effects of COVID-19 are especially encouraged.
Furthermore, thanks to the technologically advanced infrastructure and services it provides, in addition to a long list of indexers and databases where publications are registered, manuscripts submitted to RIO Journal are not only rapidly processed and published, but, once online, immediately become easy to discover, cite and build on by any researcher, anywhere in the world.
On top of that, Pensoft’s targeted and manually provided science communication services make sure that published research of social value reaches the wider audience, including key decision-makers and journalists, by means of press releases and social media promotion.
For more information about RIO’s globally unique features, visit the journal’s website. Follow RIO Journal on Twitter and Facebook.
To avoid the publication of openly accessible yet unusable datasets, fated to result in irreproducible and inoperable biodiversity research somewhere down the road, Pensoft audits the data described in data paper manuscripts upon their submission to applicable journals in the publisher’s portfolio, including Biodiversity Data Journal, ZooKeys, PhytoKeys, MycoKeys and many others.
Once the dataset is clean and the paper is published, biodiversity data, such as taxa, occurrence records, observations, specimens and related information, become FAIR (findable, accessible, interoperable and reusable), so that they can be merged, reformatted and incorporated into novel and visionary projects, regardless of whether they are accessed by a human researcher or a data-mining computation.
As part of the pre-review technical evaluation of a data paper submitted to a Pensoft journal, the associated datasets are subjected to a data audit meant to identify any issues that could make the data inoperable. This check is conducted regardless of whether the datasets are provided as supplementary material within the data paper manuscript or linked from the Global Biodiversity Information Facility (GBIF) or another external repository. The features that undergo the audit can be found in a data quality checklist made available on the website of each journal, alongside key recommendations for submitting authors.
Once the check is complete, the submitting author receives an audit report with improvement recommendations, similar to the comments they would receive following the peer review stage of the data paper. In case of major issues with the dataset, the data paper can be rejected prior to assignment to a subject editor, but resubmitted after the necessary corrections are applied. At this step, authors who have already published their data via an external repository are also reminded to correct the datasets there accordingly.
“It all started back in 2010, when we joined forces with GBIF on a quite advanced idea in the domain of biodiversity: a data paper workflow as a means to recognise both the scientific value of rich metadata and the efforts of the data collectors and curators. Together we figured that those data could be published most efficiently as citable academic papers,” says Pensoft’s founder and Managing Director Prof. Lyubomir Penev.
“From there, with the kind help and support of Dr Robert Mesibov, the concept evolved into a data audit workflow, meant to ‘proofread’ the data in those data papers the way a copy editor would go through the text,” he adds.
“The data auditing we do is not a check on whether a scientific name is properly spelled, or a bibliographic reference is correct, or a locality has the correct latitude and longitude”, explains Dr Mesibov. “Instead, we aim to ensure that there are no broken or duplicated records, disagreements between fields, misuses of the Darwin Core recommendations, or any of the many technical issues, such as character encoding errors, that can be an obstacle to data processing.”
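The kinds of technical checks Dr Mesibov describes can be illustrated with a short, hypothetical sketch. The sample records, field names and checks below are illustrative assumptions, not part of Pensoft’s actual audit workflow; the sketch simply flags duplicated occurrence IDs, out-of-range coordinates and character-encoding damage in a small Darwin Core-style CSV.

```python
import csv
import io

# Hypothetical Darwin Core-style occurrence records, seeded with the kinds
# of problems an audit looks for: a duplicated occurrenceID, an out-of-range
# latitude, and a locality string damaged by an encoding error (the Unicode
# replacement character U+FFFD).
SAMPLE_CSV = """occurrenceID,decimalLatitude,decimalLongitude,locality
occ-001,42.70,23.32,Sofia
occ-002,95.10,23.32,Vitosha
occ-001,42.70,23.32,Sofia
occ-003,41.99,23.50,Pirin \ufffd National Park
"""

def audit_occurrences(csv_text):
    """Return (row_number, issue) tuples for technical problems in the data."""
    issues = []
    seen_ids = set()
    reader = csv.DictReader(io.StringIO(csv_text))
    for n, row in enumerate(reader, start=2):  # row 1 is the header
        occ_id = row["occurrenceID"]
        if occ_id in seen_ids:
            issues.append((n, f"duplicate occurrenceID {occ_id!r}"))
        seen_ids.add(occ_id)
        try:
            lat = float(row["decimalLatitude"])
            lon = float(row["decimalLongitude"])
            if not (-90 <= lat <= 90) or not (-180 <= lon <= 180):
                issues.append((n, "coordinates out of range"))
        except ValueError:
            issues.append((n, "non-numeric coordinates"))
        if any("\ufffd" in value for value in row.values()):
            issues.append((n, "character encoding damage"))
    return issues

for row_number, issue in audit_occurrences(SAMPLE_CSV):
    print(row_number, issue)
```

A real audit covers many more features (field disagreements, Darwin Core term misuse, date formats), but the pattern is the same: purely mechanical checks on record structure, not on the scientific content itself.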
At Pensoft, the publication of openly accessible, easy to access, find, re-use and archive data is seen as a crucial responsibility of researchers aiming to deliver high-quality and viable scientific output intended to stand the test of time and serve the public good.
Through their new collaboration, the partners encourage the publication of dynamic, additional research outcomes to support reusability and reproducibility in science.
In a new partnership between the open-access Biodiversity Data Journal (BDJ) and the workflow software development platform Profeza, authors submitting their research to the scholarly journal will be invited to prepare a Reuse Recipe Document via CREDIT Suite to encourage reusability and reproducibility in science. Once published, their articles will feature a special widget linking to additional research outputs, such as raw data, experimental repetitions, null or negative results, protocols and datasets.
A Reuse Recipe Document is a collection of additional research outputs that could serve as guidelines for another researcher trying to reproduce or build on the previously published work. In contrast to a research article, it is a dynamic, ‘evolving’ research item, which can be updated later and also tracked back in time thanks to a revision-history feature.
Both the Recipe Document and the Reproducible Links, which connect subsequent outputs to the original publication, are assigned their own DOIs, so that reuse instances can be easily captured, recognised, tracked and rewarded with increased citability.
With these events appearing on both the original author’s and any reuser’s ORCID profiles, the former gains further credibility thanks to their work’s enhanced reproducibility, while the latter increases their own by showcasing how they have put the cited work to use.
Furthermore, the transparency and interconnectivity between the separate works promote intra- and interdisciplinary collaboration among researchers.
“At BDJ, we strongly encourage our authors to use CREDIT Suite to submit any additional research outputs that could help fellow scientists speed up progress in biodiversity knowledge through reproducibility and reusability,” says Prof. Lyubomir Penev, founder of the journal and its scholarly publisher – Pensoft. “Our new partnership with Profeza is in itself a sign that collaboration and integrity in academia are the way to good open science practices.”
“Our partnership with Pensoft is a great step towards gathering crucial feedback and insight concerning reproducibility and continuity in research. This is now possible with Reuse Recipe Documents, which allow for authors and reusers to engage and team up with each other,” says Sheevendra, Co-Founder of Profeza.
This new trial between the two high-tech innovators and Open Science proponents marks an important step toward making research publications not only easier to find and access, but also more inviting to fellow scientists seeking new collaborations and platforms for voicing their ideas and expertise.
Currently, there are 168 and 948 article records fed to ScienceOpen straight from RIO and Check List respectively.
While the articles’ underlying data, such as author names, citations, keywords and journals, are automatically harvested and analysed by ScienceOpen so that research items can be easily interlinked, readers are encouraged to provide further context to the research items. The user-friendly, intuitive interface invites them to add comments, recommendations or open post-publication peer reviews, and even create their own topical collections regardless of affiliation or journal.
To make sure users land on the most relevant articles in the blink of an eye, ScienceOpen also accommodates an advanced multi-layer search engine relying on a total of 20 smart filters and six sorting parameters.
“We have long worked closely with ScienceOpen, as it only makes sense given our shared vision for the future of academia, so the present trial project happened very naturally,” says Prof. Lyubomir Penev, founder and CEO of ARPHA and its developer – scholarly publisher and technology provider Pensoft. “Nowadays, we are well aware that scientific findings are of little merit if ‘living’ in a vacuum. Therefore, we need research articles to be as discoverable as possible, and, no less importantly, to be open to feedback and further work.”
“We are thrilled to add this new content to ScienceOpen, as we have strong researcher communities in both zoology and scholarly communications within our broadly interdisciplinary content. The ARPHA platform is a natural fit to deliver rich metadata to our discovery services, and we are very much looking forward to working with their team,” says Stephanie Dawson, CEO of ScienceOpen.
ScienceOpen is an independent start-up company based in Berlin and Boston, which explores new ways to open up information for the scholarly community. It provides a freely accessible search and discovery platform that puts research in context. Smart filters, topical collections and expert input from the academic community help users to find the most relevant articles in their field and beyond.
In the heat of this year’s Peer Review Week, themed “Recognition for Review”, we would like to express how and why we are so proud to be part of it and of Publons’ Sentinels of Science initiative, meant to recognize the true guardians of quality science: the peer reviewers.
As a modern, high-tech publishing solution, developed by Pensoft with the mindset that to adapt to the future means to innovate, ARPHA set out from day one to move the rather stagnant current peer review practice forward.
Author-organised pre-submission review, available to all journals that use our ARPHA Writing Tool, is our way of taking the common get-a-friend-to-proofread-your-work practice to a whole new, transparent and technologically facilitated level. The review happens in real time, with the author and the reviewers able to work together in the ARPHA online environment. It is not mandatory, but we strongly encourage it. All pre-submission reviews provided at the author’s request in RIO can be published along with the article, complete with a DOI and citation details.
A pre-submission technical and editorial check is another benefit provided by the journal’s editorial office to authors using the ARPHA Writing Tool. If necessary, it can take several rounds, until the manuscript reaches a level appropriate for direct submission to the journal.
The community-sourced, post-publication, open peer review is the next review stage provided to all articles published in RIO and all other ARPHA journals.
In addition, RIO also provides journal-organised, post-publication open peer review upon the author’s request. In all other ARPHA journals, this review stage is mandatory and takes place before publication.
To facilitate peer review in any journal published on the platform, ARPHA consolidates every review automatically into a single online file, which makes it possible for reviewers to comment in real time, even during the authoring process. Once posted, the whole peer review history is archived along with the associated files.
To recognize peer review even further, ARPHA automatically registers each of our peer reviewers, along with their work, on Publons, thanks to the integration of all Pensoft journals with the platform, which was created to credit reviewers for their contributions.
With this vision of peer review, we simply could not stay away from Publons’ aspiring Sentinels of Science initiative. It only made sense for us to step in, which logically led to the ARPHA logo appearing in the Gold Star sponsors list.