All processes fit into a broad S-shaped envelope extending from the briefest to the most enduring biological events. For the first time, we have a simple model that depicts the scope and scale of biology.
As biology progresses into a digital age, new opportunities for discovery are opening up.
Increasingly, information from investigations into aspects of biology from ecology to molecular biology is available in digital form. Older ‘legacy’ information is being digitized. Together, this digital information accumulates in databases, from which it can be harvested and examined with a growing array of algorithmic and visualization tools.
That information must also make its way to trustworthy repositories that guarantee permanent access to the data in a polished state, fully suited for re-use.
The first layer in the infrastructure is the one that gathers all old and new information, whether it be about the migrations of ocean mammals, the sequence of bases in ribosomal RNA, or the known locations of particular species of ciliated protozoa.
This is achieved by compiling information about the processes conducted by all living organisms. These processes occur at all levels of organization, from sub-molecular transactions, such as those that underpin nervous impulses, to those within and among plants, animals, fungi, protists and prokaryotes. They include the actions and reactions of individuals and communities, the sum of the interactions that make up an ecosystem, and, finally, the workings of the biosphere as a whole system.
In Nature’s Envelope, information on the sizes of participants and the durations of processes from all levels of organization is plotted on a grid. The grid uses a logarithmic (base 10) scale, spanning about 21 orders of magnitude of size and 35 orders of magnitude of time. Information on processes ranging from the subatomic, through molecular, cellular, tissue, organismic, species and community levels, to ecosystems is assigned to the appropriate decadal blocks.
The extremes of life processes are determined by the smallest and largest entities to participate, and the briefest and most enduring processes.
The briefest event to be included is the transfer of energy from a photon to a photosynthetic pigment as the photon passes through a chlorophyll molecule several nanometres in width at a speed of 300,000 km per second. That transaction takes about 10⁻¹⁷ seconds. As it involves the smallest subatomic particles, it defines the lower left corner of the grid.
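The 10⁻¹⁷-second figure can be sanity-checked as the transit time of light across a molecule a few nanometres wide. The sketch below uses an illustrative 3 nm width, an order-of-magnitude assumption rather than a measured value.

```python
# Back-of-the-envelope check: time for light to cross a chlorophyll molecule.
# The 3 nm width is an illustrative order-of-magnitude value, not a measurement.
molecule_width_m = 3e-9        # ~3 nanometres
speed_of_light_m_s = 3e8       # 300,000 km per second

transit_time_s = molecule_width_m / speed_of_light_m_s
print(f"{transit_time_s:.0e} s")  # 1e-17 s
```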
The most enduring process is evolution, which has been under way for almost 4 billion years. It has created the biosphere (the largest living object) and affects the gas content of the atmosphere. This process establishes the upper right extreme of the grid.
All biological processes fit into a broad S-shaped envelope that includes about half of the decadal blocks in the grid. The envelope drawn round the initial examples is Nature’s Envelope.
By the time authors who have acknowledged third-party financial support in a research paper submitted to a journal using Pensoft’s ARPHA publishing platform open their inboxes to the congratulatory message that their work has just been published and made available to the wide world, a similar notification will have also reached their research funder.
This automated workflow is already in effect at all journals (co-)published by Pensoft and those published under their own imprint on the ARPHA Platform, as a result of the new partnership with the OA Switchboard: a community-driven initiative with the mission to serve as a central information exchange hub between stakeholders about open access publications, while making things simpler for everyone involved.
All the submitting author needs to do to ensure that their research funder receives a notification about the publication is to select the supporting agency or the scientific project (e.g. a project supported by Horizon Europe) in the manuscript submission form, using a handy drop-down menu. In either case, the message will be sent to the funding body as soon as the paper is published in the respective journal.
“At Pensoft, we are delighted to announce our integration with the OA Switchboard, as this workflow is yet another excellent practice in scholarly publishing that supports transparency in research. Needless to say, funding and financing are cornerstones in scientific work and scholarship, so it is equally important to ensure funding bodies are provided with full, prompt and convenient reports about their own input.”
comments Prof Lyubomir Penev, CEO and founder of Pensoft and ARPHA.
“Research funders are one of the three key stakeholder groups in OA Switchboard and are represented in our founding partners. They seek support in demonstrating the extent and impact of their research funding and delivering on their commitment to OA. It is great to see Pensoft has started their integration with OA Switchboard with a focus on this specific group, fulfilling an important need,”
adds Yvonne Campfens, Executive Director of the OA Switchboard.
About the OA Switchboard:
A global not-for-profit and independent intermediary established in 2020, the OA Switchboard provides a central hub for research funders, institutions and publishers to exchange OA-related publication-level information. Connecting parties and systems, and streamlining communication and the neutral exchange of metadata, the OA Switchboard provides direct, indirect and community benefits: simplicity and transparency, collaboration and interoperability, and efficiency and cost-effectiveness.
Pensoft is an independent academic publishing company, well known worldwide for its novel cutting-edge publishing tools, workflows and methods for text and data publishing of journals, books and conference materials.
All journals (co-)published by Pensoft are hosted on Pensoft’s full-featured ARPHA Publishing Platform and published in a way that ensures their content is as FAIR as possible, meaning that it is effortlessly readable, discoverable, harvestable, citable and reusable by both humans and machines.
Did the boy bite the cat, or was it the other way around?
When processing a sentence with several objects, one has to establish ‘who did what to whom’. When a sentence cannot be interpreted by recalling an image from memory, we rely on voluntary imagination to construct a novel mental image in our mind.
In a previous study, the team of Dr. Andrey Vyshedskiy, a neuroscientist from Boston University, USA, hypothesized that this voluntary imagination ability has fundamental importance for combinatorial language acquisition. To test the hypothesis, the researchers designed a voluntary imagination intervention and administered it to 6,454 children with language deficiencies (age 2 to 12 years).
In that three-year study, published in 2021, the scientists concluded that children who engaged with the voluntary imagination intervention showed a 2.2-fold improvement in combinatorial language comprehension compared to children with similar language deficiencies. These findings suggested that language can be improved by training voluntary imagination and confirmed the importance of the visuospatial component of language.
In his latest work, now published in the open-science scholarly journal Research Ideas and Outcomes (RIO), Dr. Vyshedskiy builds on these experimental findings to address the question of language evolution and suggest that evolutionary acquisition of language was driven primarily by improvements of voluntary imagination, rather than the speech apparatus.
Dr. Vyshedskiy proposes that this step-wise development of voluntary imagination – and not the speech apparatus per se – was the key factor underlying the acquisition of modern combinatorial language.
There are several additional lines of evidence suggesting dissociation of articulate speech and voluntary imagination.
Firstly, there is significant genetic and archeological evidence that the modern speech apparatus was acquired around 600,000 years ago, long before the acquisition of modern voluntary imagination around 70,000 years ago.
Secondly, mirroring phylogenetic sequences, typical children develop articulate speech by their second year, two years before they acquire the voluntary imagination necessary to comprehend spatial prepositions, recursion, and complex fairy tales.
Thirdly, speech is not an obligatory component of combinatorial language at all. If early humans had voluntary imagination, they could have invented sign language; all formal sign languages include spatial prepositions and other recursive elements. This was evidenced in the 1970s, when the largest natural experiment of language origin to date saw some 400 Nicaraguan deaf children from two schools spontaneously invent a new combinatorial sign language in just a few generations. This means that the capacities of the speech apparatus could not have been a limiting factor in the acquisition of modern combinatorial language.
Fourthly, articulate sounds can be generated by gray parrots and thousands of songbird species. However, these birds do not acquire combinatorial language. The evolution of sound articulation is thus independent of, and simpler than, the improvement of voluntary imagination.
In conclusion, on the basis of studies of children, neurological observations, archeological findings, the invention of a combinatorial sign language by Nicaraguan deaf children, and the variety of sound-producing organs in birds, Dr. Vyshedskiy argues that the evolution of the hominin speech apparatus must have followed (rather than led to) the improvements in voluntary imagination.
Contrary to the common assumption, it is voluntary imagination rather than speech that appears to define the pace of combinatorial language evolution.
Vyshedskiy A (2022) Language evolution is not limited to speech acquisition: a large study of language development in children with language deficits highlights the importance of the voluntary imagination component of language. Research Ideas and Outcomes 8: e86401. https://doi.org/10.3897/rio.8.e86401
Ask any scientist — for every “Eureka!” moment, there’s a lot of less-than-glamorous work behind the scenes. Making discoveries about everything from a new species of dinosaur to insights about climate change entails some slogging through seemingly endless data and measurements that can be mind-numbing in large doses.
Community science shares the burden with volunteers who help out, for even just a few minutes, collecting data and putting it into a format that scientists can use. But the question remains: how useful are these data to scientists?
A new study, authored by a combination of high school students, undergraduates, graduate students, and professional scientists, showed that when museum-goers did a community science activity in an exhibit, the data they produced were largely accurate, supporting the argument that community science is a viable way to tackle big research projects.
“We were able to combine a small piece of the Field Museum’s vast collections, their scientific knowledge and exhibit creation expertise, the observational skills of biology interns at Northeastern Illinois University (USA), led by our collaborator Tom Campbell, and our Roosevelt University student’s data science expertise. The creation of this set of high-quality data was a true community effort!”
The study focuses on an activity in an exhibition at the Field Museum, in which visitors could partake in a community science project. In the community science activity, museum-goers used a large digital touchscreen to measure photographs of the microscopic leaves of plants called liverworts.
These tiny plants, the size of an eyelash, are sensitive to climate change, and they can act like a canary in a coal mine to let scientists know about how climate change is affecting a region. It’s helpful for scientists to know what kinds of liverworts are present in an area, but since the plants are so tiny, it’s hard to tell them apart. The sizes of their leaves (or rather, lobes — these are some of the most ancient land plants on Earth, and they evolved before true leaves had formed) can hint at their species. But it would take ages for any one scientist to measure all the leaves of the specimens in the Field’s collection. Enter the community scientists.
“Drawing a fine line to measure the lobe of a liverwort for a few hours can be mentally strenuous, so it’s great to have community scientists take a few minutes out of their day using fresh eyes to help measure a plant leaf. A few community scientists who’ve helped with classifying acknowledged how exciting it is knowing they are playing a helping hand in scientific discovery,”
says Heaven Wade, a research assistant at the Field Museum who began working on the MicroPlants project as an undergraduate intern.
Community scientists using the digital platform measured thousands of microscopic liverwort leaves over the course of two years.
“At the beginning, we needed to find a way to sort the high quality measurements out from the rest. We didn’t know if there would be kids drawing pictures on the touchscreen instead of measuring leaves or if they’d be able to follow the tutorial as well as the adults did. We also needed to be able to automate a method to determine the accuracy of these higher quality measurements,”
To answer these questions, Pivarski worked with her students at Roosevelt University to analyze the data. They compared measurements taken by the community scientists with measurements done by experts on a couple of “test” lobes; based on that proof of concept, they went on to analyze the thousands of other leaf measurements. The results were surprising.
“We were amazed at how wonderfully children did at this task; it was counter to our initial expectations. The majority of measurements were high quality. This allowed my students to create an automated process that produced an accurate set of MicroPlant measurements from the larger dataset,”
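The automated quality-filtering step described above can be sketched as follows. The tolerance, data layout and sample values are illustrative assumptions, not the study’s actual data or thresholds.

```python
# Hedged sketch: keep community measurements that fall within a tolerance of
# the expert value for the same "test" lobe. All numbers are made up for
# illustration; the study's real data and cut-offs differ.
expert_mm = {"lobe_A": 0.52, "lobe_B": 0.47}   # expert reference values (mm)

community = [
    {"lobe": "lobe_A", "mm": 0.50},   # close to the expert value
    {"lobe": "lobe_A", "mm": 1.90},   # e.g. a stray touch on the screen
    {"lobe": "lobe_B", "mm": 0.45},   # close to the expert value
]

TOLERANCE_MM = 0.10  # assumed cut-off for a "high quality" measurement

high_quality = [m for m in community
                if abs(m["mm"] - expert_mm[m["lobe"]]) <= TOLERANCE_MM]
print(len(high_quality), "of", len(community), "measurements kept")  # 2 of 3
```

Once such a rule is validated on the expert-measured test lobes, it can be applied automatically to the rest of the dataset, which is the essence of the automated process the team describes.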
The researchers say that the study supports the argument that community science is valuable not just as a teaching tool to get people interested in science, but as a valid means of data collection.
Pivarski M, von Konrat M, Campbell T, Qazi-Lampert AT, Trouille L, Wade H, Davis A, Aburahmeh S, Aguilar J, Alb C, Alferes K, Barker E, Bitikofer K, Boulware KJ, Bruton C, Cao S, Corona Jr. A, Christian C, Demiri K, Evans D, Evans NM, Flavin C, Gillis J, Gogol V, Heublein E, Huang E, Hutchinson J, Jackson C, Jackson OR, Johnson L, Kirihara M, Kivarkis H, Kowalczyk A, Labontu A, Levi B, Lyu I, Martin-Eberhardt S, Mata G, Martinec JL, McDonald B, Mira M, Nguyen M, Nguyen P, Nolimal S, Reese V, Ritchie W, Rodriguez J, Rodriguez Y, Shuler J, Silvestre J, Simpson G, Somarriba G, Ssozi R, Suwa T, Syring C, Thirthamattur N, Thompson K, Vaughn C, Viramontes MR, Wong CS, Wszolek L (2022) People-Powered Research and Experiential Learning: Unravelling Hidden Biodiversity. Research Ideas and Outcomes 8: e83853. https://doi.org/10.3897/rio.8.e83853
So far, science has described more than 2 million species, and millions more await discovery. While species have value in themselves, many also deliver important ecosystem services to humanity, such as insects that pollinate our crops.
Meanwhile, because we lack a standardized system to quantify the value of different species, it is too easy to jump to the conclusion that they are practically worthless. As a result, humanity has been quick to justify actions that diminish populations and even imperil biodiversity at large.
In a study, published in the scholarly open-science journal Research Ideas and Outcomes, a team of Estonian and Swedish scientists propose to formalize the value of all species through a conceptual species ‘stock market’ (SSM). Much like the regular stock market, the SSM is to act as a unified basis for instantaneous valuation of all items in its holdings.
However, other aspects of the SSM would be starkly different from the regular stock market. Ownership, transactions, and trading would take new forms. Indeed, species have no owners, and ‘trade’ would not be about the transfer of ownership rights among shareholders. Instead, the concept of ‘selling’ would comprise processes that erase species from some specific area – such as war, deforestation, or pollution.
Conversely, taking some action that benefits biodiversity – as estimated through individuals of species – would be akin to buying on the species stock market. Buying, too, has a price tag on it, but this price should probably be thought of in goodwill terms. Here, ‘money’ represents an investment towards increased biodiversity.
Interestingly, the SSM revolves around the notion of digital species. These are representations of described and undescribed species concluded to exist based on DNA sequences and elaborated by including all we know about their habitat, ecology, distribution, interactions with other species, and functional traits.
For the SSM to function as described, those DNA sequences and metadata need to be sourced from global scientific and societal resources, including natural history collections, sequence databases, and life science data portals. Digital species might be managed further by incorporating data records of non-sequenced individuals, notably observations, older material in collections, and data from publications.
The study proposes that the SSM be orchestrated by international associations of taxonomists and economists.
“Non-trivial complications are foreseen when implementing the SSM in practice, but we argue that the most realistic and tangible way out of the looming biodiversity crisis is to put a price tag on species and thereby a cost to actions that compromise them,”
In their Research Idea, published in Research Ideas and Outcomes (RIO Journal), a Swiss-Dutch research team presents a promising machine-learning ecosystem to unite experts around the world and make up for the shortage of expert staff.
Guest blog post by Luc Willemse, Senior collection manager at Naturalis Biodiversity Centre (Leiden, Netherlands)
Imagine the workday of a curator in a national natural history museum. Having spent several decades learning about a specific subgroup of grasshoppers, that person is now busy working on the identification and organisation of the holdings of the institution. To do this, the curator needs to study in detail a huge number of undescribed grasshoppers collected from all sorts of habitats around the world.
The problem here, however, is that a curator at a smaller natural history institution is usually responsible for all the insects kept at the museum, ranging from butterflies to beetles, flies and so on. In total, we know of around 1 million described insect species worldwide. Meanwhile, another 3,000 are added each year, while many more are redescribed as a result of further study and new discoveries. Becoming a specialist in grasshoppers was already a laborious activity that took decades; how could anyone know all the insects of the world? That’s simply impossible.
Then, how could we expect one person to sort and update all the collections at a museum: an activity that is the cornerstone of biodiversity research? Part of the solution, hiring and training additional staff, is costly and time-consuming, especially when experts on certain species groups are already scarce on a global scale.
We believe that automated image recognition holds the key to reliable and sustainable practices at natural history institutions.
Today, image recognition tools integrated into mobile apps are already being used, even by citizen scientists, to identify plants and animals in the field. Based on an image taken with a smartphone, those tools identify specimens on the fly and estimate the accuracy of their results. What’s more, those identifications have proven to be almost as accurate as those done by humans. This gives us hope that we could help curators at museums worldwide take better and more timely care of the collections they are responsible for.
However, specimen identification for natural history institutions remains a much more complex task than identification in the field. After all, the information these institutions store and should be able to provide is meant to serve as a knowledge hub for educational and reference purposes for present and future generations of researchers around the globe.
This is why we propose a sustainable system where images, knowledge, trained recognition models and tools are exchanged between institutes, and where international collaboration between museums of all sizes is crucial. The aim is a system that benefits the entire community of natural history collections by providing further access to their invaluable holdings.
We propose four elements to this system:
A central library of already trained image recognition models (algorithms) needs to be created. It will be openly accessible, so any other institute can profit from models trained by others.
A central library of datasets of images of collection specimens that have recently been identified by experts. This will provide an indispensable source of images for training new algorithms.
A digital workbench that provides an easy-to-use interface for inexperienced users to customise the algorithms and datasets to the particular needs in their own collections.
As the entire system depends on international collaboration as well as sharing of algorithms and datasets, a user forum is essential to discuss issues, coordinate, evaluate, test or implement novel technologies.
How would this work on a daily basis for curators? We provide two examples of use cases.
First, let’s zoom in on a case where a curator needs to identify a box of insects, for example bush crickets, to a lower taxonomic level. Here, they would take an image of the box and split it into segments of individual specimens. Then, image recognition would identify the bush crickets to a lower taxonomic level. The result, which we present in the table below, would be used to update object-level registration or to physically rearrange specimens into more accurate boxes. This entire step can also be done by non-specialist staff.
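That drawer-level workflow – photograph the box, segment it into individual specimens, classify each crop, then tally the results – might be sketched like this. The function names, taxa and confidence scores below are hypothetical placeholders, not parts of the proposed system.

```python
# Hedged sketch of the drawer-identification workflow: segment a box image,
# classify each specimen crop, and tally the suggested identifications.
# segment_box() and classify() are stubs standing in for real image-recognition
# components; all names and values are illustrative.
from collections import Counter

def segment_box(box_image_path):
    """Split a drawer photo into per-specimen crops (stub for illustration)."""
    return ["crop_1", "crop_2", "crop_3"]

def classify(crop):
    """Return (taxon, confidence) for one specimen crop (stub for illustration)."""
    stub_results = {
        "crop_1": ("Tettigonia viridissima", 0.93),
        "crop_2": ("Tettigonia viridissima", 0.88),
        "crop_3": ("Pholidoptera griseoaptera", 0.71),
    }
    return stub_results[crop]

identifications = [classify(c) for c in segment_box("drawer_042.jpg")]
tally = Counter(taxon for taxon, _ in identifications)
for taxon, count in tally.most_common():
    print(taxon, count)
```

The resulting tally is exactly the kind of per-box summary a non-specialist could use to update registrations or rearrange specimens.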
Another example is to incorporate image recognition tools into digitisation processes that include imaging specimens. In this case, image recognition tools can be used on the fly to check or confirm the identifications and thus improve data quality.
Using image recognition tools to identify specimens in museum collections is likely to become common practice in the future. It is a technical tool that will enable the community to share available taxonomic expertise.
Using image recognition tools also creates the possibility to identify species groups for which there is little to no in-house expertise. Such practices would substantially reduce the cost and time spent per treated item.
Image recognition applications carry metadata such as version numbers and/or the datasets used for training. Such an approach would therefore make identification more transparent than identification carried out by humans, whose expertise is, by its nature, neither standardised nor transparent.
Greeff M, Caspers M, Kalkman V, Willemse L, Sunderland BD, Bánki O, Hogeweg L (2022) Sharing taxonomic expertise between natural history collections using image recognition. Research Ideas and Outcomes 8: e79187. https://doi.org/10.3897/rio.8.e79187
In a world first, the Natural History Museum, London, has collaborated with economic consultants, Frontier Economics Ltd, to explore the economic and societal value of digitising natural history collections and concluded that digitisation has the potential to see a seven to tenfold return on investment. Whilst significant progress is already being made at the Museum, additional investment is needed in order to unlock the full potential of the Museum’s vast collections – more than 80 million objects. The project’s report is published in the open science scientific journal Research Ideas and Outcomes (RIO Journal).
The societal benefits of digitising natural history collections extend to global advancements in food security, biodiversity conservation, medicine discovery, minerals exploration, and beyond. A brand-new, rigorous economic report predicts that investing in digitising natural history museum collections could result in a tenfold return. The Natural History Museum, London, has so far made over 4.9 million digitised specimens freely available online – over 28 billion records have been downloaded across 429,000 download events over the past six years.
Digitisation at the Natural History Museum, London
Digitisation is the process of creating and sharing the data associated with Museum specimens. To digitise a specimen, all its related information is added to an online database. This typically includes where and when it was collected and who found it, and can include photographs, scans and other molecular data if available. Natural history collections are a unique record of biodiversity dating back hundreds of years, and geodiversity dating back millennia. Creating and sharing data this way enables science that would otherwise have been impossible and accelerates the rate at which important discoveries are made from the collections.
The Natural History Museum’s collection of 80 million items is one of the largest and most historically and geographically diverse in the world. By unlocking the collection online, the Museum provides free and open access for global researchers, scientists, artists and more. Since 2015, the Museum has made 4.9 million specimens available on the Museum’s Data Portal, which have seen more than 28 billion downloads over 427,000 download events.
This means the Museum has digitised about 6% of its collections to date. Because digitisation is expensive, costing tens of millions of pounds, it is difficult to make a case for further investment without better understanding the value of this digitisation and its benefits.
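The quoted share follows directly from the two figures above, as a one-line check (numbers taken from the text):

```python
# Share of the collection digitised so far, using the figures quoted above.
digitised = 4_900_000
collection = 80_000_000
print(f"{digitised / collection:.1%}")  # 6.1%
```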
In 2021, the Museum decided to explore the economic impacts of collections data in more depth, and commissioned Frontier Economics to undertake modelling, resulting in this project report, now made publicly available in the open-science journal Research Ideas and Outcomes (RIO Journal), and confirming benefits in excess of £2 billion over 30 years. While the methods in this report are relevant to collections globally, this modelling focuses on benefits to the UK, and is intended to support the Museum’s own digitisation work, as well as a current scoping study funded by the Arts & Humanities Research Council about the case for digitising all UK natural science collections as a research infrastructure.
How digitisation impacts scientific research
The data from museum collections accelerates scientific research, which in turn creates benefits for society and the economy across a wide range of sectors. Frontier Economics Ltd have looked at the impact of collections data in five of these sectors: biodiversity conservation, invasive species, medicines discovery, agricultural research and development and mineral exploration.
The new analyses attempt to estimate the economic value of these benefits using a range of approaches, with the results in broad agreement that the benefits of digitisation are at least ten times greater than the costs. This represents a compelling case for investment in museum digital infrastructure, without which the many benefits will not be realised.
Other benefits could include improvements to the resilience of agricultural crops by better understanding their wild relatives, research into invasive species which can cause significant damage to ecosystems and crops, and improving the accuracy of mining.
Finally, there are other impacts that such work could have on how science is conducted itself. The very act of digitising specimens means that researchers anywhere on the planet can access these collections, saving time and money that may have been spent as scientists travelled to see specific objects.
Popov D, Roychoudhury P, Hardy H, Livermore L, Norris K (2021) The Value of Digitising Natural History Collections. Research Ideas and Outcomes 7: e78844. https://doi.org/10.3897/rio.7.e78844
Blog post by Dr. Marco Cirillo, Heart Failure Surgery Unit Director at the Cardiovascular Department in Poliambulanza Foundation Hospital (Brescia, Italy)
“Good morning, madam,” said the doctor greeting the patient who was entering his office.
“Good morning, Doctor,” she replied.
“So, how are you?” he asked her, motioning for her to sit in one of the two chairs in front of his desk.
“Well, it’s not bad.”
The doctor looked at her carefully.
“So, this first dose of chemo… You tolerated it well, right?”
“Yes, Doctor. I passed it…”
“Troubles? Nausea? Vomiting? Other problems?”
“No, Doctor. Nothing,” she replied.
The doctor continued to watch her carefully. After her last answer he got up and went to sit next to her in the other chair that was in front of his desk. He took her hand and asked her again:
“So, madam: how are you?“
The patient squeezed his hand as if in silent thanks.
“Doctor, you are a good doctor.”
“I’m here to understand what you need, madam, what can I do for you.”
The patient thought a little longer before speaking.
“So, Doctor: the chemo didn’t bother me much, maybe because it’s the first one. Except that… In short, what was difficult was waiting together with the others, all talking about their tumor, where they have it, what chemo they are at, what happened to them, then the hairless ones with the turban on their heads, and how much hemoglobin you have, and what your husband said, and if they recognized you without hair…”
“I understand, madam. But it’s also a way to exorcise it, isn’t it? A way to share this bad experience, to not feel alone…”
She looked him directly in the eyes.
“Doctor, we are not all the same. These things bother me. Seeing how I will be in a month scares me. It doesn’t comfort me to know that someone is sicker than me. And knowing that someone is better terrifies me…”
The doctor nodded his head.
“I don’t want to think about my illness and when I come here, I necessarily think about it. I have to think about it. At home I do many things, I see many people, I may not think about it. But when I come here… Then for days I see these scenes in front of me, as if I’ve never left… Believe me, I do not simply ignore the disease, I know what I have and what awaits me. But if I could, I would avoid everything in between, between me and my illness. Do you understand?“
“Of course, madam. I understand. For others it is the same thing.”
They went silent for a while.
Then, the doctor said:
“If you had a choice, ma’am, what would you want? What would make you bear it all better?”
She answered immediately, as if she had the answer ready.
“If I could, I would like to fall asleep and wake up when it’s all over! Don’t see the others, don’t even see the hospital, don’t hear what the nurses say, don’t see the drip, don’t feel the needle entering, don’t see the drop of poison that I have to let into my body to try to survive… Don’t feel the time passing so slowly, as slow as the drop of the drip, a time ‘lost’ that is part of the little time I still have left… I am forced to hope that this time will pass quickly, but at the same time I know that it is not convenient for me to pass quickly, because even this time of treatment is taken away from my life. From what remains of it…“
The doctor released her hand and leaned back in his chair.
The lady asked him:
“Did I say something wrong?”
“No, madam, on the contrary,” said the doctor. “You told me something wonderful.”
“Ah, really? It sounds trivial to me…”
“No, what a patient says when he talks about himself and his illness is never trivial. You gave me a very good idea, madam.”
“Sure! What you ask can be done.”
“I can set up a study in which chemotherapy is administered during sleep, and then analyze the results,” the doctor said, then corrected himself by translating his words into more direct language. “Sorry: I can make you sleep during the treatment, perhaps schedule the treatment at night, so that it doesn’t disrupt your days. And then you will wake up when it’s all over. That wouldn’t prevent some side effects…”
“…but it would prevent me from living consciously at the time of treatment,” the patient completed.
“Sure,” the doctor confirmed.
“Like Sleeping Beauty…” the patient said. “You know the tale, don’t you?”
“Sure, who doesn’t know it.”
“The fairy godmothers cannot undo the evil witch’s curse, so they make her fall asleep instead of dying. Waiting for a solution,” the patient sighed deeply. “So, Doctor, if you can remove the evil that hangs over me, do it. Otherwise, let me sleep before the spinning wheel pricks me.”
The doctor gave her a grateful look. He had always felt that not only did he do something for his patients every day, but that his patients also did something for him.
“Would you do this for me, Doctor?”
The doctor smiled.
“Of course, ma’am. For you and for all the people who want it. Just give me some time to organize this.”
“Take your time,” the lady said enthusiastically, but soon after she added with a wink: “No, on the contrary: hurry up. I wouldn’t want to waste any more time…”
This project aims to extend the concept of “care” by meeting the patient and his or her needs: it is not the patient who has to adapt to the hospital’s schedules, timing and protocols, but the hospital that must serve the patient and “take care” of their problem in all its dimensions.
The disease derails the patient’s life in a decisive way. As far as possible, we must try to weave the element of disease into their everyday life if we want them to experience it as something that is part of normal life. This can make it more tolerable and perhaps improve the chances of overcoming it.
Certainly, this study faces some practical limitations. Administering treatment during sleep requires many beds and specialized nursing staff; if it is carried out at home, it also requires allocating specialists for home visits.
It is true, however, that home care for cancer patients is already very common in advanced healthcare systems. Economic investment in and funding of cancer research and treatment remain at the top of all healthcare systems, alongside cardiovascular disease.
Cancer Centers now abound around the world and are increasing in number. Comprehensive Cancer Centers, the largest in America, carry out transdisciplinary research, recognizing the importance of integrating different kinds of knowledge for more effective treatment. The care and therapeutic network, the shared protocols and the sector research in Oncology already operate at a very high level today. The coordination between centers draws on all of that care know-how. If I had to name a medical field in which research, care, networks of knowledge and uniformity of treatment are the most coordinated and efficient, it would undoubtedly be oncology.
Eligible submissions enjoy a 50% discount off APCs in 2021
Since its launch in 2015, RIO Journal has been mapping its articles to the Sustainable Development Goals (SDGs) of the United Nations. The articles published so far span the entire research cycle, a broad range of research fields and all SDGs, which can also be used as a search filter. However, the distribution of RIO articles across SDGs is uneven, as detailed in a recent editorial: for instance, more than 100 articles addressed SDG9 (Industry, innovation & infrastructure), while only one publication has been mapped to SDG1 (No poverty) so far.
Even though there might be logical explanations for this phenomenon, including funding biases or specific scholarly communication tendencies in some research fields, RIO’s team remains dedicated to its role as a harbinger of innovative open science practices and socially engaged research, and is eager to support the open publication of research on all SDGs.
So, RIO Journal is now inviting research outcomes – early, interim or final – addressing the four least represented SDGs in RIO’s content to date (with the current number indicated in parentheses):
The call will remain open until the end of 2021, and all accepted papers will enjoy a 50% discount on their article processing charges (APCs), regardless of how many contributions RIO receives in the meantime. Eligible submissions encompass all article types generally accepted in RIO, provided the journal’s editorial team confirms that they belong to the assigned SDG category.
As also highlighted in the editorial, RIO is currently experimenting with a more fine-grained mapping of its publications to the individual targets under each SDG. This was piloted with SDG 14 (Life below water). For instance, Target 14.a (Marine Biodiversity contributes to Economic Development of small/developing nations) is currently covered by 17 RIO articles. If you would like to get involved with mapping RIO articles to the Targets under other SDGs, please get in touch.
You can find more about RIO’s rationale behind introducing the SDGs mapping in the latest editorial or in this earlier blog post.
Since its early days, RIO has enjoyed quite positive reactions from the open-minded academic community for its innovative approach to Open Science in practice: it fills a niche that had long been missing, namely the publication of early, intermediate and generally unconventional research outcomes from across the research cycle (e.g. grant proposals, data management plans, project deliverables, reports, policy briefs, conference materials) in a cross-disciplinary scientific journal. In fact, several months after its launch, in 2016, the journal received the SPARC Innovator Award.
‘Alternative’ research publications
At a time when posting a preprint was still seen as a novel and rather bold practice in many fields, RIO facilitated much deeper dives into the research process, in order to unveil scientific knowledge and the process by which it is gathered well before any final conclusions have been drawn. Long story short, to date RIO has published 33 Research Ideas, 78 Grant Proposals, 16 Data Management Plans, 33 Workshop Reports and 5 PhD Project Plans, in addition to plenty of other early, interim and final non-traditional research outcomes, as well as conventional articles. Over time, RIO has kept adding new article types to its list of publication types, with a few more expected in the near future.
What’s more, over the years we have seen how papers published in RIO successfully trace the continuity of the research process. For example, the Grant Proposal for the “Exploring the opportunities and challenges of implementing open research strategies within development institutions” project, funded by the International Development Research Centre (IDRC), was followed by the project’s Data Management Plan a year later.
Five years on, the figures reflecting usage of and engagement with the content published in RIO clearly support the value of non-final and unconventional academic publications. For instance, the Grant Proposal for the COST Action DNAqua-Net, a still-ongoing project dedicated to the development of novel genetic tools for bioassessment and monitoring of aquatic ecosystems, is the most viewed article in RIO’s publication record to date. In the category of sub-article elements, whose usage is also tracked at the journal, the most viewed figure belongs to a Project Report and illustrates sample code meant to be used in future neuroimaging studies. Similarly, the most viewed table ever published in RIO is part of a Workshop Report that summarises ASAPbio‘s third workshop, dedicated to the technical aspects of services promoting preprints in the biomedical and other life science communities.
Response to societal challenges
A unique and defining feature of RIO since the very beginning has been its pronounced engagement with the Sustainable Development Goals (SDGs), formulated by the United Nations right around the time of RIO’s launch. In order to highlight the societal impact of published research, RIO lets authors map their articles to the SDGs relevant to their paper. Once published, the article displays the associated badge(s) next to its title. Readers of the journal can even search RIO’s content by SDG, in the same way they would filter articles by subject, publication type, date or funding agency. Next on RIO’s list is to add another level of granularity to the SDGs mapping. The practice has already been piloted by mapping relevant RIO articles to the ten targets under SDG14 (Life below water).
Taking transparency, responsibility and collaboration in academia and scholarly publishing up another notch, RIO requires reviews to be publicly available. In addition, the journal supports post-publication reviews, where peers are free to post their review at any time. In turn, RIO registers each review with its own DOI via CrossRef, in order to recognise this valuable input and let reviewers easily refer to their contributions. A fine example is a Review Article exploring the biodiversity-related issues and challenges across Southeast Asia, which currently has a total of three public peer reviews, one of which was provided two years after the publication of the paper.
Public, transparent and perpetual peer review, pre- and/or post-publication
What is more striking about peer review at RIO, however, is that it is not always mandatory. Given that the journal publishes many article types that have already been scrutinised by a legitimate authority – for instance, Grant Proposals previously evaluated by a funder, or defended PhD Theses – it only makes sense not to withhold these publications or duplicate the associated evaluation efforts. On such occasions, all an author needs to do is provide a statement about the review status of their paper, which is made public alongside the article.
On the other hand, where the article type of a manuscript requires pre-publication review, to avoid potential delays caused by the review process and editorial decisions, RIO encourages the authors to post their pre-review manuscript as a preprint on the recently launched ARPHA Preprints platform, subject to a quick editorial screening, which would only take a few days.
Further, RIO has now abandoned the practice of burdening the journal’s editors with the time-consuming task of finding reviewers; instead, it requires the submitting author to suggest suitable reviewers upon submission, who are then immediately and automatically invited by the system. While significantly expediting the editorial work on a manuscript, this practice does not compromise the quality of peer review, since the reviews are made public and the final decision about acceptance lies with the editor, who also oversees the process and can intervene and invite additional reviewers at any time, if necessary.
Project-driven knowledge hub
The most significant novelty at RIO, however, is perhaps the newly assumed role of the journal as “a project-driven knowledge hub“, targeting specifically the needs of research projects, conference organisers and institutions. For them, RIO provides a one-stop source for the outputs of their scientists, in order to comply with the requirements of their funders or management, or simply to facilitate the discoverability, reusability and citability of their academic outputs and to highlight their interconnectedness.
Unlike the typical permanent article collections already widely used in scholarly publishing, RIO offers collection owners the unique opportunity to add a wide range of research outputs, including those published elsewhere, in order to provide even greater context to the assembled research in their project- or institution-branded article collection (see the Horizon 2020 project Path2Integrity‘s collection as an example).
For example, a project coordinator could open a collection under the brand of the project and start by publishing the Grant Proposal, followed shortly by Data and Software Management Plans and Workshop Reports. Thus, even at this early point in the project’s development, the funder – and with them everyone else – would already have strong evidence of the project’s dedication to transparency and active science communication. Later on, the project’s participants could all easily add to the collection, either by submitting their diverse research outputs straight to RIO, to be accepted by the collection’s lead editor, or by providing metadata and a link to a publication from elsewhere, including preprints. If a document is published outside of RIO, its metadata (author names and affiliations, article title and publication date) appear in the collection, while a click on the item leads to the original publication. As the project progresses, the team behind it can add more and more outputs (e.g. Project Reports, Guidelines and Policy Briefs), continuously updating the public and the relevant stakeholders on the development of their work. Eventually, the collection will provide a comprehensive and fully transparent record of the project from start to finish.