Digital Revolution to Fix Scientific Publishing & Speed Up Discoveries

Scientific, Technical, and Medical (STM) publishing is big business. It generates $19 billion in revenue per year, the majority of which is earned by a few powerful publishers that enjoy profit margins of up to 40 percent. Inflated subscription fees sold to academic libraries keep them going, because librarians feel they have no choice but to buy. These companies add little value to the actual publishing product, but they are entrenched.

Many forces are now at work to change the status quo, which has existed for more than 100 years.

The primitive publishing model employed by these publishers is actually a detriment to science. Research paid for by taxpayers is often locked behind paywalls, and major breakthroughs that could save lives languish in articles whose publication is delayed for no good reason. In some cases, published findings that have passed a traditional peer review process are subsequently found to be fraudulent.

Operating behind a smokescreen of anonymity, these publishers are the master puppeteers pulling the strings of scientific research to no one’s benefit but their own. And science publishing is not an esoteric academic silo with no impact on the world: scientific findings ultimately affect every human being on the planet. Something has to change.

Despite moving from paper to pixels, the publication of scientific research has not yet harnessed the full potential of the internet. Most journals still use a system rooted in the pre-web era of print and its associated rhythms and rules. Although solving some of these problems requires a significant cultural shift in academia, many of them are solvable using existing technologies that are already standard in other industries and in the culture of the web at large.

Problem 1 – Delays

It normally takes months – even years – before a submitted article is published. This delay is not only frustrating for researchers; it limits scientific progress, and there is no defensible reason for it.

Part of the delay is caused by journals rejecting papers they don’t find interesting enough, but the biggest factor is the academic peer review process. Traditionally, journals only publish articles after they have been approved by other researchers in the field. The idea that an article can only be published after this lengthy process is completely outmoded given the immediacy of the internet and the culture of social media and blogs. We can, and should, post new insights immediately.

Solutions

Pre-print servers such as arXiv, where authors can upload manuscripts directly, are already popular in physics and are now starting to be used in biology as well. A new breed of journal takes this idea one step further and arranges formal, invited peer review for articles that have already been published online, giving readers access to the information months before a traditional journal would.
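
Pre-print servers also make this speed programmatic: arXiv exposes a public API, so new manuscripts can be found the moment they are posted. Here is a minimal Python sketch that pulls the newest pre-prints matching a search term; the endpoint and query parameters come from arXiv’s documented Atom API, while the search term and function name are only illustrative.

```python
# Minimal sketch: query arXiv's public API for the newest matching pre-prints.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace used by the feed

def recent_preprints(search_term: str, max_results: int = 5):
    """Return (published, link, title) tuples for the newest matching pre-prints."""
    params = urllib.parse.urlencode({
        "search_query": f"all:{search_term}",
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "start": 0,
        "max_results": max_results,
    })
    url = f"http://export.arxiv.org/api/query?{params}"
    with urllib.request.urlopen(url) as response:
        feed = ET.fromstring(response.read())
    return [
        (
            entry.findtext(f"{ATOM}published", "").strip(),
            entry.findtext(f"{ATOM}id", "").strip(),
            entry.findtext(f"{ATOM}title", "").strip(),
        )
        for entry in feed.findall(f"{ATOM}entry")
    ]

if __name__ == "__main__":
    for published, link, title in recent_preprints("peer review"):
        print(published, link, title)
```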

Problem 2 – Anonymity

Another issue with peer review, aside from the enormous delays it causes, is its anonymity. Most journals use a system in which the peer reviewers know who wrote the article, but the article’s authors and readers do not know who reviewed it. This kind of smoke-filled room creates all sorts of potential for abuse. Expert peer reviewers by definition work in the same area as the authors, which may also make them competitors, creating incentives to be overly critical, or even to deliberately hold back a study that competes with their own work. That never serves the interests of science, and it is completely at odds with the increasing transparency seen in most industries and across the internet these days.

Solutions

In 2000, publisher BioMed Central started publishing the names of reviewers for the articles in their medical series of journals, and since then, a growing number of journals have made reviewers and/or referee reports public. This helps foster a culture of transparency and dialogue, which are fundamental to good science. 

Problem 3 – The file drawer effect

Scientists try to publish in the top journals in their field to compete for a small number of jobs. These journals, by design, are highly selective, publishing only what appears to be the most “exciting” work. The tragedy is that people are poor judges of what will ultimately prove important: radical but significant findings may be discounted or ignored for decades before being “discovered.”

As a side effect, scientists don’t publish work that will not directly advance their careers. Think of experiments that didn’t show the expected outcome, half-finished projects by people who left the lab, or even small – yet interesting – findings that are not part of their main research project.

Collectively, these unpublished studies and data form a large body of knowledge that could advance science; they represent a significant percentage of all research and billions of dollars of research money. Any insight that has been gleaned should be published.

Solutions

A small number of journals do encourage the publication of negative results, and some even allow “research notes,” which can describe a single experiment rather than a complete study. Researchers can also upload slide decks to SlideShare and deposit data in repositories such as Figshare or topic-specific databases.

Problem 4 – Lack of available research data

The underlying data behind published studies are also typically kept hidden while researchers try to build their careers by maximizing the number of new discoveries they can extract from the data they produced. When data is released, it is often so poorly presented that nobody else can understand or reuse it, and the code used to analyse it is missing. Taken together, this biases the scientific record, makes it hard for others to reproduce new discoveries or build on them, and slows down scientific discovery. If you make a claim, you should be required to prove it by showing your full data set. That requirement would have greatly benefitted the recently retracted STAP papers published in Nature, and it proved its worth when the data was included in the attempted replication of that work.

Solutions

Journals like F1000Research and publishers like PLOS now require authors to include the underlying data and analysis code with each published article. And with an increasing number of repositories available for scientific data – topic-specific databases, institutional repositories, or services like Figshare and Dataverse – there is no shortage of places to deposit it.
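
As an illustration of how low the barrier to depositing data has become, here is a rough Python sketch that creates a draft dataset record through Figshare’s REST API (v2). It assumes you have generated a personal API token in your Figshare account; the field names follow their article-creation endpoint, and the actual file upload is a separate multi-step flow described in Figshare’s documentation.

```python
# Rough sketch: create a private draft dataset record on Figshare (API v2).
import json
import urllib.request

API_BASE = "https://api.figshare.com/v2"
TOKEN = "YOUR_PERSONAL_TOKEN"  # placeholder -- generate one in your Figshare account

def create_dataset_record(title: str, description: str) -> dict:
    """Create a draft article of type 'dataset' and return the API response."""
    payload = json.dumps({
        "title": title,
        "description": description,
        "defined_type": "dataset",
    }).encode("utf-8")
    request = urllib.request.Request(
        f"{API_BASE}/account/articles",
        data=payload,
        headers={
            "Authorization": f"token {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    print(create_dataset_record(
        "Raw measurements for experiment 42",
        "Unprocessed data accompanying our submitted manuscript.",
    ))
```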

But even with the technology and incentives available, not all researchers like the idea of sharing their data: what if someone uses it to make their own discoveries and scoops them on the next step of the project? This mindset is especially pervasive in the biomedical sciences, which are connected to the huge medtech, biotech, and pharma industries. Cutting-edge mathematical research has few commercial applications, so those fields don’t suffer from the same pressure for secrecy.

More problems left to solve

The culture of scientific publishing is complex. Some problems need technical solutions, but others require a cultural change within academia.

Just a few examples of solutions that are still needed:

  • Better incentives for publishing all research results, even the less exciting ones (whether as an article or deposited as data).
  • Education and training on how best to disseminate all of these research results.
  • Better metrics and incentives for research reproducibility.
  • Standardized formats for data. (This is already in place for common types of data, like genetic information or molecular structures, but not for many others.)
  • A federated search system across different data repositories.
  • Full adoption of unique reference IDs for scientists (see the sketch after this list).
  • Journal submission systems that make the inclusion of all data straightforward.
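
The unique-ID item is further along than the others: ORCID already issues persistent researcher iDs, complete with a public API for resolving an iD to a profile. Below is a brief Python sketch against ORCID’s public v3.0 endpoint; the sample iD belongs to Josiah Carberry, the fictitious test researcher from ORCID’s documentation, and the defensive parsing reflects an assumed record layout rather than a guarantee.

```python
# Brief sketch: resolve a researcher's ORCID iD via the public v3.0 API.
import json
import urllib.request

def fetch_orcid_record(orcid_id: str) -> dict:
    """Fetch the public record for an ORCID iD as JSON."""
    request = urllib.request.Request(
        f"https://pub.orcid.org/v3.0/{orcid_id}/record",
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    record = fetch_orcid_record("0000-0002-1825-0097")  # ORCID's test researcher
    name = record.get("person", {}).get("name") or {}
    given = (name.get("given-names") or {}).get("value", "")
    family = (name.get("family-name") or {}).get("value", "")
    print(f"{given} {family}".strip())
```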
Tech companies, publishers, academics, funders, and other stakeholders all need to work together to find these solutions. The first steps have been made, and we’re looking forward to seeing scientific publishing become faster, fairer, and more transparent.

Editor’s note: Daniel Marovitz is CEO of Faculty of 1000. Prior to that, he was the CEO and co-founder of buzzumi, a cloud enterprise software company.

Read more at techcrunch.com
