Millions of scientific articles are published around the globe every year, making it difficult for any group of reviewers to track which papers will prove impactful. The obvious response is to automate the process, but how do you teach an algorithm what impact is? Identifying the need for change in a scientific ecosystem that judges a research paper’s merit by citation metrics, James W. Weis, an MIT Media Lab research affiliate, and Joseph Jacobson, a professor of media arts and sciences and head of the Media Lab’s Molecular Machines research group, built DELPHI. The framework is intended to help researchers overcome the bias and hurdles of citation-based metrics, which can be imprecise, inconsistent, and easily manipulated. DELPHI, which stands for Dynamic Early-warning by Learning to Predict High Impact, provides an early-warning signal for high-impact research by drawing on patterns discovered in previous scientific publications.
According to the paper published by James W. Weis and Joseph Jacobson in Nature Biotechnology, scientific research papers are currently evaluated with various citation-based metrics, such as citation count, h-index, and journal impact factor. These metrics are far from adequate measures of a paper’s quality: relying on them can lead to poor academic hiring and training decisions and, more importantly, can delay impact by impeding decisions on academic advancement and financial support.
DELPHI provides a pathway to sort out impactful research that could change the course of scientific development:
- The framework prototype identified 50 research papers predicted to rank in the top 5% of papers and make a significant impact in the future.
- In a blinded retrospective study, the framework correctly identified 19 of 20 seminal biotechnologies from the given time period.
- Weis and Jacobson tested the prototype’s performance and scalability on time-structured publication graphs drawn from 42 biotechnology-related journals, comprising 7.8 million individual nodes, 201 million relationships, and 3.8 billion calculated metrics spanning 1980 to 2019.
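The paper itself does not publish DELPHI’s code, but the idea of a time-structured publication graph can be illustrated with a minimal sketch: each citation edge carries the year it appeared, so any metric (here, a simple citation count) can be computed over the graph exactly as it existed at a given point in time. All paper IDs, years, and edges below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical time-structured citation graph: each (citing -> cited)
# edge is stamped with the year the citing paper appeared.
edges_by_year = {
    2015: [("p3", "p1")],
    2016: [("p4", "p1"), ("p4", "p2")],
    2017: [("p5", "p1"), ("p5", "p2"), ("p6", "p2")],
}

def citations_up_to(year):
    """Citation count per paper using only edges visible by `year`,
    i.e. the graph as it existed at that point in time."""
    counts = defaultdict(int)
    for y, edges in edges_by_year.items():
        if y <= year:
            for _citing, cited in edges:
                counts[cited] += 1
    return dict(counts)

print(citations_up_to(2016))  # {'p1': 2, 'p2': 1}
```

Keeping the graph time-sliced like this is what lets a model be trained on the past as it actually looked, rather than on hindsight-contaminated totals.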
DELPHI determines which research is likely to be of high impact within the early years of publication, significantly outperforming models trained solely on the latest publications. Notably, a considerable number of high-impact publications have small citation counts in their early years; these ‘hidden gems’ cannot be discovered using simple metrics, and this is where DELPHI comes to the rescue.
To date, no framework has combined this approach of learning from the past to identify and fund technology’s future potential. DELPHI instead uses a machine learning framework to analyse a range of features that are calculated over time and are highly influential for scientific work. The model is trained on metrics from a biotechnology-focused database over a five-year period beginning with the year of publication, and it can be utilized to create diversified, impact-optimized portfolios to assist in funding.
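The core mechanic described above can be sketched in a few lines: compute time-evolving features for each paper and score them to flag likely members of the future top 5%. The feature names and weights below are invented stand-ins; DELPHI’s actual framework learns from a far larger feature set rather than using fixed linear weights.

```python
# Hypothetical features per paper (all values invented for illustration):
# early citation velocity, a centrality proxy for the authors in the
# publication graph, and growth of the publishing venue.
papers = {
    "A": {"citation_velocity": 0.9, "author_centrality": 0.8, "venue_growth": 0.7},
    "B": {"citation_velocity": 0.2, "author_centrality": 0.3, "venue_growth": 0.4},
    "C": {"citation_velocity": 0.1, "author_centrality": 0.9, "venue_growth": 0.8},
}

# Fixed weights as a stand-in for a learned model.
weights = {"citation_velocity": 0.5, "author_centrality": 0.3, "venue_growth": 0.2}

def impact_score(features):
    """Linear proxy score for predicted future impact."""
    return sum(weights[k] * v for k, v in features.items())

# Rank papers by predicted impact; the top slice would be flagged early.
ranked = sorted(papers, key=lambda p: impact_score(papers[p]), reverse=True)
print(ranked)  # ['A', 'C', 'B']
```

Note how paper "C" outranks "B" despite low citation velocity, because its other signals are strong: this is the sketch-level analogue of surfacing a ‘hidden gem’ that raw citation counts would miss.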
The predictive power of DELPHI’s early-warning signal strengthens with time: using less than two years of data, DELPHI is 87 percent accurate, compared with 77 percent when using less than one year of data. With exposure to more data over the years, DELPHI can become still more accurate at identifying high-impact publications.
Since scientific innovation is a complex process, the DELPHI model was created in a time-structured manner. DELPHI uses these temporal dynamics to identify high-impact articles in biotechnology publications. Its inclusion of a diverse set of metrics allows it to act as an early-warning system for recent publications that will have a huge impact even while their citation counts are still low.
DELPHI makes it possible to support funding strategies that identify gaps and inefficiencies which could be filled by scientific opportunities with high potential impact. Besides the qualitative and quantitative diversification of research programmes, there are intriguing opportunities in market research. According to Weis and Jacobson, a large number of similar research programmes often impedes scientific progress. DELPHI can be used to assemble a diverse portfolio of research programmes with effective scientific impact, all of which have optimised risk-reward characteristics. For this new approach to grant allocation, risk could be assessed empirically, for example by comparing researchers’ publication records to date.
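The portfolio idea can be illustrated with a simple greedy selection: pick programmes with high predicted impact while penalising overlap with what is already in the portfolio. The programme names, scores, similarity values, and the penalty scheme below are all invented for illustration; the paper describes the goal (diversified, impact-optimized portfolios), not this particular algorithm.

```python
# Hypothetical predicted-impact scores for candidate research programmes.
programmes = {"gene_editing": 0.9, "protein_design": 0.8, "crispr_screens": 0.85}

# Hypothetical symmetric topic-overlap values in [0, 1].
similarity = {
    ("gene_editing", "crispr_screens"): 0.9,
    ("gene_editing", "protein_design"): 0.2,
    ("protein_design", "crispr_screens"): 0.3,
}

def sim(a, b):
    return similarity.get((a, b)) or similarity.get((b, a)) or 0.0

def build_portfolio(k, penalty=0.5):
    """Greedily choose k programmes, discounting each candidate's score
    by its worst overlap with programmes already chosen."""
    chosen = []
    while len(chosen) < k:
        best = max(
            (p for p in programmes if p not in chosen),
            key=lambda p: programmes[p]
            - penalty * max((sim(p, c) for c in chosen), default=0.0),
        )
        chosen.append(best)
    return chosen

print(build_portfolio(2))  # ['gene_editing', 'protein_design']
```

Even though crispr_screens has the second-highest raw score, its heavy overlap with gene_editing pushes the sketch toward protein_design instead, which is the diversification behaviour the article describes.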
As the researchers note, the findings in the paper are only a stepping stone towards machine-enhanced scientific study. DELPHI should therefore be seen as one part of a scientific research toolkit, to be employed in conjunction with conventional research and analysis rather than in place of it.