Executive interview: Pros and cons of AI in academic research
We speak to Jill Luber, chief technology officer at academic publisher Elsevier, about how large language models can support researchers
“AI [artificial intelligence] can create a 30-page research paper for you out of thin air based on fake science,” warns Jill Luber, chief technology officer at Elsevier. As a publisher of scientific and medical journals and papers, she says: “We’ve seen a major increase in ‘fabricated science’, and it is our job to protect the publishing world from that.”
It is a topic Luber recently spoke about at the London Book Fair. “It is very worrying,” she says.
“Creating and publishing science that’s not real clearly has implications because there’s a level of trust with what you read in scientific journals, and if we start to erode that trust, I think that’s when we’re in real trouble.”
Elsevier is around 145 years old, and over that time, it has experienced three major technological disrupters: the printing press, digital content via the World Wide Web, and now AI.
Putting the fake science warning to one side, Luber believes AI has a significant role to play in helping researchers make sense of the vast volumes of information contained in academic research papers. For Luber, one of the major benefits of AI tools is their ability to read across the full text of the literature Elsevier holds.
“AI can bubble up concepts and link together different articles,” she says.
Without AI’s help, this would take a human many hours, and may be impossible altogether.
And with regards to fake science, she says: “AI can help us find and stop fake publications and fake articles before they get into our journals.”
AI gives researchers the ability to dig through the content, understanding information it contains, finding connections and surfacing existing concepts. Just as significantly, Luber says, it also reveals what she calls “white space”, the information that is not covered in the research papers.
Before AI, researchers used keyword searches to surface relevant pieces of research, as Luber explains: “Within the digital world, we did have very strong search algorithms that we could index entire sets of data. You would type in keywords for concepts you’re looking for and the search engine would look across all of the literature based on those keywords, and then surface up the information as a list of hits.”
AI provides researchers with the ability to move beyond keyword searches and instead search whole concepts and neighbouring concepts – not just keywords.
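The difference Luber describes can be illustrated with a minimal sketch. This is not Elsevier's implementation: the corpus, the hand-made concept map (a stand-in for learned embeddings) and all function names are invented for illustration. A keyword search only matches exact terms, while a concept search also surfaces documents that share related, neighbouring concepts:

```python
import math

# Toy corpus standing in for article abstracts (illustrative only).
docs = {
    "A": "gene therapy delivery using viral vectors",
    "B": "CRISPR genome editing efficiency in vivo",
    "C": "statistical methods for survey analysis",
}

def keyword_search(query, docs):
    """Classic keyword search: rank documents by exact word overlap."""
    q = set(query.lower().split())
    hits = {k: len(q & set(v.lower().split())) for k, v in docs.items()}
    return [k for k, n in sorted(hits.items(), key=lambda kv: -kv[1]) if n > 0]

# Hand-made 'concept' map standing in for learned embeddings:
# related terms share a dimension, so neighbouring concepts match too.
CONCEPTS = {
    "gene": "genetics", "genome": "genetics", "crispr": "genetics",
    "therapy": "medicine", "vivo": "medicine",
    "statistical": "statistics", "survey": "statistics",
}

def concept_vector(text):
    """Map each known word to its concept and count occurrences."""
    vec = {}
    for w in text.lower().split():
        c = CONCEPTS.get(w)
        if c:
            vec[c] = vec.get(c, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse concept vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def concept_search(query, docs):
    """Concept search: rank by similarity of concepts, not exact words."""
    qv = concept_vector(query)
    scores = {k: cosine(qv, concept_vector(v)) for k, v in docs.items()}
    return [k for k, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0]

print(keyword_search("genome editing", docs))  # only the exact-word match
print(concept_search("genome editing", docs))  # also the related gene-therapy paper
```

The keyword query finds only the document containing "genome" and "editing", while the concept query also surfaces the gene therapy abstract, which shares no query words but overlaps in concept space, a rough analogue of searching "whole concepts and neighbouring concepts".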
This helps researchers assess the veracity of any research articles surfaced by the AI engine. “What’s really important in science is reproducibility,” she says. “We have the ability now to look through all our content and find the research that has been reproduced.”
There is a higher level of trust associated with those articles where the research is reproducible, versus the research that no one has been able to reproduce.
While there are clearly plenty of benefits to using AI in research, Luber notes there is a significant risk that a large language model will hallucinate and return erroneous information, along with the ever-present danger of bias. Both directly affect the quality and integrity of research done with AI tools. There is also a very real risk that researchers will simply trust the output an AI tool produces, rather than investigating further the insights now so easily presented to them.
On researchers using AI tools to analyse legitimate research, she says: “If you’re using the model just to ask the question, there is a real risk of hallucinations. But we are seeing some models trained on specific science and health domains, and they are getting better at answering domain-specific questions.”
Like many people working in AI, Luber recognises the importance of human oversight, which she sees as analogous to the peer review process long established in academic publishing.
Elsevier’s primary AI tool is LeapSpace, which is refined through human evaluation: different domain experts test the quality and accuracy of the outputs the models generate in response to the questions asked. Luber says the evaluation looks at whether the correct information is being captured and, significantly, whether the output is actually harmful. “We use human evaluation to continue to help us tweak the LLMs and the products that use them,” she adds.
Vibe coding
Within the technology function at Elsevier, Luber says AI is used in software development. However, before any code is released into production, there is a human review.
Discussing how Elsevier is using AI to support software development, she says: “It does make it easier, but I’m also finding new needs for my technology team. There’s this new concept of AI engineer emerging.”
She says the business is encouraging vibe coding, enabling people with no technical background to create dashboards, web pages and applications.
However, the software development team has an important role to play in vibe coding. “What we’re finding is you can only get so far, then you’re going to need the intervention of the technology team to harden some of the processes and dashboards.”
As an example, she says the finance team creates lots of reports on an ongoing basis. “With AI tools, they can build a new automated agentic workflow themselves that can create these reports. However, these AI agents need to be more productionised: it’s safe; it’s secure; it has the right access; it’s up and running; and we understand the cost. That’s when you really need an engineer to intervene and help bring it up to production-level quality.”
As a result, Elsevier now has software engineers dedicated to the finance function.
“This is a new need for my team to support some of the areas of the business that are taking advantage of AI to make their jobs better,” adds Luber. “But they still need our support to do that.”
While vibe coding does change the number of software developers needed, Luber says software engineers are now being used in parts of the business that previously did not require their expertise.
“Vibe coding is actually really cool and it helps me do my job better,” she says. “Even if you don’t know how to write the code, you can still create the processes to use it.”
