
Systematic reviews are built on the principles of rigor, transparency, and replicability. However, many current AI solutions do not meet these principles, and this risks a surge in unreliable, biased, and low-quality reviews.
At Cochrane, we are committed to addressing this challenge with an approach that is measured and responsible. Here, we set out some of the work we are doing to ensure AI is used responsibly in evidence synthesis.
Using AI to support our review authors
Within Cochrane, we have a history of implementing automation solutions in our review process. For example, we have been using machine learning to identify randomized controlled trials since 2016. And we are now investing in the integration of emerging technologies, like generative AI.
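Cochrane's production RCT classifier is far more sophisticated than anything that fits in a few lines, but as a minimal sketch of the underlying idea, the snippet below trains a simple text classifier to flag likely randomized controlled trial abstracts. The training examples, labels, and model choice are hypothetical illustrations, not Cochrane's actual data or pipeline.

```python
# Minimal sketch of ML-based RCT identification (illustrative only;
# the abstracts, labels, and model are hypothetical, not Cochrane's).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled abstracts: 1 = randomized controlled trial, 0 = not
abstracts = [
    "Participants were randomly assigned to intervention or placebo...",
    "We conducted a retrospective cohort study using hospital records...",
    "A double-blind randomized trial comparing drug A with drug B...",
    "This narrative review summarizes recent qualitative findings...",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a simple linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(abstracts, labels)

new_abstract = ["Patients were randomized 1:1 to the new therapy or usual care."]
print(model.predict_proba(new_abstract))  # [P(not RCT), P(RCT)]
```

A classifier used this way would typically be tuned for very high recall, so that borderline records are passed to human screeners rather than silently excluded.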
We are primed to use AI and automation, thanks to our wealth of high-quality, structured data from systematic reviews and included studies. Recent innovations include:
- dynamic analysis reporting in RevMan, which allows authors to insert live results directly into their reviews while they’re writing and updating them
- a new feature in CENTRAL, our clinical trials database, that flags retracted publications to authors (the sketch after this list illustrates the basic idea)
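To make the retraction-flagging idea concrete, here is a deliberately simple sketch: cross-checking the DOIs of included studies against a maintained list of retracted DOIs. The DOIs, study IDs, and data structures below are invented for illustration and do not reflect CENTRAL's actual implementation.

```python
# Illustrative sketch only: not CENTRAL's implementation. All DOIs and
# study IDs below are hypothetical.
RETRACTED_DOIS = {
    "10.1000/example.123",  # hypothetical retracted trial
    "10.1000/example.456",
}

included_studies = [
    {"id": "STD-001", "doi": "10.1000/example.123"},
    {"id": "STD-002", "doi": "10.1000/example.789"},
]

for study in included_studies:
    if study["doi"] in RETRACTED_DOIS:
        print(f"WARNING: {study['id']} ({study['doi']}) has been retracted.")
```

A real service would keep the retraction list current from authoritative sources and surface the warning inside the authoring workflow rather than on a console.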
In the future, we plan to increase the proportion of reviews supported by Cochrane’s Evidence Pipeline, a service that combines automation and crowd verification to help authors identify relevant studies. We are also leveraging Patient/Population, Intervention, Comparison, Outcome (PICO) annotations to inform decisions about proposals for new intervention reviews, as sketched below. Alongside these technical advances, we are developing further guidance and training to improve AI literacy across our organization.
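As a rough illustration of what a structured PICO annotation might look like (the field names and example values here are ours, not Cochrane's internal schema):

```python
# Sketch of a structured PICO annotation; field names and values are
# hypothetical, not Cochrane's internal schema.
from dataclasses import dataclass

@dataclass
class PicoAnnotation:
    population: str
    intervention: str
    comparison: str
    outcome: str

annotation = PicoAnnotation(
    population="adults with type 2 diabetes",
    intervention="structured exercise programme",
    comparison="usual care",
    outcome="HbA1c at 12 months",
)
print(annotation)
```

Annotations in this shape can be compared against a proposed review’s PICO to spot overlap with existing or ongoing reviews.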
Helping to develop guidance
Colleagues in Cochrane are also co-leading RAISE (Responsible AI in Evidence Synthesis), an international initiative to standardize recommendations for responsible AI use in evidence synthesis. In June 2025, RAISE released an updated three-paper collection:
- RAISE 1: Recommendations for practice across roles in the evidence synthesis ecosystem to enhance collaboration and communication
- RAISE 2: Guidance on building and evaluating AI tools
- RAISE 3: Guidance on selecting and using AI tools
Our authors can use AI provided it upholds the principles of research integrity. Any use of AI or automation must be disclosed and subject to human oversight, and authors remain accountable for the final content. Authors must also justify why they’re using AI and demonstrate that it will not compromise methodological rigor or integrity.
A recent webinar, Recommendations and guidance on responsible AI use in evidence synthesis, provides more information about RAISE and how Cochrane authors should use it.
Engaging with AI developers
RAISE includes frameworks that guide AI tool developers on how to approach evaluations and clarify what information they should make public for users. Information on the system itself, clear terms and conditions, evaluations, and details of the strengths, limitations, potential biases, and generalizability are all expected to be transparent and publicly available.
Without this, organizations and authors can’t make informed decisions about using an AI tool. We are collaborating with some AI tool developers, such as Covidence, to understand how this could work in practice.
Collaborating with other organizations
Cochrane is part of the AI Methods Group, in collaboration with Campbell, JBI, and the Collaboration for Environmental Evidence. The group is working to align the four organizations’ approaches to implementing responsible AI, and the first step is developing a shared position statement on AI use in systematic reviews. We are also listening to our communities about how to improve AI literacy and support authors and editors.
We are collaborating in the Wellcome-funded DESTinY project, which focuses on developing AI-driven tools for evidence synthesis in climate and health. Though this work is sector-specific, it serves as a model for exploring how to engage interest-holders, communicate effectively, and embed AI tools across our global network.
Finally, Cochrane is co-leading the Evidence Synthesis Infrastructure Collaborative (ESIC) alongside Campbell and JBI. Also funded by Wellcome, this initiative has developed a roadmap for a global, user-centered evidence synthesis infrastructure, which includes costed solutions for responsible and safe AI. This roadmap is currently under consultation before it’s finalized in July 2025.
Addressing areas of uncertainty
A common question we hear is, “If I use AI in my systematic review, what level of ‘correctness’ should I aim for?” Currently, there is no consensus.
We tend to assume humans are 100% accurate, but that is rarely the case: accuracy varies with expertise, experience, and fatigue. This raises important questions. What level of error is acceptable when using AI? And does it change for different types of evidence synthesis?
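To make the trade-off tangible, a small back-of-the-envelope calculation (all numbers invented for illustration) shows what different screening recall levels would mean for a review:

```python
# Hypothetical illustration: studies missed at a given screening recall.
relevant_records = 120  # invented: truly relevant records in the search

for recall in (1.00, 0.99, 0.95, 0.90):
    missed = round(relevant_records * (1 - recall))
    print(f"recall {recall:.0%}: ~{missed} relevant studies missed")
```

Whether missing one study or twelve is acceptable plausibly depends on the type of synthesis, which is exactly the open question.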
As part of the DESTinY project, a survey has been launched to understand community expectations. We encourage anyone with an interest in this area to complete it by 2 July. The responses will inform future work, including how we build and evaluate the next generation of AI-driven evidence synthesis tools.
In addition, Cochrane Evidence Synthesis and Methods has published the first articles in its AI in evidence synthesis special issue, bringing together papers that bridge the gap between demonstrating AI’s potential and implementing it responsibly in evidence synthesis.
Ultimately, our goal is to ensure AI enhances, rather than replaces, human judgment in the review process. By shaping how AI is used within Cochrane and across the sector, we are committed to making Cochrane evidence more timely, usable, and equitable.