Yesterday, the leading global evidence organizations (Cochrane, the Campbell Collaboration, JBI, and the Collaboration for Environmental Evidence) issued a comprehensive position statement on the responsible use of artificial intelligence (AI) in evidence synthesis. This isn't just another guideline; it's a blueprint for how we, as builders and users of AI tools, must navigate the double-edged sword of automation in systematic reviews (SRs) and meta-analyses. In an era where AI promises to cut screening times tenfold or more, the statement cuts through the hype: "AI and automation in evidence synthesis should be used with human oversight. Any use of AI or automation that makes or suggests judgements should be fully and transparently reported in the evidence synthesis report."
As the team behind EviSynth, an open-source platform designed by researchers for researchers, we've been living this ethos since day one. Our AI Peer Review feature doesn't just suggest inclusions or exclusions based on your PICO criteria; it logs every inference, every override, and every team vote in a living audit trail. But with this new mandate echoing across the field, from Cochrane's call for "clear expectations for evidence synthesists, including transparent reporting [and] assuming responsibility" to JBI's emphasis on methodological soundness, we felt compelled to unpack how EviSynth already aligns, and where we're headed next. Because in evidence synthesis, trust isn't earned through speed alone; it's forged in the unblinking light of reproducibility.
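To make the audit-trail idea concrete, here is a minimal sketch of what an append-only log of AI suggestions, human overrides, and team votes could look like. This is illustrative Python only, not EviSynth's actual schema: the `AuditEvent` fields, the `append_event` helper, and the `audit.jsonl` path are all hypothetical.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditEvent:
    """One append-only entry in a screening audit trail (hypothetical schema)."""
    record_id: str   # the study/citation being screened
    actor: str       # "ai" or a reviewer's user ID
    action: str      # e.g. "suggest_exclude", "override", "vote"
    decision: str    # "include" | "exclude" | "unsure"
    rationale: str   # model inference or human justification
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_event(log_path: str, event: AuditEvent) -> None:
    """Append one event as a JSON line; existing entries are never rewritten."""
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(event)) + "\n")

# Example: the AI suggests exclusion against the PICO criteria,
# then a human reviewer overrides it, and both steps are preserved.
append_event("audit.jsonl", AuditEvent(
    "study-0042", "ai", "suggest_exclude", "exclude",
    "Population does not match PICO: pediatric cohort only."))
append_event("audit.jsonl", AuditEvent(
    "study-0042", "reviewer-jane", "override", "include",
    "Mixed-age cohort; pediatric subgroup reported separately."))
```

The design choice that matters here is the append-only log: because no entry is ever edited or deleted, every AI suggestion and every human judgement can be replayed later, which is exactly the kind of transparent reporting the position statement asks for.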