Dr. Aisha Rahman refreshes her systematic review dashboard Monday morning to find three new RCTs published over the weekend. Her immunotherapy meta-analysis, published six months ago, might already be outdated. Traditional systematic reviews take 12-18 months to complete, yet in fast-moving fields like oncology, 70% of clinical practice guidelines lag behind emerging evidence by two years. Living systematic reviews promise continuous updates, but they demand resources most teams simply don't have.
The evidence synthesis community stands at a crossroads. Large language models can now extract data with 96% accuracy for straightforward variables, yet they struggle with precisely the elements meta-analysts care most about: study design nuances and causal inference details. A 2025 systematic review of the evolution of automated meta-analysis found that while 57% of tools address data extraction, only 17% tackle advanced synthesis stages, and just 2% attempt full-process automation.
At EviSynth, we've spent the past year testing these tools in real systematic reviews. This post presents what actually works, where AI still fails, and the practical path forward for living evidence synthesis.