# I just peer reviewed a paper on healthcare AI, and I wasn’t allowed to use AI.

> I just completed my first peer review — on a healthcare AI paper — under journal rules that explicitly forbid AI assistance. Peer review is slow, strained, and imperfect. And it now actively forbids the tools that could make it better. Will this model survive the next decade?

URL: https://www.ch-healthtech.com/insights/i-just-peer-reviewed-paper-healthcare-ai-i-wasnt-allowed-use-ai
Markdown: https://www.ch-healthtech.com/insights/i-just-peer-reviewed-paper-healthcare-ai-i-wasnt-allowed-use-ai.md
Published: 2026-01-20
Updated: 2026-05-06
Author: Christian Hein
Tags: technology/artificial-intelligence, technology/digital-health, function/regulatory-compliance, function/innovation-management, leadership/transformation-leadership, geography/europe

---


## TL;DR

I just peer reviewed a paper on healthcare AI under rules that explicitly forbid AI assistance. The irony is hard to miss. Peer review is slow (multiple model generations between submission and publication), strained (unpaid volunteers, shrinking pool, exploding submissions), and imperfect (significant errors get through, the reproducibility crisis persists). And now it actively forbids the tools that could make it faster, more thorough, and more rigorous. The field of healthcare AI moves at extraordinary speed. The process meant to validate it moves at 1990s pace.

I just peer reviewed a paper on healthcare AI, and I wasn’t allowed to use AI.

Reviewing cutting-edge research in a field I work in daily felt like a milestone. I left academia after my biotech degree to go into business, so I had never experienced this process from the inside.

But one thing surprised me: AI assistance is explicitly forbidden in the peer review process, under the terms of the large, well-respected publishing house.

I understand the concerns. Confidentiality of unpublished work. Preserving independent human judgment. Fear of outsourcing critical thinking (or potentially an entire process) to an algorithm.

And yet.

The paper I reviewed was about AI in healthcare. I was forbidden from using AI to evaluate it more rigorously.

This feels emblematic of a deeper problem.

Peer review is slow. Months from submission to publication is common. In AI, that’s multiple model generations. So a lot of papers end up benchmarking GPT-3.5 or early GPT-4, not because the authors are careless, but because the review cycle makes “current” results expire. We keep validating yesterday’s tools and underestimating what’s already possible.

Peer review is strained. Reviewers are unpaid volunteers squeezed between day jobs and growing review requests. The pool is shrinking while submissions explode.

Peer review is imperfect. Studies that planted deliberate errors in manuscripts found reviewers caught only a fraction of them. The reproducibility crisis persists. We treat the process as sacred, but the outcomes suggest otherwise.

And now, peer review actively forbids the tools that could make it faster, more thorough, and more rigorous.

I’m not arguing we hand the keys to an AI and walk away. Human judgment matters. Domain expertise matters. But a blanket prohibition on AI assistance in 2026?

The field of healthcare AI moves at extraordinary speed. The process meant to validate it moves at 1990s pace.

Will this model survive the next decade? Or are we watching an institution resist the very innovation it’s supposed to evaluate?

Curious what others think, especially those who’ve been in the peer review system longer than I have.

## Key takeaways

- Peer review in 2026 explicitly forbids AI assistance, even when the paper under review is about AI in healthcare.
- The concerns are real: confidentiality, independent judgment, the risk of outsourcing critical thinking. But a blanket ban is the wrong response.
- Peer review is slow. Months from submission to publication means most AI papers end up benchmarking models that are already two generations old.
- Peer review is strained. Reviewers are unpaid volunteers, the pool is shrinking, submissions are exploding.
- Peer review is imperfect. Significant errors get through. The reproducibility crisis is well-documented.
- Healthcare AI moves at extraordinary speed. The validation process moves at 1990s pace. This gap is unsustainable.

