AI/ML · 2026-04-27

Peer Reviewer: Simulate Academic Peer Review With AI Before You Submit

Submitting to Nature or ICML without running your paper past a peer reviewer first is like deploying to production without testing. Peer Reviewer constructs reviewer personas from Zotero libraries and gives your paper a credible pre-submission review.
**Install:** `mrt install academic`

TL;DR

Peer Reviewer uses a multi-agent system — a Deconstructor, Evaluator, and Synthesizer — to simulate the academic peer review process. It draws on a Zotero library of reference papers to construct credible reviewer personas before you submit to any venue.

The 10-Second Pitch

  • **Multi-agent review pipeline** — three specialized agents: Deconstructor (finds weaknesses), Evaluator (scores against venue standards), Synthesizer (writes the review)
  • **Zotero integration** — reads your library to construct reviewer personas with relevant expertise
  • **Venue-specific review criteria** — configurable against any conference or journal's review rubric
  • **Gap analysis** — identifies missing citations, related work gaps, and methodological concerns
  • **Rebuttal drafting** — generates anticipated reviewer objections and suggested responses

Setup in 3 Steps

1. **Install and connect Zotero:** Set your Zotero API key and library ID in `~/.peer-reviewer/config.yaml`.
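A minimal `config.yaml` might look like the following. The exact key names are assumptions for illustration; check the tool's documentation for the real schema. (Zotero distinguishes between personal and group libraries, so a library-type field is likely needed.)

```yaml
zotero:
  api_key: YOUR_ZOTERO_API_KEY   # from zotero.org account settings
  library_id: "1234567"          # your numeric user or group library ID
  library_type: user             # or "group"
```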

2. **Configure the venue:**

```yaml
venue: ICML 2026
review_criteria:
  - novelty
  - technical soundness
  - clarity
  - reproducibility
```

3. **Run the review:**

```shell
peer-reviewer review --paper ./paper.pdf --venue ICML --output review-draft.md
```

**Prompt to test it:**

Review this paper draft against ICML standards and give me a structured review with strengths, weaknesses, and specific suggestions.

How the Multi-Agent System Works

The three agents work in sequence:

1. **Deconstructor Agent** — reads the paper and your Zotero library, identifies the top 5 most relevant reference papers, and extracts what they do better or differently.

2. **Evaluator Agent** — scores the paper against venue criteria using a rubric. Identifies which claims are well-supported vs. speculative.

3. **Synthesizer Agent** — produces a formatted review that reads like a real reviewer report.
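The three-stage handoff above can be sketched in a few lines of Python. Everything here — class names, fields, and the placeholder logic — is an illustrative assumption, not the tool's actual API; in the real pipeline each stage would be an LLM call over the paper and the Zotero library.

```python
from dataclasses import dataclass

@dataclass
class Deconstruction:
    relevant_refs: list    # top reference papers pulled from the Zotero library
    weaknesses: list       # gaps relative to those references

@dataclass
class Evaluation:
    scores: dict           # criterion -> rubric score
    unsupported_claims: list

def deconstruct(paper: str, library: list) -> Deconstruction:
    # Placeholder: a real Deconstructor would rank library entries by relevance.
    refs = library[:5]
    return Deconstruction(relevant_refs=refs,
                          weaknesses=[f"compare against {r}" for r in refs])

def evaluate(paper: str, criteria: list) -> Evaluation:
    # Placeholder: a real Evaluator would score each criterion against the rubric.
    return Evaluation(scores={c: 0 for c in criteria}, unsupported_claims=[])

def synthesize(d: Deconstruction, e: Evaluation) -> str:
    # The Synthesizer formats the earlier stages' output as a reviewer report.
    lines = ["## Review"]
    lines += [f"- Weakness: {w}" for w in d.weaknesses]
    lines += [f"- {c}: {s}/10" for c, s in e.scores.items()]
    return "\n".join(lines)

review = synthesize(
    deconstruct("paper.pdf", ["Smith 2024", "Lee 2025"]),
    evaluate("paper.pdf", ["novelty", "clarity"]),
)
```

The key design point is that each stage consumes only the previous stage's structured output, so a weak Deconstruction (e.g. a thin Zotero library) degrades everything downstream.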

Pros / Cons

| Pros | Cons |
| --- | --- |
| Surfaces weaknesses before reviewers do | LLM-written reviews lack expert human depth |
| Multi-agent architecture is credible | Zotero library quality determines reviewer persona accuracy |
| Configurable against any venue | Some fabrication risk; verify citation claims independently |

Verdict

This is a genuinely useful pre-submission tool for academic researchers. No one wants their paper rejected for fixable clarity issues or missing citations when a weekend of AI-assisted review would have caught them. Use it as a first pass before sending to colleagues — not as a substitute for human expert review.