
**TL;DR:** `PR-Reviewer.LangChain` is a TIER 4 automated code review tool specialized for LangChain pipelines: it catches chain-of-thought wiring bugs, RAG retrieval failures, prompt injection vulnerabilities, and vector store misconfigurations that generic SAST scanners miss.
Install:

```bash
mrt install "PR-Reviewer.LangChain"
```
Create `langchain-review.config.json`:
```json
{
  "chain_types": ["retrieval_qa", "conversational_retrieval", "agent"],
  "embedding_model": "text-embedding-3-large",
  "vector_store": "pinecone",
  "scan_prompt_injection": true,
  "min_similarity_threshold": 0.75,
  "max_chunk_size": 512
}
```
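For orientation, here is a minimal sketch of the LangChain objects these settings correspond to in the code being reviewed (assuming the `langchain-openai`, `langchain-pinecone`, and `langchain-text-splitters` packages; `index_name="docs"`, `chunk_overlap=64`, and `k=4` are illustrative placeholders, not config keys):

```python
# Illustrative only: the LangChain constructs each config key maps onto.
# Credentials (OPENAI_API_KEY, PINECONE_API_KEY) are read from the environment.
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore
from langchain_text_splitters import RecursiveCharacterTextSplitter

# "max_chunk_size": 512 -- the splitter setting the reviewer checks against
splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=64)

# "embedding_model": "text-embedding-3-large"
embeddings = OpenAIEmbeddings(model="text-embedding-3-large")

# "vector_store": "pinecone" (index_name is a placeholder)
store = PineconeVectorStore(index_name="docs", embedding=embeddings)

# "min_similarity_threshold": 0.75 -- retrievals scoring below this are flagged
retriever = store.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.75, "k": 4},
)
```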
Run the review:

```bash
claude -- blueprint pr-reviewer-langchain --target ./src/chains --output review-report.json
```
Sample output:
```json
{
  "files_scanned": 23,
  "critical_findings": 1,
  "warnings": 4,
  "findings": [
    {
      "type": "prompt_injection",
      "file": "src/chains/qa_chain.py",
      "line": 47,
      "severity": "critical",
      "message": "User input directly concatenated into system prompt without sanitization"
    }
  ]
}
```
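To make that critical finding concrete, here is a hypothetical sketch of the pattern being flagged and a safer alternative; it is not the actual contents of `src/chains/qa_chain.py`:

```python
# Hypothetical example of the flagged pattern, not the real qa_chain.py.
# Concatenating untrusted input into the system prompt lets a user (or a
# poisoned retrieved document) override the instructions.
def build_prompt_unsafe(user_question: str) -> str:
    return "You are a helpful QA assistant.\n" + user_question  # would be flagged

# Safer shape: keep trusted instructions and untrusted input in separate
# message roles so user text is never spliced into the system message.
from langchain_core.prompts import ChatPromptTemplate

safe_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful QA assistant. Answer only from the provided context."),
    ("human", "{question}"),
])
```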
The review prompt:

```text
Review this LangChain project for:

1. Prompt injection vulnerabilities (indirect injection via retrieved context)
2. Retrieval chain misconfigurations (chunk size, top_k, similarity threshold)
3. Chain-of-thought wiring issues (missing output parsers, broken memory)
4. Vector store configuration mismatches (embedding model vs. vector DB)

Output as structured JSON with severity and line numbers.
```
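Check 4 is easy to reproduce by hand. A minimal sketch of the mismatch it looks for, assuming `langchain-openai` (the 1536-dim index value is a placeholder; read the real one from your index metadata):

```python
# text-embedding-3-large emits 3072-dimensional vectors by default, so an
# index created for a 1536-dim model (e.g. text-embedding-ada-002) will
# reject upserts at runtime. Probe the model and compare dimensions.
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
index_dimension = 1536  # placeholder: fetch from your Pinecone index config

probe = embeddings.embed_query("dimension probe")
if len(probe) != index_dimension:
    raise ValueError(
        f"embedding dim {len(probe)} != index dim {index_dimension}; "
        "recreate the index or pin a matching embedding model"
    )
```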
| Pros | Cons |
|---|---|
| LangChain-specific rules catch what generic SAST misses | Only works with LangChain, not custom LLM apps |
| Prompt injection detection is context-aware, not just pattern matching | Requires a recognizable Python project structure |
| RAG config validation prevents retrieval drift in production | Chunk-size rules are model-dependent and need tuning per embedding model |
| TIER 4 means deep chain orchestration understanding | No Go/Rust LangChain support yet (early-stage ecosystem) |
| Generates per-file diff summaries for PR comments | GitHub PR review integration requires webhook setup (see the sketch below) |
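For the webhook con in the table, one way to post the report back to a pull request, sketched against the standard GitHub REST API rather than any documented integration of this tool (`OWNER`, `REPO`, and `PR_NUMBER` are placeholders):

```python
# Minimal sketch: turn review-report.json into a single PR comment.
# Assumes a GITHUB_TOKEN with repo scope is available in the environment.
import json
import os

import requests

with open("review-report.json") as f:
    report = json.load(f)

body = "\n".join(
    f"- **{f['severity']}** `{f['file']}:{f['line']}`: {f['message']}"
    for f in report["findings"]
) or "No findings."

# PR comments use the issues endpoint in the GitHub REST API.
requests.post(
    "https://api.github.com/repos/OWNER/REPO/issues/PR_NUMBER/comments",
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    json={"body": body},
    timeout=30,
).raise_for_status()
```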