A Claude tool that scores your text for AI-detection risk and tells you, line by line, what to change. Works in claude.ai web, the Claude desktop app, and Claude for Chrome. Free, open source, no signup.
This is the connector URL. You'll paste it into Claude in the next step.
All three options give you the same five tools in Claude. The first is the easiest; most people should pick it.
Paste a URL. Best for everyone.
humanizer_* tools become available in any chat. Free plan note: claude.ai Free is limited to one custom connector. If you already have one, you'd need to remove it or upgrade to add this one.
For users at the 1-connector free-tier limit.
Download the skill, then upload it to Claude:
humanizer-mcp.zip · 5 KB. The skill embeds the same banned-word list and scoring rubric, but Claude estimates the math instead of computing it. Lower precision; zero infrastructure.
For developers: text never leaves your machine.
Claude Code:
claude mcp add humanizer -- uvx humanizer-mcp
Edit claude_desktop_config.json and add this under mcpServers:
{
"mcpServers": {
"humanizer": {
"command": "uvx",
"args": ["humanizer-mcp"]
}
}
}
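A stray comma or unquoted key in `claude_desktop_config.json` will silently break the connection. A quick sanity check is to parse the snippet with Python's standard `json` module before restarting Claude Desktop (this is a generic validation trick, not part of the tool itself):

```python
import json

# Paste the edited config here; json.loads raises ValueError on any syntax slip.
config = json.loads("""
{
  "mcpServers": {
    "humanizer": {
      "command": "uvx",
      "args": ["humanizer-mcp"]
    }
  }
}
""")
assert config["mcpServers"]["humanizer"]["command"] == "uvx"
print("config OK")
```

You can also run `python3 -m json.tool` directly on the config file to get a pointer to the exact offending character.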
For a step-by-step Windows walkthrough with screenshots and troubleshooting, see the 3-page Windows guide PDF.
In any Claude chat, paste a paragraph and ask in plain language. The skill activates automatically when you mention AI detection.
You (works with either install path)
Claude (using MCP / Connector)
Risk Score: 50/100 (MEDIUM)
For 46 words, that's a hit rate of one flagged term every ~5 words, which is heavy. The math says MEDIUM, but the texture is unmistakably AI.
AI vocabulary (9 hits in 46 words): landscape, leverage, multifaceted, navigate, paradigm, foster, comprehensive, streamline, bolster. Almost a full bingo card.
AI phrases (2 hits): "in today's rapidly evolving" and "it's important to note", both textbook GPT throat-clearing.
Structural issues: burstiness 0.293 (sentences cluster around 9–19 words), no first-person voice, zero stance. The prose asserts nothing; it just gestures at the idea that companies should do business things.
A humanized version, for reference:
Markets shift fast, and most companies are still running on tools built for the last decade. The winners aren't the ones who buy more software; they're the ones who actually rewire how their teams work. That's the real problem.
Three sentences, 36 words, takes a position, drops every flagged term. That would score in the single digits.
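The burstiness figure above measures how much sentence lengths vary. The server's exact formula isn't shown in this README, but a common definition is the coefficient of variation (standard deviation over mean) of sentence word counts; the sketch below uses that assumption, so its numbers won't exactly match the tool's:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths. An assumed
    stand-in for the server's metric, not its actual formula."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pstdev(lengths) / statistics.mean(lengths)

sample = ("Markets shift fast, and most companies are still running on "
          "tools built for the last decade. The winners aren't the ones "
          "who buy more software. That's the real problem.")
print(round(burstiness(sample), 3))
```

Low values mean sentences are all roughly the same length, a classic AI tell; varied human prose scores higher.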
Note: The MCP returns structured tool output (a Python dict). Claude reformats it into prose with bold headings, opinionated commentary, and often a humanized rewrite at the end. The score itself is deterministic: paste the same paragraph and you'll get exactly 50 every time. The wrapping prose varies.
Claude (using Skill)
Note: The Skill is markdown-only; its output contract tells Claude to produce this exact structured shape every time. Less conversational, more scannable. The score is estimated rather than computed (Claude eyeballs the math instead of running it), so the number can wiggle ±5 across runs.
Note: This is a real, deterministic 50/100: paste the same paragraph and you'll get the same score (the math is fixed). The exact prose Claude wraps around it varies, because Claude reformats the structured tool output into conversational language. If you'd rather get terse structured output, ask for it: "score this and give me a bulleted breakdown."
Other prompts that work:
The Connector and Local options give you the same deterministic Python-computed scoring. The Skill is a lightweight prompt-only version.
| Capability | Connector / Local | Skill |
|---|---|---|
| Banned-word list | ✓ embedded in code | ✓ embedded in markdown |
| Phrase pattern detection | ✓ regex | ~ Claude eyeballs it |
| 0–100 risk score | ✓ deterministic | ~ estimated |
| Burstiness math | ✓ computed | ~ estimated |
| Contraction ratio | ✓ computed | ~ estimated |
| Em-dash count | ✓ exact | ~ approximate |
| Same input → same output | ✓ | ✗ varies by run |
| Default output style | ~ prose (Claude reformats) | ✓ strict structured |
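The Connector's "embedded in code" vocabulary scan can be sketched as a simple regex pass. The word list below is a small excerpt of terms named elsewhere in this README, not the full embedded list, and the function name is illustrative:

```python
import re

# Excerpt of flagged vocabulary mentioned in this README;
# the real embedded list is much longer.
BANNED = ["delve", "crucial", "leverage", "myriad", "landscape",
          "paradigm", "foster", "streamline", "bolster"]
PATTERN = re.compile(r"\b(" + "|".join(BANNED) + r")\b", re.IGNORECASE)

def vocab_hits(text: str) -> list[str]:
    """Return every flagged term found in the text, lowercased."""
    return [m.group(0).lower() for m in PATTERN.finditer(text)]

print(vocab_hits("We leverage a multifaceted paradigm to foster growth."))
# → ['leverage', 'paradigm', 'foster']
```

Because this is plain pattern matching, the Connector and Local paths always return identical hits for identical input, which is exactly the determinism the table claims.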
The 0–100 score combines eight signals, each weighted by how predictive it is of AI-generated text:
They include banned vocabulary hits (delve, crucial, leverage, myriad…) and AI phrase patterns (it's important to note, in the ever-evolving…), alongside structural signals such as burstiness, contraction ratio, and em-dash count. Each signal adds to the score independently; the total is clamped to 100 and bucketed into LOW (≤ 20), MEDIUM (21–50), or HIGH (51+).
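The clamping and bucketing step is mechanical and can be sketched directly from the thresholds above (the per-signal weights themselves live in the server code and aren't reproduced here):

```python
def bucket(score: float) -> tuple[int, str]:
    """Clamp a raw signal sum to 0-100 and bucket it
    per the LOW / MEDIUM / HIGH thresholds above."""
    s = max(0, min(100, round(score)))
    if s <= 20:
        return s, "LOW"
    if s <= 50:
        return s, "MEDIUM"
    return s, "HIGH"

print(bucket(50))   # → (50, 'MEDIUM'), the worked example above
print(bucket(130))  # → (100, 'HIGH'), raw sums past 100 are clamped
```

Note the boundary placement: 50 is still MEDIUM, and 51 is the first HIGH score.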
The server doesn't rewrite; it diagnoses and prescribes. The LLM driving your chat does the rewrite. That's the point: a planner, not a black-box laundering service.
Yes. The Connector path works on every plan, including Free, with one caveat: Free is limited to a single custom connector. If you're already using yours, the Skill option doesn't count against that limit.
No. The MCP server is stateless and writes nothing to disk or any external service. The hosted instance at humanizer-api.analyticadss.com processes requests in memory and discards them. For absolute privacy, install locally β text never leaves your machine.
Both. The humanizer_humanize_text tool returns a humanized version of your text: vocabulary swapped, AI phrases removed, contractions added, em-dashes cleaned up. Claude then polishes the mechanical pass for context (varying sentence length, adding voice, smoothing edits) and gives you the final rewrite alongside before/after scores. Just say "humanize this" and you'll get the rewritten text back.
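A toy sketch of the kind of mechanical pass described above. The swap tables here are tiny illustrative samples, not the tool's actual embedded lists, and case handling is deliberately naive:

```python
import re

# Illustrative excerpts only; the real server embeds far larger tables.
SWAPS = {"leverage": "use", "utilize": "use", "crucial": "key"}
CONTRACTIONS = {"do not": "don't", "it is": "it's", "they are": "they're"}

def mechanical_pass(text: str) -> str:
    """Vocabulary swap + contraction insertion + em-dash cleanup.
    Case is flattened for brevity."""
    for old, new in {**SWAPS, **CONTRACTIONS}.items():
        text = re.sub(rf"\b{old}\b", new, text, flags=re.IGNORECASE)
    # Replace em-dashes with a comma join.
    text = text.replace(" \u2014 ", ", ").replace("\u2014", ", ")
    return text

print(mechanical_pass("It is crucial to leverage data \u2014 do not wait."))
# → it's key to use data, don't wait.
```

This is the deterministic half; Claude's polish pass (varying sentence length, restoring capitalization, adding voice) is what turns the mechanical output into a finished rewrite.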
It targets the same statistical signals those detectors use, so consistently scoring LOW here generally means lower readings on commercial detectors too, but no tool can guarantee a bypass. The honest framing: this is a writing assistant, not a laundering service. If your goal is to defraud a teacher, it's not the right tool.
Connector users: nothing to do, the hosted instance always runs the latest. Skill users: re-download the zip and re-upload. Local users: uvx always pulls the latest, so you get updates automatically.
Open source on GitHub under MIT. Issues, PRs, and forks welcome.