Essay · April 26, 2026

The Claude / Piper experiment is a craft story, not a privacy story

Why your writing doesn't sound like you

Voice isn’t a vibe. Claude just proved it. What the Kelsey Piper experiment really means if you’re drafting fiction with AI.

Quick answer
In April 2026, Vox journalist Kelsey Piper demonstrated that Claude Opus 4.7 can identify a writer by name from 125 words of unpublished prose. Coverage has framed this as a privacy story. For writers using AI to draft fiction or long-form non-fiction, the more important implication is about craft. The experiment proves that voice is a measurable fingerprint, not an aesthetic abstraction. That means voice can be objectively preserved or destroyed by a tool, and the difference is now demonstrable.

There’s a story about Claude that’s been everywhere this week: it identified a writer from 125 unpublished words. It’s been read as a privacy story. For people who write, the more useful read is that it’s a craft story.

Earlier this week, journalist Kelsey Piper, writing for The Argument, pasted 125 words of an unpublished political column into Claude Opus 4.7 and got her own name back. She’d logged out, switched to incognito, tested via the raw API, and run it again on a friend’s laptop. She varied the genre too: a school progress report about her child’s Pokémon essays, an unpublished review of a 1942 wartime comedy. Claude named her every time. ChatGPT and Gemini failed the same task. (Read her full account.)

Privacy is the frame the news has settled on. There’s a different frame to read it through if you write.

What the Piper experiment rules out

The Piper experiment matters because of how she designed it. The four methods she used systematically rule out every alternative explanation except stylometry.

Each method closed off a different escape route for the model. Logging out and switching to incognito killed the obvious “Claude knows me from my account” answer. Switching to the raw API ruled out browser fingerprinting. Repeating the test on a friend’s laptop ruled out IP-based identification. By the time those four were exhausted, the only remaining channel through which the model could know her was the prose itself.

Then she varied the genre. A political column might overlap with her public corpus. A school progress report about a child’s Pokémon essays does not. A review of a 1942 wartime comedy is not in her published register at all. The model still returned her name. That detail is the one that does the work. It means Claude isn’t matching topics or subject matter. It’s reading the way her sentences are built.

Which is stylometry. And stylometry is now a working part of frontier language models, whether anyone intended that or not. ChatGPT and Gemini, run on the same task, guessed wrong, so the capability is uneven across the field. But the fact that one model already has it changes the shape of the question for writers.

What does the privacy framing of this story miss?

The privacy reading treats the writer as the subject of surveillance. The craft reading treats prose itself as a measurable artefact, which is the more important implication if you write for a living.

The privacy reading of the Piper experiment is real and worth taking seriously. Models can identify writers from short, unpublished, off-genre prose, and the threshold will only drop as models improve. Fair concern. Useful coverage.

But the privacy reading treats the writer as the subject of surveillance. For working writers, that’s the secondary problem. The primary problem is what the experiment proves about prose itself.

Voice is real. Not in the way we wave at it when we talk about craft over a drink. In the boring, measurable way. You can be identified from a few hundred words of your own writing. So can I. So can the writer you most admire.

Which I think most working writers already suspected. But it’s one thing to suspect and another to see Claude do it.

If voice is measurable, voice is preservable. And if voice is preservable, the question stops being aesthetic taste and becomes craft engineering. That’s the story that matters if you’re drafting fiction or long-form non-fiction with AI. Privacy is a perimeter problem. Voice is a craft problem. The craft problem is the one nobody else in this discourse is writing about.

Why is voice a fingerprint, not an aesthetic?

Voice is a multi-axis stylometric fingerprint that includes word choice, sentence shape, paragraph structure, rhythm, and dozens of other axes. It is distinct enough for a frontier language model to identify a writer from 125 words of off-genre prose.

Most AI writing tools treat voice as an aesthetic variable. You pick formal or casual, literary or commercial, sparse or maximalist. The dropdown is the voice. This framing has been slowly wrong for a while, but the Piper experiment makes it provably wrong.

Voice is a multi-axis fingerprint. It includes word choice, sentence shape, the ratio of declaratives to dependent clauses, paragraph openings, paragraph closings, the rhythm of how sentences shorten or lengthen under emotional pressure, the writer’s relationship to abstraction versus concrete imagery, and the placement and frequency of parenthetical asides. Claude Opus 4.7 can read enough of those axes from 125 words to narrow a field of candidates down to a single writer.
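To make the fingerprint concrete, here is a minimal sketch of what measuring a few of those axes looks like in code. Everything in it is illustrative: the five features are a toy subset, real stylometry works across dozens more dimensions, and no frontier model exposes its stylometry this way.

```python
import re
from statistics import mean, pstdev

def fingerprint(text: str) -> dict:
    """Measure a handful of illustrative stylometric axes.

    Toy features only; real authorship attribution uses many
    more axes (function-word distributions, syntax trees, etc.).
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        # Sentence shape: how long, and how much the length varies
        "mean_sentence_len": mean(lengths),
        "sentence_len_spread": pstdev(lengths),  # rhythm proxy
        # Word choice: vocabulary variety in the sample
        "type_token_ratio": len(set(words)) / len(words),
        # Adverb habit: share of -ly words (crude but telling)
        "ly_ratio": sum(w.endswith("ly") for w in words) / len(words),
        # Parenthetical asides per sentence
        "asides_per_sentence": text.count("(") / len(sentences),
    }
```

Run that over two hundred words of your own prose and two hundred of anyone else’s, and even this crude version returns visibly different numbers.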

Treat that finding as the design constraint it is. If voice is measurable, it’s preservable. And once it’s a measurable constraint, the tools that handle it well will pull sharply away from the tools that pretend voice is a setting.

This is the same fingerprint we wrote about in how to find your writing voice. The Piper experiment is the proof that the fingerprint is real and measurable, not just rhetorically convenient.

What happens when AI writes “in your voice”?

AI writing tools that don’t constrain voice produce technically competent prose that no longer matches the writer’s fingerprint. The drift is measurable, demonstrable, and now provable through the same stylometric analysis the Piper experiment exposed.

I tried a version of Piper’s experiment on my own novel a couple of nights ago: I asked Claude to describe the voice in a 200-word passage I’d written. The description was startlingly specific. It told me my prose has an instinct for off-key similes that collapse mid-air, and quoted one back at me: a character’s hand gesture indicating “either a volcano or aggressive udder milking.” I had not noticed that about my own writing.

Then I asked it to generate a new scene in that voice. The result was technically competent and not mine. The shortening under pressure was gone. Adverbs were back. A kind of generic literary cadence had moved into the middle of the prose like a stranger.

Which is, I suspect, what most writers mean when they say AI drafts don’t sound like them. The feeling isn’t imaginary. The fingerprint is gone, and now we have a way to see it going.
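You can put a number on the going. Here is a rough sketch of measuring the drift between your passage and the model’s imitation, reusing the fingerprint() function from the sketch above; the distance formula is an arbitrary choice for illustration, not a standard metric.

```python
from math import sqrt

def drift(original: dict, generated: dict) -> float:
    """Normalized distance between two fingerprints (0.0 = identical)."""
    diffs = []
    for axis in original:
        a, b = original[axis], generated[axis]
        scale = max(abs(a), abs(b)) or 1.0  # guard against zero axes
        diffs.append(((a - b) / scale) ** 2)
    return sqrt(sum(diffs) / len(diffs))

# Usage, with fingerprint() from the sketch above:
#   score = drift(fingerprint(my_passage), fingerprint(ai_scene))
# Near 0.0, the imitation held the voice. The higher the score,
# the more of the fingerprint is gone.
```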

How should writers think about this going forward?

Voice is a craft engineering problem now, not a vibes argument. The category of AI writing tools worth using is the one that treats voice as a binding constraint on every output, not as a setting to be specified once and forgotten.

Two practical implications.

First, voice is no longer something writers have to argue for. It’s measurable. When a tool sands the voice out of your draft, you can demonstrate it, and you should expect the tools you use to take that demonstration seriously.

Second, the question of “how do I keep AI from flattening my voice” stops being a vibes argument and becomes a craft problem. Solvable. Worth the effort. Different tools approach it differently, with different rates of success. The category that matters going forward is whether a tool treats voice as a constraint or as a setting. Constraints govern every output. Settings get ignored after the third paragraph.
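The constraint-versus-setting distinction is visible as code. Below is a hypothetical sketch of the constraint approach, reusing fingerprint() and drift() from the sketches above. generate() is a stand-in for whatever model call a tool makes, and the max_drift threshold is an invented number; the point is the loop shape, not the values.

```python
def draft_with_voice_constraint(generate, outline, voice_spec,
                                target, max_drift=0.25, retries=3):
    """Bind the voice spec to EVERY generation, then verify.

    generate   -- hypothetical stand-in for your model call
    voice_spec -- structured voice description extracted from
                  the author's own writing samples
    target     -- fingerprint() of the author's own prose
    """
    drafts = []
    for beat in outline:
        for _ in range(retries):
            text = generate(system=voice_spec, prompt=beat)
            # Re-measure every output; regenerate if the
            # fingerprint has drifted past the threshold.
            if drift(fingerprint(text), target) <= max_drift:
                break
        drafts.append(text)
    return drafts

# A "setting", by contrast, mentions the voice once in the first
# prompt and never checks again. That is the version that loses
# the fingerprint by the third paragraph.
```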

Privacy is a real concern in the Piper story. Voice as a measurable craft constraint is the bigger one for anyone making things from prose. The next year of AI writing tools will be sorted along that axis, whether the tools know it or not. Writers will sort them.

Common questions about AI writing and voice

Can AI identify a writer from their prose?
As of April 2026, frontier large language models including Claude Opus 4.7 can identify prolific public writers by name from as few as 125 words of unpublished prose. The capability is uneven across models, with ChatGPT and Gemini performing notably worse on the same task, and it depends on the writer having a large enough public corpus for the model to have learned their fingerprint. The required word count is likely to drop as models improve.
Why doesn’t AI writing sound like me?
Because most AI writing tools treat voice as an aesthetic toggle (formal, casual, literary) rather than a multi-axis stylometric fingerprint. When the model generates new prose, it defaults to its own central voice unless specifically constrained, and it loses the writer’s signature within a paragraph or two. Prompt-level instructions help for short generations but decay across long projects.
How do I get AI to preserve my voice when drafting fiction?
The most reliable approach is to feed the model long-form samples of your own writing, extract a structured description of the stylistic patterns (rhythm, syntax, diction, structural tendencies, dialogue register), and apply that description as a binding constraint on every generation rather than as a one-time prompt. Tools that compile this into the drafting pipeline hold voice longest.
What is stylometry and why does it matter for AI writing?
Stylometry is the statistical analysis of literary style, traditionally used in academic authorship attribution and forensic linguistics. The Piper experiment showed that frontier large language models now perform stylometry implicitly: they can identify writers from short prose samples. For writers using AI, the implication is that voice is no longer a subjective quality to be evoked. It is a measurable signal that can be tracked, preserved, or lost.
A note from the person behind bookmoth
If voice is measurable, it should be preservable
That sentence is the entire reason bookmoth exists. Most AI writing tools treat voice as a setting. bookmoth treats it as a binding constraint, compiled from your own writing samples and applied to every chapter the tool drafts for you. If you’ve ever read AI-generated prose and felt the voice flatten out into something competent and faceless, that’s the problem bookmoth was built to fix.
See a portrait of your voice →