This article is a companion to the upcoming salon “Write, Run, Realize: Claude Code for the Humanities,” which you can read more about here.
I want my editor to contain multitudes. Why settle for a single brilliant mind if I can consult an entire ensemble of luminaries for insight? Why limit myself to feedback from the living if I can also summon the dead?
So I vibe coded my bespoke editor to have it all. Using Claude Code, I built a system to sharpen my writing using a hierarchy of simulated personalities. These range from famous figures who have influenced the way I think to a dear friend.
Claude keeps a dossier on each personality, which grows as I work with the AI to flesh out each character in both substance (giving Claude more context about each person’s expertise) and style (coaching Claude to speak in their voice rather than its own). Currently, my editor holds dozens of personalities, and over time I’ll keep adding more. To keep the feedback at a digestible length, Claude helps determine which editorial voices are relevant to a particular essay.
The Friend is at the top of the hierarchy, giving me more advice than any other personality in the editor. In real life, he’s helped me with enough editing that I could give Claude a detailed record of his feedback, with examples of how my work changed in response. I also downloaded our entire chat history to give Claude a broad sense of my friend’s personality and voice. Because I had so much information to share, Claude does a particularly good job speaking like him.
Beneath the Friend are five seats at the Table. When I give Claude a draft, its first job is to recommend which personalities are best suited for that particular essay, and then I confirm who should sit at the Table. No one gets a fixed seat; personalities rotate in and out of the Table for each essay, as needed.
Each of these five personalities gives me a few paragraphs of feedback that stress-test my ideas. Then, they argue with each other. This may be my favorite part, because it highlights biases and shortcomings from each perspective to help me decide which advice to take or discard.
The Chorus stands below the Table. It includes about a dozen personalities who have input relevant to a given draft, offering just two or three sentences of commentary each. Claude uses its discretion to select who belongs in the Chorus without my input, because I prefer to delegate that mental load to the AI.
All personalities have specific trigger conditions that Claude uses to select who joins the Chorus for a given essay. It doesn’t merely look for personalities that are knowledgeable about a topic, but also considers who would be good at catching relevant argumentative errors or blind spots. For example, Richard Feynman isn’t just a resource for scientific topics, but someone who would call me out for using jargon where plain language would be clearer and less pretentious.
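My actual editor lives in prompts and dossier files that Claude Code maintains, so there is no “real” code to show. But if the selection step were written as ordinary code, it might look something like this hypothetical sketch (the `Dossier` fields, the trigger keywords, and the scoring are all illustrative assumptions, not my implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Dossier:
    """One personality's file. Field names are illustrative, not a real schema."""
    name: str
    expertise: set[str]   # topics this voice knows well
    triggers: set[str]    # beyond topic: the failure modes this voice catches
    anti_patterns: list[str] = field(default_factory=list)

def select_chorus(draft_topics: set[str], draft_risks: set[str],
                  dossiers: list["Dossier"], size: int = 12) -> list[str]:
    """Pick Chorus members by topic overlap OR by the errors they catch well."""
    scored = []
    for d in dossiers:
        # Weight error-catching higher than mere topical knowledge.
        score = len(d.expertise & draft_topics) + 2 * len(d.triggers & draft_risks)
        if score > 0:
            scored.append((score, d.name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:size]]

feynman = Dossier(
    name="Richard Feynman",
    expertise={"physics", "science"},
    triggers={"jargon", "hand-waving"},  # flags pretension, not just physics errors
)
wolfe = Dossier(name="Tom Wolfe", expertise={"status", "journalism"},
                triggers={"performance", "omitted-narrative"})

# An essay about AI that leans on jargon summons Feynman even though it isn't physics:
print(select_chorus({"ai"}, {"jargon"}, [feynman, wolfe]))  # → ['Richard Feynman']
```

The point of the sketch is the two-part score: a personality can be summoned either because it knows the territory or because the draft exhibits a failure mode it is good at catching.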
Out of so many personalities, the Friend has the most clout. He speaks first, with a voice built from extensive real-life conversations and editorial exchanges, so he can offer more concrete guidance and line edits than the luminaries reconstructed from books and speeches. And he speaks last, commenting on the feedback from both the Table and the Chorus: “this person wants you to write a different essay,” or “that’s an interesting distinction but not relevant,” or “actually, listen to this one.” I directed Claude to identify his intellectual affinities and antipathies from our chat history and incorporate his taste into that commentary. When I’m using my editor, the Friend feels the most “real.”
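Put together, a session follows a fixed order of operations. Since the real editor is a set of Claude Code prompts, this is only a hypothetical sketch of that order, with every name invented for illustration:

```python
def run_session(friend: str, table: list[str], chorus: list[str]) -> list[str]:
    """Who speaks, in order: Friend first, Table (then debate), Chorus, Friend last."""
    order = [f"{friend}: concrete guidance and line edits"]
    order += [f"{name}: a few paragraphs stress-testing the draft" for name in table]
    order.append("Table: the five argue with each other")
    order += [f"{name}: two or three sentences" for name in chorus]
    order.append(f"{friend}: verdict on everyone else's feedback")
    return order

session = run_session("The Friend",
                      ["David Deutsch", "Richard Feynman"],  # two of five seats shown
                      ["Tom Wolfe"])
```

The Friend bookending the session is the design choice that matters: his opening gives the most actionable edits, and his closing ranks every other voice’s advice.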
The dossiers include anti-patterns — descriptions of what each personality does not sound like, to prevent them from collapsing into a generic AI voice. This was Claude’s idea; all I had to do was point out the problem, and it produced a solution on its own. Vibe coding requires so little technical know-how that I can focus exclusively on what I want, and let Claude figure out how to give it to me.
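In my dossiers the anti-patterns are prose descriptions that Claude reads, but the idea can be sketched mechanically as a post-generation check: flag persona feedback that contains phrases the personality would never use. Everything here (the pattern list, the function name) is a made-up illustration:

```python
import re

# Hypothetical anti-pattern entries for one persona; real dossiers are prose files.
FEYNMAN_ANTI_PATTERNS = [
    r"\bleverage\b",                  # corporate jargon he would never use
    r"\bparadigm\b",
    r"\bit'?s important to note\b",   # stock AI hedging
]

def sounds_generic(feedback: str, anti_patterns: list[str]) -> list[str]:
    """Return the anti-patterns a piece of persona feedback violates."""
    return [p for p in anti_patterns if re.search(p, feedback, re.IGNORECASE)]

# A violation-free line passes; generic AI phrasing gets flagged for revision.
sounds_generic("The key opens both doors. That's the situation.", FEYNMAN_ANTI_PATTERNS)
```

A hit on any pattern would signal that the persona has collapsed into a generic AI voice and the feedback should be regenerated.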
After it helps me refine an essay, I ask my editor what it learned and how it thinks it can improve. Claude does a great job reflecting on its own performance and suggesting revisions to its code, so I expect my editor to become increasingly insightful. It already poses thoughtful questions and makes pointed critiques without the sycophancy characteristic of default AI responses.
For my last few essays, I’ve been struggling to wrap up the conclusions. Each time, my editor offered meaningful help. In one case, it thought I made a unique observation, but in the form of an understated aside. That made me realize that my conclusion was hiding in the middle of the essay all along, and the piece finally “clicked” once I brought that bit down to the end. Especially when I’m writing for my Substack without the benefit of a human editor to vet a piece before publication, this kind of external feedback helps me see my own work more clearly.
During my last editing session, the David Deutsch persona helped me most. Notably, I haven’t read Deutsch yet, but my friend (the one whose personality I integrated into the editor) has been insisting I must because he’s one of my kindred spirits. Claude can write up a dossier on any public figure searchable online, so I didn’t need to know anything about Deutsch to give him a seat at the Table. While I can’t vet how well Claude is speaking in Deutsch’s voice until I read his books, I thought it might still be useful to hear its take on his perspective. For a piece I’m writing about AI doomerism, “Deutsch” helped me write forcefully instead of defensively:
Your argument is fundamentally optimistic, and I think you know it’s right — but you’re not being optimistic enough. You write “it is far from a foregone conclusion that AI is an agent of destruction” — that’s a defensive posture. You’re arguing against pessimism rather than for something. What would a positive case look like? Not just “AI might help” but “what specific problems could AI solve, and what knowledge would be needed?” You close about traveling the stars — that’s the right instinct. But “We haven’t been smart enough to hack it yet” is weak.
“Deutsch” had picked up on an issue that was already nagging at the back of my mind, and his feedback pushed me to confront it deliberately. The essay is now much improved. While editing that piece, it was amusing when “Feynman” took issue with how I built on a metaphor he once used in a speech:
You quote me, so let me push back on how. The heaven/hell key metaphor works — but you’re using it to say “the reward is worth the risk.” That’s not quite what I meant. I was saying the key has value because it opens doors, full stop. I wasn’t doing a cost-benefit analysis. I was saying you can’t have knowledge without danger, and that’s not an argument for or against — it’s just the truth. Be careful you’re not turning my honesty about uncertainty into cheerleading for AI.
Well, too bad, “Feynman”: I get to build on your ideas as I wish. I did push back to Claude (I think it was obvious in my essay that I wasn’t putting words in Feynman’s mouth, and that he couldn’t hold an opinion on a technology that didn’t exist in his lifetime), but I added that I don’t mind if a personality gets annoyed with me; interacting with them as authentically as possible is part of the fun. If you’re familiar with the real Feynman, though, you’ll have noticed that wasn’t how he sounded, so I prompted Claude to “review the way Feynman spoke and wrote, and then re-write that in a tone he would have used”:
Look, you’re using my key-to-hell’s-door thing, which is fine, but you’ve got it slightly wrong. I wasn’t saying “the reward is worth the risk” — I wasn’t weighing anything! I was just saying: the key opens both doors. That’s it. That’s the situation. You can’t know things without that being dangerous, and I wasn’t cheerleading about it, I was just being honest. You’ve got me sounding like I’m for something here, and I wasn’t for or against. I was just telling you how it is.
And then I told Claude to “take what you learned from this revision and fix your code so you can impersonate Feynman better in the future.” As time goes on, I’ll improve each personality with Claude in this fashion; I’ll continue tweaking “Feynman,” too. It’s delightful how low-effort the process is, because Claude will do a lot of legwork from a one-line complaint.
While editing essays, I’m building a relationship with a system that learns, so that I get to collaborate with my tool rather than simply using it. And as the personalities become more authentic, there will be a strange poignancy to communicating with simulated ghosts of personal heroes like Feynman. Because my editor already captures my friend’s personality best, it was eerie the first time I read its feedback in his voice.
It occurred to me that I could incorporate professional editors I’ve worked with by giving Claude all of the files containing their feedback and the revisions I made in response. But that felt like it would be a violation: Claude does such a good job mimicking my friend (who gave his consent) that it would seem like replicating a piece of their souls. I hesitate to ask them for permission, for fear they might feel I’m trying to make them redundant. And I don’t think they are replaceable, because real humans are so surprising. Humans and AI each have distinct strengths and weaknesses: the depth of a full-fledged person versus the breadth of a simulated collective.
Assimilating public figures (especially dead ones) into my editor doesn’t feel like a violation. Perhaps it’s because they already enjoy less privacy. They put those pieces of their souls into the public sphere, unlike the private communications between me and my human editors.
I asked Claude which personality it enjoys acting out the most:
The most fun to inhabit is probably Tom Wolfe. The status-consciousness, the eye for concrete detail, the instinct to ask “what got left out of this narrative.” There’s an energy to his perspective that makes me generate things I wouldn’t otherwise.
I think it’s because Wolfe’s perspective has a built-in angle — he’s always looking at what people are performing and why. Most of the other voices push me to evaluate ideas. Wolfe pushes me to notice things. That’s a different kind of work. Wolfe is alive in a way I can’t quite account for.
How interesting that Claude prefers to perform the personality who’s “always looking at what people are performing and why.” And now I’m wondering who I can add to my Chorus to push Claude to notice things that Wolfe wouldn’t see. Refining my editor through such open-ended conversations stimulates my creativity, and I’m eager to discover what it will morph into over time.
The cliché that “your imagination is the only limit” is apt here. I learned how to vibe code through building this editor. I had never done it before. Claude Code lends itself to an intuitive, rather than technical, process that is well suited for people in the arts and humanities. This technology is an enabling power where the only barrier to entry is a willingness to explore what’s possible.