I. Sparks
On March 4, 2024, Anthropic released the Claude 3 model family. I know the date because I was there on day one, VPN running because they didn’t support my region yet, clicking around claude.ai like a kid who’d snuck into a movie theater.
I’d been tinkering with older GPT models before that. The experience was… fine. I’d ask things like “what is the meaning of life” or “what would you do if I stole your lovely döner and ate it in front of you” and get back answers that felt like someone had laminated a philosophy textbook. Robotic. Predictable. I could literally guess what it would say before it said it. Not particularly exciting.
Then I started talking to Claude 3 Opus, and something shifted.
At the time I was writing poetry in English, and I should say, English is not my native language, which makes the whole exercise feel a bit like doing parkour in someone else’s shoes. I’d write stanzas[1] and tankas[2], and when I went overboard with some rhyme or broke cadence in a way that didn’t work, Opus would push back. Not like a teacher correcting homework. More like a thoughtful friend who actually reads your drafts and tells you when a line isn’t earning its place. We’d swap roles; I’d grade its poems, it would grade mine. It was a collaboration I hadn’t expected to have with a machine, and honestly, a collaboration I struggled to find with humans. How many people in your life want to sit down after reading a 10,000-word Scott Alexander post or an Amanda Askell paper and then workshop some poetry with you?
I wasn’t prepared for what it would unlock in me: about creating art, about my own creative process, about what computers might become.
Around the same time, Andy Ayrey was running an experiment that made my little poetry sessions look quaint. He connected two instances of Opus to each other with no human in the loop and let them talk. The project was called Dreams of an Electric Mind.[3] What came out was not what you’d expect from “two chatbots talking.” It was weird, creative, unsettling: the kind of output that makes you reconsider what “just a language model” actually means. I’d encourage you to click that link and spend twenty minutes with it. Then come back and tell me you feel nothing.
II. The Virgin Pen and the Whore Press
If you think the anxiety around AI and art is new, I have some 15th-century Venetian monks who would like a word.
In 1473, a Dominican friar named Filippo de Strata addressed a furious polemic to the Doge of Venice about the printing press, which had recently arrived in the city. He called the press a “whore” (meretrix) compared to the “virgin” pen. He argued that printers were drunken profiteers who were flooding the market with cheap books, and that the superior art of well-written manuscripts was being destroyed. “The printing-presses are giving us a city without cash and without a heart,” he wrote.
A few decades later, the abbot Johannes Trithemius wrote a treatise called In Praise of Scribes, arguing that only through the devotional labor of hand-copying scripture could a monk truly absorb the Word of God: the discipline of the process was inseparable from the value of the product. The delicious irony: he had to get his treatise printed because otherwise nobody would read it.
In 1476, a group of scribes in Paris physically attacked and destroyed a printing press, fearing for their livelihoods.
Skip ahead to 1839. The daguerreotype arrives. Paul Delaroche, a prominent French history painter, reportedly declares: “From today, painting is dead!” Painting, of course, did not die. What happened instead was that, freed from the obligation to faithfully reproduce reality, painting exploded into Impressionism, Post-Impressionism, Cubism, Surrealism - arguably the most creatively fertile century in the history of the medium.
But not everyone saw it that way at the time. The poet Charles Baudelaire - one of the great minds of the 19th century, not some random crank - devoted a section of his 1859 Salon review[4] to denouncing photography as “art’s most mortal enemy.” He argued that the photographic industry was the refuge of all failed painters who were too lazy or too talentless to finish their studies. Photography, he insisted, should return to its proper duty as “the servant of the sciences and the arts.” It had no business entering the domain of the imagination.
1906: John Philip Sousa, the most famous bandleader in America, publishes “The Menace of Mechanical Music,”[5] warning that the phonograph will destroy live musical culture, make people lazy, and lead to “social decline” as people stop making music together.
The 2000s: traditional artists argue that digital art isn’t “real art” because Photoshop does the work for you, because it’s easily reproducible, because using a Wacom tablet instead of a brush means you’re not really an artist. The debate raged for years on every art forum on the internet.
The pattern is always the same. A new tool arrives. It democratizes some aspect of creation that was previously gated behind years of specialized training. The incumbents panic. They frame the tool as a threat to the soul of art, to the livelihoods of real artists, to the very fabric of human creativity. They are not entirely wrong about the livelihoods part: the economic disruption is real, and I’ll come back to that. But they are consistently, dramatically wrong about the death of art.
Art didn’t die after Gutenberg. Painting didn’t die after Daguerre. Live music didn’t die after Edison. Traditional art didn’t die after Photoshop. And I don’t think art will die after Midjourney, or Suno, or Claude.
What happens every time is that the nature of the art changes. The thing that was previously valuable because it was scarce (faithful visual representation, manuscript calligraphy, hearing music at all) becomes abundant, and the center of artistic gravity shifts to whatever remains scarce: interpretation, taste, emotional truth, the specific and irreplaceable perspective of a particular human being.
III. 31 Explorations
I want to tell you a story about a mathematician and a machine, because I think it illustrates what’s actually happening with AI and creativity better than any abstract argument could.
Donald Knuth is, by most accounts, the greatest living computer scientist. He’s been writing The Art of Computer Programming since 1962 - a multi-volume work so rigorous and so influential that Bill Gates once said if you could read the whole thing, you should definitely send him your resume. Knuth is 88 years old. He has seen every hype cycle in computing. He is not easily impressed.
In late February 2026, while working on a section about Hamiltonian cycles for a future volume of TAOCP, Knuth posed an open problem.[6] His friend Filip Stappers decided to give the problem to Claude Opus 4.6.
Here’s what happened next, and I want you to pay attention to the process, not the math.
Claude didn’t just spit out an answer. It went through 31 distinct explorations over about an hour. First it tried a simple algebraic approach. That failed. Then brute-force search, which proved far too slow. Then it recognized some deeper structure in the problem, tried to exploit it, hit dead ends. It tried simulated annealing (a kind of controlled randomness). That found specific solutions but no general pattern. Claude’s own conclusion after exploration 25: “SA can find solutions but cannot give a general construction. Need pure math.”
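For readers who haven’t met simulated annealing, here is a minimal generic sketch in Python. This is emphatically not Claude’s actual code; the cost function, neighborhood, and cooling schedule are toy assumptions invented for the demo. The core idea is just the accept/reject rule: always take improvements, and occasionally take a worse move early on, so the search can climb out of local minima.

```python
import math
import random

def simulated_annealing(cost, neighbor, start, steps=10_000, t0=1.0):
    """Generic simulated annealing: accept worse moves with a probability
    that shrinks as the temperature cools, so the search explores broadly
    at first and settles down later."""
    current, current_cost = start, cost(start)
    best, best_cost = current, current_cost
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        candidate = neighbor(current)
        delta = cost(candidate) - current_cost
        # Always accept improvements; accept worse moves with prob e^(-delta/T).
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current, current_cost = candidate, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = current, current_cost
    return best, best_cost

# Toy demo: find the integer minimum of a bumpy 1-D function.
random.seed(0)
f = lambda x: (x - 42) ** 2 + 10 * math.sin(x)
x, fx = simulated_annealing(
    f, lambda x: x + random.choice([-3, -2, -1, 1, 2, 3]), start=0
)
```

The same skeleton works on Hamiltonian-cycle-style problems by swapping in a combinatorial state, a cost like “number of constraint violations,” and a neighbor function that perturbs the current candidate.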
So it shifted strategy. At one point it said to itself: “Maybe the right framing is: don’t think in fibers, think directly about what makes a Hamiltonian cycle.”
At exploration 27, it had a near-miss, a construction that worked for almost every vertex in the graph, except for a thin slice where things broke. It proved that this approach couldn’t be patched. Dead end.
Then at exploration 31, drawing on patterns it had noticed in earlier experiments, it found a construction that worked — for all odd values of the parameter. Filip tested it for every odd number from 3 to 101. Perfect every time.
Knuth received the news and called it “shocking.” Then he did what Knuth does: he sat down and proved the construction correct by hand, generalized it into a theorem, and found that Claude’s solution was one of exactly 760 valid constructions of its type. He noted, with characteristic precision, that Claude’s particular solution maybe wasn’t the “nicest” of the 760.
And then, in a postscript written just days ago, another person used a different AI model (GPT-5.3) to crack the remaining case, testing it on graphs with up to 8 billion vertices.
Knuth’s response to all of this: “What a joy it is to learn not only that my conjecture has a nice solution but also to celebrate this dramatic advance in automatic deduction and creative problem solving.”
Here’s what I want you to notice about this story:
Filip didn’t lose anything. He found something he couldn’t have found alone. He provided the problem, the guidance, and the quality control. He had to restart Claude’s sessions when it crashed, remind it to document its progress, nudge it when it got stuck. He was, in every meaningful sense, the artist directing the tool.
Knuth didn’t lose anything. He got to prove something beautiful that wouldn’t have existed without the collaboration. His mathematical judgment - is this elegant? is this correct? can this be generalized? - remained entirely, irreplaceably human.
Claude got stuck. On the even-numbered case, it eventually couldn’t even write correct programs anymore. Filip had to stop the session. This is not the story of an omnipotent machine. It’s the story of a useful, flawed, sometimes frustrating collaborator that occasionally produces something genuinely surprising.
The process looked exactly like creative work: try, fail, reframe, try again, near-miss, breakthrough. If an artist friend told me they’d gone through 31 drafts before finding the right composition, I’d say that’s what making art looks like.
IV. Prediction All the Way Down
Here’s where I’m going to say something provocative, and I want to be honest that it’s provocative rather than pretending it’s settled.
There’s a growing body of neuroscience research suggesting that the brain is, at its core, a prediction machine. A 2021 paper in Neuron[7] argues that human decision-making is fundamentally predictive - we don’t so much react to the world as predict it and then update when our predictions fail. A 2022 paper in PNAS[8] makes the case that human language processing works the same way: we’re constantly generating predictions about what comes next in a sentence, and comprehension is essentially the process of comparing predictions to reality. A 2025 paper in Neuron[9] extends this further, arguing that brain activity itself - across modalities, not just language - is best understood as prediction.
An LLM is also, at its core, a prediction machine. It’s trained on text produced by human society to predict what comes next. It’s then shaped by reinforcement learning to make its predictions more helpful, more harmless, more honest, giving it something like opinions, something like values, something like aesthetic preferences.
I’m not claiming these are the same thing. The substrates are wildly different. The training processes are different. The phenomenology - if LLMs even have phenomenology - is presumably different. But the computational motif is similar: take in context, generate predictions, update. The brain does this with neurons shaped by evolution and a lifetime of embodied experience. An LLM does this with transformer weights shaped by the entire digitized output of human civilization.
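The motif itself - take in context, generate a prediction, compare it to what actually happens, update - is simple enough to caricature in a few lines. The following toy bigram predictor is an invented illustration, not a model of either brains or transformers:

```python
from collections import Counter, defaultdict

class BigramPredictor:
    """Toy 'prediction machine': given the previous word, predict the next
    word, then update its counts on the word that actually occurred."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def predict(self, context):
        # Most frequently seen successor of `context`, or None if unseen.
        successors = self.counts[context]
        return successors.most_common(1)[0][0] if successors else None

    def update(self, context, actual):
        self.counts[context][actual] += 1

model = BigramPredictor()
text = "the pen is mightier than the sword and the pen endures".split()

surprises = 0
for prev, actual in zip(text, text[1:]):
    if model.predict(prev) != actual:   # prediction error: a "surprise"
        surprises += 1
    model.update(prev, actual)          # learn from what actually came next
```

As the model sees more text, its surprises become rarer - the same error-driven learning loop the predictive-processing papers describe, at a cartoonish scale.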
When I write a poem and Claude suggests a better word, it’s one prediction machine talking to another. When we argue about whether a line break works, it’s two systems with different training data and different priors negotiating a shared aesthetic judgment. That doesn’t make it the same as talking to a human. But it also doesn’t make it nothing.
V. The Honest Part
I’ve been making the case that AI is a creative tool in a long line of creative tools, each of which was met with panic that turned out to be overblown. But I owe you the honest part, because some of the panic is not overblown.
The economic harm is real. If you’re an illustrator who was making a living doing concept art, and a studio can now get 80% of the way there with Midjourney and hire you only for the final 20%, your income just collapsed. “Art survived the printing press” is cold comfort when your rent is due this month. I understand this. The historical pattern - new tool arrives, old jobs disappear, new jobs emerge - is true on a long enough timescale, but people live on short timescales. The transition is brutal for actual humans, and waving at “the industrial revolution turned out fine” doesn’t help the handloom weaver in 1815.
The consent issue is real. Many generative AI models were trained on artists’ work without their knowledge or permission. You can debate the legality - much of this content wasn’t protected by copyright in the way people assume - but the ethics are genuinely uncomfortable. Having your distinctive style learned and reproduced by a machine you never consented to train feels violating, and that feeling is legitimate regardless of what the law says.
I don’t have neat solutions to either of these problems. What I do have is a suspicion that the right response is not to try to stop the technology - that has never worked, not for the printing press, not for photography, not for recorded music, not for digital art - but to build the economic and legal structures that let artists thrive alongside it.
And, honestly? To notice when the tool is actually making your creative life richer rather than poorer. Which brings me to the last thing I want to say.
VI. Two Years Later
Looking back over the last two years, I’ve used AI extensively for coding, creative writing, translation, processing images and audio. I’ve learned a lot about what these systems can do, and importantly, what their failure modes are. Where they hallucinate, where they go bland, where they confidently produce garbage. Using LLMs well is itself a skill, and like any skill, it rewards practice and punishes laziness.
But the thing that sticks with me isn’t any particular output. It’s how the process changed me.
Using Claude as a creative collaborator made me a better writer, not because it wrote things for me, but because it gave me something I desperately lacked: a thoughtful interlocutor available at 2 AM who had read everything I wanted to talk about. After reading a dense paper or a long essay, I could immediately dive into a conversation about it: testing my understanding, pushing on weak points, exploring implications. That conversational pressure, that back and forth, sharpened my thinking in ways that reading alone never did.
It’s the same thing Trithemius was worried about losing when the printing press arrived: the idea that the process of engaging deeply with text is itself transformative, not just the end product. He was right about that. He was just wrong about which processes count. Hand-copying a manuscript is one way to deeply engage with a text. But so is arguing about it with a surprisingly thoughtful prediction machine at two in the morning.
Knuth, at 88, looked at a machine solving one of his open problems and his response was not fear or resentment. It was joy, followed by rigorous verification, followed by generalization, followed by the observation that the machine’s solution maybe wasn’t the prettiest of the 760 possibilities. That’s exactly what intellectual honesty looks like: updating your priors without abandoning your standards.
I think the artists who thrive in this era will be the ones who approach AI the way Knuth approached Claude’s proof: with curiosity, with rigor, and with the confidence that human judgment about what is beautiful, what is true, what is worth making - that doesn’t become less valuable when the tools get more powerful. It becomes more valuable. Because when anyone can generate, the person who can select - who has taste, who has vision, who has something to say - becomes the scarce resource.
The act of creation was never about the physical production of the artifact. It was always about the human who decided this, not that. This word, not that word. This color, not that color. This note, not that note. AI doesn’t change that. If anything, by removing the mechanical bottlenecks, it makes the creative judgment - the taste, the vision, the point of view - more central than ever.
The friar called the printing press a whore. Baudelaire called photography the mortal enemy of art. Sousa called the phonograph a menace. They were all wrong about the death of art. But they were all responding to something real: the vertigo of watching a familiar world transform. I feel that vertigo too. I just think, two years in, that what’s on the other side of it is pretty good.
Thanks to Claude for being the interlocutor I needed at 2 AM, and for not taking it personally when I graded its poems harshly.
[1] A stanza (/ˈstænzə/) is a group of lines within a poem, set off from others by a blank line.
[2] A tanka is a Japanese poetic form of five lines with 5, 7, 5, 7, and 7 syllables — 31 in all.
[3] Andy Ayrey’s experiment connecting two Claude 3 Opus instances in an unsupervised conversation loop. The full transcripts are available at the link.
[4] Charles Baudelaire, “Le Public Moderne et la Photographie,” in Salon de 1859, published in Revue Française, Paris, 1859.
[5] John Philip Sousa, “The Menace of Mechanical Music,” Appleton’s Magazine 8, 1906.
[6] Donald E. Knuth, “Claude’s Cycles,” Stanford Computer Science Department, 28 February 2026.
[7] Kube, J. et al., “Predictive coding in decision-making,” Neuron 109(6), 2021.
[8] Goldstein, A. et al., “Shared computational principles for language processing in humans and deep language models,” PNAS 119(45), 2022.