I want to start by acknowledging that this is a topic that directly affects people’s livelihoods. Real people are losing real work to generative AI right now, and that matters. I’m not going to pretend this is purely an abstract or anonymous philosophical debate. Also, I have enjoyed every Sanderson book I’ve read and have no beef with him; he’s made himself a target here simply by communicating his position clearly.
That said, I’ve been struggling with this topic because I can’t find a clean position. The conversation around AI and art tends toward extremes: either it’s theft and should be banned, or it’s a tool like any other and everyone should embrace it. I’m not comfortable on either end. There are too many layers and angles, and I think flattening them into a simple take does a disservice to everyone involved.
The clearest version of the anti-AI argument I’ve encountered comes from Brandon Sanderson. His thesis, roughly: the struggle is the art. The book you write isn’t really the product, it’s a “receipt” proving you did the work. You become an artist by writing bad books until you write good ones. The process of creation changes you, and that transformation is the actual art. LLMs can’t grow, can’t struggle, can’t be changed by what they make. So they can’t make art.
It’s a thoughtful position. But I think it’s also circular. He’s defined art as the process of struggle, but the audience doesn’t experience your struggle. They experience the output. Nobody listening to an album knows or cares whether it took a week or three years to record. They care if it moves them. When I read Mistborn (which I enjoyed!), I’m not feeling Sanderson’s growth journey from White Sand Prime through six unpublished novels that I never read. I’m feeling the story he eventually learned to tell.
“Put in the work” is real advice and I believe in it deeply. But the work is how you get good, not why the result matters to anyone else. Those are different things. Conflating them feels like asking the audience to subsidize your growth journey.
Subsidy
And maybe that’s what some of the anger is actually about. AI threatens the subsidy.
The middle tier of creative work (background music, stock photography, commercial illustration, session gigs) was never really about profound artistic growth. It was a way to pay the mortgage while developing your craft on nights and weekends. You do the pedestrian work that keeps the lights on, and that buys you time to make the art you actually care about. AI competes in that middle tier directly, and it’s winning.
That’s a real economic disruption, and I don’t want to minimize it. But framing it as “AI can’t make art because it doesn’t struggle” is a philosophical dodge of an economic problem.
That model isn’t ancient. It’s maybe 50-80 years old. The session musician, the stock photographer, the commercial illustrator working on their novel at night, these are 20th century inventions. Before that, you had patrons, or you were wealthy, or you just didn’t make art professionally. The “starving artist” is a well-known trope, but the “starving artist who does commercial work to fund their real art” is a much more recent arrangement. But there were also far fewer artists, with a lot more gatekeeping, so I’m not arguing that everything was great before then either.
“I did it myself”
There’s also the provenance argument, that AI is trained on copyrighted work without consent or compensation. And that’s a real concern. But virtually all musicians learned to play and write by listening to and studying other musicians. There’s no system to track that provenance or pay royalties unless it’s a nearly-direct copy. The line between “learned from” and “trained on” is blurrier than it feels.
That said, I don’t want to dismiss the emotional weight here. Feeding your art and creativity into a machine with no credit—while some corporation profits from it—is a tough hit to the ego, not just the bank account. That’s a legitimately hard thing to get past, and I hope we find a better solution for it. The current arrangement feels extractive in ways that don’t sit right, even if I can’t articulate exactly where the line should be.
Sanderson said “I did it myself” in reference to his first novel, which he hand-wrote on paper. This feels cringeworthy to me, because in no way did he do it himself. That first novel had thousands of contributors, from his parents and teachers to stories he read, conversations he had about it, movies he watched, and so on.
This connects to something my thoughts keep coming back to: we’re always in the middle. Most people like to think of their place in a creative effort as the beginning or the end; the origin of something new, or the final word on something complete. But nobody starts from zero. The most original ideas are still cued by experiences. The most original inventions are still spurred by problems. Your inputs came from somewhere.
And it goes the other direction too. If we write the book, people still need to read it. If we compose the song, someone still needs to hear it. Our outputs are someone else’s inputs, often without permission, credit, or compensation. The chain keeps going.
Sanderson’s framing puts the artist at the center as the origin point of authentic creation, forged through struggle. But if we’re all in the middle, if every artist is just transforming their inputs into outputs that become someone else’s inputs, then the question of whether the transformer “struggled” feels less central. The chain of influence extends in both directions, through every artist who ever lived, and will continue through whatever comes next.
Starving Engineers
And then there’s the scope problem. Generated music is bad but generated code is fine? Generated paintings are theft but generated infographics are helpful? The reactions seem to track with how much cultural romance we attach to the craft. Software engineering has no “starving engineer” mythology. Nobody thinks “I suffered for my art” when I debug a race condition. So when AI writes code, it’s a tool. When it writes songs, it’s an existential threat.
Photography is worth remembering here. In the 1800s, critics argued photography wasn’t art because it merely captured what already existed. Some said copyright should go to the subject, or even to God, not the photographer. It was too easy, just thoughtlessly press a button.
But over time, people figured out that taking a photo wasn’t a mundane task. Good photographers could be in the same place with the same equipment and consistently create images that moved people. The tool became a medium. Mastery emerged.
I think AI will follow a similar path. Right now most people are still tinkering, having mixed results. But we’re starting to see glimpses of people getting genuinely good at it, comfortable enough that they can do things most people can’t, or never thought of. They’ll convey ideas and emotions in new ways. They’ll be drawing on the collective contributions of thousands of generations of prior artists, just like every artist always has.
I don’t have a clean conclusion here, and I’m not sure anyone should right now. The displacement is real. The ethical questions around training data are real. The cultural anxiety about what counts as “real” art is real. I can’t join the strong positions on either side, because I think we’re very early in a journey that will outlive all of us.
What I am is cautiously optimistic. The history of art is full of new tools that were rejected as cheating until people learned to master them. The history of technology is full of painful transitions that looked like apocalypses at the time and turned out to be recalibrations. I suspect this is one of those. I hope so, anyway. We won’t know for a while yet.
