Just yesterday, the headlines informed us that Anthropic, creators of Claude, agreed to pay $1.5 billion to settle a class-action lawsuit filed by authors. At first glance, this seems like a huge win. Finally, creators getting a piece of the pie! A Big Tech company has to pay for scraping and pirating copyrighted content! Woohoo!
But as much as I like a party… I’m not sure this is the win we are hoping for.
And I’m quite confident it’s not the win we need.
My concern is that this settlement is not a fix, but a distraction — a cleverly timed bit of PR jiu-jitsu that gives the illusion of justice while obscuring a much, much deeper problem.
Yes, the settlement includes a requirement for Anthropic to “destroy” the pirated training data they used, but I am pretty sure the damage is already done.
Let’s say you stole a bunch of ingredients, baked a cake, and then promised to throw away the stolen flour and eggs and pay the miller and the chicken farmer. Alright… but, um, the cake is already out of the oven. It’s frosted. It’s being served. And you’re selling it. And making money on it.
But hell, this A.I. thing isn’t even really like that, because the A.I. cake is already using itself to bake new cakes and open entirely new bakeries.
You can’t un-bake a cake, especially one that can make its own new cakes.
OK, this metaphor isn’t perfect, but hopefully you see my point.
The model is trained. Its weights have already absorbed the pirated works. Unless I am missing something here, deleting the source files won’t unlearn what the A.I. has already internalized.
If I’m right, this isn’t remediation — it’s theater.
Legal Action Is Always Looking Backwards
Current legal mechanisms are built to penalize after harm has already been done. We fine companies for misdeeds once the damage is baked in, reputational or otherwise. But whatever nasty sh*t they’ve done is already DONE. The chemicals are already in the water. The forever plastic is already in your brain. A remedial payment for your nuclear fallout cancer is NOT better than never getting cancer to begin with.
Also — fines are only a punishment if you’re broke.
Anthropic just raised another $13 billion four days ago.
This adds to my feeling that this settlement isn’t cause for celebration. It’s a calculated expense — a rounding error on the balance sheet of a company playing the long game.
Please don’t get me wrong; from what I can tell, Anthropic is likely the most ethical of our current Big Tech A.I. options. But they are playing by the same structural rules of capitalism that we all are, which means the deck is always stacked in favor of those with capital. Anthropic needs capital to play the game. And very few other organizations have the kind of cash it takes to play at this level of settlement.
In this diseased game, $1.5 billion today for untold billions tomorrow is actually a pretty good deal.
A False Sense of Victory for Creators
Yes, authors getting compensated matters. But unless this settlement becomes the foundation for real reform, it’s just a band-aid on a bullet wound. It risks giving creators the false sense that justice has been served, when in fact what just happened is this:
The future value of creative work was purchased for pennies on the dollar.
Anthropic and other A.I. companies are securing their right to continue profiting from the labor of artists, writers, and musicians forever — and even assigning a price tag to do so — while making one-time payments that feel fair because we’re not yet living in the future where A.I. is printing trillion-dollar GDP gains.
But these folks know what’s coming. They’re locking in the upside now, while everyone else is still arguing about consent and copyright.
If you think I’m misunderstanding what’s happening here, I hope you’ll get in touch and let me know what I’m missing.
But we also have another problem to contend with… and it’s even badder.
The Bigger Threat Isn’t Copyright. It’s Collapse.
The even bigger issue here is the massive economic displacement A.I. is already starting to unleash.
Middle-class jobs. White-collar careers. Creative professions. Knowledge work.
We are shockingly unprepared for what’s coming.
Nobody knows where all those displaced people will go. We haven’t built re-training pipelines. We haven’t even named the problem, much less organized around a solution. Meanwhile, the benefits of A.I. are flowing straight up the pyramid, into the hands of tech billionaires who already own the means of production and the training data.
This is not a small problem. This is a tsunami.
And our leaders here in the United States, in particular, are sitting on a proverbial beach taking selfies and literally rebranding things like a 3-year-old playing with toys, blatantly ignoring the fact that there is NO economic safety net in place for the scale of disruption we’re about to experience.
We Need A Shared Wealth Fund
If we had actual wisdom in our leadership, here’s what they’d be pushing right now: a Shared Wealth Fund that captures a portion of A.I.-generated economic value and redistributes it to the public.
This isn’t a crazy idea, and it’s not “socialism” — it’s already happening… in one of our red-leaning states, even.
In Alaska there’s something called the Alaska Permanent Fund, which pays an annual dividend to Alaska residents so they can all share in the wealth generated by the state’s oil and mineral revenues.
We need something like this, but national, connected to A.I. value creation, and scaled to the size of the A.I. revolution.
Every American gets a dividend.
Not welfare.
Not a handout.
This is a return on investment for being part of the society that enabled this technology to exist — through the taxes paid by our parents and grandparents, through public education, through taxpayer-funded tech infrastructure, through open internet protocols, and yes, through the creative works and labor that trained these machines.
This Fund would be future-focused, forward-leaning — it would be about what we can create together for the future, not about chasing “back-pay” and restitution.
This fund could bankroll:
- Massive re-training and re-skilling efforts
- Creative and entrepreneurial grants
- Public infrastructure modernization
- Universal dividends that restore some balance to our currently insane income inequality (this is what’s happening in Alaska, btw)
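
To give a rough sense of what that last item could look like in practice, here’s a back-of-the-envelope sketch in Python. Every figure in it is a made-up assumption for illustration only: the amount of A.I.-attributable economic value, the capture rate, the payout rate, the population. None of it comes from the Alaska fund, the McGarvey bill, or any forecast.

```python
# Back-of-the-envelope sketch of a per-person A.I. dividend.
# Every number below is a made-up assumption for illustration only,
# not a figure from any actual bill, fund, or economic forecast.

def annual_dividend(ai_value_added, capture_rate, payout_rate, population):
    """Rough per-person payout if a slice of A.I.-generated economic value
    flowed into a shared fund and a portion were distributed each year."""
    fund_inflow = ai_value_added * capture_rate   # share of A.I. value the fund captures
    payout_pool = fund_inflow * payout_rate       # portion of that distributed this year
    return payout_pool / population               # split evenly across everyone

# Hypothetical inputs: $1 trillion of A.I.-attributable value in a year,
# a 10% capture rate, 80% of inflows paid out, ~335 million U.S. residents.
print(f"${annual_dividend(1e12, 0.10, 0.80, 335e6):,.0f} per person per year")
# -> roughly $239 per person with these made-up inputs
```

The specific numbers aren’t the point; the point is that the per-person payout scales directly with how much of the A.I. upside we decide to capture in the first place.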
What’s most fascinating about this idea is that there are actually discussions currently happening about this exact kind of thing!
In April, Congressman Morgan McGarvey from Kentucky introduced an exploratory bill to create a U.S. Sovereign Wealth Fund that could eventually do something exactly like this. And in his bumbling way, Trump actually sees something useful in this realm too; he signed an executive order about it back in February.
But did you hear about either of these things?
Are the people behind these two plans even aware of each other?
Has anyone made the connection that there’s clearly a bipartisan-supported, existentially-important issue here?
I wasn’t aware of either of these efforts until I did the research for this article.
Right now, it seems to me this issue is almost entirely off the radar. Instead, we’re going to argue about who owns which paragraphs from which book while the real systemic wealth transfer is still happening in real time.
We’re Making A Future We Don’t Want
Anthropic’s $1.5B payout is not justice. It’s a down payment on a future they intend to own.
If we don’t build a new system that shares the wealth A.I. is going to create, we’re not just going to see some job losses — we’re likely going to watch what’s left of the middle class vanish entirely.
And if this happens we’ll only have ourselves to blame, because we were too busy watching the smokescreen to realize the real heist is happening right now in plain sight.

