
Why A.I. HAS To Work (Who Owns The Robots?)

🚨 If you’d rather WATCH or LISTEN TO this article, you can!

📺 YouTube // 🎧 Spotify // 🍏 Apple

Hello Tomorrow is also now on Substack! If you’re on Substack, click here.


Today’s wrong question is: “Will A.I. work?”

I’ve been hearing versions of this a lot lately. Smart people — people I respect — dismiss A.I. as “just another tech innovation” that will destroy some jobs but will “absolutely” create others. Others call it a “bubble” that will either fizzle out or trigger a massive market crash.

But “Will A.I. work?” is still the wrong question. 

The better question is: “What happens if it doesn’t?”

Because here’s what I’ve come to believe after spending the last several episodes going deep on money, debt, and the global financial system: we don’t really have a choice about whether A.I. works.

A.I. has to work because we need it to.

The Hole We’ve Dug

Two weeks ago we talked about how money is a promise, and how the promises we make with government-created money only hold up if we create real value with them. I called these “apple-making” investments versus “apple-wasting” investments because we created a fictional town with 10 apples to learn about inflation. (If you haven’t read the Money episode, please do; it’s long but worth it, I promise.)

What are “apple-making investments”? Roads, schools, clean energy, healthcare — things that grow productive capacity and make the future substantively better.

What are “apple-wasting investments”? Tax cuts for people hoarding money (ahem, billionaires), endless wars, interest payments on old bad debt, excessive global military empires — that’s real resources consumed, real money created, zero new productive capacity.

Here’s the sad reality about the last several decades of the American economy: we have been spectacularly, historically, almost heroically good at apple-wasting.

The U.S. has spent roughly $8 trillion on post-9/11 wars. We’ve cut taxes on the billionaire class repeatedly while wages stagnated. We spend nearly $1 trillion every year on a bloated military industrial complex that hasn’t passed an audit. We’ve accumulated nearly $40 trillion in national debt, much of it funded by promises about future productivity that haven’t yet materialized.

Meanwhile, the petrodollar — the invisible architecture that lets us export dollars instead of dealing with the consequences — is showing serious cracks. All of it is pointing to a world where our free lunch is ending.

The bill is coming due.

The most plausible path out — without a catastrophic reckoning — is a genuine, significant, step-change in productivity. Higher taxes, spending cuts, debt restructuring… none of those are politically survivable at the scale we need. Which is why everyone, whether they admit it or not, is quietly betting on a productivity miracle.

Which brings us to A.I.

The Only Real Candidate

A.I. is the most plausible candidate we’ve got for this kind of productivity leap. Look around. What else is big enough?

Clean energy is transformative, but it isn’t a productivity multiplier across every sector simultaneously. Biotech is promising but too narrow. Robotics is real but probably too slow (and probably also dependent on A.I.).

As far as I can tell, A.I. is the only option in sight that has the potential to multiply productive capacity across essentially every domain at once: medicine, law, engineering, education, logistics, science, software, manufacturing! And it could do it simultaneously… and possibly within a decade.

That’s not hype. That’s just the actual scale of what general-purpose technologies do when they mature. We’ve seen a few things like this before. The printing press changed the entire world. The steam engine restructured the entire economy. Electricity was the operating system the 20th century ran on.

A.I. is that kind of general-purpose technology. 

The legitimate question isn’t whether it will be transformative. It’s how transformative it will be, how fast that transformation will happen, and — most importantly — who will reap the benefits.

That last part is where it gets complicated.

The Logical Endpoint

History teaches us some other important things about these kinds of productivity step-changes.

We know they work. We know they generate enormous new wealth. And we know that, usually, that wealth flows almost entirely to whoever owns the technology.

The industrial revolution was the greatest productivity leap in human history so far. It also created the most miserable working conditions humans had experienced in centuries — child labor, 16-hour days, dangerous factories, criminally-low wages — while a small class of owners accumulated unprecedented fortunes.

It took decades of labor organizing, regulation, and political struggle before the productivity gains of industrialization reached workers in the form of shorter hours, safer conditions, and rising wages.

The computing revolution of the last 40 years followed a similar pattern. Productivity gains were real and enormous. But wages for most workers decoupled from productivity in the 1970s and never reconnected. The gains went to capital. To shareholders. To the people who owned the software and the platforms and the data.

That’s two for two.

Third time’s the charm, right?

Let’s hope so, because this time the productivity step-change is potentially bigger than both previous ones combined. And we’re starting from a position of already historically extreme inequality.

If A.I. follows the historical pattern — gains flow to capital, workers get the disruption without the dividend — we won’t just get more inequality. We’ll get the complete economic collapse I talked about in “Who Buys Your Stuff, Robots?” because no one can afford anything. 

If A.I. eliminates or even substantially degrades tens of millions of jobs while the productivity gains accumulate at the top, the consumer base the entire economy depends on gets hollowed out, the billionaires’ foundation becomes a sinkhole, and the system destroys itself.

That’s not a distant theoretical doomsday scenario. That’s the logical endpoint of the current trajectory.

The Math Problem

Now let’s put this back in the context of finances.

The U.S. has nearly $40 trillion in national debt. We have a petrodollar system under pressure. We have a military empire that costs almost a trillion dollars a year to maintain — and if you were curious, that’s more than what the next nine countries on earth spend on their militaries… combined. Oh and by the way, the interest payments on our debt now exceed that insane defense budget.

The most plausible path out of genuine financial disaster is productivity growth. Real, sustained, significant productivity growth that expands the economy faster than the debt grows.

A.I. could deliver that. The models suggest it could add trillions in global GDP growth.
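To make the stakes concrete, here is a toy compounding sketch. All numbers are hypothetical round figures chosen for illustration, not forecasts: the point is that the debt-to-GDP ratio only improves when the economy compounds faster than the debt.

```python
# Toy illustration, not a forecast: debt-to-GDP shrinks only when the
# economy compounds faster than the debt. All rates are hypothetical.

def debt_to_gdp_path(gdp, debt, gdp_growth, debt_growth, years):
    """Return the debt-to-GDP ratio for each year under constant growth rates."""
    path = []
    for _ in range(years):
        gdp *= 1 + gdp_growth
        debt *= 1 + debt_growth
        path.append(debt / gdp)
    return path

# Rough starting point: ~$30T GDP, ~$40T debt (a ratio of about 1.33).
stagnant = debt_to_gdp_path(30e12, 40e12, gdp_growth=0.02, debt_growth=0.04, years=20)
boosted = debt_to_gdp_path(30e12, 40e12, gdp_growth=0.05, debt_growth=0.04, years=20)

print(f"Debt/GDP after 20 years at 2% growth: {stagnant[-1]:.2f}")  # keeps climbing
print(f"Debt/GDP after 20 years at 5% growth: {boosted[-1]:.2f}")   # falls below today's
```

The sketch’s whole point is the crossover: a few percentage points of sustained productivity growth is the difference between the ratio compounding against us and compounding for us.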

But — and this is the part that stresses me the F out — if those gains concentrate at the top, they won’t help with the actual problem. A billionaire’s wealth going from $100 billion to $500 billion doesn’t expand the consumer economy. It doesn’t pay down public debt. It doesn’t rebuild the middle class. It doesn’t grow any new apples at frickin’ all.

For the purpose of filling the hole we’ve dug, concentrated productivity gains at the top are as economically useless as NO productivity gains.

What we need are distributed productivity gains. Gains that flow through the economy broadly enough to actually expand the consumer base, increase tax revenues, and stabilize the financial system.
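Here is a toy circulation model of that claim. The “marginal propensity to consume” (MPC) values below — 0.05 for gains parked at the top, 0.80 for gains spread across households — are illustrative assumptions, not measured figures:

```python
# Toy spending-multiplier sketch (hypothetical MPC values): each dollar of
# gain is partly re-spent, and that re-spending is itself partly re-spent,
# so a higher MPC compounds into far more total circulation.

def total_circulation(gain, mpc, rounds=50):
    """Sum the spending generated as a gain is re-spent at a constant MPC."""
    total, spend = 0.0, gain * mpc
    for _ in range(rounds):
        total += spend
        spend *= mpc
    return total

concentrated = total_circulation(1e12, mpc=0.05)  # $1T parked at the very top
distributed = total_circulation(1e12, mpc=0.80)   # $1T spread across households

print(f"Spending from a concentrated gain: ${concentrated / 1e12:.2f}T")
print(f"Spending from a distributed gain:  ${distributed / 1e12:.2f}T")
```

Same $1 trillion gain, roughly seventy-five times more economic activity when it lands where it gets spent — the “distributed productivity gains” argument in one number.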

Which means the right question isn’t “Will A.I. work?”

The right question is: “Will A.I. work for everyone?”

Right now, nothing in our current system guarantees that it will. In fact, everything in the current incentives points the other way.

So the most important question of the next decade might actually be: “Who Owns The Robots?”

The Shared Wealth Fund

What could we possibly do about this?

My argument here is actually conservative, in the original sense of that word, meaning: it conserves the system rather than blowing it up.

The first thing to understand is that the public — you and me and our parents and their parents — we funded the research that made A.I. possible.

I’m not being rhetorical. This is literally true. The internet: DARPA. The foundational neural network research: public universities and government grants. The training data that cracked open modern deep learning: an NSF-funded researcher at Stanford. Voice recognition: we funded the project that became Siri, and every A.I. assistant that’s followed.

Decades of publicly funded basic research, handed to private companies to privatize the gains.

This has been Capital’s playbook for a while, I get it.

But we have to realize: A.I. is different.

Past tech revolutions were sectoral — they killed specific job sectors while creating new adjacent ones. The printing press hurt scribes but created publishers. The railroads killed canals but built new industries… and entirely new cities. The pattern held for two centuries: disruption always brought creation too, because every previous disruption created an adjacent labor market to absorb the displaced. You lost your job, learned a new skill, crossed the street, got a different job. The old sector may have died, but you survived. None of it happened overnight, and it was still painful, but that’s how it worked.

A.I. is the first disruption where the adjacent labor market is itself being automated simultaneously.

We’ve never seen a tech revolution this omni-sectoral — this one is coming for everything, everywhere, all at once. Well, white collar first, but robotics is coming faster than we think.

When we combine these facts, there’s one logical conclusion: the public needs to have a shared ownership claim on some portion of A.I. returns.

Alaska figured this out with oil. Every Alaskan resident receives an annual dividend from the Alaska Permanent Fund — a share of the state’s oil revenues. It’s not charity. It’s a dividend as a citizen from a commonly owned resource.

Norway built the world’s largest sovereign wealth fund on the same principle. Over a trillion dollars, owned by the Norwegian people, generating returns that fund public services and retirement for every citizen.
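The arithmetic behind such a dividend is simple. The fund size, payout rate, and population below are hypothetical placeholders for illustration (Alaska’s real formula is more involved, averaging fund earnings over several years):

```python
# Back-of-the-envelope dividend math: pay out a fixed share of the fund
# each year, split evenly among residents. All inputs are hypothetical.

def annual_dividend(fund_value, payout_rate, population):
    """Per-person dividend from paying out a fixed share of the fund annually."""
    return fund_value * payout_rate / population

# Hypothetical U.S.-scale A.I. fund: $2T fund, 4% annual payout, ~335M residents.
per_person = annual_dividend(2e12, 0.04, 335e6)
print(f"Annual dividend per resident: ${per_person:,.0f}")
```

A fund at that scale pays out only a couple hundred dollars per person per year — the mechanism itself is trivially simple; the real question is how large the fund gets capitalized.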

The precedent exists. The logic is sound. The only thing missing is the collective political will to do it with A.I.

An A.I. dividend — some mechanism by which the productivity gains of A.I. are partially distributed to the citizens that funded the foundational research — isn’t “redistribution.” It’s a dividend on a public investment.

And more importantly, economically-speaking this might be the only thing that prevents the consumer collapse that makes the entire system unravel.

The math is simple: distributed gains keep the economy circulating. Concentrated gains don’t. If we want A.I. to actually fill the hole our leaders have dug, we need them to create a mechanism that distributes the gains.

A shared wealth fund is the least radical version of that mechanism I can possibly imagine. We all helped build this thing, and we should all own parts of it.

The Three Future Paths

Regular readers know I think about the future in terms of three paths.

Path Zero is the one we can’t let happen — where we just keep going on the current trajectory, incentives never change, extraction wins, and the system eventually destroys the host it depends on. The Don’t Look Up future. This path sucks, and we can’t allow it.

Path A is survivable — simpler, more local, lower complexity, but genuinely brutal for a lot of people on the way there.

Path B is our path to Star Trek — abundant clean energy, A.I. coordinating complexity at scale, a civilization organized around flourishing rather than extraction.

At this point, some people ask me: “But what if A.I. is a ‘bubble?’” Well, historically, general-purpose technologies have never been killed by their bubble. Railroads survived their bubble. The internet survived dot-com. Electricity, automobiles, radio — the financial fantasies collapsed but the “new tech” infrastructure remained. The technology always survives. Someone picks it up and keeps building.

So “Will A.I. work?” is actually the wrong question for another reason. It’s not just the wrong question because of the stakes. It’s also the wrong question because the answer is almost certainly “Yes.” At least in some form, to some degree, A.I. is going to work. The research and the capital and the infrastructure are too far along for the technology itself to disappear.

This means the real variable for our three Paths was never actually the technology itself.

The real variables are distribution and ownership.

Who owns the gains when A.I. delivers? 

That is the question that decides which Path we get to be on. 

A.I. that works but concentrates gains at the top? That’s Path Zero. The consumer base collapses and the system destroys itself, with A.I. just accelerating the race to the bottom.

If we’re forced into simplification before the gains can be distributed, that’s Path A. The technology survives — it always does — but the disruption reshapes the economy faster than the dividend arrives. I know, for those of us who have time to listen to podcasts, simpler sounds really nice, but remember: this path is remarkably hard for potentially billions of people.

A.I. that works AND distributes gains broadly? That’s the accelerant for Path B. The productivity step-change actually solves the hole we’ve dug. The consumer base stays intact. The system doesn’t eat itself. This path also allows us to create infrastructure to help lift, I’m going to say, the rest of the globe out of poverty.

So here’s the reframe:

We don’t need A.I. to work.

We need A.I. to work for everyone.

Those aren’t the same thing. And right now, nothing in our current system guarantees the second one. In fact, everything in our current system — the incentive structures, the ownership patterns, the quarterly earnings obsession — all points the other way.

So it turns out, the people dismissing A.I. as “over-hyped” aren’t wrong that there’s hype, but they’re very wrong about the “over” part. The importance of A.I. in our current global story is actually under-hyped. 

We need A.I. to work, and we need it to work for everyone.

The technology is coming. The disruption is coming. The gains are coming.

The only question is: coming for whom?

Leadership Lens

My leader friends, here’s today’s Leadership Lens.

First: A.I. adoption isn’t optional, but your strategy matters enormously. Organizations that use A.I. as a cost-cutting tool — an excuse to reduce headcount and extract more margin — are accelerating the consumer collapse their businesses depend on. Use whatever influence you have at your level to build a strategy that uses A.I. to grow and expand, not shrink and cut.

Second: your workers are carrying ambient societal anxiety about this, whether they’re talking about it or not. The leaders who name it directly — who say out loud “here’s what’s happening, here’s what it means for us, here’s how we’re thinking about it” — will retain trust through the disruption. The ones who stay silent will lose it. You don’t need all the answers. You need the courage to have the conversation. If you want more on this topic, read last week’s article.

Third: Pay attention to what happens to the space A.I. creates in your organization. The temptation — and the institutional default — will be to immediately fill any recaptured time with more work. Resist that. The capacity A.I. frees up for your people isn’t a productivity bonus to be re-spent. It’s an opportunity to do the thing organizations are most starved for right now: sense-making. Understanding the moment. Building judgment. Protect that space. It’s where wisdom lives. And wisdom is the meta-skill your organization needs most for the next decade.

The Optimistic Rebellion

Here’s our Optimistic Rebellion for this week.

Find out where your congressional representative stands on A.I. and productivity distribution. Not on A.I. regulation in the abstract. Not on tech and innovation generally. Specifically: do they understand that the gains from A.I. need to circulate through the economy broadly, or do they think “the market will sort it out on its own”?

Most of them haven’t thought about it at all. That could be good! It means the window to shape their thinking is still open.

So call or email them. Ask what they know. Share this episode if you want. The shared wealth fund debate is coming, and we need our leaders to properly understand the stakes. Go to congress.gov/members/find-your-member, enter your zip code, find your reps, and click contact. I’ll put a sample email you can copy/paste below.

The way all this works with A.I. has not been decided.

But that’s not a reason for comfort. That’s a reason for urgency.

The promises that hold the world together are being renegotiated right now — monetarily, geopolitically, technologically. All at once. Whether we participate in that renegotiation or not, it’s happening. 

But to get a future we actually want means we must show up with a vision for what the new promises need to look like. That starts with us demanding that A.I. works for all of us. Not someday. Now… while we still have time to build the new systems that make this possible.


🏛️ COPY/PASTE LETTER FOR YOUR REPRESENTATIVE

Subject: A direct question about A.I. and who gets the gains

Dear [Representative Name],

I’m a constituent with a specific question.

The research that made A.I. possible was largely funded by taxpayers. DARPA built the internet. NSF grants funded the Stanford research behind modern deep learning. A $150 million government project became Siri. WE paid for the foundation, but private companies are set up to capture ALL the returns.

The U.S. is carrying nearly $40 trillion in national debt. Our interest payments now exceed our entire defense budget. The only realistic path out of this crisis is broad-based productivity growth, but it needs to flow through the whole economy, not just to the top. A.I. could deliver that. But ONLY if its gains are distributed.

My direct question: do you support any mechanism — a shared wealth fund, an A.I. dividend, a public equity stake — that ensures A.I. gains circulate through the economy rather than accumulate at the top?

Alaska has run this model with oil since 1982. Norway built the world’s largest sovereign wealth fund on the same principle. This is not radical. It’s practical — and arguably the only mechanism that prevents consumer collapse.

I am NOT asking where you stand on A.I. regulation in general. I am asking whether you understand that distribution is the most consequential A.I. policy question of this decade and whether you have a plan.

For more context, please check out this podcast: https://joshallan.com/2026/04/07/who-owns-the-robots/

Please respond with a specific answer, not a form letter.

A concerned citizen,

[Your name]

[Your city and zip code]
