Chances are good you’ve heard of Sam Altman, the CEO of OpenAI (creators of ChatGPT).
A couple weeks ago, Sam published an inspiring article on his website called The Intelligence Age. There are a number of intriguing statements throughout, including:
- “I believe the future is going to be so bright that no one can do it justice by trying to write about it now”
- “Deep learning works, and we will solve the remaining problems.”
- “Astounding triumphs — fixing the climate, establishing a space colony, and the discovery of all of physics — will eventually become commonplace”
I especially appreciated his warning:
If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.
Frankly, I found the whole piece quite compelling… until the last paragraph.
There Sam says:
Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.
If you read the rest of the article (and I hope you do), you’ll see Sam spends a lot of time lauding prosperity and its benefits — especially, of course, the prosperity that will soon be birthed via the magic of AI.
But that first sentence of the last paragraph is terribly out of touch… and it’s also hiding a dangerous mindset about the future.
I don’t know of any specific research on today’s workers saying they’d rather be lamplighters, but what we do have is frankly an insane amount of data on just how many people are miserable at work.
Spoiler: it’s almost everyone.
The reports use different language to describe this crisis: disengagement, stress, or my favorite, a Deloitte study from 2014 (which I cite in my TEDx) declaring that 87.7% of people do not have passion for their work. But they're all describing the same phenomenon.
Nearly every study you could reference over the past 20 years will tell you the same thing: people mostly hate work.
And then we have Bullshit Jobs.
David Graeber’s astonishingly thorough anthropological treatise estimates that somewhere between 40 and 50% of people exist in some sort of purgatorial job-hell, where they are paid quite well to do jobs they know in their souls do not provide value to society.
In other words: if Graeber is to be believed, an absurdly huge number of today’s jobs ARE “trifling wastes of time.”
I can’t prove this, of course, but I would personally place a hefty bet that at least a few of the people self-identifying into the “disengaged,” “stressed,” or “I have a bullshit job” categories would actually VERY much prefer to be lamplighters.
Today’s workers are mostly under-appreciated and under-represented. They’re also grossly underpaid: in a study published by the Associated Press in June 2024, at half of the companies surveyed it would take a middle-level worker nearly 200 years to earn what their CEO made in a single year.
So this is the scary part of Sam’s piece — and maybe his thinking in a larger sense, too.
A hundred years ago, our leading economists were also predicting greater prosperity on the horizon. In an essay written in 1930, John Maynard Keynes foretold that, thanks to new technologies, his grandchildren would work only 15 hours a week.
But new technologies do not necessarily produce a better life. Sam seems to understand this, to a point: he acknowledges that prosperity doesn’t automatically equate to happiness, and that “there are plenty of miserable rich people.” Still, to my ears he ends up mostly blind to the size and scope of the real danger.
Right now, across the world, with AI we are playing a very familiar zero-sum game — a multipolar trap that is, at its core, an arms race. Sam seems to believe that the magical prosperity produced by AI will itself be enough to somehow flip our current trajectory on its head so we can “again focus on playing positive-sum games.”
But if history has anything to teach us, this will not be how it works. Sam’s narrative on this point is terribly shallow, full of magical thinking of the most insidious kind.
By glossing over the realities of the arms race at hand, he’s sowing the seeds of a dangerous false hope, because this magical “flip” doesn’t happen in the real world. Instead, as we have seen with most other technological advances, AI will almost certainly run into a Jevons Paradox: making AI more capable and more efficient won’t reduce our demand for it; it will simply push our appetite for it even higher.
Put another way: an increase in “prosperity” will not be the thing that curbs our desire for “more.”
A new organizing story must come first.