Why I've stopped worrying about AI (for now)
Plenty of people will tell you it's going to end badly. I found some who disagree.
Last week, in a mild panic, I sent emails to my lawyer, my insurer and my mortgage broker with the same question: “in a future where artificial intelligence disrupts the economy to an extent that my wife and I can’t earn real money anymore, and house values plunge due to nobody in the country/world having an income, what options do I have now, in June 2025, for making sure our family home is protected?”
They all sent nice replies and did their best not to act like I was being a weirdo. Nobody had any breakthrough suggestions. It turns out my one strand of hope - income protection insurance - only works for illness, not technological obsolescence.
For all of human history there have been backup plans (we’ll retrain/we’ll sell up/we’ll borrow money until things improve), but in the above scenario there is no backup. AI-powered drone death squads may be terrifying, but having hundreds of thousands of dollars of debt and being unable to find a buyer for your assets or your skills is its own kind of scary.
How close might such a future be?
“We are in February 2020,” said a panellist on Real Time about AI recently, meaning that most of us are walking blind into a world we will not recognise and aren’t prepared for - a world some of us will not survive.
AP reports that joblessness among university degree holders aged 22 to 27 is now higher than among the general population; that seems like a pretty strong signal. In the New York Times last month was an op-ed titled “I’m a LinkedIn Executive. I See the Bottom Rung of the Career Ladder Breaking”. To drive down costs, the CEO of Shopify recently issued a memo: “before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI”.
Perhaps we’re in a moment summed up by my wife’s favourite quote, usually attributed to William Gibson: “the future is already here, it’s just unevenly distributed”.
*******************************
“People believe that which they fear and that which they desire”
I can’t now remember where I read this quote and I can’t find an origin for it online, which means it now only exists in my head and perhaps in an old book nobody will read.
I first encountered the idea maybe 30 years ago and I often think about it. Stories travel faster and louder when their engines are running on the superfuels of terror and wishful thinking. We’re afraid that immigrants will destroy our way of life so we believe politicians who tell us our families are under threat. We want our parenting to be faultless so we ignore any evidence that we’ve made a bad decision.
With an issue like AI, which has so much commentary around it, I worry that the two emerging narratives are all fear and desire. You pick which one you’re going to believe in, then you stick with it.
As noted above, there is plenty to be afraid of, but what does the desire story look like? I think it’s the wish that nothing will change, that life will continue as normal and that you, an underrated genius, have access to the sort of wisdom that normal people and experts can’t comprehend: the risks of AI are overstated/we’ve been through this all before/when disruption comes you just have to adapt and things will work out fine.
I mention all this because I’ve recently read some ideas that make me less … existentially pessimistic about AI, in the medium term at least. But do I believe these ideas because I want them to be true? Or are they objectively comforting?
Well, comfort is comfort. Here are a couple of smart people who think we’re panicking unnecessarily:
Paul Krugman is a former White House economist and NY Times columnist. Writing on the graduate unemployment problem he says:
I’d mostly discount the idea that this is largely about AI displacing educated workers. That might happen eventually, but replacement of workers by AI (or the complex number-crunching that we have, misleadingly, been calling AI) is probably too new a phenomenon to explain such a drastic change.
A more likely story, as many have pointed out, is that we’re looking at one consequence of an economy that has been “frozen” by uncertainty, largely uncertainty about U.S. government policy.
…
So what does a business do in the face of this kind of uncertainty? It tries to avoid making commitments that it may soon regret.
And hiring recent college graduates is a significant commitment. Whatever their formal training, young people need to acquire real-world experience to be effective in their new jobs. Employers need to be willing to spend time and money while new hires gain this experience. And in this uncertain environment, that’s not a commitment employers are willing to make.
I mean, it’s not shout-it-from-the-rooftops good news, but it’s at least a reprieve from the idea that the robots are taking over tomorrow.
And then there is Cal Newport: Georgetown computer science professor, New Yorker writer and the biggest brain in podcasting by some margin. He thinks the idea that Super AI is just around the corner is nonsense.
First he points out the limited usefulness of the (albeit impressive) AI we already have:
When generative AI made its show-stopping debut a few years ago, the smart money was on text production becoming the first killer app. For example, business users, it was thought, would soon outsource much of the tedious communication that makes up their day — meeting summaries, email, reports — into AI tools.
…
It’s becoming increasingly clear, however, that for most people the act of writing in their daily lives isn’t a major problem that needs to be solved, which is capping the predicted ubiquity of this use case. (A survey of internet users found that only around 5.4% had used ChatGPT to help write emails and letters. And this includes the many who maybe experimented with this capability once or twice before moving on.)
Sidenote: has anybody else had the experience where somebody sends you an email and leaves AI fingerprints on it? Not just the tone and language, which are still unmistakably unhuman, but I’ve twice had emails that begin “sure, here is an email you can send to Jesse which is warm and friendly, yet professional”. One was from somebody who was learning English, which makes total sense. The other was from a friend I was surprised would bother running such a simple human interaction past a chatbot.
Back to Cal Newport, who goes through various other uses of current AI - some game-changing, some underwhelming - then moves on to what we’re told will be next. First is agentic AI, which you can ask to perform tasks for you, including using third-party software. He says that while early progress towards this moved fast, there’s a scaling problem that has foiled those who thought agentic AI would arrive as a matter of course:
For a while this proved true: GPT-2 was much better than the original GPT, GPT-3 was much better than GPT-2, and GPT-4 was a big improvement on GPT-3. The hope was that by continuing to scale these models, you’d eventually get to a system so smart and capable that it would achieve something like AGI, and could be used as the foundation for software agents to automate basically any conceivable task.
More recently, however, these scaling laws have begun to falter. Companies continue to invest massive amounts of capital in building bigger models, trained on ever-more GPUs crunching ever-larger data sets, but the performance of these models stopped leaping forward as much as they had in the past. This is why the long-anticipated GPT-5 has not yet been released, and why, just last week, Meta announced they were delaying the release of their newest, biggest model, as its capabilities were deemed insufficiently better than its predecessor.
…
I once said that the real Turing Test for our current age is an AI system that can successfully empty my email inbox, a goal that requires the mastery of any number of complicated tasks. Unfortunately for all of us, this is not a test we’re poised to see passed any time soon.
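A sidenote for the technically curious: the “scaling laws” Newport mentions are power laws, and power laws have diminishing returns built in. Here’s a toy sketch of that idea (my illustration, not Newport’s, and the constants are invented for the sake of the example), showing how each tenfold increase in compute buys a smaller absolute improvement than the last:

```python
# Toy illustration (not real data): if a model's "loss" falls as a power
# law in compute, L(C) = a / C**b, then each 10x increase in compute buys
# a smaller absolute improvement than the previous 10x did.
a, b = 10.0, 0.1  # hypothetical constants, chosen only for illustration

def loss(compute: float) -> float:
    """Hypothetical power-law loss curve: lower is better."""
    return a / compute ** b

for exponent in range(2, 7):
    c = 10.0 ** exponent
    gain = loss(c / 10) - loss(c)  # improvement bought by the latest 10x
    print(f"compute 10^{exponent}: loss {loss(c):.3f}, "
          f"gain from last 10x {gain:.3f}")
```

None of this proves scaling has stalled; it just shows why “bigger keeps getting better” was always going to flatten out in absolute terms.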
Then there is superintelligence, which is the one everybody is worried about. As “Herald of the Apocalypse” Daniel Kokotajlo told Ross Douthat recently, this is AI which can develop its own motivations and evil plans while being smart enough to trick us into thinking it’s still working in our interests.
Cal says all the predictions around this sort of intelligence skip over the big question of how it would actually happen.
The current energized narratives around AGI and Superintelligence seem to be fueled by a convergence of three factors: (1) the fact that scaling laws did apply for the first few generations of language models, making it easy and logical to imagine them continuing to apply up the exponential curve of capabilities in the years ahead; (2) demos of models tuned to do well on specific written tests, which we tend to intuitively associate with intelligence; and (3) tech leaders pounding furiously on the drums of sensationalism, knowing they’re rarely held to account on their predictions.
But here’s the reality: We are not currently on a trajectory to genius systems. We might figure this out in the future, but the “unlocks” required will be sufficiently numerous and slow to master that we’ll likely have plenty of clear signals and warning along the way. So, we’re not out of the woods on these issues, but at the same time, humanity is not going to be eliminated by the machines in 2030 either.
That last sentence made me feel happy. You could say I desired to believe it. Not everyone will be convinced, but I’m hoping the ones who think he’s got this wrong are suffering from their own fear-based delusions. Me, I can’t pay down my mortgage any faster, so while the world continues to change at mysterious speed, I’m clinging to hope and faith.
Our two sons are in the 22–27-year-old age bracket. They work at an Auckland-based startup that uses AI to create research tools to assist people with legal and tax questions. Their mission is not to replace people with technology, but to make experts more productive and to democratise access to legal and tax information. While some are running around screaming that the sky is falling, others are quietly finding ways to use AI sensibly and responsibly to support people in their jobs. We don’t have to kill the beast if we can learn to ride it.