Minimum Standards for Taking AI Seriously
Disaster planning for our economic future.
Is AI going to revolutionize the world, upend the economy, and propel us into an unprecedented age of abundance, or, alternatively, dystopia? Or is it all just a big bubble, a historic financial folly, a mania built on an overhyped fancy pattern-matching machine?
I do not know. But I can say confidently that the outcome of the AI era will be somewhere on the spectrum between the above options. The precise answer depends not only on technical matters of computing progress that I am not qualified to assess, but also on the chaotic swirl of global events that make the future difficult for anyone to predict. Partly because of this uncertainty—and also because of existing political tribalism, and also because of the history of Silicon Valley and Wall Street overhyping things for their own benefit, and also because of the nature of capitalism—the public discussion around how we should collectively prepare for our AI future has become polarized in an unhelpful way. On one side we have people who have a ton of money invested in AI saying “it will change everything” and on the other side we have people who hate those types of people saying “I doubt that, because you guys are greedy, untrustworthy liars with an enormous personal stake in getting everyone to believe the hype.”
Let me point out that both things can be true. A thick cloud of hustlers, grifters, and the greediest monsters on earth surrounds the AI industry like flies surround a butchered corpse. Sure. This has been true with all new technologies. At the same time, notwithstanding the great volume of bullshit issuing forth from this cloud of self-interested actors, the underlying technology itself—railroads, electricity, the internet, whatever—does often have profoundly transformative effects on the world. (Even if it takes longer and unfolds in a different way than the grifters said.) This, in fact, is the most likely path for AI. The good news is that, unless you are a tech investor or tech journalist or AI company engineer, the precise specifics of how and when every advance occurs and who wins the race to each specific benchmark and how much money they make off of it… do not really matter. What matters to the vast majority of people in America and around the world is: How will AI change the economy and the distribution of power in our society? And, if it is going to fuck us up, how can we take wise steps to prevent or mitigate that?
This is the interesting and productive conversation to have, and we—progressives, the left, people who care about the common good—can have it more productively if we don’t allow our dislike for the class of people who control the AI industry to deceive us into believing that we can disregard everything they say about where this is all going. There are similarities with the way the debate over YIMBYism played out: YIMBY insights took longer to permeate the left than they should have, largely because lefties disliked the sort of people making many of the YIMBY arguments. (Today, Zohran Mamdani is promoting YIMBY policies.)
It’s okay to think that people are assholes and also to do your best to evaluate their arguments on the merits. Many greedy assholes still know things!
Complicating our judgments about AI’s impacts is the fact that it is a sort of “everything” technology, one that will affect different aspects of society in different ways and to varying degrees. In labor, inequality, and the distribution of wealth, the things I focus on most, AI’s role over the next decade may range from “a minor factor” to “the single most important factor, dwarfing all others.” How, then, are we supposed to think about wise policymaking today?
That’s easy. This is disaster planning. You hope for the best and plan for the worst. Uncertainty over whether a hurricane will have zero impact on you or destroy your city should not prevent you from taking steps to minimize as much potential damage as possible.
The same thing is true with our planning for AI’s possible impact—if we make serious plans to prevent AI-caused apocalypse, and then it turns out that the skeptics were right and it was all a big disappointment, well, so what? Then we didn’t have any apocalypse, which was the goal to begin with. On the other hand, if we assume it’s a scam and therefore dismiss the possibility of awful outcomes, the price for being wrong is much, much higher.
In recent weeks, two pieces of writing about AI’s future impacts have drawn the most attention. One is the novella-length essay “The Adolescence of Technology” by Dario Amodei, CEO of the $375 billion AI firm Anthropic. In it, Amodei runs through a number of plausible worst-case scenarios, ranging from AI-enabled biological and nuclear weapons to unstoppable dystopian mass surveillance by authoritarian regimes. Would be bad. But I want to zero in on the economic aspects here. Amodei predicts that “AI could displace half of all entry-level white collar jobs in the next 1–5 years,” that its abilities will soon surpass the best humans in virtually all forms of knowledge work, and that it will lead to an unprecedented concentration of wealth. “I am sympathetic to concerns about impeding innovation by killing the golden goose that generates it,” Amodei writes, defending his arguments for (rather mild) tax policy and government regulation to counteract these economic effects. “But in a scenario where GDP growth is 10–20% a year and AI is rapidly taking over the economy, yet single individuals hold appreciable fractions of the GDP, innovation is not the thing to worry about. The thing to worry about is a level of wealth concentration that will break society.”
Another widely read piece, by (less successful) AI executive Matt Shumer, hit similar notes. Shumer, like Amodei, emphasizes the rapid progress of AI, and warns that the jobs it automates may be harder to replace than those lost in past technological revolutions. “When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services,” he writes. “But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.”
Both of these essays prompted many reactions along the lines of: These guys suck; these guys are talking their own book; the solutions these guys propose suck; and so on. Think back now to the good news about what we are actually interested in here: that stuff doesn’t particularly matter! Dario Amodei’s company stole the book I wrote to train its models, which made him a billionaire. Might make me mad at him, yes. But that is separate from the question we are thinking about here today, which is: What should we reasonably be afraid of concerning AI, and what should be done?
I notice that although AI’s technical capabilities are advancing rapidly and the warnings about it are growing louder and more urgent, the baseline things we have to fear are the same ones that many have been predicting for years now: Widespread automation causing job losses, dangerous concentration of wealth, and the possibility of disasters like terrorism or attacks on the electrical grid, either aided by or directly produced by AI.
In other words, even without being able to predict the future in detail, the outline of the most prominent dangers is clear. What policies are most prudent to prepare for these foreseeable dangers? Well, those aren’t too hard to figure out either. They include:
Unionization of the work force: Unless you believe that the US government, which is currently in the industry’s pocket, is capable of solving these problems on its own, it is clear that there must be some entities that exist for the sole purpose of protecting the workers who are exposed to having their livelihoods destroyed by AI. Those entities are unions. Union contracts, not laws, are the very front lines of AI regulation. Even if the biggest thing that union contracts accomplish is managing the reduction in jobs in a way that doesn’t utterly screw workers, that is significant. Right now, less than 10% of American workers have unions, meaning that most workers are exposed to unilateral damage from AI with no safety net. Policies like reforming labor law to make organizing easier and funding more union organizing are not old-timey things—they are, in fact, necessary policies to address our AI future. Also, as a practical matter, we are going to need the political influence of strong unions to counteract the political influence of the rich, who will be fighting against the other items on this policy list. Including,
A stronger social safety net, including higher unemployment benefits, public health care, free higher education, and other common sense measures necessary to get a large number of newly unemployed people through the hard times and into whatever comes next. And,
Much higher taxes on the rich: We will need a significant federal wealth tax in some form in order to avoid the scenario that Dario Amodei himself describes, in which the small handful of individuals who control the AI industry, centralizing the income that once flowed to many other industries, become so insanely rich that they are effectively running the country and the world. America’s wealth inequality is already a crisis. AI could make it significantly worse. At some point, under that weight, democracy will fully crumble. Not allowing people to get that rich is necessary if we want to avoid having unaccountable godlike dictators. And finally, as a related matter,
Strong government regulation of the AI industry: There are more safety regulations involved in making a car than there are in releasing an AI model to the public that will, maybe, help people produce biological weapons or mass-produce child porn or who knows what else. This is an insane situation. Truly insane. Having no real regulatory apparatus around AI is similar to having just invented nuclear weapons and not yet having written any rules about who can make them or what they can do with them. There should be, at minimum, a robustly funded independent government agency that evaluates and regulates AI models before they are released into the world. Just for starters. America regulates food safety and auto safety and financial products and pollution and many other things because, if we don’t, they are potentially dangerous to the public. AI is the most potentially dangerous product to the public that exists today, and the industry is left almost entirely to regulate itself. History tells us that disaster is guaranteed under this framework. We need strong government regulations immediately. This should, theoretically, be a bipartisan issue, but in reality, it will be a dirty political fight that will require all of the combined anti-corporate political power in America.
You may perceive that this list of common sense measures to protect us from the worst outcomes of AI is more or less the heart of existing progressive economic policy: Stronger labor, better social safety net, tax the rich, corporate regulation. It is a testament to the inherent wisdom of these policies that they can also form the outline of how we address the unpredictable effects of brand new technologies. The flaws in America’s economy and distribution of power are a result of our failure to carry out these policies in the past. That will be true in an AI-dominated future as well. The flaws will just be more terrifying in their proportions.
You don’t need to like annoying tech people, you don’t need to believe AI CEOs are motivated only by the public good, you don’t need to use or “like” AI, you don’t need to have a crystal ball, you don’t need to be a technological expert, you don’t need to have an entire philosophical debate over the nature of consciousness. You just need common sense to see the direction this is going. What we can control is whether we get ready for it, or whether we just let it happen to us.
More
Related reading: Thinking of AI as a Social Problem; AI, Unions, and the Vast Abyss; Confiscate Their Money; Automate the CEOs.
Yesterday, Hao Nguyen published an interview with me at his site “How I Make Money Writing.” His site is very well done, and he speaks to a wide variety of writers. It’s a good place to check out if you are interested in the realities of making a living in this industry.
Thank you for reading How Things Work, a publication that is 100% human-produced. Unlike some publications, we are not sponsored by gambling companies. We also have no paywall here, so anyone can read, regardless of income. How do we manage to exist, then? We are funded entirely by readers just like you who choose to become paid subscribers. It’s $60 a year, or $6 a month, and it keeps us going. If you like reading How Things Work and want to help us keep rolling in 2026, take a quick second to become a paid subscriber yourself right now. I appreciate you all for being here.