Thinking of AI as a Social Problem
No need to know the unknowable. The knowable is bad enough.
The scale of the AI investment boom in America is just ludicrous. The quest to build chips, data centers, and other infrastructure to enable AI has, in a very real sense, swallowed up the entire investment-oriented portion of the American economy. The hundreds of billions of dollars being spent annually—a figure that has multiplied by 20 times in the past three years—represents fully half of the nation’s annual economic growth figure. The handful of companies most involved drive the majority of the stock market’s gains. AI executives expect to spend $3 trillion more over the next few years. AI companies are now more overvalued than tech companies were at the height of the late-90s tech bubble, despite significant questions about whether this technology will actually, you know, work.
What lures all of this money is the promise of building an AI superintelligence that would effectively make the winner of the race to build it the most powerful business on earth. It is not hard to imagine the potential economic gains that would be associated with an AI smart enough to, say, be the world’s best hedge fund, create popular new drugs for any disease, and so on. Though AI thus far has not proven to be a reliable profit-driver for businesses that use it (rather than build it), the flood of investment in its development will continue for the time being—both because the potential prize is so large, and because the costs already sunk into the industry carry an incredible economic momentum, regardless of whether or not they ultimately prove to be unwise.
For those of us outside of the AI industry and the financial industries investing in AI, there can be a sense that we are simply watching this process unfold. We—by which I mean “95% of Americans, including most elected officials”—do not fully understand the technical aspects of AI; we do not work at the companies in question; and we may just assume that because of this, we can do little but wait and see the outcome of this gargantuan economic and technological gamble that will, one way or another, determine the shape of the US economy for decades to come.
That feeling is not quite right. The AI industry is so large that it has become like a massive star warping money and politics and Wall Street and the working class around it. Thinking ahead about responsibly handling this industry does not require being able to fully predict the future. For example, I do not know the answers to the following questions: Will the dream of AGI be achieved? Will the investments in chips and data centers pay off? Which company will win the AI race? Who will be the dominant AI players of the future? What specific applications will prove to be most profitable for AI companies? Will this entire era of technological dreams prove to be the beginning of a utopia, or a nightmare? Will AI save us, or destroy us, or just be a big old bust?
I don’t know, you don’t know, and we don’t need to know. It is enough, right now, to focus on narrower aspects of AI that we can predict with confidence. The one that I am most confident about today is this: Investors will not wait forever to see if AI becomes the magical superintelligence that dominates the world. Companies that have invested tens and hundreds of billions of dollars in AI will come under increasing pressure to produce profits now, even as they continue to reach for the elusive AGI breakthrough. In order to show profits now, these companies will push incredibly hard to sell AI technology that already exists to corporate clients. And what is the main value proposition of the AI that they will be pushing? To automate labor.
Specifically, to automate vast swaths of white collar and creative jobs. That is the clearest and most direct way that companies can be induced to buy AI products today. That is the most obvious way that a corporation can increase profits by using AI. That is what AI companies are offering right now. Maybe in the future they will offer superintelligence that solves financial markets and dominates biotech. Maybe not. We don’t know. What we do know is: They aspire to bring down corporate labor costs by automating labor. The pressure from investors on AI companies to show profits will produce increasingly urgent efforts to demonstrate this one, existing, tangible value. The society-wide push by AI companies to get every nook and cranny of every field to begin using AI should be understood, above all, as an attempt to lay the groundwork for mass automation of labor in order to produce corporate profits.
Why would OpenAI and Microsoft spend $23 million to give “free training” on how to use AI to teachers? Because they love teachers? No. Because they want to get their products into public schools in an attempt to cultivate the education market. Their value proposition in that market will be, in the long run, to automate the labor that now produces textbooks, education materials, and education itself. To automate teachers, in other words. This, not some esoteric backwards fear of progress, is why it is stupid for a teachers union to welcome these AI companies in with open arms. Don’t be patsies! The destruction of your jobs is their profit.
This basic example applies to industries from Hollywood to hospitals. AI, in general, has not proven itself to be as good as human employees in most fields. But it doesn’t have to be. It only has to be good enough to convince the employers in these fields that its lack of quality is more than made up for by its potential to lower labor costs. There are many philosophical and aesthetic objections to the use of AI in creative fields, but we do not even need to make those arguments here. We need only understand a much more basic, inarguable quality of AI as it stands now: It is a machine that is trained on our work and then used to put us out of work. That is enough.
I do not know the extent to which it is possible to actually halt the deployment of AI in all of our industries, though I fully support the effort of labor unions to be the firewall against the irresponsible and destructive deployment of untested technologies in the pursuit of corporate profits. Whether unions are strong enough to be a meaningful firewall against the weight of the AI industry, in the absence of robust government help, remains to be seen. I will, again, focus on what we can know—there has already been and will be to an increasing extent a strong push to automate jobs out of existence in order to show that AI can turn a profit.
Knowing that this is coming allows us to think more reasonably about what a wise policy response would be. With no intervention from government or another countervailing force, what is likely to happen is: The gains from automating those jobs will be fully privatized, captured both by employers and by the AI companies, resulting in a large number of newly unemployed people whose skills can no longer get them a job. This is bad, from the perspective of society. It is good from the perspective of investors in and management of these specific companies. In other words, a widespread and potentially devastating economic change that harms many people will be balanced by a very large economic gain for a much smaller number of people. Inequality—America’s most pressing underlying economic problem—will increase. The richest people and the richest companies will get richer. More broadly, this trend will detract from the broad consumer buying power that has long driven the American economy, and it will shift the distribution of wealth further towards the top of the income spectrum. Perversely, it may very well drive a rising stock market (due to its effect on corporate profits) at the same time it drives rising unemployment and poverty. Owners of stocks and other financial assets—the top ten to twenty percent of American earners—will therefore get richer, driving inequality even higher. The wealthiest people will see their wealth rise and everyone else, who derives most of their wealth from labor income, will see their wealth decrease.
In other words, the consequence of the AI boom is likely to be that every socioeconomic problem we have now is exacerbated. (This is not even taking into account the consequences of a financial crash, if the investment boom dries up.) A modest suggestion I would like to make to, you know, everyone is that we think of AI in terms of “What will this mean for everyone?” rather than “What will this mean for the stock market?” or “What will this mean for [specific company or billionaire]?” When you think about it this way, it is clear that, at the very least, we need to plan for a way to socialize the economic gains that AI creates for corporations. That could be higher corporate taxes to fund a social safety net for laid-off workers, or it could be regulation to ban specific abuses of AI (are automated nurses as good as real ones? Etc), or it could be straightforward tax-the-rich policies, or it could be some form of nationalization of AI as a public good.
Or, it could be universal basic income. This is the policy the tech world always advocated for this very scenario. So where is it? Have you noticed that you haven’t heard Sam Altman and Elon Musk talking much about UBI lately? Have you noticed that all of the discussion of spending trillions on data centers has not been accompanied by any discussion of planning for these socioeconomic problems in advance? Do the AI CEOs need to be trillionaires before they really “Lean In” to getting UBI passed? I am not even suggesting that UBI is the best policy response—I’m just noting that the will to bring it about seems to have dried up at right about the same time the AI gold rush that might make it a necessity got going in earnest.
I realize this is all very broad, but it is worth having a big-picture idea of what is coming regardless of the sci-fi uncertainty about the details. We are walking down a path that is virtually guaranteed to supercharge economic inequality—the trend that has already eroded American society to the point that our democracy’s continued viability is in question. Is that a good idea? No, it is not. AI is not just a technology. It is a social problem. There is zero reason to allow it to run us over without a plan to mitigate its completely predictable negative effects. The political coalition for this should be: everyone. Everyone! Including the AI CEOs, who will have a hard time enjoying their trillion dollars when they have to spend it all on autonomous robot soldiers to fight off the starving hordes of citizens whose jobs they destroyed.
Related reading: Automate the CEOs; AI, Unions, and the Vast Abyss; To Whom Go The Spoils?; On Having a Maximum Wealth.
I wrote an installment of “Jaguars Junction” for Defector today. WARNING: May be too technical for those of you without a deep well of football knowledge.
Thank you for reading How Things Work, a human-based publication. I feel fortunate that ChatGPT writes like some dipshit on LinkedIn. In my biased opinion, the demand for human writers will persist for a long time, and that is a Good Thing. This publication has no paywall, and no corporate sponsors. I pay the bills by asking those of you who can afford it to chip in and become paid subscribers. It’s affordable, and it is a good deed, in the universe of “Helping independent media exist.” I hope you’ll take a second to click the button below and become a paid subscriber yourself. If you can’t afford it, just keep coming back.
Personally, I put my money where my mouth is and my mouth is saying that AI is a bubble that's about to pop. An internal report by Apple got external and the report says that, basically, AI doesn't think, it just looks at its database for an answer - https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf and https://www.digitalinformationworld.com/2025/06/apple-study-questions-ai-reasoning.html
Congratulations everyone, we've reinvented the search engine.
Note, also, that the only big tech that's NOT heavily investing in AI is Apple. That big $500 billion investment that Apple made in AI back in February? It was for "AI and other opportunities." They could invest exactly nothing into AI and still meet that commitment.
I think that we're about to see trillions of dollars of investment go into a product that flops. It's going to be hilarious, except for the fact that lots of people (including myself) are going to lose their jobs now, and lots more people are going to lose their investments and retirement accounts in the near future. And that is why my money is not in the stock market.
There's an immediate effect, on everyone, that we can both see now _and_ predict for the future: the environmental impacts of the exorbitant amounts of land, energy, and especially water that are needed for the data centers on which "AI" runs.
Those factors alone are enough for me to do my best to avoid any use of LLMs, generatives, etc. (Which is becoming especially difficult, in that my employer has signed on to a Copilot integration agreement.)