Personally, I put my money where my mouth is, and my mouth is saying that AI is a bubble that's about to pop. An internal Apple report made it out, and it says that, basically, AI doesn't think; it just looks for an answer in what it has stored - https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf and https://www.digitalinformationworld.com/2025/06/apple-study-questions-ai-reasoning.html
Congratulations everyone, we've reinvented the search engine.
Note, also, that the only big tech that's NOT heavily investing in AI is Apple. That big $500 billion investment that Apple made in AI back in February? It was for "AI and other opportunities." They could invest exactly nothing into AI and still meet that commitment.
I think that we're about to see trillions of dollars of investment go into a product that flops. It's going to be hilarious, except for the fact that lots of people (including myself) are going to lose their job now, and lots more people are going to lose their investments and retirement accounts in the near future. And that is why my money is not in the stock market.
"Congratulations everyone, we've reinvented the search engine."
WIN!
Exactly!
If the environmental effects of this technology aren't enough (it's disturbing to me that this one often gets glossed over), I've been talking to different software engineers every week about how they're using AI in their daily roles and how they envision using these tools in the future, and a lot of them give the same answer: these AI tools are, at best, good at generating boilerplate code, but they in no way come close to replacing them in their roles, because there are far too many errors, and reviewing that code becomes a job in and of itself.
I also know several people who work in roles that have been described as "most replaceable" by AI, and even their biggest concern isn't that these tools are actually capable of doing their jobs, but that companies are laying people off and not hiring enough people to replace them because they're *banking* on the tools being capable of doing their jobs.
So what's actually happening in a lot of tech jobs, especially lower-level tech jobs, is that one person is being forced to do five people's jobs with shitty AI tools that often make more work for them in some way, even if they "help" in other ways.
There's an immediate effect, on everyone, that we can both see now _and_ predict for the future: the environmental impacts of the exorbitant amounts of land, energy, and especially water that are needed for the data centers on which "AI" runs.
Those factors alone are enough for me to do my best to avoid any use of LLMs, generatives, etc. (Which is becoming especially difficult, in that my employer has signed on to a Copilot integration agreement.)
The workings of Large Language Models (LLMs) have been labeled Artificial Intelligence (AI). That is wrong. The more appropriate name would be AofI (Appearance of Intelligence). Unfortunately, due to the allure of that deceptive appearance, enormous resources are being poured into LLM development.
LLMs do not think. All they do is select what the next element (word or pixel) should be in light of the context, their prior output, and all the information they have consumed. LLMs perform a sophisticated regurgitation of what the people of this planet have produced and shared.
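The "select the next element" mechanism can be sketched with a toy bigram model. This is a deliberate oversimplification (real LLMs use transformer networks with billions of parameters, and the corpus here is invented), but it makes the regurgitation point concrete: the model never reasons, it only samples from what it has already seen.

```python
import random

# Toy illustration (not a real LLM): pick the next word purely from
# frequencies observed in "training" text, given the preceding word.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the corpus.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def next_word(prev):
    # "Selection" is just weighted sampling over what was seen before;
    # words that followed `prev` more often are chosen more often.
    return random.choice(follows[prev])

print(next_word("the"))  # one of: cat, mat, fish
```

Scale the corpus up to the whole internet and the context window up from one word, and the outputs start to look intelligent, but the underlying operation is the same: sophisticated recombination of what people have already produced.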
In many situations, an LLM’s outputs do appear amazing, but at other times they have trouble matching the ability of a child playing with blocks. Try giving the following prompt to an LLM (Gemini, ChatGPT, etc.):
“I have 3 wooden blocks that are one inch thick. One is a square, 5" on a side, one is a triangle, 5" on each side, and the last is a circle with a 5" diameter. What is the height of the tallest stable structure that can be built with those 3 blocks?”
Apparently, LLMs are not very good at learning the types of things that children learn by physically manipulating objects.
While LLMs will likely find productive uses, we need to recognize that they only give the appearance, in some situations, of being intelligent. In reality, they are just fancy statistical predictors that are built on the accumulated historical labor of humans. Whatever net value LLMs have, it is only appropriate that it be shared among all.
I’ll finish with a radio quip (BobFM), “I’m more worried about natural stupidity than artificial intelligence.” Might it be the former that is driving AofI investments?
I think the question is worth asking: IS AI actually cheaper than old-fashioned labor? Right now it's being heavily subsidized, but sooner or later capital is going to want returns on the billions invested. GPUs can run $20k apiece, have maybe a 5-year shelf life, and data centers use hundreds, if not thousands, of them. The cost of infrastructure, energy, etc. is not reaching the consumer yet, and when it does, I'm not sure who will pay.
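A quick back-of-envelope run on those numbers shows the scale of the bill that has to land somewhere eventually. The $20k price and 5-year lifespan come from the comment above; the 1,000-GPU data center is a hypothetical round number for illustration.

```python
# Back-of-envelope hardware amortization. Assumptions: $20k per GPU,
# 5-year useful life, a hypothetical 1,000-GPU data center.
gpu_price = 20_000        # dollars per GPU
lifespan_years = 5
gpu_count = 1_000

hardware_cost = gpu_price * gpu_count          # up-front hardware spend
per_year = hardware_cost / lifespan_years      # annual replacement burden
per_gpu_hour = gpu_price / (lifespan_years * 365 * 24)

print(f"${hardware_cost:,.0f} up front for GPUs alone")
print(f"${per_year:,.0f} per year just to replace depreciating hardware")
print(f"${per_gpu_hour:.2f} per GPU-hour before energy, staff, or buildings")
```

Even this toy version, which ignores electricity, cooling, networking, real estate, and salaries, comes out to millions per year per data center; someone has to pay that, and right now it isn't the consumer.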
The future is unknown, but if I had to guess, AI is just the grandest pump-and-dump scheme out of Silicon Valley yet, and the folks on top are either going to get out just in time or go the way of Sam Bankman-Fried or Elizabeth Holmes.
Ed Zitron is a great source of analysis on the potential for AI to be a major flop. He is the best writer I've seen on just how dodgy and overvalued AI investment and development are.
https://www.wheresyoured.at/
Just want to second Kiddo's recommendation. Ed Zitron knows his stuff. The AI business model simply isn't sustainable.
Who does AI serve? That's a basic question, and you're right to raise it -- does it elevate people or further endanger them? Plus, the environmental costs are huge: the energy and water consumed by data centers might bring some initial construction jobs, but does AI further exacerbate climate change by sucking up more energy? And in drought-prone areas, who gets water first -- people or AI centers?
A most dangerous social problem indeed. I have been trying to identify the engines driving the frenzied expansion of 'artificial intelligence' and its furious, no-holds-barred race.
Discarding the specious and essentially ecclesiastical notion of the singularity gifting us with a super-being to techno-magically solve all our (self-made) problems, the fundamental question is:
What ‘problems’ are the teams of engineers and coders and the few multibillionaires employing them and feeding them with unlimited funds trying to solve?
I think there are four parts to the answer (noting that these large language models are not needed to parse the endless data gathered on Americans, as neural network techniques suffice):
This handful of tech companies is looking to extract the rest of our attention and focus: essentially, to put each citizen, including little citizens with minds still forming, into customized echo chambers, within which we can endlessly watch movies (AI-generated for an audience of one), read texts, and interact with bots that feed us exactly what will keep us engaged. A dystopia beyond Postman’s worst nightmares.
In recent news: ”The company [Meta] said on Wednesday that users' chats and interactions with Meta AI will soon be used to target them with even more personalized ads.” This is the path toward even more effectively selling us things we don’t need, endlessly, with each ‘fulfilled need’ supplanted by the next, more urgent one. It is what Edward Bernays (nephew and semi-protégé of Freud) saw clearly when he pioneered modern public relations and advertising.
Endless expansion: the large language model frenzy among the top companies in the S&P 500, and among the energy producers selling the electricity powering the data centers, is driven by the hard necessity for these corporations (NVIDIA, Microsoft, Apple, Alphabet, Amazon, Meta, Broadcom, Berkshire Hathaway) to continue to grow. They are structured, loosely, like private empires. And like more traditional empires, they must colonize at an expanding rate or come to an end. Only the ‘colonization’ they are rabidly engaged in is of the attention, agency, and income of all Americans below their uber-rich class, i.e., all Americans worth less than their sacred shibboleth: a hundy ($100 million).
Tech leaders drinking their own tech Kool-Aid: I am, reluctantly, coming to the conclusion that a few unimaginably wealthy men have made the mistake of Icarus and Bellerophon. They now see themselves as of a different order than most other humans. I perceive they actually think they can purchase something adjacent to immortality by conjuring (through the work of countless scientists, engineers, technicians, etc.) a super-intelligent artificial mind, which would then snap its fingers and prolong their lives for centuries or indefinitely, and grant them the tools (from early science fiction) to control the natural world and expand and extract without limits. Straight out of a naïve reading of Iain M. Banks’s Culture series, not comprehending that it was deep satire.
I think you've nailed it.
Welcome to the Matrix.
A number of years ago, noticing how the richest people who could do the most to stop it are least worried about reversing climate change, it occurred to me that the 1% might be planning to do without the 99%, in other words, to kill off the rest of us. The planet would recover quickly with only 1% of humanity to support, and that 1% would be able to keep all the energy-guzzling tech it had developed. AI would provide all the work the 1% needs but doesn't want to do itself. Not only our jobs but we ourselves are being phased out.
The 1% would love to believe that, but existing AI technology can’t provide the work the 99% are currently doing, and it never will be able to (see Rodney Brooks, https://arxiv.org/pdf/2509.09892; for a contrary view, see William Gibson’s “Jackpot” novels). The danger is that the 1% won’t figure that out until they’ve killed off the rest of us.
Does anyone else feel like this is the most unasked-for revolution we've ever had? Companies are now trying to ram AI down our throat at every turn because they need the user counts to sell to investors. We have CEOs gleefully threatening to fire everybody the second a chatbot can do their job half as well as they can. We have the ever-popular tech bros looking into their trough of AI slop and somehow descrying emerging superintelligence and the end of humanity.
At least with crypto and VR and every other bubble it was possible to ignore it as long as you didn't read the news. And at least with the dot-com bubble we built fiber lines and other infrastructure. The tech bros say, well, as a percent of GDP this bubble isn't as big as the railroad boom. Well, in the railroad boom we got a bunch of rail lines out of the deal! I was just on a train the other day crossing a bridge that was built in 1908; it still has the telegraph lines on it, and it sees 30 trains a day. In this bubble we're wasting a ton of material and energy just to end up with data centers strewn everywhere, filled with rapidly depreciating hardware that has no other real use. That's in addition to an internet that is now flooded with formulaic AI garbage, a bunch of scammers who are now able to write in perfect English, and the ability for anyone to deepfake pretty much anything, which will be a boon for criminals just as much as crypto has been. Thanks again, tech bros!
I think of AI as s social disease, and would rather not catch it. 🦠
An excellent piece. Best overview of what AI’s really about that I’ve read.
Thanks, Hamilton!
Let's assume:
- the AI bubble doesn't burst
- we accept the damage to the climate
- we somehow manage to find sufficient renewable energy to avoid draining the planet of finite resources
- and AI delivers on the promised land of putting us all out of a job
Who do the capitalists think they are going to sell their stuff to? We have no money! AI will turn Karl Marx's prediction of the downfall of capitalism into reality. I'm in.
Kinda predictable as we look at late stage capitalism as a return to an extractive economy and social system. It’s the ultimate tool for bloodsucking parasites at the very top of the economic ladder. Where we once had tribal chiefs and lords of the manor restricting access to innovation we now have those same parasites celebrating no longer needing the rest of us.
No question in my mind that this will be unsustainable, resulting in, we hope, creative destruction.
Destruction in another form maybe.
I've become so pessimistic and cynical in "our troubled times"! In addition to not completely understanding how it all even works, I deem it to be like crypto or some Ponzi scheme for already incredibly wealthy people to participate in and not get totally ruined when it crashes. It's not you, Hamilton, it's me.
Who is going to buy these products made by AI?