Really excellent! Esp:
“That’s easy. This is disaster planning. You hope for the best and plan for the worst. Uncertainty over whether a hurricane will have zero impact on you or destroy your city should not prevent you from taking steps to minimize as much potential damage as possible.”
I appreciate your advice to avoid knee-jerk reactions (despite the jerks who own, control, and promote this technology). The tech bros are rolling out AI faster than boulders down a mountainside—and the rest of us are living in the valley below.
I recently attended a public lecture at my local university by Carl Bergstrom, a professor at the University of Washington. He’s the coauthor of a book and open online course, “Calling Bullshit,” about disinformation and AI (https://callingbullshit.org). Among other things, improving our bullshit detectors will have to be on our to-do list.
Thanks yet again for the thoughtful analysis.
If AI can replace workers... why not the C-suite? No more bonuses, parachutes, wealth concentration. Publicly owned companies run by a benevolent AI instead of greedy sociopaths.
...Probably just a pipe dream or an AI hallucination, though.
Yes.
https://www.hamiltonnolan.com/p/automate-the-ceos
This article is the best I’ve ever seen about where things stand on AI and what we must do. The first point is that workers need a union. Then a safety net, taxes, and regulation. No silver bullets, just common-sense solutions. Bravo.
Well done (as usual, but here particularly) at getting to the heart of it:
“Our current nonexistent regulatory apparatus around AI is similar to having just invented nuclear weapons and not yet having written any rules about who can make them or what they can do with them”
And: “You just need common sense to see the direction this is going.”
Exactly why recent history bodes so ill for how this will be handled: the concentrations of wealth and power are making it their priority to insulate themselves from any common-sense, greater-good regulation and oversight, and to systematically dismantle any that already exists, creating a kind of political dominance where they can continue to do whatever they want (colonize the moon? Sure! Let Earth starve!). The intent of those making these decisions, to essentially regulate themselves, is clearly to hoard everything. I hope for the best but prepare for the worst.
Well done, Hamilton! Back to basics. Core values & principles, which, if allowed to drive policy, would help humanity survive, if not thrive. Unfortunately, most of our ruling elite and corporate monarchs have neither.
Brilliant essay.
Many thanks for this. You are right: we (definitely I) can't keep trying to predict what will come of this, but there are things we can do to make a meaningful change, and you lay out what those things are very clearly. Much appreciated.
AI cannot displace workers at the projected rates and still be economically profitable, for the obvious reason that income for AI operators depends on having a market, which means income-earning humans. The alternative is a vast welfare system, which also does not return a profit and will never happen anyway, for political reasons.
This would be the appropriate set of actions. However, how it is to be put in place is far from clear. Organizing to make these actions happen is essential, so how does that organizing start now? Publicly owned companies would solve so much, so let's all find ways to put them in place. How do we work together on this?
This piece is great, and I agree that the changes it outlines are needed whether AI fulfills the grandest prophecies or not.
As a high-level software engineer who has watched AI tools get steadily better (which is not to validate the claims of the boosters and hucksters), I have had to concede that the earlier contention that it's "just spicy autocomplete that can't write the most basic code" is obsolete. We're still figuring out the contours of what AI can do well, but it's not "nothing." This makes predicting the harms it will cause very difficult.
THIS! As greedy as techies are, and as foolishly as they trust the tech, there's still plenty of potential for real-life harm as people keep building it.
Relatedly, of course, most groups who *theoretically* should be going after Big AI corps about risk... are absorbed into the capitalist NGO blob! They end up applauding the lamest measures imaginable while their staff frequently revolving-door themselves back into the Big AI corps.