As things stand, I already don't trust the NYT or our local Murdoch news to provide an unbiased report on factual matters. However, I know (or am familiar with) the bias and editorial filters of the NYT. I can trust them to report in a consistent manner that lets me interpret their reporting consistently.
With publications that use AI reporters, I will (hopefully) be able to tell the difference between AI reporting and human reporting. I will NOT be able to consistently trust the path back from 'text --> [black box] --> fact' for any AI reporting, so I won't trust their reporting (or masthead) in general.
There truly is no such thing as an ‘AI reporter.’ I know you know this—but I think it’s important to point it out. Journalism requires a human to go around and find things out (almost always this requires talking to other humans). AI cannot do journalism. It could possibly generate an email that asks people questions, so an LLM could do an interview, I suppose. But it could not witness anything. These machines cannot perceive.

This is rarely discussed, and that’s probably why I am jumping in here to mention it. The human capacity for perception—to use our senses—is an essential aspect of how we come to know. These machines can only spit back something a knower has already put out there. This is one of the many things that make what Hamilton is describing here completely absurd. It would certainly fail under any conception of journalism and the media as meeting the needs of the public. A major need is the need to know things, something a chatbot can *by its very nature not provide.*

But this seems not to be the conception many are operating from anymore. AFAICT, they regard ‘people have read this thing and are influenced by it’ as entirely sufficient. If there is nothing out there that meets the need to know, people will consume this other, fundamentally useless and frequently harmful, product. So it will ‘succeed’ on the terms that media seems to have set for itself. Unfortunately, that is not the purpose of journalism or of a great deal of human communication. An actual connection with reality (which requires perception) is the purpose of most of it. It is something we need. We will be substituting one thing for another that is not even the same kind of thing, let alone an adequate substitute.
Hamilton brought up a great point I hadn't considered before...AI can ONLY produce a kind of 'style-free' ubiquitous pap BY definition...It's the same problem that 'decision by committee' faces...All you get is an 'acceptable' product cobbled together...OTOH, AP feeds might present a problem, simply because the style is DELIBERATELY simple and curt...A kind of flattened style easy to assimilate and produce through AI
100% agree. And the publishing world has been two steps behind since news moved online, to be honest. The people at the top have no long-term vision and are just scrambling. But I do think there is a place for AI in a very limited role...in support of actual writing. Let Siri automate the CMS and generate content tags so writers can write.
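Something like this is all I'm picturing. A minimal sketch, assuming the OpenAI Python client; the model name, the 8,000-character cutoff, and the suggest_tags helper are stand-ins of mine, and an editor still approves the tags before anything ships:

    import json
    from openai import OpenAI  # assumes the OpenAI Python client; any LLM API would do

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def suggest_tags(article_text: str, max_tags: int = 5) -> list[str]:
        """Draft CMS content tags for an article; a human approves before publish."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # stand-in; use whatever your shop licenses
            messages=[
                {"role": "system",
                 "content": f"Return up to {max_tags} topical tags for the article "
                            "as a JSON array of lowercase strings. Output nothing else."},
                {"role": "user", "content": article_text[:8000]},  # crude context cap
            ],
        )
        return json.loads(response.choices[0].message.content)

    # Usage: tags are suggestions only; the writer spends the saved minutes writing.
    draft = open("draft.txt").read()
    print("Suggested tags:", suggest_tags(draft))

The point is the division of labor: the machine drafts the metadata, the human keeps the judgment calls.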
If this garbage continues--the tossing of humans into the darkness while some puppet tells everybody what's "the truth"--I have no doubt that all the arts, especially books, will turn into little exercises for AI, and when and if somebody like Trump crawls out of the sewer and runs the show, they'll use AI to spread their rot--already in progress, I'm sure--and we'll become the bunch of robots that people like Putin have wet dreams over.
Last year, Google was shopping its AI to news publishers nationwide. Though I don’t know if there were any takers, the writing was on the wall.
In the main, I think it’s too late to put the genie back in the bottle.
AI won’t replace all journalists, nor all attorneys, nor all doctors, but the productivity enhancements will render many (to borrow a phrase from the British) “redundant.”
The larger question—the one no one seems ready to discuss yet—is what happens to government revenues when so many taxpayers are displaced?
If governments survive through taxes, and medium and high earners are displaced by AI, are people ready to demand that government ‘tax the machine?’
The NY Times and others should sue Google too; its automated excerpting regime is likely against copyright law. Hell, showing the whole-ass headline, lede, and image from articles on Facebook might be against copyright law too; I don't think anybody has bothered to test it in court.
Publishers have repeatedly let tech companies eat their business because they think they'll be the new big fish, never noticing that the pond is drying up.
The power dynamic is so blatant when the most automatable job (upper management) is the last one they consider replacing. I just read a Ted Chiang article from last year arguing that AI is basically the new McKinsey whose goal is to provide an unaccountable reason for exploiting labor and cutting workers/pay.
So the NYTimes did NOT sign a deal with OpenAI and Microsoft, and is filing a lawsuit against them? How on earth did they get smart enough to do that? Maybe they remembered Tasini v. NYTimes, back in the '90s, when the National Writers Union (your union, Hamilton) sued the NYTimes to keep it from selling bundles of writing produced by freelancers without paying them. It went to the Supreme Court and we (the NWU) won. The Times's next step: "all rights" contracts -- sign them or else. But that's basically just moving the boundary around what is for sale. And yes, it would have been nice to have defeated all-rights contracts.
Fundamental laws of physics and math are against LLMs ever being useful for much more than they already are; ChatGPT and Bard are barely above 'tech demo' levels of capability. The energy costs, the diminishing returns on training each next model, and the unpredictably bad outputs are going to kill so-called AI eventually.
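To put rough numbers on the diminishing returns: below is the loss fit from Hoffmann et al. (2022), the 'Chinchilla' paper, using their published coefficients; the arithmetic and the decision to hold the data term fixed are mine, just to isolate the parameter count:

    # Chinchilla fit: L(N, D) = E + A / N**alpha + B / D**beta   (Hoffmann et al., 2022)
    # E, A, alpha below are the paper's fitted values; the data term B / D**beta is
    # held constant here so the shrinking payoff from parameter count N stands out.
    E, A, alpha = 1.69, 406.4, 0.34

    prev = None
    for N in [1e9, 1e10, 1e11, 1e12]:  # 1B -> 1T parameters
        loss = E + A / N**alpha
        if prev is not None:
            print(f"N = {N:.0e}: loss ~ {loss:.3f}, gained {prev - loss:.3f} over 10x fewer params")
        else:
            print(f"N = {N:.0e}: loss ~ {loss:.3f}")
        prev = loss

Each successive 10x in parameters buys roughly half the improvement the previous 10x did, while the training compute bill keeps climbing right along with the parameter count.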
This is sad, because these kinds of mathematical models are valuable for automating annoying, drudgerous tasks like data entry. That is what I do for a living, and I have firsthand experience with the time and cost savings of handwriting and speech recognition. Not as sexy, since you still need humans to edit and correct the generated text, but you do need fewer humans for the job.
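For a concrete picture of the workflow, here is a minimal human-in-the-loop sketch using the open-source SpeechRecognition library (illustrative only, the file name is a stand-in, and this is not my employer's actual pipeline):

    import speech_recognition as sr  # pip install SpeechRecognition

    recognizer = sr.Recognizer()

    def transcribe(wav_path: str) -> str:
        """Machine pass: turn a recorded dictation into draft text."""
        with sr.AudioFile(wav_path) as source:
            audio = recognizer.record(source)
        try:
            return recognizer.recognize_google(audio)  # free web engine; others exist
        except sr.UnknownValueError:
            return ""  # unintelligible audio goes straight to the human queue

    def human_review(draft: str) -> str:
        """Human pass: one person corrects the draft instead of keying it all in."""
        print("Machine draft:\n" + draft + "\n")
        corrected = input("Corrected text (press Enter to accept as-is): ")
        return corrected or draft

    # Usage: the machine does the bulk transcription; the human keeps final say.
    final_text = human_review(transcribe("dictation.wav"))
    print("Entered into the system:", final_text)

One reviewer cleaning up machine drafts replaces several people typing everything in by hand, which is exactly the 'fewer humans' trade-off I'm describing.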
But sadly, the way things are structured in the US, we've let people with wealth and power retain too much of it. So they can double down on stupid decisions because they have a bottomless pool of resources to keep forging ahead on bad ideas.
And the people who pay the costs are the working class, who get their lives disrupted and thrown into turmoil every time some dimwit who had a lucky series of wins on his birth parents and entrepreneurial gambles buys out a business and drives it into the ground.
Automate the CEOs.
Please print that on a t-shirt and sell it here. Another great piece.
You make very important points here, Hamilton.
Trust can never be generated by AI, and trust, I believe, is the new gold.
This is part of the point he is making.
Thank you! I keep trying to follow as many real live journalists/writers as possible.
Are we implying that News Corp and the Financial Times and Axel Springer haven't already planted the seeds for their own industry's demise?