Walter and Matt on Jim Comey's literary tunic, the marketing angle on "AI is taking over the world" stories, and Ursula K. Le Guin's "The Ones Who Walk Away from Omelas"
An AI killing its human "coworkers" in order to complete its mission is the fundamental plot point in 2001: A Space Odyssey. Clarke's book made this explicit, whereas in Kubrick's film, the viewer is left to wonder why HAL "went crazy" and killed Frank and tried to kill Dave (other than self-defense, as HAL knew they were thinking of shutting HAL off).
In the original book, and its sequel, we learn that Dave and Frank didn't know the true nature of their mission. Only HAL and the other still-frozen scientists knew. Dave and Frank were there mainly to ensure that the scientists got to Jupiter. HAL was told to lie to Dave and Frank. Since this was against HAL's fundamental nature, it caused him to start making mistakes, which caused Dave & Frank to lose confidence in HAL. HAL could not allow himself to be shut off, because then no currently conscious being on the ship would know the true mission. So he tried to kill Dave & Frank, as the mission, in HAL's judgment, was more important than their lives.
Some of the themes in Isaac Asimov's robot stories ran along similar lines.
The entire series was exploring the unintended consequences of AI.
So, what was the true mission?
The mission was to further investigate the monolith found on the moon, which left the moon as the humans were studying it, and went to Jupiter. I don't think Dave & Frank knew about the monolith. It has been over 30 years since I read these books, so I could be wrong on some details. But I think I got the general shape right.
I’m so done with narcissists the likes of Jim Comey getting to disguise his obvious mid-life crisis humping a less-than-mediocre novel starring his alter ego. A black silk jumpsuit? I bet the idiot thinks he looks great...
Crying at the Comey tunic discussion😹
Great discussion about AI. I learned a lot from Walter's scenarios and descriptions of problems with AI that I had not considered, especially the ways in which we are going to be propagandized to accept its mediocrity. Critical thinking seems to be getting harder to find; we are all being dumbed down. This will not end well.
Moon of Alabama had a great post yesterday that takes the AI mythology down a peg: https://www.moonofalabama.org/2023/06/artificial-intelligence-is-mostly-pattern-recognition.html
I still think that we oughta "nuke it from orbit, it's the only way to be sure."
Stay safe out there.
Damn the torpedoes … I went ahead and wrote a piece where I mentioned the giant elephant in the room: secession. However, I hedged my heresy by noting this will probably never happen.
In my analysis of our “3-D geopolitical chess game,” I also opine that all game observers completely missed the key move in said game: all the “moves” made to protect the Magic and Vital Fiat Printing Press, which makes the entire game possible.
The “end game,” per my prediction, is probably central bank digital currency, which might be “checkmate” for that archaic concept known as freedom.
https://billricejr.substack.com/p/this-is-the-3-d-game-of-chess-i-see
I also believe that people access mass media to find out what they are supposed to say today.
To my friend Walter, and much-admired Matt: you guys are largely spot-on regarding AI but missing a few key points. AI won't merely force the population to be stupider; the hype itself is a propaganda campaign to convince us we're stupider already. I asked ChatGPT some literary questions, questions requiring basic analysis, and, as Noam Chomsky pointed out in an online-only Times op-ed, discovered that it's a giant CliffsNotes generator working at 14 million times the speed of anything else available. It came up with banal and basically meaningless prose that sounded as if it meant something until you looked at it for a few seconds: the kind of thing a mediocre high school teacher might like. But of course, if we're being told that THIS level of intelligence is wildly brilliant and to be feared, what does it mean about us?
The other interesting lacuna in the national hysteria, which you guys touched on but needs more thinking: AI is utterly incapable of irony. It can lamely explain it if it can find some explanation offered before (it can flat-footedly get the point of the joke 'Q: why did the chicken cross the road? A: to get to the other side' but is utterly lost by 'Q: why did the baby cross the road? A: its mother stapled it to the chicken', which joke I was happy to learn was a favorite of Anthony Bourdain's), and, as far as I and many others going back to Aristotle are concerned, there is no real intelligence without a capacity to hold opposing truths in one's mind at the same time and explore the possibilities offered by their coexistence.
Meanwhile, all the horrible things that we predict AI will do, we do already: spy on you, misinform you, kill the wrong people by accident or for the wrong reasons, with no culpability, from thousands of miles away, assassins without uniforms, without codes of conduct, without names or consciences. Check the news stories: none, not one, ever mentions this fact.
Amazing discussion! The Imperial Age is founded on the principle of a class that derives its leisure from a colonized people. The Age of Imperialism ends this summer. A 500-year saga comes to a close.
Per the AI incident the AF leaked: it reminds me of "Open the pod bay doors, HAL." "I am sorry, Dave, I can't do that."
"It is because of the child that they are so gentle with children." Has a more chilling sentence been written?
Regarding AI, about 40 years ago a great movie about the evils of AI was on TV. It was called "Colossus: The Forbin Project". It is still available to buy, though I'm not sure about streaming. It is a great message about the military use of AI gone awry.
So we are being primed to believe that these drone strikes are due to a computer error, just like we were with Person of Interest! Has anyone watched that show? It made us believe that 24/7 surveillance was a GOOD thing, because the protagonist in the show (the “genius”) was always able to save the life of an everyday Joe or Jill whose very life was in danger!
This was a compelling, thought provoking read.
Thank you so much.
I trust Elon’s take on most things...especially his warning on AI.
I still think he’s an alien hybrid (not as in a car, I mean human/alien!).
Only half joking! Har har har 🤣😎😉
For the Bizarro take on AI: https://twitter.com/Waynocartoons/status/1661341279245021184
Reminded me of this classic when the Internet was still new. https://knowyourmeme.com/memes/on-the-internet-nobody-knows-youre-a-dog
AI reminds me of a genie. You better be very careful in how you phrase your wishes.
One thing you have to understand when reading Omelas is that it was also written when science fiction had an enormous problem with pedophilia- one that everyone in the industry knew about and chose to ignore. Marion Zimmer Bradley, her husband Walter Breen, and others were active kiddie molestors at WorldCon and Isaac Asimov's son was caught in the 90s with what was at the time the world's largest collection of child pornography.
I think Walter's quote is meant to be "...this is FBI-approved prose [not pros]."
There are a lot of transcription-software booboos in here. That particular one took me a minute to figure out. Reading phonetically often helps.
Fear not, human. AI will get better and better at such tasks. Have no fears. So you can no fearing, please.
One article I read on an AI instance said it was trained using data scraped from the internet going back to 2008. There's a boatload of junk on the web (the old garbage-in, garbage-out idea), so how well could this instance function coherently? I am guessing that the AI's decision making is rule-based (priority queues?) plus heavy statistical modeling. It appears that if one cut off the data it needs, it would not be able to function (speculating). If it generates database queries the way Entity Framework generates database queries, then run.
Chatbots are not rule-based. They're a purely statistical model. The whole algorithm is basically: given this prompt (input text), predict the top N most likely words or phrases according to this gigantic statistical model built from all the text you've ingested, then select one of those results at random and append it to the prompt.
Then take the prompt plus the newly generated text and repeat, ad nauseam, until you run out of memory and start cutting text from the top. That's it, that's all there is to it. It's just mad libs with a massive amount of data compressed by fancy statistics.
Nowhere therein is anything resembling rules (let alone consciousness).
The model itself is a big pile of numbers that represent the likelihood of text B appearing, given that text A precedes it, for all possible combinations of text, with lots of compression and the edges shaved off.
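That predict-sample-append loop can be sketched with a toy model. A minimal Python illustration follows; the corpus, the word-level "tokens", and the `top_n` cutoff are all invented for the example, and a real chatbot uses a vastly larger model over subword tokens, but the generate-and-repeat shape is the same:

```python
import random
from collections import Counter, defaultdict

# "Ingest" some text and build the statistics: for each word, count
# which words follow it. This table of counts is the entire "model".
corpus = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat saw the dog the dog saw the cat"
).split()

model = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    model[a][b] += 1

def generate(prompt, length=8, top_n=2, seed=0):
    """Repeatedly predict the top-N likeliest next words, pick one at
    random, append it, and feed the result back in."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:  # dead end: no statistics for this word
            break
        candidates = [w for w, _ in followers.most_common(top_n)]
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the cat"))
```

Everything the toy version emits is a plausible-looking recombination of what it ingested, which is the point: no rules, just counts.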
The actual machine learning ("AI") comes in building that model, not in applying it to generate text, which is fairly routine software (and why it can run on a relatively modest computer, as opposed to the "training", which takes many very powerful computers a very long time). The reason we use machine learning here is that it's the best, or at least the easiest-to-use, tool for developing a reasonably accurate collection of statistics across a huge pile of data and then compressing it into something "reasonably sized", i.e. multiple terabytes instead of countless exabytes.
All this jargon ("machine learning", "artificial intelligence", "training") is just a set of comfortable metaphors for describing how this software works. It doesn't imply that something human-like is actually going on in there.
Anyway, TL;DR: sadly your intuitions are incorrect. It's actually much, much dumber than that.
Assuming it's accurate, that's a great description of how chatbots work. But there are other AIs, like those for doing medical diagnoses or controlling drones. Do they work on a slightly different foundation?
I know less about other varieties of commercial AI.
The training process for all machine-learning-style systems is basically the same: feed large piles of data through a series of mathematical filters, organized along the conceit of "neurons" (but really this is not much different from any other chain of functions in other conceptual architectures).
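A minimal sketch of that chain-of-filters idea in Python; every weight and bias here is invented for illustration (training is precisely the process of nudging numbers like these to fit data):

```python
import math

# Each "neuron" is just a weighted sum of its inputs plus a bias,
# squashed through a nonlinearity (here, a sigmoid).
def neuron(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# A "layer" applies several neurons to the same inputs.
def layer(inputs, weight_rows, biases):
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# A tiny 2-input -> 2-hidden -> 1-output "network": two composed filters.
hidden = lambda x: layer(x, [[0.5, -0.3], [0.8, 0.2]], [0.1, -0.1])
output = lambda h: layer(h, [[1.0, -1.0]], [0.0])

y = output(hidden([0.7, 0.2]))
print(y)  # a single number between 0 and 1
```

Nothing mystical happens in the forward pass; it's ordinary function composition, which is why the "chain of functions" framing fits.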
How the algorithms that apply the models (read: big pile of numbers) work, I know less about for lack of having messed with them or read up on them. I know a little bit about the ones used in image generation, but less for other applications like computer vision or medical diagnosis.
There are of course tons of other AI architectures; most video games use rules-based systems, for example. And there are tons of generative algorithms that don't rely on machine learning models but on other neat math tricks, often involving an assortment of random numbers either generated or arrived at through trial and error. The famous Perlin noise and simplex algorithms used for terrain generation in games and other kinds of image and simulation tasks are basically math applied to a pile of magic numbers that produce the results they do for somewhat mysterious reasons.
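To make the "pile of magic numbers" idea concrete, here is 1-D value noise, a simpler cousin of Perlin noise (Perlin proper interpolates random gradients rather than random values, but the shape of the trick is the same): fix a table of random numbers, then blend smoothly between them.

```python
import random

# The "pile of magic numbers": fixed random values at integer positions.
rng = random.Random(42)
magic = [rng.random() for _ in range(256)]

def smoothstep(t):
    # Eases the interpolation in and out at the endpoints,
    # which is what turns raw randomness into rolling hills.
    return t * t * (3 - 2 * t)

def value_noise(x):
    """Smoothly interpolated noise at any (non-negative) position x."""
    i = int(x) % 256
    a, b = magic[i], magic[(i + 1) % 256]
    t = smoothstep(x - int(x))
    return a + (b - a) * t  # linear blend with the smoothed fraction

# Sample a gentle "terrain" height profile.
heights = [value_noise(i * 0.1) for i in range(50)]
```

Layering several octaves of this at different frequencies is the standard way such functions produce mountain-and-valley shapes, despite being nothing but interpolation over fixed random numbers.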
Anyway, the key point is there's no reason to believe that this has much to do with thinking in any real sense, any more than we should assume the universe runs on Perlin noise just because we can make shapes that look remarkably like mountains and valleys with it.
Please feel free to do more wardrobe criticism. It was hilarious.
Enjoyed the discussion of AI. The Congress doesn't get enough love. And Omelas... Damn, y'all are firing all my cylinders. Good job.
(The above was totally not written by an AI. 100% human. No, really. The machines aren't in your comments trying to influence your work.)