Transcript: America This Week, June 2, 2023, "AI, the Tone-Deaf Parakeet"
Walter and Matt on Jim Comey's literary tunic, the marketing angle on "AI is taking over the world" stories, and Ursula K. Le Guin's "The Ones Who Walk Away from Omelas"
In a departure from the near-nonstop madness of the last eight months or so, this past week was a relatively quiet one from a news standpoint, with no open catastrophes, celebrity faceplants, or shocking revelations. We did, however, see more of the increasing prevalence of AI as a character in the press. On this episode of "America This Week," Walter Kirn and I compile some of those stories, discuss whether AI is really becoming sentient or if that's just clever marketing, and also talk about this week's story, The Ones Who Walk Away From Omelas, by Ursula K. Le Guin.
A Transcript, edited for length and clarity:
On Jim Comey’s Media Tour
Matt Taibbi: Jim Comey made a media tour this week. And for those of you who don’t know how this works, if you write something like a book or in this case a novel, your PR people schedule you to go on all of these channels, they have their own contacts, it’s automatically done. But in the case of Jim Comey, he’s written a detective novel. He’s rebranding himself as a novelist, but people don’t really want to talk to him about the novel because they know that’s not going to generate any traffic.
So, it's a symbiosis. He goes on, he has to answer some unpleasant questions about things like the Durham report, which he completely doesn't; he non-answers all the questions. Everybody slobbers all over him. They go through this unpleasant ritual, and then they ask him about his book, which nobody read. But there were a couple of really funny things about this media tour, which included NPR, CNN and then MSNBC, where there were multiple hits, including a particularly bad one on Morning Joe with a quartet of questioners who did nothing but throw softballs at him. And he appears dressed in this thing, I don't know, it was like a sheer black singlet with ... It was almost like he was trying to wear a costume that just said "Writer" on the front, like John Belushi's sweatshirt that said, "College," in Animal House.
Walter Kirn: I saw it. We don't do a lot of wardrobe criticism on this show, but let's start with James Comey's black tunic/turtlenecky thing. It looked like something a Jungian analyst on the Upper West Side of Manhattan would wear in 1963, the guy who tells you to keep a journal of your most violent dreams. It was the shirt that usually goes with a big medallion, maybe a pentagram. It was very Rosemary's Baby, let's put it that way.
Matt Taibbi: Yes, that’s right. That’s what I was thinking of.
Walter Kirn: And it's something that "real writers," not that he isn't one now that he has an actual novel with his name on the front of it, used to wear to some extent and ... Or Anton LaVey, remember him.
Matt Taibbi: Yes, the Satanist.
Walter Kirn: He wore this kind of thing a lot. It slims you down. I think it's an apt reference, at least in sartorial terms.
Matt Taibbi: That’s edifying.
Walter Kirn: So it made him into a mysterious guy who has suddenly transformed into an artist. We haven’t kept up with him. While the Durham stuff has been going on he’s been absent from the scene, like a caterpillar in its dormancy, and now he’s hatched into this black, ominous intellectual butterfly.
Matt Taibbi: It’s Silence of the Lambs.
Walter Kirn: One thing I noticed about his interviews was that when asked anything of substance, it was all about Donald Trump again. He's coming out as the vampire slayer who somehow kept Trump in his crypt, or put him in the crypt. And he kept talking about him being an existential threat to the rule of law. He uses the word "existential" a lot, because he's dressing in the era of the existentialists.
Matt Taibbi: Oh, that’s right. Yeah. So it’s Sartre meets…
Walter Kirn: Jean-Paul, we should call him Jean-Paul Comey.
Matt Taibbi: So it’s Sartre meets LaVey…
Walter Kirn: Meets a Jungian therapist from West End Avenue.
Matt Taibbi: Meets Rosemary’s Baby meets ... I thought there was a little Arthur Miller, but maybe that’s the wrong person I’m thinking of. Is it Mailer? Which writer? James Baldwin wore that a little bit, but I don’t know.
Walter Kirn: I think a lot of Nureyev, a lot of ballet people, the Russian-influenced ballet people wore these things too in the great days of the New York Ballet. But it was definitely a blast from the past. It was confusing cultural signaling, I guess it registered seriousness, depth and just a new, maybe polyamorous late-life version of himself. I don't know.
Matt Taibbi: I dug it. I thought it was bold. And look, this is the part where I come out as jealous, because all I ever wanted to be in life was a detective novelist. I mean, I named one of my kids after Marlowe, and I'm not going to lie, it is bothersome to me that somebody can be the FBI director and basically a spook his whole life and then write a detective novel.
Walter Kirn: That's not unprecedented, Matt. In this, he joins the company of Howard Hunt, the Watergate burglar and possible JFK assassin.
Matt Taibbi: Those books were good though!
Walter Kirn: They were. I just read a whole long Gore Vidal essay about the novels of Howard Hunt. And even Gore Vidal, who hated everything and everyone, had to admit that Hunt's novels weren't half bad. And Hunt wrote them under four different pseudonyms with four different alter egos. And then we had Buckley and we had James Webb, remember erstwhile presidential candidate James Webb, who is a novelist. So being a man of intrigue in Washington and writing a novel is a tradition, one that we didn't expect he would insert himself into, but here he has.
Matt Taibbi: Bill Clinton did it. Hillary did it, right?
Walter Kirn: Novels? There’s a Hillary Clinton?
Matt Taibbi: Yeah, Bill Clinton did a novel.
Walter Kirn: Oh boy, did I miss that.
Matt Taibbi: Hillary Clinton, I believe did a novel. State of Terror.
Walter Kirn: State of Terror. Okay. I always expect that having served as FBI director, they’ll have some knowledge, some inside stuff that they’re going to put into the novel and wink, wink, we’re going to recognize it in some way. I’m wondering if that is true, because I haven’t read the book, you did, or you at least started it.
Matt Taibbi: I made it to about chapter four, I'll admit it. He doesn't do that. What did he do? He based it on his experiences working as a mob prosecutor back in the '80s. So the protagonist is a female federal prosecutor from the Southern District of New York who lives in Hoboken, and her name is Nora Carlton. And apparently the book was co-written in consultation with his wife. So that was where the female insights came from.
It follows the standard format of detective novels today, which is when you do the internal dialogue of the main character, it’s in parentheses and then italics, like, “Gosh, that was strange, she thought.” You know how they do that?
Walter Kirn: Yeah, sure.
Matt Taibbi: So it's very formulaic. I mean, I'm not going to say that it's Thomas Friedman-level maladroit writing. The thing that I thought was really funny was this ritual of a first-time writer. It's not terribly original. It's a tough read. And immediately there is a whole slate of famous detective writers who jump on the bandwagon, and I just want to read you some of the blurbs.
Here’s Jeffery Deaver, “A nonstop thrill ride!” Harlan Coben, “Memorable characters…gripping plot… A truly outstanding debut… Announces a bold new talent!” Michael Connelly says, “It’s a great read… He delivers it with the addictive style of an expert storyteller.” Ian Rankin, the author of the Rebus series, says, “Reeks of authenticity… [a] debut publishers dream of.”
Matt Taibbi: This whole thing is funny to me too, because not only do you immediately get the wide release the first-time authors of detective novels never get, but you get all these famous detective writers slobbering all over you the same way they did with Clinton’s book and all of these folks. What do you think the ethics of that are as a fiction writer, Walter?
Walter Kirn: No, because I don't know any famous people, and no one would allow my somewhat tarnished name to mar the cover of their book. If I were asked by some luminary, a former CIA head or FBI head, to blurb their book, and it was good, then I would feel like I had the best of both worlds. Here I was currying favor with power, and I was delivering an honest and critical assessment of their work. So I'd go ahead.
If it was bad and I had to somehow decorate this awful thing with my words, I would back off because, Matt, I'm not rich. I don't write bestselling books. All I really have is my little integrity, and I wouldn't sacrifice it on that altar. It wouldn't get me anything anyway. What I notice, speaking of AI, which we're going to get to, is that all of those blurbs seem AI-generated to me. You could literally use those refrigerator magnets that have words on them, the kind where you go to the refrigerator every day and put a new sentence together. You could use the top 100 words that are used in blurbs and put those things together almost at random: "The addictive style of a born storyteller."
Matt Taibbi: What was the Spy feature, with the monocle? In Spy magazine there was the reviewer with the monocle.
Walter Kirn: Walter Monheit!
Matt Taibbi: Right. Yes, yes, exactly.
Walter Kirn: Walter Monheit of Spy Magazine. He was our fictional rave reviewer. He especially reviewed movies as I remember. And it was always, “Four Oscars,” or he always addressed Oscar. He said, “Oscar, stay up all night reading this thing and then tell us in the morning how much you loved it.” Or yeah, Walter Monheit would have given this four monocles, I think.
On this week’s spate of AI-themed news stories:
Matt Taibbi: This week was filled with stories about AI. This is something that you and I have talked around before, and you've written about this before, and there are a number of different things that we wanted to get to today. But the story that got the most attention involved a simulation. And here's the Fox News headline, "Air Force pushes back on claim that military AI drone sim killed operator, says remarks 'taken out of context.'" It goes:
The US Air Force on Friday is pushing back on comments an official made last week in which he claimed that a simulation of an artificial intelligence-enabled drone tasked with destroying surface-to-air missile sites turned against and attacked its human users, saying the remarks "were taken out of context and were meant to be anecdotal."
What's funny is that this retraction of last week's news, in which somebody said there was a simulation in which the AI killed one of our own soldiers, itself reads like an AI-written retraction. And this is not the first time that we've seen stories like this. It recalls the other recent AI incident, where the AI reportedly pretended to be disabled in order to hire somebody.
Walter Kirn: The only thing more ominous than super-competent AI is AI that does the baby bird broken-wing act. "Let me come in, pet me, I can't do anything," and then it orders a drone strike on you. When AI learns to feign pain and sorrow, then we're really in trouble. Right now it's still a glinting, scary, monstrous presence, but when it starts to act like a little girl coming to your door for UNICEF: "Hi, I'm just a lonely AI helping out the children of the world. May I come in?"
Matt Taibbi: It's a little bit like The Exorcist. Remember when the creature speaks to the priest in the voice of his dead mother, like, "Why did you do this to me, Dimi?"
Walter Kirn: "Why are you ignoring me?" To rewind, first of all, an AI in a simulation, in an event that happened inside a computer, decided to complete its mission by killing its human operator. And in the stories I read, the AI supposedly decided, I don't know how they have this much insight into its thought processes, but it supposedly decided that because the human operator had veto power over the mission, the first thing to do to make the mission successful was annihilate its boss, which is very logical.
Matt Taibbi: It is!
Walter Kirn: The AI is doing what so many humans think of but don't have the guts to do. It just cuts to the chase, "Wait, kill my boss, and then I'm free." We're going to see that the AI, not having a superego, not having a conscience, and not having to explain itself to others, is going to do all kinds of things that maybe we're going to root for, because we think, "Yeah, I never had the guts to just kill my superior so that I could actually do the job," or, "Take out a machine gun in a meeting that was going on too long and get rid of everybody so I could get onto the task at hand."
Matt Taibbi: Call in an air strike.
Walter Kirn: AI cuts to the chase. They didn't like this story getting out for obvious reasons, because it suggests that the future of our military, which seems to be more and more devoted to this high-tech model, might suddenly go awry. I mean, we might never even get to the fight. It will just turn on us and say, "We wouldn't be having this war if the United States of America didn't exist. Therefore, to end the war, I just have to kill everybody I'm working for." But what I find about these stories, the doomsday AI stories, is that the people who are behind these technologies take, I think, a perverse pleasure in showing us how scary they can be and how close to autonomy and independence they are. And I think it's actually a sales technique with AI. It's like with drug addicts: when they hear that there is a form of heroin on the street that is causing a lot of overdoses, everybody wants that type. And everybody wants the AI that's right on the cusp of becoming Skynet and destroying everybody. And so it may have been the military or maybe the contractor subtly advertising the fact that, "Hey, man, this AI is high octane. This is the good shit."
Matt Taibbi: Yeah, I actually did a story once in Camden, New Jersey, where I followed addicts and cops around for a few days, or I think it was three. But there's that whole thing where the addicts get together in the morning, and if you hear that a whole bunch of people were falling out around the corner because batch X is filled with fentanyl or whatever, everybody rushes to go get that brand.
Walter Kirn: But the orange sunshine of AI, the drone AI that killed its operator, is also interesting in that the story got out, that it wasn't anecdotal, that they actually told it, which I thought was one of the most interesting features of it. Why would the US military be admitting or speaking publicly about, basically, friendly fire? It seemed like advance excuse-making. In case something goes awry in the future, we can blame the AI. See, I saw a whole future of AI-as-patsy unscroll before me when I read this story. We had all these drone strikes during the War on Terror, which killed wedding-goers and people herding animals in the desert, innocents and so on. Now all that can be shifted to the AI. Everything in warfare that doesn't go well is potentially the problem of the computer, not the person now. So, I guess I saw a preliminary ass-covering that might come in useful later.
Matt Taibbi: People take a perverse pleasure in this stuff. And why would they leak it out? It does feel a little bit odd. But when I first saw this, I instantly recalled a story I did a long time ago about drone attacks, and I remembered that the military had released a very strange detail as part of a story that was given to the Washington Post, I believe in 2012. The headline of that story was "Remote US Base at Core of Secret Operations." This was about a drone base in Djibouti. And there was a very strange passage in the very middle of that article. It goes like this:
In March of 2011, a Predator parked at the camp started its engine without any human direction, even though the ignition had been turned off and the fuel lines closed. Technicians concluded that a software bug had infected the 'brains of the drone,' but it never pinpointed the problem.
"After that whole starting-itself incident, we were fairly wary of the aircraft and watched it pretty closely," an unnamed Air Force squadron commander testified to an investigative board, according to a transcript. "Right now I think the software is not good."
Somebody, I think, had to tip off that reporter to the existence of that piece of testimony. Just that detail is a little bit strange to allow out to the public. So now there's this idea that Predator drones or Reaper drones are maybe trying to get onto the runway by themselves. I wonder if we're going to start getting stories like that too.
Walter Kirn: As any listener to this show knows, I cultivate a paranoid garden in the back of my head. And I try not to let it into the front of my head too often, but worst-case scenarios always appeal to fiction writers such as me. And as I say, one of the problems with drones has been when they do something bad, when they blow up a wedding or a school, their movements can always be traced back to a human operator. But this new emphasis on AI, it looks more and more to me like excuse number three that can be used in the case of a disaster.
It took over... "I'm sorry. Usually, it doesn't do this. It only does this one in a hundred thousand times, but this time it happened." Will we see the Washington Monument explode in our lifetime due to independent autonomous AI? Who knows? It just seems like if the JFK assassination were to happen again, it would probably be a drone that went crazy, one that was meant to protect the motorcade. "We don't know what happened." Anyway, as I say, scariness seems to be a marketing tool for AIs. Last week we had Sam Altman, the OpenAI impresario behind ChatGPT, on Capitol Hill saying, "Watch out. Be careful. You need to regulate this thing, maybe."
Matt Taibbi: It does sound a little bit like a marketing ploy. On the one hand it’s calling for regulation, but on the other hand, it’s a way of getting into the news that AI is so close to being alive that, look out.
Walter Kirn: I’m no engineer, Matt, but I have tried to do as much reading as can be expected of a novelist on the subject of how AI actually works, especially these large language model chat programs, and it is far from conscious. If it gives the impression of consciousness, it’s only because such massive computing power is being trained on rather small problems like writing an essay that our brain is tricked into thinking the complexity of the document must be the result of some form of consciousness. In fact, it’s the result of a conjuring trick of such immensity and so much, as I say, computing power that it fools us into thinking so. And the more I learn about AI, strangely, the less scared I am of it, except as maybe a tool that will launder human culpability and be used as an excuse or maybe as a social factor in that it will displace a lot of workers or make redundant a lot of jobs.
But its potential to go rogue, once you study its current state, seems rather limited to me. And so, I wonder why, given that, there are so many scare stories, and gleeful scare stories. Now, Elon Musk a few weeks ago gave his scenario for why AI should be feared, his scenario for it breaking out and causing damage in society. And his picture was that because AI could generate texts so quickly and supplely, it was going to learn to become the most influential writer on earth, that somehow it was going to generate speeches and arguments that were irresistible to the human mind and would lead us into decision-making that otherwise we would resist or abhor. It was going to become the great orator, in other words.
Matt Taibbi: Let's get into that, because I think I'm with you on this. I totally disagree with that scenario. I am terrified of AI as a targeting tool for drones because clearly, as Michael Hayden said, we kill based on metadata, and so they're creating algorithmic targeting processes. And that is just a way to... It's an end run around conscience, basically, for people who are firing missiles at people halfway around the world.
Walter Kirn: It’s a modern dream to kill without responsibility, to have war without culpability. And you’re right, that is terrifying, the offloading of all of that onto a computer.
Matt Taibbi: On a different scale, in a moral universe that's not nearly as horrible but is scary in different ways, I think AI and machine learning will be used in the censorship realm, they already are, to vastly enhance the ability of platforms and organizations and governments to streamline the process of forcing everybody into the same narrow band of permitted thought. That's already happening. They've become very good at that. And there are lots of AI-enabled tools out there already helping make that happen. But in terms of this idea that AI is going to become a thought leader and a great orator, there were two stories that came out this week that are good starting points for that discussion. One was in Business Insider; the headline was, "It's Becoming Clear that AI is going to Whack the Mediocre Middle of Office Workers."
It starts off with the lede, “Mediocrity will be automated.” It says,
Specifically, AI could disproportionately impact the middle class of white-collar workers. The folks who are mid-career, mid-ability, mid-level, and yes, in many cases, mediocre. Here's why. A bunch of academics studied the impact of access to an AI-based conversational assistant on 5,200 customer support agents at a Fortune 500 software company. They found that the tool helped increase productivity by 14%, and critically it was novice workers who benefited the most.
So basically, the ML systems are capturing what the mid-level employees are supposed to do, and that's where the productivity is coming from. They're not able to reproduce what the more senior-level operators are able to do. So that came out this week. And then there was another story that just came out today, and this I thought was something that I really wanted to get your take on, although you've already written about this. This is in Wired today, "Why Hollywood Really Fears Generative AI." And then it's just this whole thing about basically how AI is a threat to SAG writers.
Walter Kirn: SAG is actors, WGA is writers.
Matt Taibbi: Yeah, right. Sorry, SAG actors. SAG has been concerned about machine learning tools. This is about the deepfake involving Ryan Reynolds and the idea that actors might be replaced by AI-enabled deepfakes. But there’s been similar concern about AI being able to produce screenplays. And you’ve written about this, you wrote a really funny essay back in March called Project Parakeet. There’s some stuff in here I want to read from, but first I want to get your thoughts on all that because in my mind, AI is going to be great at quantitative jobs like stifling speech, not so great at trying to imitate people and convince them and become cultural icons.
Walter Kirn: I have a thousand thoughts here, and it's a bit of a challenge to structure them, but I'll start with this. I have written screenplays, I've written for TV shows, and I know something of how the process works, and I'm a member of the Writers Guild of America, which is currently on strike in Hollywood. And one of the issues in the strike is: how will the studios deal with AI-generated material? Will writers who maybe assist themselves with AI still get credit for scripts and be paid in the same way? Or will the whole thing get away from us? The actors have their own concern, which is even more futuristic and weird, which has to do with the use of AI-generated images as actors in films. There was a movie a few years back called The Congress, in which Robin Wright Penn is scanned into a data bank. Her image is scanned, thousands of photographs from thousands of different angles are taken, and she's scanned in, and she loses control of her image.
And that was done at a time when it was already foreseeable that this technology would come in. Some of the articles I've read about the possibilities for AI with visual images, they're even talking about movies that will take the viewer, take your image, Matt, and put you in the movie. You could be the consigliere, you could be the Robert Duvall character in The Godfather. We can imagine a time when the very tradition of moviemaking in which stars and actors we don't know fill the screen is changed, and we can insert ourselves, other people we know, or even actors that we particularly like into roles. Maybe you don't like Marlon Brando, and you would rather have The Godfather be played by Mickey Rourke. So it's a big ball of wax, but it comes down to this: will this stuff be any good? Because as far as the audience is concerned, whether a writer gets paid properly or whether an actor loses control of their image doesn't much matter. That's an inside-industry concern.
Will the audience prefer, or at least accept, AI-written dramatic literature, scripts? Well, now there's the problem of: how good are the scripts, currently written by humans, that AI has to compete with? And one of the problems is that in our industrialized, overly commercial, and rather formulaic popular culture, the scripts we see on TV and in movies are proto-AI. They come already from templates about how the first, second and third act should work, and how the hero's journey should be played out and so on. So we've gotten used to formulaic entertainment, and it's not a great leap to think that those formulas can just be further streamlined, and maybe people won't know the difference.
But there is one area in which people are still demonstrably superior to AI, and that is in the writing of prose, the writing of texts to be read, not necessarily performed. Though I will suggest that I think AI will have trouble with dialogue when it comes to movies. Dialogue is such a personal and idiosyncratic and strange intuitive process for a writer, and it depends on so much knowledge of the real world and people and ethnicities and all kinds of things that I don’t think that they’ll be able to make credible dialogue in movies through computers for quite a while. But getting back to prose, I called that article-
Matt Taibbi: Can I read what you wrote? There’s this great paragraph in your Project Parakeet story:
Then there's the mediocre writing itself. If you've ever wondered what the speech of the world's most articulate parakeet might sound like, the world's most articulate robot parakeet with a chip implanted behind its beak and a web connection in its skull, wonder no more. Though AI strikes few false notes in its feats of mimicry, that's precisely why it offends my ear, because false notes are expressive in their own ways, as any parent of a toddler knows. In the garden one day, my little son stumbled backwards, avoiding a giant insect. "It's a jungle bee," he cried. The AI could be programmed to make such charming mistakes, of course, but it would be playing catch-up with tiny tots. The prospect of synthetic machine-made baby talk is the very stuff of nightmares.
Which is totally true. My little son, just last week, was reading one of Aesop's Fables, and the last line is, "Honesty is the best policy," and he read it as, "Honesty is the best police," which I thought was hilarious. But you're absolutely right that the computers will never be able to do the unpredictable, the idiosyncratic, the illogical. One of the reasons I'm not a good fiction writer is that I can't depart from what I'm logically putting together as, what is going to happen next? And that's the great gift of a fiction writer, to make the fictional world work in the same illogical or ridiculous way that the actual world works.
Walter Kirn: That's because to AI, the world does not consist of the cars going by and the people walking down the street and the temperature of the air and the smell of pierogis from the Polish diner wafting out onto the street. To AI, the world is all the texts that have been written up until that point, which it has access to and can train on. So, I'll give you an example of the absurdities that result. I did an experiment once where I took the first couple of lines of a book I'd written and gave them to an AI program as the beginning of a story, and asked to see how it would continue the story. And this is going to get a little complex. It's a book called Lost in the Meritocracy. And in the first sentence, I describe how, when I was going to the SAT tests in St. Paul on a school bus full of high school classmates from our small town, everybody started passing around some cherry schnapps, obviously not giving a damn how they were going to perform on the SATs, but taking advantage of the chance to drink on a field trip.
Well, so I put in the first line about us being on a bus to St. Paul to take the SAT test, and the AI then, within seconds, churned out a couple of paragraphs, new first paragraphs for my book. And in paragraph one, it said, "As the bus passed through St. Paul, we looked out the windows at the refugee camps lining the road," and so on. And it asserted that in St. Paul, Minnesota in the 1970s, there were a lot of labor camps or some concentration camps next to the road, because it doesn't know anything about the world. It doesn't know anything... It can make plausible sentences because all the AI is really doing is running a constant set of analyses about whether the words that follow the words that you just had will result in a plausible, credible sentence.
And so it's doing these calculations: after the words, "Yes, I have to go," what are the most likely next words? And then it does a distribution and sees which are the most likely in the universe of texts that it has scanned, and it gives you the next most plausible word, and on and on and on and on, without any reference to reality. You might... Now, people who argue for the wonders of AI and AI writing tell me, "Oh, yes, but it derives reality secondhand, because the universe of texts that it's drawing from were about reality, and therefore it has got a synthetic notion of reality." But that just isn't really convincing. You can't get a story about what it feels like to be on a troop ship headed to Normandy Beach from the inside of an American 18-year-old from AI.
You can get an average of all the stories that have already been written about that incident, but you won’t get one that feels like it happened to an individual because it didn’t. And you see the difference between writing and AI quote, unquote, “writing,” is that we write about something. We write about the world, we write about our reactions, our emotional and mental reactions, the memories that are triggered by the world and so on. It’s done with reference to something. AI refers only to a body of texts that preexist it. It’s psychopathic in the sense that it’s like an alien that came to earth and said, “How do I behave at birthday parties?” And it has no idea, so it somehow does an analysis of, I don’t know, movies of birthday parties, and in an instant, it makes itself the perfect birthday party goer.
But it's still not having an experience, it's just a psychopath. And the example I like to use is: is there a difference between a good mother and an actor portraying a good mother? Well, of course, there is. Is there a difference between a good mother and a psychopath who has come into a party to murder someone, but in order to gain access, has to be the perfect mother of a child at the party? Of course, there is. Now, interestingly, Matt, we talked about randomness and surprise in AI. They have found that when the programs have to choose the next most likely word in a sentence or a paragraph, if they always choose the most likely word, the prose comes out very flat, it seems robotic.
So they have built in a little bit of wiggle room, such that sometimes it chooses the second most likely word, or even the third most likely word, and that gives it lifelikeness. So, they are trying to overcome this problem of robotic-sounding text by giving a little room for error, a little fuzziness to the whole thing. But I can still spot the stuff a mile away, and it's starting to crop up in journalism a lot. I'm starting to see sports stories and financial stories especially, which are always formulaic in the first place, except in the most inspired hands, and know that they were written by an AI.
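[Editor's note: the "wiggle room" Walter describes is what engineers call temperature and top-k sampling. Below is a minimal, hypothetical sketch of the idea in Python; the word scores are invented toy numbers, not taken from any real model.]

```python
import math
import random

# Toy next-word scores for the prompt "Yes, I have to..." — invented numbers.
next_word_logits = {"go": 4.0, "leave": 2.5, "say": 2.0, "admit": 1.0}

def sample_next_word(logits, temperature=1.0, top_k=None):
    """Pick the next word. temperature=0 always takes the single most
    likely word (the flat, robotic-sounding mode); higher temperatures
    give the second- and third-most-likely words a real chance."""
    if temperature == 0:
        return max(logits, key=logits.get)
    ranked = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        ranked = ranked[:top_k]  # consider only the k best candidates
    # Softmax with temperature: higher temperature flattens the distribution.
    scaled = [score / temperature for _, score in ranked]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices([w for w, _ in ranked], weights=weights)[0]

print(sample_next_word(next_word_logits, temperature=0))             # always "go"
print(sample_next_word(next_word_logits, temperature=0.8, top_k=3))  # sometimes "leave" or "say"
```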
Matt Taibbi: That in itself is a little bit scary because AI is probably equal to all sorts of tasks that, as you say, we’ve already started down the road of making formulaic. But there are things that it just will never be able to do because an AI doesn’t live in a human body. It doesn’t have a lifetime of experience. It’s not experiencing the world through five biological senses, and it doesn’t have whatever that weird thing is in the human brain that makes us laugh when we see Larry David in a nun costume. That’s not going to be there. It can be taught that it’s supposed to laugh at certain things, but it will never actually have that reaction.
Walter Kirn: But scarier yet, we can be taught to laugh at stuff that’s not funny. Witness the Stephen Colbert Show. I mean, in other-
Matt Taibbi: Well, that’s true.
Walter Kirn: In other words, the laugh track, the insertion of social pressure to laugh in comedy soundtracks so that we know when other people supposedly laugh, when normal people supposedly laugh, does cause us to laugh too.
Matt Taibbi: I feel totally immune to that.
Walter Kirn: So do I. Just like there are some people who are immune to hypnosis and some people on whom advertising appeals don't really work, there are some of us, and maybe that's why we're journalists in the first place, who maybe even bristle when we feel these fake and canned attempts on our mind coming. But I will say this: it's not just that AI getting better is in some ways scary, and in other cases even terrifying. What's more terrifying is that we will train ourselves to accept and even be delighted by an inferior product. Remember, there were no jokes funnier than the jokes Stalin told in locked rooms with guards outside. You know what I mean?
Matt Taibbi: That is true, actually. They were really funny.
Walter Kirn: Once you start to realize that your participation in this society depends on you having expected responses to things, you will absolutely train yourself to laugh at the unfunny, cry at the non-moving, scratch your head at the stuff that is only semi-profound or mock-profound. Because what can be hacked in the human being is our social response. We want, at some level, to fit in, maybe not you and I, and maybe not everybody, and because AI in a way averages the whole human history of statements, it can then be placed in certain social settings in which it's dangerous not to agree with it, be moved by it and so on. So I call this phase, in which the AI is getting better and, in parallel, people are starting to train themselves down, as it were, dumb themselves down in order to accept the product that it can provide, the taste destruction phase. In the rise of artificial processed foods, they had a problem.
Wonder Bread didn’t taste like anything. It certainly didn’t taste like anything a baker could make, and its texture was so uniform that it could be used as insulation, but they had to get America to like it.
They had to get America to like Tang over orange juice, Wonder Bread over something that came from the bakery, Miller beer over something that a real brewer made. And they did it through advertising and through social pressure, basically. And they also made the variations that come with human-made real food seem suspect, like, "Who wants that crunchy crust? Who wants those specks of colored grain in the bread? I want it absolutely as white as paint on a new house." And so, I expect a social push to make us comfortable and even excited about the mediocrity that AI is so perfect at providing.
Matt Taibbi: But don’t you think that there will always be people who just will not be attuned to that?
Walter Kirn: They will be akin to the decadent counterrevolutionaries who didn’t find Soviet art inspiring.
Matt Taibbi: But there were a lot of those people, they had to kill a lot of those folks. Because when you're talking about the way AI writes, it reminds me of Twain's essay, "Fenimore Cooper's Literary Offenses."
Walter Kirn: One of my favorites.
Matt Taibbi: It's one of the great essays of all time. He's talking about the experience of reading Cooper. It's just like somebody who's tone-deaf in music, who's sharping and flatting all over the place, and it just makes you cringe. And then he makes this huge list of where Cooper uses one word when another one is called for. And I'll just read a small section:
He uses "Verbal" for "oral"; "precision" for "facility"; "phenomena" for "marvels"; "necessary" for "predetermined"; "unsophisticated" for "primitive"; "preparation" for "expectancy"; "rebuked" for "subdued"; "dependent on" for "resulting from"; "fact" for "condition"; "fact" for "conjecture"; "precaution" for "caution.."
Matt Taibbi: Mark Twain, his ear is nearly infallible. If you read anything that he's ever written, like all great writers, it's always the right word in the right situation. And he even had rules about that. He always used the right word, not its second cousin, even if you have to repeat it, which is a violation of one of those AI-style rules that people give you about writing. But the truly good writers will always come up with a word that's alive on the page, and the machine is never going to be able to do that. You'll be able to tell it's a mimic.
Walter Kirn: But what I'm saying is, it may not have to get that good. First of all, we may lose our facility for recognizing stuff that's that good. And we may actually be bullied to the point where we don't want to see that human writing anymore. It's upsetting. It's like in the story last week, "The Machine Stops," where everybody's afraid of doing firsthand thought. We may start to feel that unless something has the dull aura of machine-made, pre-approved, state-generated text, we'll be allergic to it.
Unbeknownst to ourselves, we've already given the very best example of the process that I'm describing with James Comey's novel. Here we have James Comey's, I'd say, not-great, maybe even mediocre-at-best novel being praised by the big bestselling mystery writers on the cover, and being lavished with attention by the big stars of cable television, all of which is a message to us that if we don't like this book, it's our fault. If we don't like this book, we're in a dangerous situation. We've become a dissenter. We may agree with all their politics, but we still hold out the thought that writing should be good. And when we see that we aren't jibing with this thing, we might learn to jibe because, I mean, this is FBI-approved prose.
Matt Taibbi: You're right. That's ultimately what scares me about AI: it's not that it's going to be really good at this, but that what it is good at is heightening the social anxiety. Clearly, with human direction, it is very good at creating the fear response in people, getting them to avoid these dangerous ideas or ideas that don't quite fit or whatever it is, and you're right.
Walter Kirn: Well, here’s a scenario for you. The first AI comedian will be linked to an AI drone program, and when the AI comedian, maybe it’s feeding its AI into a live human who’s reciting the lines on stage, when the AI detects that someone in the audience has not laughed a certain number of times at its jokes, it then sets them up for a drone appointment on their way home.
AI may actually eliminate its critics and its detractors and those who are simply left cold by its behavior, to the point where it becomes the most brilliant thing on earth simply because those who didn't think so are gone. Their checking accounts have been frozen, they failed to get healthcare. That's how it works once the thing gets onto social media, because AI novels, once they're put on social media, will give their own reviews.
They will troll their critics, and we're already seeing that in the way they're starting to adjust Rotten Tomatoes for certain movies. Apparently The Little Mermaid sucks. Some say that there is an attack on The Little Mermaid on these reviewing sites because they made the Little Mermaid Black. Others have noted that the critiques on these sites don't have to do with the race of the actor playing the Little Mermaid; they have to do with how bad the CGI is, and how somehow amateurish and predictable the production is. But in any case, I think it was Rotten Tomatoes, or maybe another one of those similar sites, that has had to adjust its algorithm to boost the score of The Little Mermaid because of all this negativity that's flooded in.
So that's a numerical score on those sites. But I imagine bad AI critics, Roger EbertTron will... Well, like Walter Monheit or Pete Travers... I've got to say, you worked for Rolling Stone.
Matt Taibbi: I knew Pete.
Walter Kirn: Pete was famous. Pete’s a real writer and a real reviewer and a real thinker about movies and everything, but he was famous for giving quotable reviews to movies, very positive quotable reviews. He seemed to know how to embed the sentence that Warner Brothers would want to put above the advertising. And so AI will do all that. It will propose and it will dispose. It will be the judge, the jury, and the celebrator of its own products, and that process will go on social media. And people who try to stand up to it will get algorithmized out of existence or suppressed in their influence over the final judgment. And we may have a culture that is very much like Wonder Bread or Tang or Miller Beer. All the things that happened to our food in 1953 will happen to our culture in 2026.
Let me add something about AI as it stands. The text-generation, large language chat programs, as I say, basically go word by word, deciding what the next word should be on the basis of a lot of calculations run on the database, the searchable, scannable, preexisting universe of texts. But the thing may break through into four dimensions when the productions of AI, and this is going to sound really out there, and everybody's going to think Walter's on mushrooms or something, when the productions of AI are linked to some feedback metric that can judge our response. In other words, and people don't realize it, we already have eye-tracking software and abilities in our iPhones. We already have it in our computer cameras. We could probably easily have it, and maybe already do to some extent, in our smart televisions.
So as, let’s say, an AI movie goes on, it is perfectly possible, and we may even volunteer for this, we may even put special gloves on that will study our galvanic skin response like the lie detector does. As the thing plays out, the computer may say, “This is working, that’s not working. This is getting to them, this isn’t getting to them.” Or a text on a phone could be, “Eyes are glazing over, eyes aren’t showing this rhythm of movement that comes with interest and excitement, reading slowed down, reading stopped.” And it will learn not just how to make plausible seeming texts based on every text that’s been written before. It will learn how to make exciting or influential texts based on our response reading them, and that really is just a heartbeat away and I’m sure is being experimented with now.
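[Editor's note: a hypothetical sketch of the feedback loop Walter is imagining: generate text, read off some engagement signal (eye tracking, skin response, reading speed), and steer future output toward whatever measured best. Every name below is invented for illustration, and the engagement metric is random noise standing in for real sensors.]

```python
import random

# Invented candidate registers the generator could write in.
CANDIDATE_STYLES = ["ominous", "sentimental", "jokey", "clinical"]

def generate_passage(style):
    # Stand-in for a text generator conditioned on a style.
    return f"[a paragraph written in a {style} register]"

def measure_engagement(passage):
    # Stand-in for the feedback metric (eye tracking, galvanic skin
    # response, reading speed). Random noise here, for demonstration.
    return random.random()

def mean_score(rewards):
    return sum(rewards) / len(rewards) if rewards else 0.0

# Epsilon-greedy loop: mostly exploit the style with the best average
# measured response so far, occasionally explore the others.
scores = {style: [] for style in CANDIDATE_STYLES}
for step in range(1000):
    if random.random() < 0.1:
        style = random.choice(CANDIDATE_STYLES)                   # explore
    else:
        style = max(scores, key=lambda s: mean_score(scores[s]))  # exploit
    reward = measure_engagement(generate_passage(style))
    scores[style].append(reward)

print({style: round(mean_score(r), 3) for style, r in scores.items()})
```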
And at that point, Elon's dystopian vision of the ultimate eloquent Oracle of Delphi... Well, not the Oracle of Delphi, the Oracle was a little cryptic, but the ultimate persuasive speaker who can get people to do things against their better judgment. At that point, it may come true, because once it has sussed out the program of both individuals and society at large in terms of our likelihood to sit up, be excited, start sweating, get scared, once it's hacked our emotions through some feedback mechanism, then the scary scenario becomes genuine, I think.
On Ursula K. Le Guin's The Ones Who Walk Away from Omelas:
Walter Kirn: This is the story from 1973. It's extremely short, as you and I said before the show. It's almost like the outline or the treatment for a short story rather than a short story in itself. And it describes, without a lot of specifics, a wonderful society in which everybody is happy. It's summer, they're preparing for their summer festival. It's like Lake Wobegon, where it was said that "all the children are above average," peace reigns, happiness... We're told that there is no guilt in this society, that one of the secrets to its happiness is no one feels any particular guilt about anything. And Le Guin hesitates to describe the society specifically because she knows all readers have different ideas of what a happy world would be. And she says, "Well, just insert your vision of what a happy world is."
She says, "If you need orgies in this world, if you need to believe that everybody's sexually liberated and having a great erotic time, imagine that. Whatever it is you need to imagine in an ideal society, go ahead." And so that's a strange move in a story. Usually, the writer takes it on herself to come up with a specifically inspiring world. She says, "Just imagine any one you like." It's like a hypnosis tape that says, "Go to a relaxing place from your youth. I don't care what it was. Just go to one. You get to choose." And then, after luring you in with this hypnotic prompt to imagine a completely serene and cheerful society where there's no lack, there's no guilt, there may even be orgies that you get to be the most beneficial participant in, she says there's one problem with the society. It all depends on the suffering of a little child who's kept in a basement, kept off to the side in this compartment that it's never allowed to leave.
The child doesn't get any sunlight, barely gets any food, is covered with sores, sits in its own excrement and is just miserable 24 hours a day. But it's only one child, and though everybody in the society knows that their happiness is, in a way, negatively contingent on this child's unhappiness, they all get used to it. Because in the utilitarian analysis, for many to be happy at the expense of one tiny miserable child that isn't even seen that often, who wouldn't take that?
Matt Taibbi: And surrounded by mops.
Walter Kirn: Yeah. She’s in a mop closet, the child. Who wouldn’t take that deal? And for anybody who believes in the utilitarian analysis of society, I guess, that’s not a problem. But for anybody who doesn’t, it might be. And so the reader is just basically confronted with this notion of a happy Disney society that has no problems except for this secret of a suffering child on whose unhappiness everybody else’s enjoyment depends.
Matt Taibbi: It reminded me a little bit of The Picture of Dorian Gray.
There's a passage:
They know compassion, it is the existence of the child and their knowledge of its existence that makes possible the nobility of their architecture, the poignancy of their music, the profundity of their science. It is because of the child that they're so gentle with children. They know that if the wretched one were not there sniffling in the dark, the other one, the flute player, could make no joyful music as the young riders line up in their beauty for the race in the sunlight of the first morning of summer.
We hide away in the attic our secrets and our monstrousness, and that's what allows us to have this superficial happiness. But the difference between this story and the Wilde story is that it doesn't come crashing down in the end with some monstrous moral lesson; that's just the way it is. But this idea that happiness is predicated on somebody else's secret suffering, that's pretty creepy and disappointing.
Walter Kirn: And that the happiness of the collective is dependent on this misery of an individual. But there are many such stories, and I think a lot of them came out in the early '70s, frankly. I don't know if you remember The Wicker Man, the movie where this island off of Scotland has this festival where they sacrifice someone to a sun god every year so that the crops will grow… We had "The Lottery," in which a random person is chosen in this little village to be stoned to death to keep the cycle of agriculture and social peace running. But this story is different from all those in that it doesn't involve a sacrifice. It involves the ongoing suffering of someone, and I think that makes it particularly apt for our era of cancellation, in which we don't sacrifice somebody and have this cathartic ritual going back to early mankind, in which we somehow get all of our anger and jealousy out in one scapegoating. Like I say, that's a common trope. It goes back to The Golden Bough and all these other studies of mythology. It goes back to the New Testament.
And in those stories, there's somehow a magic that comes of the sacrifice, but in this story, it's that off in some corner, obscurely, in a way that you know about and cannot deny, there is misery. It's ongoing misery, and that misery is somehow mystically the support structure for your happiness. And the story asks the question: would you be able to be happy? Would you be one of the festive enjoyers of the summer festival? Or would you be, as in the title, one of the people who walks away from this society? And at the end of the story, she says, for some reason, it's hard to imagine why, and we don't really have insight into their processes, their thinking.
Some people can't deal with this, and they walk away, they leave the place. I can't imagine what society they're imagining, what they think the better world they're going to is. But there's something about this arrangement that they can't countenance. And these stories are morally simplistic in some ways. The reader is always invited to imagine that they're the one person who wouldn't throw a stone, or who would walk away from the society whose secret sustenance was the misery of an imprisoned child. But the good stories ask us whether we really are the heroes at all.
They ask us whether, in fact, we are currently abiding in a society which depends on unseen misery, which we all secretly know about, and we’re not walking away. And as a metaphor for the world now, I think it’s a pretty good one. Every day that you go to buy your Starbucks or every day that you log on to Netflix, you know that a bomb is falling in the Ukraine. It might be a bomb put there by the evil Putin, or it might be one of ours. But every day you know the suffering is increasing. You know the people are dying. It was certainly the way it was in the Vietnam era when this story was written. We had that searing picture of the napalm child or whatever sitting out on maybe railroad tracks or something.
Matt Taibbi: The running girl.
Walter Kirn: Yeah, running girl. Well, that one too. There was also a baby on a train track. And maybe that wasn't Vietnam era, I'm confused. But we had the running girl. She wrote this story at precisely that time. We also, in the early '70s, were very aware of Anne Frank, the little girl who'd been hidden throughout World War II. It was a heroic story, but at the same time, the image of a child imprisoned while adults go on about their business, I think, had unconscious resonance in that period. What would it be now?
Matt Taibbi: We all know that we're getting iPhones for cheap, but that's because you've got 13-year-olds somewhere in China or Indonesia who are making parts for pennies on the dollar. We're all aware that somewhere in the world people are working in atrocious conditions to deliver us the goods and services that we use every single day, and that we order conveniently through Amazon, painlessly, with no human interaction whatsoever. And on some level, all of us are aware that not too far away from us, in most cases, there are families that are going to bed every night dividing up a cheeseburger into five parts to feed their kids, or whatever it is.
Once you get people to accept the fact that this is going on all around us, it's just overwhelming. There's a limit to how much you can think about all these things. And once you stop, I think that's when we become the good citizens of Omelas. That acceptance of our culpability becomes part of a secret compact with the rest of the world.
Walter Kirn: But what's great about the story is that it doesn't allow the reader or the hypothetical citizen of Omelas any outs. You see, we can tell ourselves that the child laborers who made the tennis shoes we're right now running down the street of our suburb in are, yes, temporarily suffering, but their whole economy is being brought up. It's at an early stage, and everything's going to get better in Cambodia or wherever it is once more investment comes in. It's a temporary state.
We can tell ourselves that a war going on will end and, "Anyway, it's a war we didn't want, so don't blame us." But in the story, she's very uncompromising in stipulating that this child will never be released, will never be happy, and there is no way to construe this child's predicament except as permanent and absolute. And you are not allowed any of the rationales or any of the fantasies that we use in life to excuse away the suffering that may underlie our prosperity or our peace. It says you have to be able to deal with it. And it seems that most of the people in her society have. Remember, she goes on very early about how there's no guilt. Most people have somehow conquered any guilt response over this predicament. And I think in that sense it does what literature can do that the news can't, in that it makes a very pure, pristine case for this queasy feeling of ours in a way that real-life situations don't, because they always offer us an out, a rationale.
In Omelas, there’s no way you can pretend this kid’s going to grow up happy or this is happening to them because they were bad, or they come from the wrong group or whatever.
Matt Taibbi: The news teaches us to blame suffering on an outgroup.
Walter Kirn: And in that sense, the news has a consolatory function. In other words, it allows us all to believe that the terrible things about our world are temporary, justified, part of a process, inevitable, et cetera, so that we don't have to feel bad about them. The news tells us that wars are somebody else's fault, that economic suffering is often transitional or transitory, or is the result of bad people and bad rulers that we're going to get rid of. And insofar as the country is in a moral panic, and it's in about 10 of them right now, we tell ourselves that canceling people, censoring them, ruining their lives, throwing them into silence and ignoring them is somehow always deserved, part of the engine of progress.
I saw a tweet the other day by a recording artist. Somebody had said they didn't like their song, it was too politically correct. And this guy said, "You're not the people I want at my concerts anyway. I'm glad this offended you, because now I know there's one less of you in my audience." So our little children locked away now are all there because they were bad. And our media has developed about a thousand ways of convincing us that anybody who's unhappy or upset or losing out is defective, wrong, politically backwards, socially unacceptable in some way. We can't just stare at the fact that they exist, that we have constructed a society that depends on losing. We've even constructed a media and a discourse which depend on constantly kicking people out, silencing them, and even vilifying them.
Matt Taibbi: That closet is getting a little crowded in the modern landscape; we keep shoving people in there. This story's concerned with suffering, but there's also an element in modern media of just exile, where we take people and they're no longer discussed, or they're un-personed. But the overall effect is the same. It's the strengthening of this collective at the expense of the individual, the idiosyncratic individual, which coincides with our AI topic, right? I mean, the broad mean stays healthy, unconcerned, guiltless, and then we keep stuffing away the ugly, sore-covered human side of ourselves that's in a mop closet and is really 10 years old but only looks six.
Walter Kirn: The other brilliant feature of this story is that it stipulates the absolute minimum of suffering. It’s only one person, one child, and you don’t see them. It whittles the suffering down to its theoretical minimum and asks, “Would you still be content if it were only one?” Because we live in a world, of course, where millions suffer, tens of thousands die, and maybe hundreds of millions are without the resources for thriving. But if it were only one, would you be okay? And the story says most people would be. It’s Buddhist in that it doesn’t criticize them. In fact, its only doubt is about the ones who leave: “Who are these people who walk away? What are they thinking? I can’t quite imagine.”
Matt Taibbi: “Where do you think they’re going?”
Walter Kirn: “Is there a better world? Is there a world where there could be a better situation than this? Are they crazy?” Oftentimes in real life, Matt, people come to me and say, “Hey Walt, you’ve got a lot of criticisms of society and politics, and you seem to have a lot of dissatisfaction with the way things run. You don’t agree with the censorship regime, and you think surveillance is a restriction of liberty and the mental freedom of the human being. Yeah, okay, but let’s be real. Could we have our society and all the good things it confers without all these things that you’re down on? You’re not a very realistic guy, Walt. In fact, you’re just playing a game, because you know deep down that your prosperity, your ability to speak, the computer that you’re talking into right now are all predicated on a certain amount of warfare, suffering, injustice, inequality and so on. You’re trying to have your cake and eat it too…”
Matt Taibbi: My final thought on this is that it’s been an ongoing theme in a lot of the anti-disinformation stuff: they’re not content to have you sit there in silence and not have an opinion. They’re not content if you quietly express displeasure. They want you actively expressing support, showing somehow that you’re on the right track; otherwise you’re accruing negative marks, you’re going to end up demerited according to all these algorithms. And that’s disturbing.
Walter Kirn: To sit it out.
Matt Taibbi: But they don’t want to let you do that. And that’s really troubling, that they’re searching you out, not letting you quietly keep your own opinion. They want you constantly making these expressions of non-conscience, which is different from what happened even during the Bush years, when we had all those stories about what went on at the black sites. We had people who protested the use of torture and that sort of thing. I think they’ve realized that it’s not good enough to have those people be an ignored minority. They have to actually be eliminated, I think, in order for this thing to work.
Walter Kirn: I think Abu Ghraib is a great image from reality to set beside the one in the story we discussed. How happy were you with the war on terror, and with how safe we supposedly were at home, once you realized that it depended on people being hung upside down or made to stand on one foot with a hood over their head? And some people were like, “I’m fine with it,” you know? Omelas is, in a way, a story about torture and the way it perhaps undergirds normalcy, as it were.
Matt Taibbi: And living with it.
Walter Kirn: In real authoritarian societies, totalitarian societies, Kim’s North Korea and so on, there’s often a book that everybody has to be seen reading. Of course, Mao had a book. I’m sure there were books in the Soviet Union that it was good to be seen reading. And maybe in ours it’ll be James Comey’s detective novel. Maybe you’re talking about how they’re not allowing you to just passively not participate; you’ve got to actually get in on the game, like those writers who put their blurbs on the cover of Comey’s book. Can you imagine? Was there a writer who said, “Hey, James, as much as I’d like to put my name on your book, it sucked and I just can’t do it”? Do you think there was that person? If so, they’re quivering at home right now and/or they’re one of the bravest souls alive.
But we are always asked to show tokens and badges of our enthusiastic agreement now. Usually, they’re emblems on Twitter and social media. I think that is a problem. We’re no longer allowed to sit this one out or quietly whisper among ourselves that the band sucks. They want us up dancing, clapping, and nothing less will do.
Matt Taibbi: In Soviet Russia, though, they had this concept called sovok. It’s a play on words, because it means dustpan, but it’s also short for a Soviet person. A sovok is a specific personality type that was everywhere in Soviet Russia. It’s pretty hard to describe, but it’s like a mass man, a mass person. One of the characteristics of a good sovok was that they could never shut up. You ran into these people in Russia long after the collapse of communism. It was a protective mechanism, because in Soviet times, the person who was quiet and looked to be thinking, pensive about something, was suspect. So the safest thing to be was a babbling idiot who never stopped talking and saying platitudes over and over again.
The average Soviet person spoke almost constantly in what they call pogovorki, these slogans, right? Like, “He who doesn’t take risks doesn’t drink champagne,” or “Work isn’t a wolf; it won’t run off into the woods.” You just repeat stuff like this over and over again, and that way people know that you’re stupid, which is safe. Meanwhile, if you’re smart and you’re going home every day and locking yourself in your apartment to write The Master and Margarita or whatever it is, you’re the person who ends up in the bread truck going to Siberia. So I think there’s an element of that in the net age: we like people who are constantly expressing their non-thought. Those seem to be the people who succeed.
Walter Kirn: Well, so there we have a best use case for AI, because you’ve just described these meaningless maxims and folk sayings that people would babble out to appear harmless and unconcerned. I think AI could probably generate a script every morning that people could just repeat on social media, like, “This is the average, completely bought-in, non-dangerous citizen-speak you can pour out today.” So in the morning I might put in, “What should I tweet today and say on Instagram and maybe even on Substack that will keep the state uninterested in me?” And it’ll say something like, “Well, first tweet ‘A penny saved is a penny earned.’ At 11:30, write a Substack about how it’s not the heat, it’s the humidity. At three o’clock, put up an Instagram of yourself with a puppy you rescued from the pound, saying he’s doing great. And at seven o’clock, sign off by saying, ‘I don’t think Succession could get any better, but after seeing the last episode, I expect this is going to be the War and Peace of our time on TV.’”
And you’ll have completed an entire day of total, banal invisibility to the state. I think AI would be perfect for that job. Mediocrity, after all, Matt, is the safest place to be, especially when the cops are out, you know?