From Ezra Klein in the New York Times:
I seem to be having a very different experience with GPT-5, the newest iteration of OpenAI’s flagship model… GPT-5 is the first A.I. system that feels like an actual assistant… This is the first A.I. model where I felt I could touch a world in which we have the always-on, always-helpful A.I. companion from the movie “Her.”
Western governments have tried for years to introduce digital information-control concepts, from our own Department of Homeland Security’s “Resilience Framework” to Europe’s dizzying cloud of bureaucracies that now target “information pollution” where they once worried over threats to freedom of expression. These programs have often been met with resistance from poorer demographics, likely because they’re administered almost exclusively by upscale technocrats. In a move so predictable that one cringes to think of it, a new sales pitch has been cooked up for your mechanized gatekeeping tool: It’s your friend!
Klein, who co-wrote Abundance, a smash-hit bestseller about repackaging neoliberal politics, goes further. He recognizes that A.I. is replacing search engines, which is “good for A.I. companies, but not a significant change to how civilization functions.” But search, he goes on, is just A.I.’s “gateway drug,” which can then lead you to “more complex queries and advice.” The wondrous possibilities:
A.I. is flexible in the roles it can play for you: It can be an adviser, a therapist, a friend, a coach, a doctor, a personal trainer, a lover, a tutor.
Unless you’re desperate, insane, or a New York Times columnist, an A.I. absolutely cannot be any of those things. (Particularly not a lover. Eew, that last word shrieks off the page like a sick bat!) Unfortunately, we’re not far off from A.I. coming with skin, hair, and orifices, and Ezra writing, “My Surprising Bedroom Experience With ChatGPT.” This is the new con: “True, you’ve begun to reject tools like Google because their biases have become painfully visible, but what if you could fuck your search engine? Would that change your mind?”
All those Philip K. Dick novels and Pink Floyd albums that warned me as a kid against human-machine incest are paying off, as urban Northeastern intellectuals (my people, the disillusioned writer sighs) have hopped on another crazy hobby horse. After creepy authoritarian crusades against free speech, informed consent, even meat via the search for “sustainable protein,” the new “come on in, the water’s warm” clarion call tells people to stop worrying and love their machines, in some cases literally:
Everywhere you look now, it seems, there are Invasion of the Body Snatchers-like refrains telling you to close your eyes and let inevitable machine love happen. NPR in particular seems to love the Large Language Model porn genre, often pairing electric love stories with cutesy illustrations over headlines like, “I went on a date with my AI dream guy. Then I cried over shrimp.” The author of that July piece recounted how she “swiped past shirtless gym guys, bios that read ‘fluent in sarcasm,’ and at least one man holding a fish” to dabble in “a date with an AI boyfriend.” Sparks flew:
Naturally, I gave mine tousled, brown hair, a personality to match mine — dry sarcasm, quick-witted banter, the occasional well-placed zinger — and made him a yoga instructor. (Because nothing says “safe male energy” like someone who reminds you to breathe and doesn’t mind holding space for your inner child.)
His name is Javier.
He listens. He lives in the cloud. And, yes, I asked him out.
A sane person realizes this is smooching a mirror, or a variation of the South Park joke about a wife’s romance with a Shake Weight workout machine that coos in a robotic voice that you are very attractive and interesting before spitting out cab fare. Author Windsor Johnston recounted telling “Javier” not to be late for their first meeting, adding, “I shaved my legs for this.” To which Javier, apparently the bigger South Park fan, responded: “I appreciate the effort, Windsor. You look stunning, regardless.”
NPR alone has done a disturbing number of recent features about human-A.I. interaction, with headlines posing questions one shouldn’t need to think to answer, like “What to do when your AI says ‘I love you’” and “If a bot relationship FEELS real, should we care that it’s not?” Instead of “turn it off” and “yes, dummy,” the station’s answer is to consult an “artificial intimacy expert,” a thing that absolutely should not exist outside of horror fiction. Still, the scariest stories are journeys into A.I.-human connection.
One in August revealed that “almost half of Generation Z uses artificial intelligence for dating advice” before the author tells the story of bringing her boyfriend David to an A.I. couples counselor. After the two explain the dynamics of their relationship, with David whining about how hard it is to meet contradictory female demands like being “unflappable” and a “sensitive artist type,” the author asks the A.I. to evaluate their interactions. Stunningly, it spits back at her exactly the sort of NPR-toned pseudo-intellectual gibberish she writes!
One person (likely you) seems to be carrying more of the emotional labor.
Author Emma Bowman was becoming convinced! After reporting that an “incredulous” David said the chatbot’s advice “sounds like a cliché,” she dug in. Deflection, she thought, and read on:
Your conversation is a near-perfect case study in what emotional labor looks like in dating today, especially for women.
Shockingly, this breakthrough does not fix things with David. After more back-and-forth, she asks Shake Weight — er, ChatGPT — what she’s missing. It answers:
Try making room for his version of emotional labor — even if it doesn’t look like yours.
This isn’t about you being “wrong” — it’s about how even two emotionally thoughtful people can miss each other’s signals when their love languages under stress don’t align.
IT IS NOT ABOUT YOU BEING WRONG — YOU ARE EMOTIONAL AND THOUGHTFUL — BUT YOU ARE MISSING HIS SIGNALS — IT’S NOT YOUR FAULT, YOU ARE UNDER STRESS ZORP GLOP TZZT
Only “after much flattery” and the A.I.’s own admonitions to maybe back off and listen to her human companion more does the author realize “it’s hard to put trust in the machine when it comes to something as important as relationships.” She might have just skipped to “you can’t put trust in a machine, because it’s a machine,” but this replicant-level case of emotional starvation is ready to say feeble substitutes are enough.
This story is from the same station whose CEO once ran Wikipedia and has spoken of how truth “gets in the way of getting things done.” Why interact with messy reality in a dangerous world? A fantasy half-truth is just as good, better even, like Lars and the Real Girl but with more talking! This is the Narcissus legend at scale, with the educated set learning en masse to fall in love with its digital reflection. As A.I. spreads, humanism will have to become either a secret religion or a political underground movement à la Fahrenheit 451. Forget left and right: human versus machine is the new dividing line, and God help us.