Discussion about this post

SimulationCommander

We're so lucky that AI was so unreliable right out of the box. Imagine the damage they could have done if they had started off straight and added the slant later. Now these things are going to be taken offline for legal liability reasons, if nothing else. You can't just have a machine that spits out slanderous material with a "whoopsie" disclaimer -- such a machine would immediately be abused to plant lies and hide behind the algorithm.

Hot_Lettuce

I’m a classicist who does research on ancient Greece and Rome. Sometimes, for fun, I ask ChatGPT a classics question. What’s terrifying is that it will never admit it doesn’t know something; it’ll just make up answers that sound right.

Once I asked ChatGPT the etymology of a Spanish word my brother heard in Mexico, because I was curious if the word came from Latin. ChatGPT gave me a Latin root word. I looked that word up in my Latin dictionary, and it wasn’t there. I asked ChatGPT what Latin texts that word appears in, and it named a work by Cicero, and WROTE A FAKE QUOTE IN GRAMMATICALLY CORRECT LATIN THAT SOUNDS SORT OF LIKE CICERO, incorporating this nonexistent Latin word it had made up to answer a question about a Spanish word whose etymology, I independently learned, is uncertain. It would rather make shit up than say it doesn’t know.

I’ve made a number of other attempts at using the thing for basic research lookup tasks, like “what’s the Ancient Greek text where this anecdote appears,” and it’s impossible, because it’s constantly inventing plausible-sounding bullshit and passing it off as the truth. It’s a very impressive tool, AI, but we have to understand what it is and what it isn’t.

