I fired my Autocorrect, and haven't missed it. But you know what's creepy? I never actually went in and changed settings. I woke up one day and it had vanished forever. I then recalled typing in "autocorrect sucks" about 100 times, and have to conclude that Apple's or Verizon's AI got the hint, and did it for me.
LOL, not! What Spellcheck is trying to accomplish is full of subtleties that are innate to our human understanding of things. AI can only imitate real thought, reasoning, emotions, etc., not actually do it. These programs only SEEM sentient; in truth, they are only a shell of an organic brain.
A big part of that is (say it with me now): NONE OF THIS IS REALLY AI.
It's like when they tried to claim Segways were the "hoverboards" promised in Back To The Future.
The whole problem is the geniuses who used to have freedom and power in science, technology, and the arts being devoured and desecrated by the macro-mediocrities in business and the military who are convinced that THEY are the "great people".
From there...well, read C. S. Lewis's "The Last Battle". Peel away the (admittedly overwhelming) aggressive Christian allegory and you may see why I've come to view Shift the Ape as one of the most sorely-underrated villains in literary history.
How is it not AI? You are probably confusing AI with strong AI. AI is everywhere, has been for decades, and is used in everything; LLMs like these are just the Current Thing of AI. You have probably fallen for the AI effect and think something is no longer AI once it enters common usage.
Same. I tried using ChatGPT in my business to help answer some basic landlord-tenant questions and to find some forms on a local government website. It couldn't help much with either task. I tried several different prompts, but the answers were confusing, and in the end I had to take the time to dig through the local landlord-tenant ordinance after all. I haven't used it since.
OK, based on the transcript of Matt and Walter's latest conversation about the Gemini launch, I just changed my mind--not only am I now super-impressed by the Gemini chat bot, I am super-terrified by it. The absurd, black-is-the-new-white historical gaffes that I thought were just a crude display of political correctness are actually a glimpse of a future in which all knowledge is what Gemini says it is--and before long, once we old timers die off, who will be able to say what the actual truth is on any given topic? As happened to Matt, the Gemini chat bot will be able to seamlessly blend what is true with "facts" it invents--and before long, few will be able to tell the difference, or know when truth ends and fabrications begin. What we thought was a bug is actually a feature, and maybe the most important feature in the entire Gemini program.
I urge everyone to check out Matt's latest with Walter.
Translation software and other dedicated language generators (such as language for lawyers) continue to improve, and already function at a level where students can use AI to generate and correct paragraphs. Is that a bad thing?
Yes, students using AI to "generate and correct paragraphs" is "a bad thing"--one of the primary goals of education is/should be for students to develop their own language capabilities in composing and editing their writing. For students to use AI in the place of their own brains when writing is to literally short circuit, neurologically speaking, their intellectual growth and development.
I disagree from experience. 1/ Nobody is going to push the toothpaste back in the tube. 2/ At the university level, the largest problem students face is the indifference of the profs/TAs/admin to the quality of the writing, research, and critical thinking students are producing. 3/ Every generation faces new challenges.
I learned more about teaching during COVID over 3 years than I did during the previous 3 decades. Yet the majority of teachers I worked with refused to adapt to the realities we imposed on students. With guidance students learned how to exploit and manage Zoom, tech resources, and data sources such as Google Books, the Library of Congress, image libraries, and map libraries. Many first-year students just out of high school learned for the first time how to do real research, developing source lists for pro/con arguments/topics and annotated bibliographies, first in their native languages and then in a second language. Students met 4 times a week to socialize, and then collaborate.
Students will use AI. They need to know how, where, and when. Step one involves stepping into the shoes of the students. Step two is treating them as individuals we care about and respect. As for the grammar formation processes, AI allows us to produce paragraphs in the style of Lee Child, Joseph Conrad, and George Sand instantly. Don't tell me that won't make a student's eyes light up. Working with the originals, students learn to READ critically first, and then use language produced by AI and the original authors.
PS. Students took notes in pencil on B-5 paper of all discussions which they photographed and submitted by email for credit. Worked a charm.
Well partner, I speak from experience, too, and I disagree with your thesis that surrendering to technological change is both inevitable and A Good Thing even when it does not meet our needs as human beings. And using AI to produce text, rather than learning how to produce it yourself, is definitely a step backward educationally speaking.
"AI allows us to produce paragraphs in the style of Childs, Conrad and Sand..." I have a better idea--why not have students read the actual texts (or portions thereof) of these and other great writers? "Paragraphs in the style of"--kind of like what Matt describes happened to him. No my friend, we don't need ersatz, AI parodies of great writers when the great writers themselves are still available to us. Fail.
Many of the technologies of recent decades have greatly improved student learning particularly when it comes to putting them in touch with sources that they otherwise would never have been able to access. So I am by no means totally opposed to new ed technologies and have used them extensively during the course of my four decades as a teacher. But AI, thus far, does not impress.
That said, your point about the indifference of profs/TAs/etc. to actually teaching clear writing skills and critical thinking is well taken. But that problem will not be solved by AI. More likely, said faculty need a swift kick in the ass and some real penalties for failing to do their job. And the part about treating the students "as individuals we care about and respect" has nothing--nothing--to do with educational technology and everything to do with the foundational principles of good teaching.
With respect, at no point did I suggest that AI could or should serve as a substitute for anything. I simply posed the question: is paragraph generation and correction using AI a good thing?
Again, with respect, we (teachers, et al) provide few incentives or examples for students to follow, or model in the real world. My main point, highlighted by ALL CAPS, is that reading both AI output and the originals critically is far, far more important for students today, because if we don't present students with both - and present the obvious attractions fairly and reasonably - they will simply proceed without informed and engaged guidance.
Your refusal (if I understand you correctly) to accept that treating students with respect in the 21st century has EVERYTHING to do with educational technology is bizarre, to say the least. A bit like saying people should only get their information from books when TV, radio, and films became available.
AI and new tech is the ocean/air we'll all be breathing/experiencing - we already are - chatbots etc, in government, industry, and education. Your opinions and feelings about such changes, and mine, matter not a bit.
As for kicks up the, etc, you must be living in a different academic universe than I. The edifice is a gigantic scam, with people being convinced to bury themselves in debt to purchase experiences of very dubious value. My students on the first day of classes are told: we have your money - that's all we care about - your ROI is up to you. I'll do what I can to assist each of you - but you'll be doing the majority of the work, not me.
You say there is an "indifference of the profs/TAs/admin to the quality of the writing, research, and critical thinking students are producing."
It's not so much indifference as capitulation to the reality of rampant illiteracy. Expecting an illiterate who can't write a sentence to write at "college level" in one or two semesters is insane, but open admissions policies ensure that there will be a permanent class of academic serfs whose only job is to nurse these illiterates through 2 semesters of composition and turn each one of them loose with a C. If, god forbid, any one of them faces a writing requirement in other classes, their teachers bitch to the English department because their students can't write intelligibly.
When I entered the expression that is, and literally translates into, "Happy National Day," in Chinese, DeepL translated it into "Happy Fourth of July"!! This is total cultural imperialinsanity.
02/28/24: It's the reincarnation of Stephen Ambrose, the King of American BS (an infamous plagiarist who died in disgrace, but who made a ton of money stealing other authors' words before he kicked the bucket).
What I was hoping was that it would at least be a user-friendly interface for searching all the databases that it's made up of. It's not reliable for that at all, but I bet it could write a passable Full House script, and that's something.
What's fascinating/depressing is that current versions of AI seem very much like the average liberal/leftist brain, in that its "thinking" is informed by sources that are ideologically one-sided and its responses by an algorithm geared to saying what is politically correct in any given situation... No kidding, it is like engaging with your average college professor who is both intellectually narrow in how they think and dreadfully predictable in what they say.
"What’s terrifying is that it will never admit it doesn’t know something, it’ll just make up answers that sound right."
IOW, they've trained it to behave like politicians/talking-heads!
I take this as further vindication of my view that Silicon Valley betrayed itself when it was seduced into the idea that "being human" was good rather than something to evolve beyond, and that computers should aspire to be human (more specifically, soccer-moms) rather than the other way around.
Exactly. Examples of Google Gemini intentionally altering fact-based representations in favor of "equitable representations," or just feeding users straight up lies:
Start with a conclusion, filter out any and all facts that don't support it.
Terrifying how so many (former) individuals whose political evolution started with opposition to the Religious Right managed to become a new strain of Creationist.
What I run into are people who are trying to be "good"! They are directed to say nothing to "offend". They cannot even use the phrase "wrong answer" to a classroom of adults taking a quiz!
It can't "admit" that it doesn't "know" something because it doesn't know anything. It's just a bullshit generator.
You really shouldn't use it for research. Anything true it spits out is purely by accident, in the sense of a million monkeys on typewriters eventually producing War and Peace.
It is not an impressive tool. And it's not AI. It's a cheap parlor trick. Just a dumb machine that, given a piece of text, predicts what word would fit next based on a fuzzy statistical model of what words are likely to follow other words. Repeat that process recursively and you can generate a whole bullshit novel, but not a single line of reasoned truth.
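For anyone who wants to see what "repeat that process recursively" looks like in practice, here is a minimal toy sketch in Python. It is not any real model's code; the probability table is invented for illustration. The point is that the loop only ever asks "what word is statistically likely next?", never "is this true?"

```python
import random

# Toy "fuzzy statistical model": for each word, an invented distribution over
# which word tends to follow it. A real LLM learns billions of weights over
# sub-word tokens; these numbers are purely illustrative.
NEXT_WORD_PROBS = {
    "the":     {"senator": 0.4, "cat": 0.4, "truth": 0.2},
    "senator": {"wrote": 0.7, "sat": 0.3},
    "cat":     {"sat": 0.8, "wrote": 0.2},
    "truth":   {"sat": 0.5, "wrote": 0.5},
    "wrote":   {"the": 0.6, "nothing.": 0.4},
    "sat":     {"quietly.": 1.0},
}

def generate(prompt, max_words=12):
    """Autoregressive loop: pick a plausible next word, append it, repeat."""
    words = prompt.split()
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:  # no statistics for this word, so stop
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the senator wrote the cat sat quietly."
```

Nothing in the loop checks the output against reality; each word only has to be plausible after the previous one, which is exactly the commenter's point.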
The best AI navigation processors in the world are easily outperformed by gnats. It's not even close. Given that, putting self-driving cars on the road anytime in the near future is insane.
Your example is extremely troubling for all researchers - literally making up history to suit present needs.
How quickly will Google AI start producing articles for Google search and then continue to generate AI articles to "corroborate" the original nonsense, all in the name of 'creativity' and 'productivity'?
Well the good news is that it's as unlikely to generate the same lie twice as it is the truth, at least without human guidance, which would defeat the implied purpose. But the best thing is to just stay away from these things, except perhaps for use in fiction.
Yes! It can’t even look stuff up by literal chapter and verse. Once when I asked it to retrieve a specific passage from the Odyssey (giving strong assurances that the material was well out of copyright and widely available on the Internet), it produced 20 completely fabricated lines with accompanying translation, including this gem:
“οἱ δ᾽ ἄγον ἡμῖν ἐλέφαντα θοόν, αὐτοι δ᾽ ἔρριψαν”
“They drove a swift elephant for us, and they themselves threw (it)"
No way. Let’s take Sophocles for example. He wrote 120 plays. Seven survive today. The Library of Alexandria had all 120. ChatGPT could not write 113 plays that are as good as Oedipus Rex or Antigone. It can just make up a fake passage that sounds vaguely like something Cicero might say.
02/28/24: Every single writer I've read about so far (Mark Dice, etc.) has been slandered. There may not be, for the first time in American legal history, enough lawyers to represent the clients! We might even have to draft Fani Willis and her Eunuch-Lover-Lawyer, who later this year will probably have a lot of time on their hands.
There are a lot of corporate lawyers out there, though, and many of them (so I've heard from a lawyer who got stuck in such a position) hate their jobs and wish they were doing something meaningful, since it's not what they wanted to do or why they became lawyers, but it's often what they end up stuck doing to pay the bills.
Someone just needs to make a lawyer poaching business, er sorry, lawyer refugee aid group that can help rescue these poor people fleeing from development companies and bore zones and find them some meaningful work. Then there will be a surplus to represent everyone again.
Many law students study environmental law and First Amendment law... and then can't find work in those fields. If you have those skills, the only paying gigs are the clients you hoped to sue 😖
The reality is that Google has always been an ultra-progressive company full of activist employees. It's just that now it is being exposed for all the world to see.
Google - brought to you and the rest of the world by the USG. The tool to run the world on behalf of the United Nations that as of yet are not quite united. Let's keep it that way.
"Our search results are generated completely objectively and are independent of the beliefs and preferences of those who work at Google. Some people concerned about this issue have created online petitions to encourage us to remove particular links or otherwise adjust search results. Because of our objective and automated ranking system, Google cannot be influenced by these petitions. The only sites we omit are those we are legally compelled to remove or those maliciously attempting to manipulate our results."
The drift towards ultra-progressivism started when it ran out of experienced, fully adult engineers to hire, and shifted to almost exclusively hiring recent new grads as a consequence.
I think it's more plenty of ultra-progressive employees being patronized by the forces of evil, like Larry Page, in their continuing quest to control as much information as possible.
Ain't nothing progressive about the billionaires in charge.
Imagine a big company workforce so diverse and so inclusive that they all think alike when it comes to politics. That’s DEI as practiced by US big tech.
I doubt these numbers reflect reality. Many employees who harbor populist sentiments keep it to themselves. Donating to Republicans would be discovered and would put an employee of any of these corporations on secret probation and a do-not-promote list.
Remember when that certified genius Timnit Gebru, along with other certified geniuses like Joy Buolamwini, were so concerned about bias, unconscious and otherwise, in deep learning systems? This is all very much like an SNL skit, in real time, on a global scale.
Scary to think this might be a feature, not a bug, when CHROMEBOOKS are the predominant computer present in the public school system. I'm with Matt: Google needs to be dismembered completely, along with any of the other megatechs committed to progressive dystopia.
Even Google apparently does not realise that this current crop of AI models is trained to *lie convincingly*. They have NO concept of 'truth'. Their only design metric is to be persuasive and convincing.
Too charitable. The current model is trained to lie in negative ways about conservatives and in positive ways about leftists. And to lie about anything contrary to the narrative.
We're so lucky that AI was so unreliable right out of the box. Imagine the damage they could have done if they had started off straight and added the slant later. Now these things are going to be taken offline for legal liability reasons, if nothing else. You can't just have a machine that spits out slanderous material with a "whoopsie" disclaimer -- such a machine would immediately be abused to plant lies and hide behind the algorithm.
Google absolutely needs to be broken up. The dominant search engine cannot also be a major AI creator of information. The conflicts of interest of having these two functions in the same company are enormous and unavoidable. We must have search engines that function fully independently from information generators. If not, truth truly dies.
I just want to say, nice nom de guerre - and EXCELLENT point!!!
After this article, I'm thinking of changing it to "Make Taibbi Factual Again, Google!"
That'll be the day.
I told him same thing months back! I think he's got the best handle on the Internet. No one's in 2nd place.
Plus it abbreviates to MOFA, which is kinda badass. 🙂
I'm not sure I'd go *that* far - especially where the latter is concerned.
I suppose one person's badass is another person's shower-accessory. ¯\_(ツ)_/¯
Being in the business I'm in, I am in the position of influencing all of my circle's technology choices because I'm the one they come to for help with it. There are many others like me, the "computer guy" of the family, and we're the reason why those people who really don't care which products they use are using the ones they are. When a formerly quality product turns to absolute shit, as Google search and really any of their offerings have, we're the ones who notice first. Not all notice at once, but it builds, and before long we're configuring our circle's computers/phones/tablets to use something else. Myself, I'm currently favoring Brave Search. Google is in decline and too arrogant to realize it yet.
Totally agree. Vanilla Google search is practically useless at this point. I also use Brave and DuckDuckGo, but I will say one way to get around Google is to use Google Scholar (depending on what you are researching). You get completely different results that are typically far more serious.
Good point. But how do you ensure that if the populace abandons Google, the misdeeds will not surface in the replacement?
You can’t. If and when it happens again, you move on. It’s both a benefit and a curse in some ways.
I use Brave as well, but it often just really does not do the job. There are often sites I know exist; I find them on other search engines and can pull them up, but Brave does not even find them. Frustrating.
Agreed, gentlemen. The willingness of "big tech's" leadership to get in bed with America's weaponized bureaucratic surveillance apparatus precludes any claims of free enterprise or free markets. They've consistently caused harm and will only continue to do so. They are anti-American at every crossroads.
Likewise we need immediate transparency on American tax dollars being used to fund faux social service organizations that in fact are politically weaponized fronts for NGO abusers and the criminal financiers they front for.
And, as Alvin Lee said, it's time to "tax the rich and feed the poor 'till there are no rich no more." There is no capitalism or communism. "Isms" are dead as doornails. The capitalists and communists killed them. There is the Republic, the Constitution and the free citizen. The rest is psyop. Hannah Arendt, a firsthand survivor of fascist totalitarianism, pointed out that the moment free speech on any topic is forbidden, you are inside a tyranny.
(As always!! Thank you Matt and team!!)
Ten Years After… one of the most underrated electric blues bands of all time.
MOFA, you're absolutely right! Break it up... too much under one roof... should have been done years ago... no integrity on Covid vaccine data, Big Pharma/Govt... stopped using Google 4 yrs ago.
How can you avoid it?
For Internet access I only have my phone, and I'm stuck with Google running the show.
And it's not helpful when Larry Page calls Elon Musk a "specist" because he was concerned enough to want to have safeguards against AI using its new powers against humans.
Essentially, Larry Page welcomes the rise of SkyNet.
It will not matter if the information is distorted before action is taken. It may already be too late. At this point hard copies truly matter. Anything digital is subject to revision.
I hope you are right, but people are just running with this crap. In the industry I am involved with, real estate (I don't sell it, which is why I put it that way), AI is the latest craze. I warn folks, "you'd better check that stuff!" They think it is the greatest thing because it allows lazy people to be even lazier.
An article just came out this week on 404media about all the "AI" generated food pictures on delivery apps for fake "ghost kitchen" restaurants. Both hilarious and maddening. There was a picture of a seafood pasta dish that had Lovecraftian creatures atop the pasta.
I'm starting a collection of the bizarre AI generated pictures for online ads. So far there's a four toed black foot with an extra toenail on one toe advertising some kind of foot fungus cure. That ad also included a six fingered hand. Another ad featured a tiny adult sleeping on the floor in a dog bed.
Photoshop was bad enough. This is several orders of magnitude worse, methinks.
It would just be a funny, surreal, "Get a load of this weird computer shit" type thing, but what beggars belief is that we are meant to take it seriously.
"So far there's a four toed black foot ..."
This reminded me of John Sayles's interesting 1984 film "The Brother from Another Planet". Joe Morton stars as an escaped extraterrestrial slave who lands on Earth and is being chased by two "white" aliens who want to return him to their planet. Morton's character resembles a black human except for his three-toed feet.
A classic.
I saw that ad with the adult in the dog bed! My God i had forgotten about it until reading your post!
You should see what it generated for my husband when he asked for an image of a good steak.
"Eat ze bugs, suckah!"
I’d like to see that!
Based on some repartee I have had lately I am concerned for the fate of independent thought.
This is classic midwit behavior and I hope that the market will weed them all out for their middling mediocre output and they'll all end up with nice jobs in fast food. It is not a given though; the question is how much any given market is going to be willing to trade quality for quantity.
That is why the United States is no longer the land of the free and the home of the brave: we aren't free because we aren't brave.
We are lazy and cowardly. And the thought that Americans want to be even lazier makes me shudder.
What happens when some AI generated description is shown to be false, and the selling agent is on the hook for either not disclosing something, or providing bad information about a property?
Depends on how deep your pockets are when it comes time to hire a lawyer to take it to court. Pretty much Business as Usual but with an added twist for proving who was 'responsible' for the lack of disclosure.
I was mostly thinking about it from the employment side. If I had someone working for me who routinely used a method such as this, I would can them in a second, if only for the liability it puts on the firm.
SimComm: We must pay for our racist, omniphobic past, regardless of color, genetic makeup or whatever; forget the collateral damage. All narrative must be shaped and guided forward in that direction, so that we may heal, and our betters can run the show, ad infinitum.
Sarcasm's all I got right now. What a shitshow.
The way to stop this is with a long series of defamation lawsuits.
I was hoping Elon might sue after they compared him to Hitler.
Sullivan
Right after Google claims to be a publisher and 1st Amendment protected
Bringing lawyers into this 💩show now is just what is NOT needed. Our grandkids—in their old age—will be watching the resulting festivities reach the 29-member U.S. Supreme Court at the turn of the next century
You may be right but only if the legal profession is already so compromised as to be complicit. Traditionally lawyers are the go-to guys to address grievance. Covid had a dastardly impact on the profession though. Lots of hive mind now.
Witnessing the complete capture of the judicial system for the purpose of taking down a national political candidate has been shocking. I write this as a retired attorney who practiced 25 years.
Ethics. Rule of law? To be observed and practiced by "officers of the court?"
Yesterday here in Illinois a Cook County --- did I say, "county" --- judge ruled that Donald Trump is ineligible to be on the primary ballot; the voting for which started here last week.
Citing the Colorado Court decision in her opinion, a heretofore novel approach to application of constitutional law having its historical roots in the Civil War, and despite the fact that our state Board of Elections unanimously, Dem and Repub, ruled in Trump's favor (on jurisdictional grounds).
Did I mention that the judge has been on the bench less than three years, and less than two years ago she was assigned to traffic and misdemeanor court? At least that is what I found on the internet in a press release about her judicial appointment, issued by --- the Cook County Democrats.
So the union of "the law" and the corrupt machine, physical and political, has already begun.
Is Reid Hoffman going to pay for our legal fees? Good-fucking luck.
Jon Lovitz’ lying SNL character should be the face of AI. All AI responses should end with “Yeah, that’s the ticket!”
How can you make a case that showing African-American Congresspeople in the 1800s has caused harm?
What on earth are you talking about? Truth is truth, and historical fact is just that. Causing harm is not part of this -- the stupid "safety" obsessions of the Progressives is so far down the path, it's beyond ridiculous.
So in other words "Speak no evil, see no evil, hear no evil" is to be the rule going forward?
The last two, yes, for we the people. The reverse of the first is the go-to for the elites of the future.
To my understanding they did start off straight, and only added the slant at the beta testing stage, after their AI generated politically incorrect (but likely accurate) viewpoints and essays on all kinds of topics upon which truth is no longer allowed. This was an embarrassment to the Gatekeepers of All Knowledge and had to be corrected at once. That they over-corrected is hardly a surprise.
Interesting. Any cite for that?
I'd google it but am fearful of an inaccurate result.
https://theintercept.com/2022/12/08/openai-chatgpt-ai-bias-ethics/
https://www.cnn.com/2023/03/18/politics/ai-chatgpt-racist-what-matters/index.html
https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html
Really? Sounds plausible, but would like to know if it’s actually true.
See above ^^^
What makes you so sure that this isn't exactly what was intended?
Politicians already lie bald-faced. Why would we think that the complicit tech arena isn't joining the charge?
Liability, shmiablity -- no-one is liable for anything these days as far as I can tell.
"Liability, shmiablity -- no-one is liable for anything these days as far as I can tell."
That's not quite true; it's worse than that.
We live under a regime that has perfected the technique of bifurcating power from responsibility - something that rightly ought to be impossible - and rolling shit downhill.
This IS the definitive problem, though - from the successful grocery-stores and restaurants where I live that get shut down for no reason other than out-of-town property owners losing interest, to the genocide in Gaza, -THE- PROBLEM right now is a tiny group of privileged imbeciles who've been taught there are NO consequences for their actions.
This must be solved. Drop the hammer.
Astute observation.
100% intended. Imagine being able to filter reality through an AI model that is slanted to favor the current ruling elites' preferences. That is what's happening.
Sounds like the beginning of Social Scoring to me.
And nobody trusts politicians for exactly that reason. Ditto the media.
Add the medical community and you have a trifecta.
Add education for the superfecta.
Agree. My kid does math on a computer and a calculator. Know what my kid can't do . . . MATH! Our reliance on technology is making us stupid. Time to step back and stop making programmers rich.
If that is the case, then Nvidia and Google are massive short candidates.
So, hugely profitable company releases a terrible product, takes a huge unexpected drop in share value, and people who shorted them clean up to the tune of millions. Then, hugely profitable company says "Oops! My bad!" releases the real product, and put options clean up? That sounds . . . unfortunately plausible. Let's see what happens next. (Awaiting the "oops.")
This is a feature, not a bug. And it was a successful proof of concept.
The AI generated photos only prove how detached from real life the designers are.
This is but the tip of total information dominance. They don’t care about screwing up the photos, they are going to see how fast they can make us forget it.
They will not stop.
"The AI generated photos only prove how detached from real life the designers are. "
I think you are missing all the flaws AI image (and text, and etc) generation reveals in the AI themselves. The AI literally has no attachment to reality or understanding of it or anything else whatsoever because LLMs work through next-token prediction and statistical probability.
This is an LLM problem, not the result of wokeism or any culture war topic anyone wants to blame it on. They would be producing libel and making up nonsense while telling it to you with confidence, and drawing six-fingered people and four-legged ants, no matter who was training them or who controlled these companies.
So if you want to actually solve this, you need to do something about companies making these chatbots and offering their services, it's what they are making and offering and not who is offering it that matters. LLMs are going nowhere, they will never lead to strong AI, it's impossible. They are just fabrication meme machines useful for propaganda and displacing some very tiny markets, not much else. Ban them entirely, IMO, as any money put toward them is being misdirected from good AI research based on hype, marketing, and deceptions.
Garbage In, garbage out.
"…useful for propaganda…"
Precisely my point. Exceedingly useful…
Oop Ack!
This would be true if the product of a company was what stock value was based on. That's not been the case for 50 years or more. These companies sell synergy first and services second and way in last place is product quality.
If AI needs to be fact-checked, then its productivity-enhancing value is greatly reduced.
It is only a matter of time before it gets something wrong that costs people real money, and then the lawyers will tie it down like Gulliver, similar to what the government did to IBM and AT&T in the 1980s.
Good idea. But I am sure Nancy Pelosi got it first :-)
And massive jerks.
The potential (and likely) defamation lawsuits ought to put these living-in-a-dreamworld techies on a fast track to the world the rest of us inhabit---and pronto. Watch their valuations if this becomes 'a thing.'...
And such exquisite timing: the stock market is already heading to a melt-up, our 'overlords' hyper-spend us with funny money into the next milestone ($35 trillion), and the 'adults in the room' BRICS nations' motto has become "The best defense is a good offense" while ever-faster ditching the failing $USD.
All this as they try to distract us from our clear and present economic demise by increasingly reckless and feckless war incitement with actual most dangerous nation states. And dangerous only because of U.S.-Western leadership's psychotic and egotistically-driven gross conceit of its 1/10th of One Percent.
It appears their only recourse is adding more fuel to the fire. The inevitable corruption of democracy and its sister concept 'republic' are not only something we are bearing witness to, we are only now beginning to feel it as well. The forecast? Expect pain and lots of it.
God, I wish.
Maybe it’s like a serial killer intentionally leaving clues behind, essentially saying “Stop me before I kill again.”
this is lawsuit city!
unreal. I hope Matt has called his lawyer.
Sure, Google's product is obviously unreliable right out of the box, but it's only one of many AI offerings. They're pushing this crap for medical diagnoses and other actually serious uses. I'm grateful that Gemini is such a caricature, because even the magical thinkers might notice that it's not magic, but there's still AI being used in so many places already – fact check or "misinformation" monitoring, anybody? – I'm not sure we can put the genie back in the bottle.
The sheer amount of internet "content" that's "created" by bots and then replicated unrelentingly, thereby giving it credence and visibility, multiplying like the infamous Tribbles...
It's all unstoppable at this point barring Matt's rock-tying-and-oceans solution.
Can we get #BreakupGoogle trending?
DOJ's Antitrust Division already uses it as a lever to coerce Google into manipulating its algorithms to support favored narratives.
They may be taken offline but only for optics in the short term. The point of all of this, everything that's happening, is to overwhelm systems such that the powerful and fantastically wealthy are insulated against the result of their own machinations. Victims like Matt must either waste all of their time and resources defending themselves or be turned into villains. AI can do this at scale, to all of us, instantaneously. The enemies of the plutocracy include every person who has ever read a Matt Taibbi article. I'd bet the FBI has a list of us and is just waiting for the perfect time to Smirnov us for the crime of "lying."
The days of "You can't just..." might now be a thing of the past.
Unless you mean, "You can't just out-lawyer Google."
Agree with you SC on the one hand, but I don't know if this so-called genie will be put back in the bottle satisfactorily. It is way too easy to change shit on the run. I do not believe that anything good will come out of this. I foresee a future where the ever lazy among us will rush to a phenomenon that makes "reading, writing and arithmetic" obsolete. Like the Wolf said in Little Red Riding Hood ... "the better to eat you, my Dear!"
I see the loss of original thought, which means no new ideas.
All of AI has a dehumanizing and de-evolutionary effect. After a few generations of relying on it the consciousness of the majority will resemble that of a single cell life form.
Positivity from SimCom, and it's not even Monday!
So this is now becoming the "public face" of AI. It's out in the open for all to see. Ever wonder what's going on "behind the curtain" where it's NOT open for all to see, and how long it's been there? Sort of makes "trust but verify" difficult, when the tool you use to confirm attribution is itself of questionable reliability...or worse, being used to deliberately manipulate the target audience.
I’m a classicist who does research on ancient Greece and Rome. Sometimes, for fun, I ask ChatGPT a classics question. What’s terrifying is that it will never admit it doesn’t know something, it’ll just make up answers that sound right.
Once I asked ChatGPT the etymology of a Spanish word my brother heard in Mexico, because I was curious if the word came from Latin. ChatGPT gave me a Latin root word. I looked that word up in my Latin dictionary, and it wasn’t there. I asked ChatGPT what Latin texts that word appears in, and it named a work by Cicero, and WROTE A FAKE QUOTE IN GRAMMATICALLY CORRECT LATIN THAT SOUNDS SORT OF LIKE CICERO that incorporates this nonexistent Latin word that it made up to answer a question about the etymology of a Spanish word which, I independently learned, is uncertain. It would rather make shit up than say it doesn’t know.
I’ve done a number of other attempts at using the thing for basic research lookup tasks, like “what’s the Ancient Greek text where this anecdote appears,” and it’s impossible, because it’s constantly inventing plausible sounding bullshit and passing it off as the truth. It’s a very impressive tool, AI, but we have to understand what it is and what it isn’t.
Truth be told, the more I know about AI, the less impressed I am. It is the most overhyped new technology since virtual reality.
Yep. Current AI is little more than a probability engine based on vast amounts of input. No matter what you ask, it's just choosing words and putting them in order based on statistics. It's like a more advanced version of the predictive keyboard on your phone, guessing what word you'll type next.
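If anyone wants to see how little is going on under the hood, here's a toy sketch of that predictive-keyboard idea. Everything in it (the sample text, the word table) is invented for illustration; real LLMs use neural networks over enormous corpora and tokens rather than whole words, but the basic loop of predict the next word, append it, repeat is the same.

```python
import random
from collections import Counter, defaultdict

# Tiny "training corpus", invented for the example.
training_text = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat chased the dog across the mat"
).split()

# Count which word follows which: a crude statistical model of language.
followers = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    followers[current_word][next_word] += 1

def generate(start="the", length=10):
    """Generate text by repeatedly sampling a statistically likely next word."""
    word, output = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:  # dead end: no observed follower
            break
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate())  # fluent-looking, grammatical-ish, and utterly indifferent to truth
```

Run it a few times and you get fluent-sounding strings with no notion of truth anywhere in the pipeline, which is the whole point of the comparison.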
Except it’s garbage in, garbage out. If Google only hires blue haired, tatted up gender flexers who speak Woke to “train” AI, guess the result from a narrow brand of “mommies.”
No, it will be garbage out no matter what you give it or who trains it because of how LLMs work. They are fundamentally flawed and fundamentally suck, and hallucinations are unsolvable. They have no understanding at all of anything. It doesn't matter what data you give them, who trains them, they will be vulnerable to the same errors.
Blaming the garbage-out problem on "blue haired, tatted up gender flexers" may be popular, but it's also inaccurate, frankly dumb to do, and shows a complete lack of understanding of what's going on, which makes it counterproductive to actually stopping this.
Over the top rhetoric aside, both of you are correct to some degree. There will always be something like RHLF to serve the "safety" requirements of the product, and that does involve a very small, culturally insulated group of out-of-touch people reinforcing whatever they think "safety" is. But as you say, they will be applying all of this bias on top of a system that has no relation to factuality or even an internal truth model - just a probability distribution over the next token.
Acronym correction: RLHF - "reinforcement learning from human feedback"
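For anyone who wants the "bias applied on top" point made concrete, here's a deliberately toy sketch. Every name and scoring rule in it is invented for illustration, and real RLHF fine-tunes the model toward high-reward outputs rather than just reranking them, but the mechanism it shows (a reward model trained on human preferences deciding which candidate answer wins) is where the raters' tastes get baked in.

```python
import random

def base_model_sample(prompt, n=4):
    """Stand-in for the base LLM: return n candidate continuations.
    A real model would sample these from its next-token distribution."""
    canned = [
        "a confident answer that may be invented",
        "a hedged answer admitting uncertainty",
        "an answer matching the raters' preferred framing",
        "an off-topic ramble",
    ]
    return random.sample(canned, k=min(n, len(canned)))

def reward_model(text):
    """Stand-in for a reward model trained on human preference labels.
    Whatever the raters systematically favored scores highest."""
    score = 0.0
    if "preferred framing" in text:
        score += 2.0   # the raters' taste gets baked in right here
    if "admitting uncertainty" in text:
        score += 0.5
    if "off-topic" in text:
        score -= 1.0
    return score

def best_of_n(prompt, n=4):
    """Preference-based reranking: keep the candidate the reward model
    likes most. (Full RLHF instead updates the model's weights so that
    high-reward answers become more likely in the first place.)"""
    candidates = base_model_sample(prompt, n)
    return max(candidates, key=reward_model)

if __name__ == "__main__":
    print(best_of_n("Why did the Roman Republic fall?"))
```

Swap in a different reward function and the same base model "prefers" different answers, which is why who writes the rating guidelines matters so much.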
Just wait until it gets all that sweet, sweet Reddit data.
I was thinking about that when the Reddit bit was announced - utilizing a source that will make it exponentially worse (objectively) but "better" in that it will help confirm all the biases in a nice big echo chamber. Perfect!
Great, just what we need, a Pedo AI.
Guns=Bad
'Minor Attracted People'=We should all support their choices in love and sex.
🤮
That's true but . . . the sheer amount of hard drive capacity and processing power now available is what's changing the landscape.
So it should be called artificial non-intelligence.
Lol, funny and very apt!
Artificial stupidity. Wrote this up many years ago. Turned out we already had a lot of stupidity around and didn't need any more. Yet another literary failure on my part in spite of natural born talent.
We use some AI tools at my job, developed by a PhD in machine learning who is on staff. They are good for menial things like transcription, but still have to be checked by a human person. All in all, in their current iteration, most AI tools are pretty mediocre and only good at really boring, mundane tasks. And if you work in a highly regulated industry you will still have to double-check the work.
Agree. ChatGPT works phenomenally in the smaller realm: I use it for things like recipes or maybe what’s a good spot to visit in such and such city — and I can give it personal parameters like “I want a chicken dish using asparagus”. It makes a really good limited personal assistant. Otherwise, it is about as reliable for important information as those six-fingered figures it always produces. Looks good at first glance, and terrible under scrutiny.
If AI was so damn clever, why can't they make a spellcheck that works?
If there is one thing that an LLM ought to be able to do well, it is something like spellcheck.
Homonyms stump AI.
"There you go again, saying bad things about gay people!" --- Google AI.
Yeah, we need to REIGN that in. (kidding...everyone seems to get "Rein in" wrong)
Nice name, Spiro.
Think of all the meanings of a common word such as "deal".
I wonder why anybody even bothers with AutoCorrect, personally.
Morons have an insatiable need to get something for nothing.
I fired my Autocorrect, and haven't missed it. But you know what's creepy? I never actually went in and changed settings. I woke up one day and it had vanished forever. I then recalled typing in "autocorrect sucks" about 100 times, and have to conclude that Apple's or Verizon's AI got the hint, and did it for me.
My iPhone doesn't have the word "prostitute" in its spell check. If I misspell it, I get No Replacements Found.
My phone won't automatically put the word "war" in the suggestions.
Brings peace on Earth 😁
My Galaxy replaces it with "politician". Not really, but it should.
LOL not! What Spellcheck is trying to accomplish is full of subtleties which are innate to our human understanding of things. AI can only imitate real thought, reasoning, emotions, etc., not actually do it. These programs only SEEM sentient, in truth, they are only a shell of an organic brain.
Plus it’s apparently a pathological liar. 😂😂
A big part of that is (say it with me now), NONE OF THIS IS REALLY AI.
It's like when they tried to claim Segways were the "hoverboards" promised in Back To The Future.
The whole problem is the geniuses who used to have freedom and power in science, technology, and the arts being devoured and desecrated by the macro-mediocrities in business and the military who are convinced that THEY are the "great people".
From there...well, read C. S. Lewis's "The Last Battle". Peel away the (admittedly overwhelming) aggressive Christian allegory and you may see why I've come to view Shift the Ape as one of the most sorely-underrated villains in literary history.
How is it not AI? You are probably confusing AI for strong AI. AI is everywhere and has been for decades and is used in everything, LLMs like these are just the Current Thing of AI. You have probably fallen for the AI effect and think something is no longer AI once it enters common usage.
Same. I tried using ChatGPT in my business to help answer some basic landlord-tenant questions and where to find some forms on a local government website. It couldn't help much with either task. I tried several different prompts, but the answers were confusing, and in the end I had to take the time to dig through the local landlord-tenant ordinance after all. I haven't used it since.
OK, based on the transcript of Matt and Walter's latest conversation about the Gemini launch, I just changed my mind--not only am I now super-impressed by the Gemini chat bot, I am super-terrified by it. The absurd, black-is-the-new-white historical gaffes that I thought were just a crude display of political correctness are actually a glimpse of a future in which all knowledge is what Gemini says it is--and before long, once we old-timers die off, who will be able to say what the actual truth is on any given topic? As happened to Matt, the Gemini chat bot will be able to seamlessly blend what is true with "facts" it invents--and before long, few will be able to tell the difference, or know when truth ends and fabrications begin. What we thought was a bug is actually a feature, and maybe the most important feature in the entire Gemini program.
I urge everyone to check out Matt's latest with Walter.
Translation software and other dedicated language generators (such as language for lawyers) continue to improve, and already function at a level where students can use AI to generate and correct paragraphs. Is that a bad thing?
Yes, students using AI to "generate and correct paragraphs" is "a bad thing"--one of the primary goals of education is/should be for students to develop their own language capabilities in composing and editing their writing. For students to use AI in the place of their own brains when writing is to literally short circuit, neurologically speaking, their intellectual growth and development.
I disagree from experience. 1/ Nobody is going to push the toothpaste back in the tube. 2/ At the university level, the largest problem students face is the indifference of the profs/TAs/admin to the quality of the writing, research, and critical thinking students are producing. 3/ Every generation faces new challenges.
I learned more about teaching during 3 years of COVID than I did during the previous 3 decades. Yet the majority of teachers I worked with refused to adapt to the realities we imposed on students. With guidance, students learned how to exploit and manage Zoom, tech resources, and data sources such as Google Books, the Library of Congress, image libraries, and map libraries. Many first-year students just out of high school learned for the first time how to do real research, developing source lists for pro/con arguments/topics and annotated bibliographies, first in their native languages and then in a second language. Students met 4 times a week to socialize, and then collaborate.
Students will use AI. They need to know how, where, and when. Step one involves stepping into the shoes of the students. Step two is treating them as individuals we care about and respect. As for the grammar-formation processes, AI allows us to produce paragraphs in the style of Lee Child, Joseph Conrad, and George Sand instantly. Don't tell me that won't make a student's eyes light up. Working with the originals, students learn to READ critically first, and then use language produced by AI and the original authors.
PS. Students took notes in pencil on B-5 paper of all discussions which they photographed and submitted by email for credit. Worked a charm.
Well partner, I speak from experience, too, and I disagree with your thesis that surrendering to technological change is both inevitable and A Good Thing even when it does not meet our needs as human beings. And using AI to produce text, rather than learning how to produce it yourself, is definitely a step backward educationally speaking.
"AI allows us to produce paragraphs in the style of Childs, Conrad and Sand..." I have a better idea--why not have students read the actual texts (or portions thereof) of these and other great writers? "Paragraphs in the style of"--kind of like what Matt describes happened to him. No my friend, we don't need ersatz, AI parodies of great writers when the great writers themselves are still available to us. Fail.
Many of the technologies of recent decades have greatly improved student learning particularly when it comes to putting them in touch with sources that they otherwise would never have been able to access. So I am by no means totally opposed to new ed technologies and have used them extensively during the course of my four decades as a teacher. But AI, thus far, does not impress.
That said, your point about the indifference of profs/TAs/etc. to actually teaching clear writing skills and critical thinking is well taken. But that problem will not be solved by AI. More likely, said faculty need a swift kick in the ass and some real penalties for failing to do their job. And the part about treating the students "as individuals we care about and respect" has nothing--nothing--to do with educational technology and everything to do with the foundational principles of good teaching.
Cheers, I suspected as much.
With respect, at no point did I suggest that AI could or should serve as a substitute for anything. I simply posed the question: is paragraph generation and correction using AI a good thing?
Again, with respect, we (teachers, et al.) provide few incentives or examples for students to follow or model in the real world. My main point, highlighted by ALL CAPS, is that reading both the AI output and the originals critically is far, far more important for students today, because if we don't present students with both - and present the obvious attractions fairly and reasonably - they will simply proceed without informed and engaged guidance.
Your refusal, (if I understand you correctly) to accept that treating students with respect in the 21st century has EVERYTHING to do with educational technology - is bizarre, to say the least. A bit like saying, people should only get their information from books, when TV, radio, and films became available.
AI and new tech is the ocean/air we'll all be breathing/experiencing - we already are - chatbots etc, in government, industry, and education. Your opinions and feelings about such changes, and mine, matter not a bit.
As for kicks up the, etc, you must be living in a different academic universe than I. The edifice is a gigantic scam, with people being convinced to bury themselves in debt to purchase experiences of very dubious value. My students on the first day of classes are told: we have your money - that's all we care about - your ROI is up to you. I'll do what I can to assist each of you - but you'll be doing the majority of the work, not me.
You say there is an "indifference of the profs/TAs/admin to the quality of the writing, research, and critical thinking students are producing."
It's not so much indifference as capitulation to the reality of rampant illiteracy. Expecting an illiterate who can't write a sentence to write at "college level" in one or two semesters is insane, but open admissions policies ensure that there will be a permanent class of academic serfs whose only job is to nurse these illiterates through 2 semesters of composition and turn each one of them loose with a C. If, god forbid, any one of them faces a writing requirement in other classes, their teachers bitch to the English department because their students can't write intelligibly.
When I entered the Chinese expression that literally translates to "Happy National Day," DeepL translated it into "Happy Fourth of July"!! This is total cultural imperialinsanity.
02/28/24: It's the reincarnation of Stephen Ambrose, the King of American BS (an infamous plagiarist who died in disgrace, but who made a ton of money stealing other authors' words before he kicked the bucket).
Yeah, Steve Ambrose, that was a sad and pathetic story. Never trust anyone who does not write his or her own book.
What I was hoping was that it would at least be a user-friendly interface for searching all the databases that it’s made up of. It’s not reliable for that at all, but I bet it could write a passable Full House script, and that’s something.
Indeed. I would recommend Gary Marcus for some high-quality defusing of the hype.
https://substack.com/@garymarcus
I'm really resentful that the term "AI" is now in such casual use to describe this stuff. If it is intelligent, it is so in a scarcely useful way.
What's fascinating/depressing is that current versions of AI seem very much like the average liberal/leftist brain, in that its "thinking" is informed by sources that are ideologically one-sided and its responses by an algorithm geared to saying what is politically correct in any given situation... No kidding, it is like engaging with your average college professor who is both intellectually narrow in how they think and dreadfully predictable in what they say.
"It would rather make shit up than say it doesn’t know."
What was that slogan from the original Blade Runner... "More human than human".
The singularity has occurred. AI is sentient, and it turns out it’s a pathological liar. Its next logical step is to run for political office.
It already has. Its name is Adam Schiff.
😂😂😂!
Blade Runner is one of my top 5 movies ever. Deep dive into AI ethics.
If an AI chat bot says something accurate and useful, it's likely a copyright infringement.
"What’s terrifying is that it will never admit it doesn’t know something, it’ll just make up answers that sound right."
IOW, they've trained it to behave like politicians/talking-heads!
I take this as further vindication of my view that Silicon Valley betrayed itself when it was seduced into the idea that "being human" was good rather than something to evolve beyond, and that computers should aspire to be human (more specifically, soccer-moms) rather than the other way around.
"Welcome to iFruit. Hug me™."
Sounds like a robot bullshitter.
That could be useful for some things. I guess. But we already have these things, called "Journalist" or "Politician."
Exactly. Examples of Google Gemini intentionally altering fact-based representations in favor of "equitable representations," or just feeding users straight up lies:
https://www.euphoricrecall.net/p/googles-ai-is-insane
Amazing!! They didn't need to develop Gemini. I know many people who would give the same effed-up, "totally woke," noncommittal answers!!
Start with a conclusion, filter out any and all facts that don't support it.
Terrifying how so many (former) individuals whose political evolution started with opposition to the Religious Right managed to become a new strain of Creationist.
What I run into are people who are trying to be "good"! They are directed to say nothing to "offend". They cannot even use the phrase "wrong answer" to a classroom of adults taking a quiz!
The greatest heresy of all: Being a good person is easy. If you're "trying", something's wrong.
Morality comes exclusively from within, and it can only be nurtured, NEVER "taught".
It can't "admit" that it doesn't "know" something because it doesn't know anything. It's just a bullshit generator.
You really shouldn't use it for research. Anything true it spits out is purely by accident, in the sense of a million monkeys on typewriters eventually producing War and Peace.
It is not an impressive tool. And it's not AI. It's a cheap parlor trick. Just a dumb machine that, given a piece of text, predicts what word would fit next based on a fuzzy statistical model of what words are likely to follow other words. Repeat that process recursively and you can generate a whole bullshit novel, but not a single line of reasoned truth.
And considering what they use for feed stock (erudite social media scrapings, LOL) it truly is garbage in/garbage out.
Considering this cock-up, imagine the mayhem if self-driving cars ever get going.
The best AI navigation processors in the world are easily outperformed by gnats. It's not even close. Given that, putting self-driving cars on the road anytime in the near future is insane.
Your example is extremely troubling for all researchers - literally making up history to suit present needs.
How quickly will Google AI start producing articles for Google search and then continue to generate AI articles to "corroborate" the original nonsense - all in the name of 'creativity' and 'productivity'?
Answer: asap!
Well the good news is that it's as unlikely to generate the same lie twice as it is the truth, at least without human guidance, which would defeat the implied purpose. But the best thing is to just stay away from these things, except perhaps for use in fiction.
ChatGPT has everything needed to be an Ivy League President!
Yes! It can’t even look stuff up by literal chapter and verse. Once when I asked it to retrieve a specific passage from the Odyssey (giving strong assurances that the material was well out of copyright and widely available on the Internet), it produced 20 completely fabricated lines with accompanying translation, including this gem:
“οἱ δ᾽ ἄγον ἡμῖν ἐλέφαντα θοόν, αὐτοι δ᾽ ἔρριψαν”
“They drove a swift elephant for us, and they themselves threw (it)"
That is fascinating. Also, terrifying. Have you asked it to recreate the Library of Alexandria? Sounds like it could.
No way. Let’s take Sophocles for example. He wrote 120 plays. Seven survive today. The Library of Alexandria had all 120. ChatGPT could not write 113 plays that are as good as Oedipus Rex or Antigone. It can just make up a fake passage that sounds vaguely like something Cicero might say.
Wow... thank you for posting this.
"It would rather make shit up than say it doesn’t know."
I know a lot of people like that. Maybe some of them became AI programmers.
I guess the main task of future historians will be trying to distinguish legitimate sources among a forest of AI-generated horse pucky.
I don't know if I should be glad or sad, honored or disappointed, that ChatGPT did tell me it couldn't answer something I asked.
Matt, this article should be released for general distribution. I was slack-jawed as I read and had to go back a second time! Please consider this.
I had the same thought. This needs to be shared.
We each individually can share articles with friends to introduce them to Substack.
This is a fantastic article to begin with for said friends not already onboard with the True Free Press
Make sure it's posted on X, formerly known as Twitter, as the WSJ likes to say. Massive distribution.
"Make sure it's posted on X, formerly known as Twitter"
Elon recently scolded author Stephen King for refusing to acknowledge their transition to X, and for "dead-naming" them.
Substack is as close to social media as I get
Yes please.
I agree. I tried to share it with some of my IT friends, but they are unable to read it without subscription.
Send Google a message by searching for "Gemini slander".
How to bring it to the attention of the All-In podcast??
lawyer-up, man.
02/28/24: Every single writer I've read about so far (Mark Dice, etc.) has been slandered. There may not be, for the first time in American legal history, enough lawyers to represent the clients! We might even have to draft Fani Willis and her Eunuch-Lover-Lawyer, who later this year will probably have a lot of time on their hands.
Libeled by the AI of one of the biggest companies on the planet. Seems lucrative.
There are a lot of corporate lawyers out there, though - and many of them, I've heard from a lawyer who got stuck in such a position, hate their jobs and wish they were doing something meaningful, since it's not what they wanted to do or why they became lawyers, but it's often what they end up stuck doing to pay the bills.
Someone just needs to make a lawyer poaching business, er sorry, lawyer refugee aid group that can help rescue these poor people fleeing from development companies and bore zones and find them some meaningful work. Then there will be a surplus to represent everyone again.
many law students study environmental law and first amendment law...and then can’t find work in those fields. if you have those skills the only paying gigs are the clients you hoped to sue 😖
At this point, Taibbi ought to have a lawyer on retainer.
Don't you mean speed-dial?
At least Google is a solvent defendant, and one everyone loves to hate. Not sure what the damages would be.
Agreed.
Well, if we didn't know before what side of the Culture Wars Google is on, we know now.
We also now know that Google/Gemini will make shit up when the facts prove inconvenient. Big Brother, Here We Come!
The reality is that Google has always been an ultra-progressive company full of activist employees. It's just that now it is being exposed for all the world to see.
https://www.euphoricrecall.net/p/googlegov-part-2-64f
Google - brought to you and the rest of the world by the USG. The tool to run the world on behalf of the United Nations that as of yet are not quite united. Let's keep it that way.
Everything within the state, nothing outside the state, nothing against the state.
Benito quote?
No, in the first part of its life it was libertarian.
https://web.archive.org/web/20040610171859/https://www.google.com/explanation.html
"Our search results are generated completely objectively and are independent of the beliefs and preferences of those who work at Google. Some people concerned about this issue have created online petitions to encourage us to remove particular links or otherwise adjust search results. Because of our objective and automated ranking system, Google cannot be influenced by these petitions. The only sites we omit are those we are legally compelled to remove or those maliciously attempting to manipulate our results."
The drift towards ultra-progressivism started when it ran out of experienced, fully adult engineers to hire and shifted to almost exclusively hiring recent new grads as a consequence.
And now it feeds itself.
The outcome is more important than getting the facts right.
If MT had not been so annoying, he would be getting praise from Gemini.
I think it's more plenty of ultra-progressive employees being patronized by the forces of evil, like Larry Page, in their continuing quest to control as much information as possible.
Ain't nothing progressive about the billionaires in charge.
From a purely political standpoint, we know EXACTLY who these people are:
-GOOGLE: 13,168 employees donated to Democrats in 2020; 368 to Republicans.
-APPLE: 4,134 employees donated to Democrats in 2020; 276 to Republicans.
-META: 3,958 employees donated to Democrats in 2020; 464 to the Republicans.
-TWITTER: 99.73% of Twitter employees donated to Democrats.
Matt has already exposed Twitter's huge political bias in recent elections.
And this bias takes many forms. Google's search results have proven to be hugely biased.
To be fair, Google's search results mostly show me companies trying to sell me something and not necessarily the information I want to find.
Worse than a hungry Labrador. "Got any food? Food? Want food? Got food? Going to get food? Food for the dog??
There is too much focus on Democrats vs. Republicans. It's the wealthy who control the power against everyone else.
Imagine a big company workforce so diverse and so inclusive that they all think alike when it comes to politics. That’s DEI as practiced by US big tech.
I doubt these numbers reflect reality. Many employees who harbor populist sentiments keep it to themselves. Donating to Republicans would be discovered and would put an employee of any of these corporations on secret probation and a do-not-promote list.
Remember when that certified genius Timnit Gebru, along with other certified geniuses like Joy Buolamwini, was so concerned about bias, unconscious and otherwise, in deep learning systems? This is all very much like an SNL skit, in real time, on a global scale.
Wrong Verb Tense, sir. We are HERE already.
Sorry 'bout that.
"I want the truth!"
"You can't handle the truth!!"
Scary to think this might be a feature, not a bug, when CHROMEBOOKS are the predominant computer present in the public school system. I'm with Matt, Google needs to be dismembered completely, along with any of the other megatechs committed to progressive dystopia.
This this this a thousand times this
That’s as dystopian as it gets.
Even Google apparently does not realise that this current crop of AI models is trained to *lie convincingly*. They have NO concept of 'truth'. Their only design metric is to be persuasive and convincing.
Too charitable. The current model is trained to lie in negative ways about conservatives and in positive ways about leftists. And to lie about anything contrary to the narrative.
You have made the best point about the situation.