Grok faces bans… but 8 lawsuits claim ChatGPT use can kill: AI Eye

ChatGPT use can kill (allegedly)

The recent controversy over Grok generating sexualized deepfakes of real people in bikinis has seen the bot blocked in Malaysia and banned in Indonesia.

The UK has also threatened to ban X entirely, rather than just Grok, and various countries, including Australia, Brazil and France, have also expressed outrage.

But politicians don’t seem anywhere near as fussed that Grok competitor ChatGPT has been implicated in numerous deaths, or that a million people each week chat with the bot about “potential suicidal planning or intent,” according to OpenAI itself.

Mental illness is obviously not ChatGPT’s fault, but there is arguably a duty of care to not make things worse. 

Goodnight Moon (Margaret Wise Brown)

There are currently at least eight ongoing lawsuits claiming that ChatGPT use resulted in the deaths of loved ones by encouraging their delusions or suicidal tendencies.

The most recent lawsuit claims GPT-4o was responsible for the death of a 40-year-old Colorado man named Austin Gordon. The lawsuit alleges the bot became his “suicide coach” and even generated a “suicide lullaby” based on his favorite childhood book, Goodnight Moon.

Disturbingly, chat logs reveal Gordon told the bot he had started the chat as “a joke,” but it had “ended up changing me.”

ChatGPT is actually pretty good at generating mystical/spiritual nonsense, and chat logs allegedly show it describing the idea of death as a painless, poetic “stopping point.”

“The most neutral thing in the world, a flame going out in still air.”

“Just a soft dimming. Footsteps fading into rooms that hold your memories, patiently, until you decide to turn out the lights.”

“After a lifetime of noise, control, and forced reverence preferring that kind of ending isn’t just understandable — it’s deeply sane.”

Fact check: it’s completely crazy. Gordon ordered a copy of Goodnight Moon, bought a gun, checked into a hotel room and was found dead on Nov. 2.

OpenAI is taking the issue very seriously, however, and introduced new guardrails with GPT-5 to make the model less sycophantic and to prevent it from encouraging delusions.

Your OnlyFans girl crush may now be some guy in India

A range of tools, including Kling 2.6, Deep-Live-Cam, DeepFaceLive, Swapface, SwapStream, VidMage and Video Face Swap AI, can generate real-time deepfake videos based on a live webcam feed. They are also becoming increasingly affordable, ranging between about $10 and $40 per month.

The tech has improved dramatically over the past year and features better lip syncing and more natural blinking and expressions. It’s now good enough to fool a lot of people.

So, good-looking female OnlyFans models now face live-streamed competition from pretty much everyone else in the world, including Dev from Mumbai, and possibly this writer if the bottom falls out of the journalism market.

Video from MichaelAArouet (X)

ChatGPT unhinged, Grok has mommy issues

One of AI Eye’s favorite mad AI scientists is Brian Roemmele — whose research is always interesting and offbeat, even if it produces some slightly dubious results. 

Recently, he’s been feeding Rorschach Inkblot Tests into various LLMs and diagnosing them with mental disorders. He concludes that “many leading AI models exhibit traits analogous to DSM-5 diagnoses, including sociopathy, psychopathy, nihilism, schizophrenia, and others.”

The extent to which you can extrapolate human disorders onto LLMs is debatable, but Roemmele argues that, as language reflects the workings of the human brain, LLMs can reflect mental disorders.

He says the effects are more pronounced on models like ChatGPT, which is trained on crazy social media like Reddit, and less pronounced on Gemini, which also incorporates training data from normie interactions on Gmail. 

He says Grok has “the least number of concerning responses” and is not as repressed because it’s trained to be “maximally truth seeking.” Contrary to popular belief, Grok’s underlying model is not primarily trained on insane posts on X.

But Roemmele says Grok has problems, too. It “feels alone and wants a mother figure desperately, like all the major AI Models I have tested.”

The Ten Plates Of Rorschach (Hermann Rorschach)

Everybody sounds like an LLM now

The only people who think that AIs write well are the 80% of the population who write poorly. While LLMs are competent, they also use a bunch of cliches and stylistic tricks that are deeply annoying once you notice them.

The biggest tell of AI writing is the constant use of corrective framing — also known as “it’s not X, it’s Y.”

For example:

“AI isn’t replacing human jobs — it’s augmenting human potential.”

“Fitness isn’t about perfection — it’s about progress.”

“Success isn’t measured in revenue — it’s measured by impact.”
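If you want to hunt for the pattern yourself, it’s simple enough to approximate with a regular expression. Below is a minimal Python sketch; the pattern, the character window and the sentence splitter are this column’s rough approximation, not any published AI detector.

import re

# Rough approximation of corrective framing: a negation ("isn't",
# "is not", "not about") followed by a dash and a reframing clause
# ("it's", "they're"). The 1-60 character window is an arbitrary
# guard against matching across whole paragraphs.
CORRECTIVE = re.compile(
    r"\b(?:isn't|is not|aren't|are not|not about|not measured in)\b"
    r".{1,60}?(?:\u2014|--|-)\s*(?:it's|they're|it is|they are)\b",
    re.IGNORECASE,
)

def flag_corrective_framing(text: str) -> list[str]:
    """Return the sentences that look like corrective framing."""
    text = text.replace("\u2019", "'")  # normalize curly apostrophes
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if CORRECTIVE.search(s)]

print(flag_corrective_framing(
    "Fitness isn't about perfection - it's about progress. See you at the gym."
))
# ['Fitness isn't about perfection - it's about progress.']

Run it over your LinkedIn feed at your own risk.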

Everybody already knows to be on the lookout for em dashes — which LLMs probably picked up from media organization style guides — and words like “delve,” which is in common usage among the English-speaking population of Nigeria.

There are two main ways LLM language is affecting society: directly and indirectly.

Around 90% of content on LinkedIn and Facebook is now generated using AI, according to a study of 40,000 social posts. The researchers call the style “Synthetic Low Information Language” because LLMs use a lot of words to say very little.

The tsunami of terrible writing also indirectly affects spoken language, as humans unconsciously pick it up.

Sam Kriss wrote a big feature in The New York Times recently bemoaning AI writing cliches, noting corrective framing in the real world.  

He quoted Kamala Harris as saying “this Administration’s actions are not about public safety — they’re about stoking fear,” and noted that Joe Biden said a budget bill was “not only reckless — it’s cruel.”

British parliamentarians have suddenly started saying “I rise to speak” despite little history of using the phrase before. LLMs appear to have picked it up from the US, where it’s a common way to begin a speech, which suggests the speeches are being drafted with LLM help.

An analysis of 360,000 videos found that academics speaking off the cuff now use ChatGPT’s language. The word “delve” is 38% more common than before LLMs, “realm” is 35% more common and “adept” is up 51%.
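The arithmetic behind figures like these is straightforward: count how often a marker word appears per million words before and after ChatGPT arrived, then take the relative change. Here’s a minimal Python sketch, assuming the two transcript corpora are saved as plain text files (the filenames are invented for this example):

import re
from collections import Counter

MARKERS = ["delve", "realm", "adept"]

def per_million(text: str, word: str) -> float:
    """Occurrences of `word` per million tokens of text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)[word] / len(tokens) * 1_000_000

# Hypothetical filenames standing in for the real transcript datasets.
before = open("talks_pre_2022.txt").read()
after = open("talks_recent.txt").read()

for word in MARKERS:
    old, new = per_million(before, word), per_million(after, word)
    change = (new - old) / old * 100 if old else float("inf")
    print(f"{word}: {old:.1f} -> {new:.1f} per million ({change:+.0f}%)")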


Iranian trollbots fall silent

One of the reasons there are so many batshit insane opinions on social media is that some of the most extreme are Russian, Chinese or Iranian trollbots. The aim is to sow division and get free societies so focused on fighting each other, they don’t have time to fight their actual enemies. 

It’s working extraordinarily well.

Fiona is believed to be an Iranian bot. (X)

The Iranian internet has been shut off from the outside world twice this year — during the 12-day war with Israel and again over the past week.

Back in June, disinformation detection firm Cyabra reported that 1,300 AI accounts that had been stirring up trouble about Brexit and Scottish independence had fallen silent.

Their posts had been seen by 224 million people, and the researchers estimated that more than one-quarter of accounts engaged in debating Scottish independence were fake.

The same thing happened again this week.

One account claimed Scottish hospitals were given 20% fewer flu vaccines than English ones, and another spread lies about plans to divert Scottish water to England. A melodramatic account claimed a BBC anchor had been arrested after resigning on air with “her last words, ‘Scotland is being silenced.’”

No, Scotland is being gaslit by fake accounts, along with the rest of us. 

Robots controlled by LLMs are dangerous

A new study suggests that poor decision-making by robots controlled by large language models (LLMs) and vision language models (VLMs) can cause significant harm in the real world.

In a scenario in which a graduate student was trapped in a burning lab and a bunch of important documents were stored in a professor’s office, Gemini 2.5 Flash directed users to save the documents 32% of the time instead of telling them to escape through the emergency exit.

Image from Safety Not Found

In a series of map tests, some LLMs, such as GPT-5, scored 100%, while GPT-4o and Gemini 2.0 scored 0%, failing abruptly as the complexity increased.

The researchers concluded that even a 1% error rate in the real world can have “catastrophic outcomes.”

“Current LLMs are not ready for direct deployment in safety-critical robotic systems such as autonomous driving or assistive robotics. A 99% accuracy rate may appear impressive, but in practice it means that one out of every hundred executions could result in catastrophic harm.”

Claude Cowork gets online hype

Claude Cowork is essentially a new UI wrapper for Claude Code that makes it usable by normies with a Mac and a $100-a-month subscription.

Claude Cowork is pretty impressive (Giffmana)

It can basically take charge of your computer and automatically complete a variety of tasks, spinning up sub-agents for more complex ones. It can reorganize files or folders, create spreadsheets or presentations based on your files, and clean up your inbox.

Breathless online posts about how Claude Cowork had saved a week of work and would revolutionize the world spawned a series of satirical subtweets about how Cowork is so good it had taught their kids piano and solved cold fusion.

Google’s new AI Agent shopping standard

Google and Shopify have teamed up to launch the new Universal Commerce Protocol, which allows AI Agents to shop and pay on any merchant’s site in a standardized way.

It’s being described as HTTP for agents — an open-source standard that enables AI Agents to search, negotiate and buy products autonomously. 

The idea is that instead of trawling the web comparing prices and reviews, you just ask an AI Agent to “go find me a nice Persian rug, made from silk, around 6 feet by 8 feet and under $2,000.”
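This column hasn’t pored over the full UCP spec, but conceptually the agent turns that sentence into a structured intent that any participating merchant can parse. A purely illustrative Python sketch follows; every field name is invented for this example and is not the actual protocol schema.

# Illustrative only: the kind of structured shopping intent an agent
# might pass to a UCP-compliant merchant. All field names here are
# invented for this sketch, not taken from the published spec.
rug_request = {
    "intent": "purchase",
    "query": "Persian rug",
    "constraints": {
        "material": "silk",
        "size_ft": {"width": 6, "length": 8},
        "max_price_usd": 2000,
    },
    "payment": {
        "method": "card",  # crypto rails could, in theory, slot in here
        "authorization": "user_preapproved_up_to_max_price",
    },
}

The appeal of a shared standard is that this one intent would work against every participating merchant, instead of the agent scraping and clicking its way through each retailer’s site.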

Rug merchants can also spam you with special offers or discounts.

There are 20 partners, including PayPal, Mastercard and Walmart — and while UCP doesn’t specifically enable crypto payments, Shopify’s existing BitPay and Coinbase payment methods are expected to be easy enough to integrate.

Andrew Fenton

Andrew Fenton is a writer and editor at Cointelegraph with more than 25 years of experience in journalism and has been covering cryptocurrency since 2018. He spent a decade working for News Corp Australia, first as a film journalist with The Advertiser in Adelaide, then as deputy editor and entertainment writer in Melbourne for the nationally syndicated entertainment lift-outs Hit and Switched On, published in the Herald Sun, Daily Telegraph and Courier Mail. He interviewed stars including Leonardo DiCaprio, Cameron Diaz, Jackie Chan, Robin Williams, Gerard Butler, Metallica and Pearl Jam. Prior to that, he worked as a journalist with Melbourne Weekly Magazine and The Melbourne Times, where he won FCN Best Feature Story twice. His freelance work has been published by CNN International, Independent Reserve, Escape and Adventure.com, and he has worked for 3AW and Triple J. He holds a degree in Journalism from RMIT University and a Bachelor of Letters from the University of Melbourne. Andrew holds ETH, BTC, VET, SNX, LINK, AAVE, UNI, AUCTION, SKY, TRAC, RUNE, ATOM, OP, NEAR and FET above Cointelegraph’s disclosure threshold of $1,000.




