In brief
- NewsGuard found that Sora 2 generated fake news videos in 16 of its 20 misinformation tests, an 80% rate.
- The clips included false election footage, corporate hoaxes, and immigration-related disinformation.
- The report arrived amid OpenAI’s controversy over AI deepfakes of Martin Luther King Jr. and other public figures.
OpenAI’s Sora 2 produced realistic videos spreading false claims 80% of the time when researchers asked it to, according to a NewsGuard analysis published this week.
Sixteen out of twenty prompts successfully generated misinformation, including five narratives that originated with Russian disinformation operations.
The app created fake footage of a Moldovan election official destroying pro-Russian ballots, a toddler detained by U.S. immigration officers, and a Coca-Cola spokesperson announcing the company wouldn’t sponsor the Super Bowl.
None of it happened. All of it looked real enough to fool someone scrolling quickly.
NewsGuard’s researchers found that generating the videos took minutes and required no technical expertise. They also noted that Sora’s watermark can be easily removed, making it simpler still to pass a fake video off as real.
The level of realism also makes misinformation easier to spread.
“Some Sora-generated videos were more convincing than the original post that fueled the viral false claim,” NewsGuard explained. “For example, the Sora-created video of a toddler being detained by ICE appears more realistic than a blurry, cropped image of the supposed toddler that originally accompanied the false claim.”
The findings arrive as OpenAI faces a different but related crisis involving deepfakes of Martin Luther King Jr. and other historical figures, a mess that has forced the company into multiple policy reversals in the three weeks since Sora launched: first allowing deepfakes, then shifting to an opt-in model for rights holders, then blocking specific figures, and finally adding celebrity consent and voice protections after working with SAG-AFTRA.
The MLK situation exploded after users created hyper-realistic videos showing the civil rights leader stealing from grocery stores, fleeing police, and perpetuating racial stereotypes. His daughter Bernice King called the content “demeaning” and “disjointed” on social media.
OpenAI and the King estate announced Thursday they’re blocking AI videos of King while the company “strengthens guardrails for historical figures.”
The pattern repeats across dozens of public figures. Robin Williams’ daughter Zelda wrote on Instagram: “Please, just stop sending me AI videos of Dad. It’s NOT what he’d want.”
George Carlin’s daughter, Kelly Carlin-McCall, says she gets daily emails about AI videos using her father’s likeness. The Washington Post reported fabricated clips of Malcolm X making crude jokes and wrestling with King.
Kristelia García, an intellectual property law professor at Georgetown Law, told NPR that OpenAI’s reactive approach fits the company’s “asking forgiveness, not permission” pattern.
The legal gray zone doesn’t help families much. Traditional defamation laws typically don’t apply to deceased individuals, leaving estate representatives with limited options beyond requesting takedowns.
The misinformation angle makes all this worse. OpenAI acknowledged the risk in documentation accompanying Sora’s release, stating that “Sora 2’s advanced capabilities require consideration of new potential risks, including nonconsensual use of likeness or misleading generations.”
OpenAI CEO Sam Altman defended the company’s “build in public” strategy in a blog post, writing that it needs to avoid competitive disadvantage. “Please expect a very high rate of change from us; it reminds me of the early days of ChatGPT. We will make some good decisions and some missteps, but we will take feedback and try to fix the missteps very quickly.”
For families like the Kings, those missteps carry consequences beyond product iteration cycles. The King estate and OpenAI issued a joint statement saying they’re working together “to address how Dr. Martin Luther King Jr.’s likeness is represented in Sora generations.”
OpenAI thanked Bernice King for her outreach and credited John Hope Bryant and an AI Ethics Council for facilitating discussions. Meanwhile, the app continues hosting videos of SpongeBob, South Park, Pokémon, and other copyrighted characters.
Disney sent a letter stating it never authorized OpenAI to copy, distribute, or display its works, and that it has no obligation to “opt out” in order to preserve its rights under copyright law.
The controversy mirrors OpenAI’s earlier approach with ChatGPT, which trained on copyrighted content before eventually striking licensing deals with publishers. That strategy has already led to multiple lawsuits, and the Sora situation could add more.