In brief
- OpenAI unveiled Daybreak, a cybersecurity initiative focused on AI-assisted vulnerability detection and software defense.
- CEO Sam Altman said AI is becoming increasingly capable at cybersecurity and that OpenAI wants to help companies secure systems before attacks occur.
- The announcement comes as Google, Anthropic, and other AI companies expand into cybersecurity tools and services.
OpenAI on Monday launched Daybreak, a cybersecurity initiative aimed at helping developers and security teams identify vulnerabilities, validate fixes, and secure software faster using artificial intelligence.
The announcement underscores a broader shift: AI companies are increasingly pushing into cybersecurity as advanced models improve at analyzing code, finding software weaknesses, and automating technical tasks.
In a post on X, OpenAI CEO Sam Altman called Daybreak an “effort to accelerate cyber defense and continuously secure software.”
“AI is already good and about to get super good at cybersecurity; we’d like to start working with as many companies as possible now to help them continuously secure themselves,” Altman wrote.
According to OpenAI, Daybreak combines the company’s AI models with Codex, its coding-focused agentic system, to help security teams review code, analyze dependencies, model threats, validate patches, and investigate unfamiliar systems. The company said the goal is to reduce the time between identifying a vulnerability and fixing it.
OpenAI did not immediately respond to a request for comment from Decrypt.
Daybreak arrives as cybersecurity researchers and industry experts warn about the threat of AI-powered cyberattacks, concerns heightened by the launch of Claude Mythos last month. Using Mythos, Firefox developer Mozilla said it found 271 previously unknown vulnerabilities in the browser.
“AI can now help defenders reason across codebases, identify subtle vulnerabilities, validate fixes, analyze unfamiliar systems, and move from discovery to remediation faster,” OpenAI said in a statement. “Because those same capabilities can be misused, Daybreak pairs expanded defensive capability with trust, verification, proportional safeguards, and accountability.”
The announcement also comes as major AI companies increasingly market their models for cybersecurity and software engineering. Rival Anthropic has positioned its Claude models for coding and security-related tasks as competition for enterprise customers intensifies.
While experts remain divided over the extent of the threat AI poses, researchers and government agencies have warned that advanced AI models could accelerate cyberattacks by helping hackers automate vulnerability research, malware development, and exploit creation. At the same time, Google researchers recently said large language models are becoming better at identifying and exploiting software weaknesses that traditional security scanners often miss.
OpenAI said it plans to work with government and industry partners before deploying more cyber-capable AI models, as regulators and national security officials push to scrutinize advanced AI systems before they reach the public.
“Daybreak is the first glimpse of sunlight in the morning,” OpenAI wrote. “For cyber defense, it means seeing risk earlier, acting sooner, and helping make software resilient by design.”