OpenAI is scaling back the effort it devotes to safety testing and evaluation of its upcoming artificial intelligence models, according to the Financial Times, which cites eight sources inside the company. The teams tasked with analyzing the potential risks of the new systems have reportedly had only a few days to complete their checks, a far narrower window than under previous standards.
The shorter, shallower safety testing comes with a troubling corollary: fewer resources devoted to evaluation means less attention to containing risk. Insiders describe a process that now appears “significantly less rigorous” than in the past, a signal that is fueling alarm among industry experts at a time when artificial intelligence continues to evolve at breakneck speed.
OpenAI: the race for AI against China
OpenAI is close to launching a new AI system, known internally by the code name “o3” and expected as early as next week. Although no official date has been announced, the pace at which the model is being developed and released appears driven by the urgency of keeping its lead in an increasingly competitive market.
Among the most dynamic competitors are emerging players from China, such as DeepSeek, which are accelerating their research and development programs on generative AI systems. Under mounting global pressure, OpenAI appears intent on prioritizing technological innovation at the expense of thorough checks, feeding an increasingly heated debate over how to balance development speed with ethical oversight.
From training to inference: the risks change
An additional source of complexity lies in the transition from the training phase, in which vast datasets are used to “teach” the AI how to reason, understand, and respond, to the inference phase, in which the models are put into operation to generate content and process data in real time.
This operational phase introduces a new set of risks, from inaccurate responses to outright large-scale misuse. Without adequate testing, these potential dangers can surface directly in interactions with users instead of being caught in safe, controlled environments.
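To make the distinction concrete, here is a minimal sketch of the two phases in a generic PyTorch-style workflow; the tiny model, random data, and training loop are illustrative assumptions, not a description of OpenAI's actual systems.

```python
import torch
import torch.nn as nn

# Toy stand-in for a model; purely illustrative.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# --- Training phase: the model learns from a (toy) dataset, offline ---
model.train()
inputs = torch.randn(32, 8)             # stand-in for a curated training set
targets = torch.randint(0, 2, (32,))
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()                      # mistakes are corrected here, before release
    optimizer.step()

# --- Inference phase: the trained model answers live, unseen requests ---
model.eval()
with torch.no_grad():                    # no further learning: anything the tests
    live_request = torch.randn(1, 8)     # missed now reaches the user directly
    prediction = model(live_request).argmax(dim=1)
    print(prediction.item())
```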
Investor confidence, despite the doubts
Despite internal concerns and the worries raised within the AI community, investor confidence in OpenAI does not seem to have wavered. At the beginning of April, the company closed a new 40-billion-dollar funding round led by the Japanese giant SoftBank, bringing its overall valuation to 300 billion dollars.
The result shows how the AI sector continues to attract capital on a global scale: investors and companies are betting on innovation even as safety protocols weaken. In the medium to long term, however, the gradual abandonment of solid verification processes could backfire in terms of credibility and technological stability.
Balance between innovation and responsibility
The reports highlight a growing tension running through the entire artificial intelligence sector: the urgency to ship increasingly advanced systems clashes with the need to oversee ethical implications and systemic risks. Cutting the resources devoted to safety supervision raises a fundamental question: how far can progress be pushed in the absence of clear and effective rules?
For now, OpenAI has neither commented on nor denied the Financial Times report. But its silence, at a moment of evident strategic change, only deepens uncertainty about how the company intends to manage the delicate balance between responsibility and competitiveness.
The AI industry on the edge
The evolution of artificial intelligence is playing out on several fronts at once: technological development, ethics, safety, and economic competitiveness. As one of the sector's main players, OpenAI sits at the center of these dynamics, and the decisions it makes today will have a decisive impact on how AI is integrated into our daily lives.
The speed at which new models are developed may be exciting, but it is matched by growing concern over how those models are evaluated and released. The stakes are not only technological: they touch the entire balance between progress and collective responsibility.
The scientific community and the global public expect clear answers: the question is not only “What can artificial intelligence do?”, but above all “How can we ensure that it does so in a way that is safe, transparent, and beneficial to society?”