OpenAI said it has begun training its next-generation artificial intelligence software, although the startup has backed away from earlier claims that it wants to build “superintelligent” systems that are smarter than humans.
The San Francisco-based company said on Tuesday that it has begun production of a new AI system “to take us to the next level of capability” and that its development will be overseen by a new safety and security committee.
But even as OpenAI races to develop AI, one of its senior executives appeared to back away from previous comments by chief executive Sam Altman that the company ultimately aims to build a “superintelligence” far more advanced than humans.
Anna Makanju, OpenAI’s vice president of global affairs, told the Financial Times in an interview that its “mission” is to build artificial general intelligence capable of “cognitive tasks that a human can do today.”
“Our mission is to build AGI; I wouldn’t say that our mission is to build superintelligence,” Makanju said. “Superintelligence is a technology that will be orders of magnitude more intelligent than human beings on Earth.”
Altman told the FT in November that he spent half his time researching “how to build a superintelligence”.
Liz Bourgeois, a spokeswoman for OpenAI, said superintelligence is not the company’s “mission.”
“Our mission is AGI that is useful to humanity,” she said after the FT story was originally published on Tuesday. “To achieve that, we’re also studying superintelligence, which we generally think of as systems even more intelligent than AGI.” She disputed any suggestion that the two were in conflict.
While fending off competition from Google’s Gemini and Elon Musk’s xAI startup, OpenAI is trying to reassure policymakers that it is prioritizing responsible AI development after several senior safety researchers left this month.
Its new committee will be led by Altman and board directors Bret Taylor, Adam D’Angelo and Nicole Seligman, and will make recommendations to the full board.
The company did not say what the successor to GPT-4, the model that powers its ChatGPT app and received a major upgrade two weeks ago, might be able to do, or when it would launch.
Earlier this month, OpenAI disbanded its so-called Superalignment team, which was tasked with focusing on the safety of potentially superintelligent systems, after Ilya Sutskever, the team’s leader and a co-founder of the company, left.
Sutskever’s departure came months after he led a shock coup against Altman in November that ultimately failed.
The shutdown of the Superalignment team led to the departure of several employees from the company, including Jan Leike, another senior AI safety researcher.
Makanju stressed that OpenAI is still working on the “long-term possibilities” of AI, “even if they are theoretical.”
“AGI doesn’t exist yet,” Makanju added, saying such technology would not be released until it is safe.
Training is the central step in how an AI model learns from the vast quantities of data and information provided to it. After it has assimilated the data and its performance has improved, the model is validated and tested before being deployed in products or applications.
This long and highly technical process means that OpenAI’s new model may not become a tangible product for many months.
Additional reporting by Madhumita Murgia in London