OpenAI announced on Tuesday that it has begun training its new flagship artificial intelligence model, a successor to the GPT-4 technology that powers the company's popular online chatbot, ChatGPT.
The San Francisco startup, one of the world's leading AI companies, said in a blog post that it aims to build “artificial general intelligence,” or AGI, a machine that can do anything the human brain can do, and hopes the new model will bring “next-level capabilities.” The new model would power AI products such as chatbots, digital assistants similar to Apple's Siri, search engines and image generators.
OpenAI also said it would create a new safety and security committee to consider how to address risks posed by new models and future technology.
“We are proud to build and release models that lead the industry in both capabilities and safety, but we welcome robust debate at this important moment,” the company said.
OpenAI aims to push AI technology forward faster than its rivals, while also assuaging critics who say the technology is becoming increasingly dangerous, spreading misinformation, causing job losses and even posing a threat to humanity. Experts are divided on when tech companies will reach artificial general intelligence, but companies like OpenAI, Google, Meta and Microsoft have been steadily increasing the power of their AI technology for over a decade, with notable leaps roughly every two to three years.
OpenAI's GPT-4, released in March 2023, enabled chatbots and other software apps to answer questions, write emails, draft term papers and analyze data. The latest version of the technology, announced this month, is not yet widely available, but it can also generate images and respond to questions and commands in a highly conversational voice.
Days after OpenAI released the updated version, called GPT-4o, actress Scarlett Johansson said one of its voices sounded “eerily similar” to her own. She said OpenAI CEO Sam Altman had asked for her permission to use her voice in the product but that she declined, and that she had hired lawyers to demand OpenAI stop using it. The company said the voice was not Johansson's.
Technologies like GPT-4o acquire their skills by analyzing vast amounts of digital data, including audio, photos, videos, Wikipedia articles, books and news articles. The New York Times sued OpenAI and Microsoft in December, alleging copyright infringement of news content related to the companies' AI systems.
Digitally “training” an AI model can take months, or even years, and once training is complete, AI companies typically spend several more months testing and fine-tuning the technology before releasing it to the public.
This could mean that OpenAI's next model won't arrive for another nine months to a year or more.
As OpenAI trains its new model, the company said its newly formed safety and security committee will work to refine policies and processes for safeguarding the technology. The committee includes Altman and OpenAI board members Bret Taylor, Adam D'Angelo and Nicole Seligman. The company said the new policies could go into effect in the late summer or fall.
Earlier this month, OpenAI announced that its co-founder and one of its safety leaders, Ilya Sutskever, was leaving the company, raising concerns that the company was not doing enough to address the dangers posed by AI.
Dr. Sutskever, along with three other board members, voted to fire Altman from OpenAI in November, arguing that Altman could no longer be trusted to guide the company's effort to develop artificial general intelligence for the benefit of humanity. After a lobbying campaign by Altman's allies, he was reinstated five days later and has since regained control of the company.
Dr. Sutskever led what OpenAI called its Superalignment team, which looked for ways to ensure that future AI models would not cause harm. Like other researchers in the field, he had grown increasingly concerned that AI could pose a threat to humanity.
The future of the Superalignment team is unclear after Jan Leike, who led it alongside Dr. Sutskever, resigned from the company this month.
OpenAI has folded its long-term safety research into a larger effort to ensure its technology is safe, led by another co-founder, John Schulman, who led the team that created ChatGPT. The new safety committee will oversee Dr. Schulman's research and provide guidance on how the company addresses technical risks.