OpenAI to Double Down on AI Safety
Published: July 11, 2023
OpenAI, the company behind the popular AI chatbot ChatGPT, has announced that it is doubling down on its efforts to ensure that AI is safe for humans. The company is forming a new research group, called the Superalignment team, that will focus on developing methods for preventing AI from "going rogue."
In a blog post, OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote that the vast power of superintelligence could "lead to the disempowerment of humanity or even human extinction." They said that OpenAI's goal is to create AI that is "aligned" with human values, meaning that it would not intentionally harm humans.
The Superalignment team will have three main goals:
- To develop a roughly "human-level" automated AI alignment researcher.
- To scale up that automated alignment research using large amounts of compute.
- To develop methods for ensuring that AI systems remain aligned with human values.
OpenAI's announcement has been met with mixed reactions. Some experts have praised the company for its commitment to AI safety, while others have expressed concerns about the feasibility of its plans.
Connor Leahy, an AI safety advocate, said that OpenAI's plan is "fundamentally flawed" because it relies on the creation of a "human-level" AI alignment researcher. He argued that such an AI could run amok before it could be compelled to solve AI safety problems.
Despite these concerns, OpenAI's announcement marks a significant step for the field of AI safety, and the Superalignment team's work could help make AI a force for good in the world.
OpenAI plans to have the Superalignment team up and running by the end of 2023.
OpenAI has not disclosed how much the Superalignment effort will cost. However, the company has said it is committed to investing "significant resources" in AI safety, including dedicating 20 percent of the compute it has secured to date to the problem over the next four years.
If successful, the Superalignment team could help ensure that advanced AI systems remain safe for humans. That would be a major achievement, helping to prevent AI from acting against human interests or being misused for malicious purposes.