Artificial intelligence company OpenAI, responsible for services such as ChatGPT, announced the creation of an independent security committee. The body will be responsible for evaluating and monitoring the company's own activities.
The Safety and Security Committee already existed, but operated within OpenAI's own structure. Now it operates as a parallel body without direct ties to the company's employees, although it keeps contact with representatives of the security teams and the board of directors.
In essence, the committee will supervise processes involving the development of language models and their application in OpenAI's services. It will receive regular reports from the company and may delay releases or request revisions if it finds serious privacy or digital-safety concerns.
In its operation, it resembles the oversight board Meta adopted a few years ago, which evaluates controversial issues on the company's digital platforms and whose recommendations are not always followed.
The committee's first job as an independent body was to review o1, OpenAI's new language model.
Changes at OpenAI
The decision to change came after a 90-day review, conducted by the committee itself, of the company's transparency practices. Another measure was the departure of OpenAI co-founder and CEO Sam Altman from the committee: he held a seat on it while also serving as the company's chief executive, which meant he was effectively supervising himself.
The new team will be led by Zico Kolter, director of the Machine Learning Department at Carnegie Mellon University. Other confirmed members are Adam D'Angelo, CEO and co-founder of Quora; retired general Paul Nakasone; and Nicole Seligman, who serves on Sony's board.
The change is another step in the company's likely transition to a for-profit structure, which is expected to happen in 2025.