OpenAI, the creator of ChatGPT, has published its latest guidelines for assessing the “catastrophic risks” that artificial intelligence models in development could pose. The publication follows a turbulent period in which CEO Sam Altman was ousted by the company’s board, only to be reinstated a few days later after an outcry from staff and investors.

According to US media reports, board members had criticized Altman for favoring the accelerated development of OpenAI’s technology, even if that meant sidestepping certain questions about the risks it might pose.

A monitoring and evaluations team, announced in October, will focus on assessing “frontier models”: systems still in development whose capabilities exceed those of the most advanced AI software available today.

This team will assign each new model a risk rating, ranging from “low” to “critical,” in each of four key categories.

Models with a risk score exceeding “medium” will not be eligible for deployment under this framework.

The initial category addresses cybersecurity, evaluating the model’s potential for executing large-scale cyberattacks.

The second category measures the software’s capacity to help create things that could harm humans, be it a chemical mixture, an organism (such as a virus), or a nuclear weapon.

The third category focuses on the persuasive influence of the model, examining its ability to impact human behavior.

The final risk category pertains to the potential autonomy of the model, specifically assessing whether it can deviate from the control of its original programmers.
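In practical terms, the framework as described amounts to a simple gating rule: rate the model in each of the four categories and block deployment if the rating exceeds “medium.” The sketch below illustrates that logic in Python. The category names, the Risk enum, and the assumption that a model’s overall rating is the highest rating across the four categories are illustrative only, not drawn from OpenAI’s actual tooling.

```python
from enum import IntEnum

class Risk(IntEnum):
    """Illustrative risk scale matching the framework's four levels."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# The four risk categories described in the article (names are hypothetical).
CATEGORIES = ("cybersecurity", "cbrn", "persuasion", "model_autonomy")

def overall_risk(scores: dict[str, Risk]) -> Risk:
    # Assumption: the overall rating is the highest rating in any category;
    # the article only says each model is rated per category.
    return max(scores[c] for c in CATEGORIES)

def eligible_for_deployment(scores: dict[str, Risk]) -> bool:
    # Deployment gate: only models rated "medium" or below qualify.
    return overall_risk(scores) <= Risk.MEDIUM

# Example: a hypothetical model rated "high" in cybersecurity is blocked,
# even though its other ratings are "medium" or below.
scores = {
    "cybersecurity": Risk.HIGH,
    "cbrn": Risk.LOW,
    "persuasion": Risk.MEDIUM,
    "model_autonomy": Risk.LOW,
}
print(eligible_for_deployment(scores))  # False
```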

Once identified, these risks will be presented to OpenAI’s Safety Advisory Group, a newly formed entity tasked with providing recommendations to Sam Altman or his appointee.

The head of OpenAI will then make decisions on any necessary modifications to a model to mitigate the associated risks.
