OpenAI presents comprehensive safety framework for its AI models

OpenAI has presented a safety framework that divides the evaluation of its AI technology into four categories, responding to concerns about the technology's potential risks.

Eulerpool News Dec 19, 2023, 8:00 PM

The development of artificial intelligence (AI) is advancing rapidly, and its impressive capabilities fascinate the public. At the same time, concerns about the technology's potential dangers are growing.

In response to these concerns, OpenAI, the maker of ChatGPT, has now presented a safety framework for its most advanced AI models. The framework divides the evaluation of a model into four categories: cybersecurity; chemical, biological, radiological, and nuclear threats; persuasion; and model autonomy. In each category, the model is assigned one of four risk levels, ranging from "low" to "critical", based on specific criteria.

As an example of a critical level of autonomy, OpenAI cites an AI model that can independently conduct AI research and thereby trigger an "uncontrollable process of self-improvement", a so-called "intelligence explosion".

According to OpenAI, the risk of a model should be evaluated both before and after safety mitigations are applied. The model's overall rating corresponds to the highest risk level reached in any single category.

The company emphasizes that only models whose post-mitigation rating is at most "medium", the second-lowest level, may be deployed. Furthermore, only models that have not been classified as "critical" risk may be developed further.
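Taken together, these rules form a simple decision procedure. The following Python sketch is purely illustrative and uses hypothetical names, not anything from OpenAI's own tooling; it encodes the logic as the article describes it: the overall rating is the maximum across the four categories, deployment requires a post-mitigation rating of at most "medium", and further development requires that the model not be rated "critical".

```python
from enum import IntEnum

# Hypothetical encoding of the four risk levels described in the article.
class Risk(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# The four tracked categories, as listed in the framework.
CATEGORIES = ("cybersecurity", "cbrn", "persuasion", "autonomy")

def overall_risk(scores: dict[str, Risk]) -> Risk:
    """The overall rating is the highest risk level in any single category."""
    return max(scores[c] for c in CATEGORIES)

def may_deploy(post_mitigation: dict[str, Risk]) -> bool:
    """Deployment requires a post-mitigation rating of at most 'medium'."""
    return overall_risk(post_mitigation) <= Risk.MEDIUM

def may_develop_further(post_mitigation: dict[str, Risk]) -> bool:
    """Further development requires that the model not be rated 'critical'."""
    return overall_risk(post_mitigation) < Risk.CRITICAL

# Example: a single "high" category blocks deployment but not development.
scores = {"cybersecurity": Risk.MEDIUM, "cbrn": Risk.LOW,
          "persuasion": Risk.HIGH, "autonomy": Risk.LOW}
print(overall_risk(scores).name)    # HIGH
print(may_deploy(scores))           # False
print(may_develop_further(scores))  # True
```

Encoding the levels as an ordered enum makes the threshold comparisons explicit: one category at "high" is enough to block deployment, even if every other category is rated "low".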

To ensure compliance with these guidelines, various internal groups handle monitoring and advisory functions, and OpenAI's board of directors has the authority to override decisions made by company leadership.

OpenAI's chatbot ChatGPT caused a stir when it launched over a year ago, showcasing the latest breakthroughs in generative AI. Despite the excitement about this progress, concerns about potential dangers persist, since the technology exhibits human-like abilities such as writing text, analyzing data, and generating images.

The public shares these concerns: in a Reuters/Ipsos survey conducted in May, 61 percent of US citizens said that AI could threaten human civilization.
