Employees of OpenAI and DeepMind demand more transparency.

OpenAI and DeepMind employees demand better protection for whistleblowers – fear of retaliation grows.

Eulerpool News Jun 5, 2024, 8:24 AM

A group of current and former employees of the leading AI companies OpenAI and DeepMind has warned in an open letter of inadequate whistleblower protections and a growing fear of retaliation.

**More than a dozen current and former employees of OpenAI, Google's DeepMind, and Anthropic demanded in a letter published on Tuesday that AI companies create safe reporting channels for employees to raise concerns both internally and publicly. According to the signatories, confidentiality agreements are blocking public discussion about the potential dangers of AI.**

"The people with the most knowledge about the functioning of AI systems and the associated risks cannot speak freely," said William Saunders, a former OpenAI employee who signed the letter.

"In addition to Saunders, six other former OpenAI employees have signed the letter. Furthermore, four current OpenAI employees, as well as one former and one current DeepMind employee, have added their names. Six of the signatories remained anonymous.

Three leading AI experts supported the letter: AI scientist Stuart Russell, as well as Yoshua Bengio and Geoffrey Hinton, who are known as the "Godfathers of AI" due to their early breakthrough research. Hinton left Google last year to be able to speak more freely about the risks of the technology.

Hinton and others have been warning for years about the potential dangers of AI for humanity. Some AI researchers fear that the technology could spiral out of control and become as dangerous as pandemics or nuclear wars. Others are more moderate in their concerns but still call for greater regulation of AI.

OpenAI agreed in a response to the letter on Tuesday that there should be government regulation. "We are proud of our track record in providing the most capable and safest AI systems and believe in our scientific approach to risk management," said an OpenAI spokeswoman. "We agree that a rigorous debate is essential given the importance of this technology, and we will continue to work with governments, civil society, and other communities worldwide."

DeepMind and Anthropic, which is backed by Amazon, did not respond to requests for comment on Tuesday.

OpenAI, a startup founded in 2015, released ChatGPT to the public in 2022. The chatbot became one of the most viral AI products and helped turn OpenAI into a multi-billion-dollar company. Sam Altman, the head of OpenAI and one of the architects of the AI revolution, has stated that he wants to develop the technology safely.

The signatories of the letter on Tuesday said that current and former employees are among the few who can hold the companies accountable, as there is no comprehensive government oversight of AI companies. One of their concerns is that humans could lose control over autonomous AI systems, which in turn could wipe out humanity.

The signatories call on companies to provide employees with anonymous reporting channels, to refrain from taking action against whistleblowers, and not to force them into agreements that could silence them. They want AI companies to become more transparent and to focus more on safety measures.

OpenAI said on Tuesday that it operates an anonymous integrity hotline and that it does not release new technology until the necessary safety measures are in place.

Former OpenAI employee Daniel Kokotajlo, who also signed the letter, said that companies are ignoring the risks of AI in their race to develop the technology. "I decided to leave OpenAI because I lost hope that they would act responsibly, especially in the pursuit of artificial general intelligence," he said. "They and others have adopted the 'move fast and break things' mentality, and that's the opposite of what is needed for such a powerful and poorly understood technology."
