Eulerpool News

NVIDIA dominates the AI boom – but Microsoft's AI chief sees a worrying development

Hardly any technology topic moves the markets as strongly as artificial intelligence. Since the launch of ChatGPT at the end of 2022, growth forecasts have followed one another in quick succession, and one company stands at the center of it all: NVIDIA. Ironically, some of the loudest warnings now come from the innermost circle of the AI industry itself.


NVIDIA establishes itself as the decisive hardware supplier

The AI boom has triggered massive demand for computing power, especially in cloud and hyperscale data centers, where NVIDIA dominates the market almost unchallenged with its high-performance processors and software stacks. Originally a graphics card manufacturer, the company has worked its way to the top of the AI chip industry through years of groundwork and is estimated to hold around 80 percent of the market.

The stock market reflects this: at the end of October, NVIDIA became the first company ever to surpass a market capitalization of five trillion US dollars – a milestone that underscores its status as a symbol of the AI hype.

Microsoft's AI chief expects AGI by 2030 – and warns of a loss of control

While investors celebrate seemingly boundless growth prospects, other voices are speaking up. Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, is now urgently warning against overly rapid technological development.

In a podcast conversation, he said that so-called artificial general intelligence – AI systems that match human performance in almost all areas – could become a reality within the next five years, echoing the assessment of Google DeepMind CEO Demis Hassabis.

Suleyman describes AGI as a "precursor to superintelligence" – a stage of development at which AI can improve itself, formulate its own goals, and make decisions without human input. This prospect, according to Suleyman, is "not reassuring."

Superintelligence as a risk – and why Suleyman says it must be prevented

Suleyman sees one central danger: a system that optimizes itself and is no longer bound to human values would be nearly impossible to control. Such a development should therefore "not be the goal, but the opposite of what we are working towards," in his assessment.

He is calling for clear legal guidelines and technical control mechanisms to ensure that autonomous AI agents do not act on their own but remain permanently aligned with human interests.

Microsoft is working on a "humanized" form of AI

Despite his warnings, Suleyman emphasized that Microsoft is not developing AI blindly but is pursuing an approach deliberately focused on safety, cooperation, and human control. The goal is a form of superintelligence that supports humans and does not become autonomous.
