The CEO of Google DeepMind warns AI companies not to fall into the same trap as early social media firms



Google DeepMind CEO Demis Hassabis urged caution on AI, saying it must serve people rather than exploit attention. World Economic Forum/Gabriel Lado
  • Google DeepMind CEO said AI must avoid repeating social media's toxic playbook.
  • Demis Hassabis cautioned that chasing clicks and engagement could fuel polarization and harm.
  • Hassabis called for AI to serve individuals, not hijack attention like social media.

Google DeepMind CEO Demis Hassabis has a clear warning as artificial intelligence sweeps into daily life: don't let it fall into the same traps that made social media toxic.

Speaking at the Athens Innovation Summit alongside Greek Prime Minister Kyriakos Mitsotakis last Friday, Hassabis said that while AI is one of the most transformational technologies in history, its rollout demands a far more cautious approach than what he described as Silicon Valley's old "move fast and break things" ethos.

"We should learn the lessons from social media, where this attitude of maybe 'move fast and break things' went ahead of the understanding of what the consequent second- and third-order effects were going to be," Hassabis said.

He warned that AI designed primarily to maximize user engagement — as social media has done for over a decade — risks amplifying the same problems, from hijacking attention spans to worsening mental health.

Social media algorithms are built to "grab more and more of your attention, but not necessarily in a way that's beneficial to you as the individual," he said, adding that the stakes are higher given AI's potential reach across industries and societies.

Instead, Hassabis urged regulators and technologists to take a scientific-method approach: test and understand the systems thoroughly before unleashing them on billions of people.

He argued that AI should be built as a tool to serve people, not to manipulate them.

Hassabis said the goal is to strike the right balance — "being bold with the opportunities, but being responsible about mitigating the risks" — a "continual" tension he believes will persist "all the way to AGI."

AI is already showing some of the same cracks

Researchers have already observed AI reproducing some of social media's toxic patterns.

In a study published in August, researchers at the University of Amsterdam gave 500 chatbots their own stripped-down social network and saw them splinter into cliques, boost extreme voices, and let a small group dominate the conversation — even without ads or recommendation algorithms.

The team tested six interventions to break the cycle, from chronological feeds to hiding follower counts, but none worked. They concluded that the dysfunction runs deeper than algorithms and is baked into the way social platforms reward emotionally charged sharing.

Meanwhile, AI is becoming more embedded in social media, with virtual influencers going mainstream, brands testing AI faces and voices, and some creators warning that licensing their likeness in perpetuity could undercut their careers.

While OpenAI CEO Sam Altman has argued that addictive social media feeds may be more harmful to kids than AI itself, Reddit cofounder Alexis Ohanian has suggested AI could give users more control over what they see online.
