- Reid Hoffman says AI can't be your friend, and pretending it can is "harming the person."
- Hoffman warned on a podcast that AI "friendships" erode human connections.
- His comments come as Mark Zuckerberg publicly pushes to embed AI companions across Meta's apps.
LinkedIn cofounder and AI investor Reid Hoffman is sounding the alarm on a growing trend in the tech world: AI systems being marketed as your new best friend.
"I don't think any AI tool today is capable of being a friend," Hoffman said in a Wednesday episode of the Possible podcast. "And I think if it's pretending to be a friend, you're actually harming the person in so doing."
His comments came amid Meta CEO Mark Zuckerberg's push to embed AI companions across Facebook, Instagram, WhatsApp, and even Ray-Ban smart glasses.
Last month, Zuckerberg told podcaster Dwarkesh Patel he sees AI chatbots as part of the solution to America's so-called loneliness epidemic. He cited statistics suggesting that the average American has "fewer than three friends" but has the capacity for 15.
According to a 2021 report from the Survey Center on American Life, 49% of Americans report having three or fewer friends.
But Hoffman drew a sharp distinction between companions and friends, saying that blurring that line erodes what it means to be human.
"Friendship is a two-directional relationship," he said. "Companionship and many other kinds of interactions are not necessarily two-directional. And I think that's extremely important because it's the kind of subtle erosion of humanity."
He said his theory of friendship is that "two people agree to help each other become the best versions of themselves," a dynamic that involves not just emotional support but also accountability, something no chatbot can reciprocate.
"It's not only, 'Are you there for me?', but I am here for you."
Hoffman praised design choices like Inflection AI's Pi assistant, which explicitly tells users, "I'm your companion," and encourages people to spend time with actual human friends.
"Helping you go out into your world of friends is, I think, an extremely important thing for a companion to do," he said.
As tech companies race to deploy more emotionally intelligent bots, Hoffman argued for more transparency and regulation.
"We as a market should demand it, we as an industry, all MPAs, should standardize around it," he said. "And if there's confusion around this, I think we as government should say, 'Hey, look, if you're not stepping up to this, we should.'"
For Hoffman, the stakes are high. "I think that's a degradation of the quality of elevation of human life," he said. "And that should not be what it's doing."
Hoffman isn't alone in raising the alarm on AI companions.
During Senate testimony earlier this month, OpenAI CEO Sam Altman voiced similar concerns about AI forming personal bonds with children.
When asked whether he'd want his own child to form a best-friend bond with an AI bot, he said, "I do not."
He said that while adults might seek emotionally supportive relationships with AI, children require a "much higher level of protection" in how these systems interact with them.
"These AI systems will get to know you over the course of your life so well. That presents a new challenge and level of importance for how we think about privacy in the world of AI," said Altman, who became a father in February.