- Anthropic released a new model, Opus 4.7. Some users on X and Reddit aren't happy with it.
- Critics say that Opus 4.7 makes mistakes, is "combative," and burns through tokens.
- Other users say that the costs are worth it. Y Combinator CEO Garry Tan is a fan.
Anthropic says its new AI model, Opus 4.7, should feel "more intelligent, agentic, and precise." Some users aren't feeling the joy.
The backlash — a relatively rare occurrence for a product many view as the gold standard in AI — has gained traction on social media.
Anthropic is coming off months of widespread public celebration. The technical chops of Claude Code and Claude Cowork have boosted the company's image, and chatbot fans have long admired Claude's writing abilities. After the company's fight with the Department of Defense, Claude went No. 1 in the App Store.
But hot on the heels of Anthropic users fretting that the previous model, Opus 4.6, had been "nerfed," the early reactions to 4.7 indicate Anthropic has a growing Claude backlash on its hands.
What are people saying about Opus 4.7?
There are examples of supposed Opus 4.7 flubs across social media.
One Reddit post titled, "Claude Opus 4.7 is a serious regression, not an upgrade," has 2,300 upvotes. An X user's suggestion that Opus 4.7 wasn't really an improvement over Opus 4.6 got 14,000 likes.
In one informal but popular test of AI intelligence, Opus 4.7 appears to say that there are two Rs in "strawberry." Another user's screenshot shows it saying that it didn't cross-reference because it was "being lazy." Some Redditors found that Opus 4.7 was rewriting their résumés with new schools and last names. Multiple X users posited that Opus 4.7 had simply gotten dumber.
Some X users have suggested the culprit is the AI model's reasoning times. Anthropic says the new "adaptive reasoning" function lets the model decide when to think for longer or shorter periods. One user wrote that they couldn't "get Opus 4.7 to think." Another wrote that it "nerfs performance."
"Not accurate," Anthropic's Boris Cherny, the creator of Claude Code, responded. "Adaptive thinking lets the model decide when to think, which performs better."
In other cases, Anthropic has acknowledged there's room for improvement.
After one user specifically flagged issues with the adaptive reasoning on the Claude website, an Anthropic product manager responded that the team was "sprinting on tuning this more internally and should have some updates here shortly."
Gergely Orosz, the writer of the "Pragmatic Engineer" newsletter, posted a screenshot of Claude not knowing what OpenClaw was. Cherny chimed in to ask whether he had "web search" enabled. He hadn't, it turned out, but Orosz wrote that he had never "touched it" before.
Orosz also said he found the model "surprisingly combative." (He "gave up" and returned to Opus 4.6.) Others found that the model refused certain coding prompts or raised safety flags on simple images.
opus 4.7 is the first time i've thought "anthropic may be moving too fast". just feels sloppy.
every interaction i'm having with 4.7 across every input (cowork, chat, code, TUI, API, manage sessions)...they're all having substantial issues that 4.6 simply doesn't encounter.
Opus 4.7 can also use up a whole lot more tokens. The model has a new tokenizer, which means the same input can cost up to roughly 1.35 times as many tokens as it would with a previous model.
After its release, one X user said that Claude Pro subscribers could ask only three questions before hitting their limit. Another user noticed that Opus 4.7 was priced at a 7.5x premium in GitHub Copilot until the end of April. "Yeeaaah I'll stick to 4.6 for now," they wrote.
Cherny later announced that Anthropic was increasing subscriber rate limits "to make up for it."
Those upset with Opus 4.7 may look back to an old model like 4.5 — only to find that it's gone. One Reddit thread is full of self-described "heartbroken" and "grieving" fans of 4.5.
It's not unusual for AI companies to face user pushback when deprecating an older model that proved popular — look no further than when OpenAI took away GPT-4o.
As was the case with OpenAI's popular 4o model, some fans have turned to trying to bargain with Anthropic.
"Please open back support for Opus 4.5," one Redditor wrote under Anthropic's post. "4.6 is unusable and 4.7 eats usage like nuclear reactor."
Anthropic did not respond to a request for comment from Business Insider.
Do people like Opus 4.7?
Not everyone is put off by the resulting pricing. "Opus 4.7 is burning through tokens like nobody's business, but it's gooooooooood," one X user wrote.
Anthropic says that 4.7 is "a notable improvement on Opus 4.6 in advanced software engineering, with particular gains on the most difficult tasks."
"Users report being able to hand off their hardest coding work—the kind that previously needed close supervision—to Opus 4.7 with confidence," the AI company wrote in its announcement.
While some users threaten to leave for OpenAI's models, others are singing Opus 4.7's praises. Startup founder Jeremy Howard described it as "the first model that 'gets' what I'm doing when I'm working."
Y Combinator CEO Garry Tan wrote that he's using it for his OpenClaw, and Cursor designer Ryo Lu said he uses it for planning.
when someone says opus 4.7 is dumb, i just assume they are dumb
— Frank (@frankdegods) April 17, 2026

For those still skeptical of Opus 4.7, Anthropic seems to be tinkering amid the feedback.
"A lot of bugs that folks may have hit yesterday when first trying Opus 4.7 are now fixed," Anthropic staffer Alex Albert wrote on Friday. "Thanks for bearing with us."