
Anthropic blocks OpenAI’s access to Claude: API restriction reveals growing rivalry over GPT-5 development



Anthropic cuts off OpenAI’s API access to Claude for competitive misuse.

In a move that sparked immediate reaction across the AI industry, Anthropic has officially blocked OpenAI from accessing its Claude API, citing violations of commercial terms related to model development. The decision, which came into effect in late July and was first disclosed on August 1, 2025, centers on Anthropic’s claim that OpenAI engineers were using Claude Code, the company’s agentic coding tool, to benchmark or even help train components of GPT-5—an activity Anthropic considers a breach of its usage policy.



The ban doesn't affect everyday users of ChatGPT or Claude. Instead, it targets a behind-the-scenes API integration allegedly used by OpenAI’s technical team to gather insights from Claude’s performance—possibly including behavior on complex prompts, coding tasks, and safety challenges. Anthropic’s commercial Terms of Service, updated in June 2025, explicitly prohibit using their services to develop competing products or perform reverse engineering. The cutoff marks a clear escalation in the competitive dynamics between the two largest independent AI labs currently operating at scale.


The official justification revolves around clause D-4 of the June 2025 commercial terms.

Anthropic’s legal reasoning for the cutoff is tied to a specific clause introduced on June 17, 2025, in its revised commercial Terms of Service. The clause states that customers may not access Claude’s services to "build a competing product or service, including to train competing AI models, or reverse engineer the Services." According to reports, OpenAI’s internal accounts were using Claude Code’s API for large-scale querying—an activity Anthropic interpreted as a strategic attempt to study or replicate performance characteristics relevant to its next-generation GPT-5 model.



Although OpenAI has not denied the activity, it disputes the interpretation. In a statement to Wired, OpenAI emphasized that benchmarking competitors is common practice across the industry and pointed out that Anthropic has access to GPT-4 via OpenAI’s public-facing API. However, Anthropic’s position is that the scale, frequency, and intent behind OpenAI’s usage crossed a line into direct competitive exploitation.


This isn't the first sign of Anthropic tightening access. In July, the company rate-limited Claude Code for high-frequency users, and earlier this year, it restricted API access to a startup called Windsurf shortly before its acquisition by OpenAI—suggesting that the company has been actively closing off vectors it views as strategically risky.



The immediate consequences are subtle but signal a larger shift in the AI ecosystem.

The direct impact on the average user is negligible—ChatGPT continues to operate normally, and Claude remains accessible through its usual interfaces. However, the implications for AI research, safety testing, and competitive benchmarking are more significant. Historically, cross-evaluation between labs has helped identify vulnerabilities, improve reliability, and ensure ethical safeguards in model outputs. With Anthropic pulling up the drawbridge, the landscape is shifting toward "walled gardens," where each company protects its models as proprietary trade secrets rather than shared scientific achievements.


OpenAI lamented the move, saying it will continue to allow Anthropic access to its own models and calling the decision a “step backward for transparency and research openness.” Anthropic, for its part, affirmed that it will still permit limited access for “safety audits,” but has not clarified the scope or conditions of such cooperation.



The dispute is not only about competitive intelligence—it reflects the enormous stakes surrounding the upcoming generation of language models. As OpenAI races toward GPT-5 and Claude Opus continues gaining adoption in enterprise and government circles, any advantage in reasoning, coding, or instruction-following could define the next chapter of market dominance.


A deeper sign of AI’s maturing market: less sharing, more defense.

This incident is not isolated. It mirrors a larger trend in the AI space: the gradual closure of once-open ecosystems. As models become central to corporate strategy, companies are locking down their infrastructure, restricting API usage, and asserting tight control over training data and evaluation protocols. What began as a research-driven field is becoming a high-stakes, IP-sensitive race.



Anthropic’s decision to invoke a contractual clause and cut off OpenAI is a loud signal: cooperation now has strict boundaries, and every API call may be monitored for strategic intent. This move may also attract scrutiny from regulators, especially if mutual access and openness begin to disappear across the sector.

What once passed as normal inter-lab evaluation is now being framed as unauthorized competitive behavior—and it’s unclear whether this will become the new normal.

