SAN FRANCISCO: OpenAI, Anthropic and Google are tightening coordination against unauthorized copying of advanced AI systems through the Frontier Model Forum, an industry nonprofit whose threat-sharing framework now allows member companies to exchange information on vulnerabilities, attack methods and other security risks tied to frontier models. Launched in 2023 by Anthropic, Google, Microsoft and OpenAI, the forum now also includes Amazon and Meta, giving the effort a broader industry base as model protection becomes a central concern for major U.S. developers.

The Frontier Model Forum said in March 2025 that all of its member firms had signed a voluntary information-sharing agreement covering three categories: vulnerabilities and exploitable flaws, threats involving unauthorized access or manipulation, and capabilities of concern with the potential to cause large-scale harm. In a progress update published on February 16, 2026, the group said member firms had already used the system to share covered information and that its legal and technical infrastructure was designed to protect intellectual property while preserving antitrust compliance.
Pressure on the issue intensified on February 23, when Anthropic publicly accused three Chinese AI companies, DeepSeek, Moonshot AI and MiniMax, of running large-scale campaigns to extract capabilities from Claude in violation of its terms of service. Anthropic said the campaigns generated more than 16 million interactions through about 24,000 fake accounts and targeted some of Claude's strongest functions, including reasoning, coding, tool use, computer vision and computer control. The company described the activity as industrial-scale distillation rather than normal commercial usage.
Shared Security Channel
Anthropic said it linked the campaigns to specific labs through IP address correlation, request metadata, infrastructure indicators and corroboration from industry partners that had observed similar behavior on their own platforms. It said the operators used fraudulent accounts and proxy services to reach Claude at scale while evading detection, and that one proxy network simultaneously managed more than 20,000 accounts. In MiniMax's case, Anthropic said it observed traffic pivot within 24 hours of a new Claude release, with nearly half of that traffic redirected to the newest system.
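None of the companies have published their detection tooling, so the mechanics of that correlation are not on the record. As a purely illustrative sketch, the Python snippet below shows one signal of the kind described above: flagging clusters of accounts that route through shared proxy infrastructure. The log schema, field names and threshold are all hypothetical.

```python
from collections import defaultdict

# Hypothetical request-log records; the field names are illustrative,
# not any company's actual telemetry schema.
logs = [
    {"account": "acct_001", "ip": "203.0.113.7", "requests": 4200},
    {"account": "acct_002", "ip": "203.0.113.7", "requests": 3900},
    {"account": "acct_003", "ip": "198.51.100.2", "requests": 12},
]

ACCOUNTS_PER_IP_THRESHOLD = 2  # toy value; a real system would tune this

# Group accounts by the IP address their traffic arrives from.
accounts_by_ip = defaultdict(set)
for row in logs:
    accounts_by_ip[row["ip"]].add(row["account"])

# Flag IPs fronting an unusually large cluster of accounts, a pattern
# consistent with the proxy networks described in the article.
suspect_ips = {ip for ip, accts in accounts_by_ip.items()
               if len(accts) >= ACCOUNTS_PER_IP_THRESHOLD}

for ip in suspect_ips:
    print(ip, sorted(accounts_by_ip[ip]))
```

A production system would combine many such signals, including request timing, metadata and content patterns, rather than relying on a single IP heuristic.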
The Frontier Model Forum reinforced those concerns the same day in an issue brief defining adversarial distillation as covert or indirect access to a model's outputs in order to train another model to replicate its capabilities, often in breach of license terms or terms of use. The brief said the risk is not only duplication of performance but also the possibility that copied capabilities could spread without the safety measures that accompanied the original system. It identified general reasoning, advanced coding, multimodal processing and agentic tool use as prime targets for this kind of extraction.
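For readers unfamiliar with the underlying technique, the sketch below shows knowledge distillation in its textbook form: a student network trained to match a teacher's output distribution. It is a minimal PyTorch illustration of the concept only. Real extraction campaigns of the kind the brief describes would see only sampled text from an API, not logits, and the toy linear models here merely stand in for actual systems.

```python
import torch
import torch.nn.functional as F

# Textbook logit-matching distillation: the student is trained to
# reproduce the teacher's softened output distribution.
teacher = torch.nn.Linear(16, 8)   # stand-in for the original model
student = torch.nn.Linear(16, 8)   # stand-in for the copying model
opt = torch.optim.SGD(student.parameters(), lr=0.1)
T = 2.0  # temperature softens the teacher's distribution

for _ in range(100):
    x = torch.randn(32, 16)                  # queries sent to the teacher
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=-1)
    log_probs = F.log_softmax(student(x) / T, dim=-1)
    # KL divergence between student and teacher distributions,
    # scaled by T^2 as in standard distillation practice.
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()
```

API-based extraction replaces the logit-matching loss with fine-tuning on harvested prompt-and-response pairs, which is why the campaigns described above required millions of interactions rather than direct access to model internals.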
Growing Scrutiny in Washington
OpenAI has also raised the issue with U.S. lawmakers. In a memo sent to the House Select Committee on the Chinese Communist Party, the company said DeepSeek employees had used obfuscated third-party routers and automated access methods to obtain outputs for distillation. OpenAI's terms of use, updated on January 1, 2026, prohibit both the automated extraction of output and the use of output to develop models that compete with OpenAI. Those restrictions place model-output security alongside export controls on chips and access controls in the wider contest over advanced AI systems.
Together, the disclosures show that leading U.S. developers are treating unauthorized model extraction as a live security and competitive threat, not simply a routine terms-of-service dispute. The Frontier Model Forum remains broadly focused on AI safety and security, but its information-sharing mechanism has now emerged as a formal channel for exchanging warnings about jailbreaks, attack vectors, cyber threat indicators and other attempts to gain unauthorized access to frontier AI models. – By Content Syndication Services.
