Anthropic CEO Dario Amodei acknowledged in a new blog post that the company had received a letter from the Department of Defense formally labeling it a supply chain risk. "We do not believe this action is legally sound," he said, adding that the company believes it has "no choice" but to challenge it in court. Hours before Amodei published the post, the Department of Defense announced it had notified the company that its products are considered a supply chain risk, effective immediately.
To recap, the Pentagon (known as the Department of War under the current administration) had threatened to give the company a designation typically reserved for companies from adversarial nations like China unless it agreed to lift its safeguards against mass surveillance and autonomous weapons. President Trump subsequently ordered federal agencies to stop using Anthropic's technology.
Amodei explained that the designation is narrow in scope because it exists solely to protect the government. That is why ordinary people, and even Department of Defense contractors, can still use Anthropic's Claude chatbot and its AI technology. Microsoft told CNBC it will continue to use Claude after its lawyers concluded it can keep working with Anthropic on non-defense projects.
The CEO also noted that his company has had "productive conversations" with the department over the past few days. He said he is exploring ways to serve the Pentagon with two exceptions: not using its technology for mass surveillance or the development of fully autonomous weapons, and to "ensure a smooth transition where that's not possible." This confirms reports that Anthropic has resumed negotiations with the agency to conclude a new contract. He also apologized for a leaked internal memo in which he reportedly said OpenAI's statements about the department's contract were "completely false."


