US judge says Pentagon’s blacklisting of Anthropic looks like punishment for its views on AI safety

By Jack Queen

March 24 (Pacthman.co) – A U.S. judge said on Tuesday that the Pentagon’s blacklisting of Anthropic looked like an effort to punish the artificial intelligence lab for going public with its concerns about AI safety in the military.

Anthropic’s lawsuit in California federal court alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated Anthropic a national security supply-chain risk. The government can apply that label to companies that expose military systems to potential infiltration or sabotage by adversaries. 

U.S. District Judge Rita Lin in San Francisco, an appointee of former Democratic President Joe Biden, said during a court hearing that the designation “looks like an attempt to cripple Anthropic.”

“It looks like DOW is punishing Anthropic for trying to bring public scrutiny to this contract dispute,” Lin said, using an acronym for the Department of War, President Donald Trump’s new name for the Defense Department.  

The hearing concerned Anthropic’s request for a temporary order blocking the designation while the case plays out. Lin said at the end of the hearing that she would rule in a written order within the next few days.

The unprecedented designation, which followed Anthropic’s refusal to allow the military to use its Claude AI software for U.S. surveillance or autonomous weapons, blocks Anthropic from certain military contracts. It could cost the company billions of dollars this year in lost business and reputational harm, Anthropic executives said on March 9. 

The company says AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights. 

UNPRECEDENTED SUPPLY-CHAIN LABEL

Anthropic’s designation marked the first time a U.S. company had been publicly labeled a supply-chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage.

In its March 9 lawsuit, Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process. 

During Tuesday’s hearing, a lawyer for Anthropic said the Pentagon was using a flawed interpretation of federal procurement law to retaliate against Anthropic for its negotiating position. 

“The logical implication of their position here is they can point to their frustrations in a contract negotiation, the stubbornness of the vendor, and say, ‘because you’re working in an area that touches national security, we’re going to tell the world that we think you might come around in the future and sabotage our systems,’” said attorney Michael Mongan.

Justice Department lawyer Eric Hamilton said Anthropic’s pushback against lawful uses of its technology convinced the Defense Department that it could not rely on the company going forward and that the designation was appropriate to secure its systems. 

“What happens if Anthropic, through an update, installs a kill switch or installs functionality that allows it to change how the software is functioning when our warfighters need it most? That is an unacceptable risk,” Hamilton said. 

Anthropic has a second lawsuit pending in Washington, D.C., over a separate Pentagon supply-chain risk designation that could lead to its exclusion from civilian government contracts.

(Reporting by Jack Queen in New York; Editing by Noeleen Walder, Rod Nickel and Matthew Lewis)