5 big takeaways from Sam Altman's Saturday night AMA on OpenAI's Pentagon deal


By Cheryl Teh




Sam Altman went on X on Saturday and answered questions about the OpenAI-Pentagon deal. SAUL LOEB/AFP via Getty Images
  • Sam Altman went on X on Saturday night and told users to ask him anything about OpenAI's Pentagon deal.
  • Altman on Friday night announced that OpenAI will work with the Pentagon and let it use its AI models.
  • Here are five big takeaways from Altman's AMA session.

Sam Altman hopped onto X on Saturday night and told users to ask him anything about OpenAI's agreement with the Pentagon.

Altman announced late on Friday that his company had finalized a deal letting the Department of War use its AI models. OpenAI's deal came after Anthropic refused an ultimatum over terms that would have allowed its frontier model, Claude, to be deployed for mass domestic surveillance and in fully autonomous weapons.

Here are five big takeaways from Altman's AMA.

The OpenAI-Pentagon deal was 'rushed,' and Altman knows the 'optics' don't look good

The Pentagon deal was done quickly in "an attempt to de-escalate the situation," Altman wrote on X.

He added in a separate post that the deal had been "rushed."

Still, the "optics don't look good" for OpenAI, he wrote.

"If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry," he wrote.

"If not, we will continue to be characterized as rushed and uncareful," he wrote.

Altman added that he sees "promising signs" for where this will all land for OpenAI.

OpenAI took the Pentagon deal because it 'got comfortable' with the 'contract language'

Altman was asked why the Department of War went with OpenAI over Anthropic. He said he wouldn't speak for his competitor, but did speculate on why OpenAI got the contract inked first.

"First, I saw reporting that they were extremely close on a deal, and for much of the time both sides really wanted to reach one," Altman wrote. "I have seen what happens in tense negotiations when things get stressed and deteriorate super fast, and I could believe that was a large part of what happened here."

He added that OpenAI and the Department of War "got comfortable with the contractual language" as well.

"I think Anthropic may have wanted more operational control than we did," he added.

OpenAI has 3 redlines, but it's open to changing them as tech evolves

Altman said that OpenAI has "three redlines." But those redlines could change, and more could be put in place, as the technology evolves and "new risks" come into play.

"But a really important point: we are not elected. We have a democratic process where we do elect our leaders," Altman wrote. "We have expertise with the technology and understand its limitations, but I think you should be terrified of a private company deciding on what is and isn't ethical in the most important areas."

"Seems fine for us to decide how ChatGPT should respond to a controversial question," he added. "But I really don't want us to decide what to do if a nuke is coming towards the US."

Altman says Anthropic is on a 'dangerous' path

Altman said OpenAI had been talking to the Department of War for "many months" about non-classified work before "things shifted into high gear on the classified side."

"We found the DoW to be flexible on what we needed, and we want to support them in their very important mission," he wrote.

"I think the current path things are on is dangerous for Anthropic, healthy competition, and the US," Altman also wrote on X. "We negotiated to make sure similar terms would be offered to all other AI labs."

He also asked for "some empathy" for the Department of War, given its "extremely important mission."

And, in Altman's words:

Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind."

And then we say

"But we won't help you, and we think you are kind of evil."

I don't think I'd react great in that situation.

I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Altman says AI can help counter big security threats on two fronts

Altman says AI could prove useful on two fronts. The first is the US's "ability to defend against major cyber attacks," particularly one that might take down the country's electrical grid.

The second is biosecurity, another area where AI could help.

"I do not think we are currently set up well enough to detect and respond to a novel pandemic threat," Altman said.
