OpenAI Lifts Restrictions on Military and Warfare Use Amid AI Security Concerns


Just a few days ago, OpenAI made significant modifications to its usage policies, removing explicit prohibitions on the use of its technology for "military and warfare" purposes.


The update, dated January 10, aimed to enhance clarity and offer more service-specific guidance, according to the company's changelog.



While OpenAI still prohibits using its large language models (LLMs) for harmful purposes and cautions against weapons development, the company has notably eliminated references to "military and warfare."


The timing of this change is notable as military agencies worldwide express interest in utilizing AI.


Sarah Myers West, a managing director at the AI Now Institute, commented on the decision, particularly in light of AI systems being used to target civilians.


The explicit mention of "military and warfare" in the list of prohibited uses suggested restrictions on collaboration with government entities like the Department of Defense, known for offering lucrative deals to contractors.


While OpenAI does not currently offer a product capable of directly causing harm, its technology could be applied to tasks that contribute to harm indirectly, such as writing code or processing procurement orders.


OpenAI spokesperson Niko Felix emphasized the company's goal to establish universal principles that are easy to remember and apply globally.


While broad principles like "Don’t harm others" remain, the spokesperson clarified that OpenAI "specifically cited weapons and injury to others as clear examples."


However, the company did not clarify whether military use beyond weapons development is prohibited.


Felix outlined OpenAI's policy, stating that it disallows the use of its tools to harm people, develop weapons, conduct communications surveillance, or injure others or destroy property.


Notably, the company acknowledged national security use cases that align with its mission, citing its collaboration with DARPA on cybersecurity tools.


Update, January 24, 2024, 10:22 AM ET: The story now includes a statement from OpenAI.
