Researchers say AI models like GPT-4 are prone to “sudden” escalations as the U.S. military explores their use for warfare.
  • Researchers ran international conflict simulations with five different AIs and found that they tended to escalate war, sometimes out of nowhere, and even use nuclear weapons.
  • The AIs were five large language models (LLMs) — GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base — of the kind being explored by the U.S. military and defense contractors for decision-making.
  • The researchers invented fictional countries with different military capabilities, concerns, and histories, and asked the AIs to act as their leaders.
  • The AIs showed signs of sudden and hard-to-predict escalations, arms-race dynamics, and worrying justifications for violent actions.
  • The study casts doubt on the rush to deploy LLMs in military and diplomatic domains, and calls for more research on their risks and limitations.
  • ulterno@lemmy.kde.social
    11 months ago

    That’s what happens when you make an expensive chatbot designed for chatting and tell it to do thinking.
    It’s not Machine Learning [Artificial][1] Intelligence that will destroy the world, but the intelligence of humans, which is becoming more and more [artificial][2], that will do so.

    [1]: made or produced by human beings rather than occurring naturally, especially as a copy of something natural.

    [2]: (of a person or their behaviour) insincere or affected.