The era of chatbot warfare is unfolding: commercial AI enters the military field.
If you feel like we're entering an era of 'AI warfare' that seems straight out of science fiction, you're not alone.
In fact, this scenario has been emerging for many years. Much of modern warfare has already been conducted using drones. Many nations' militaries use high-precision simulation systems to plan attacks, and virtual reality has also been adopted in training. A new generation of defense technology companies is competing for a place in the military-industrial ecosystem.
But what's currently shocking is the alleged use of chatbots by defense officials in serious military missions, including operations aimed at capturing or eliminating heads of state.
In some recent campaigns, the Claude language model developed by Anthropic is believed to have been used. Claude is a commercially available AI model family familiar to the general public – which makes this context particularly controversial.
Military and AI: Not a new story.
The U.S. military's pursuit of advanced technology is not surprising. For decades, state-of-the-art sensor and surveillance platforms have enabled it to collect vast amounts of data, forming the basis for increasingly sophisticated algorithmic models.
Since the 2000s, research groups such as the Defense Advanced Research Projects Agency (DARPA) have pursued robotics and autonomous vehicle projects. The military's AI efforts became more clearly formalized in 2017 with Project Maven – a program aimed at streamlining military data platforms and deploying algorithms such as computer vision on the battlefield.
Following a wave of internal opposition, Google withdrew from the Maven project, and the system's key technology is now provided by Palantir Technologies.
In 2018, the U.S. armed forces established the Joint Artificial Intelligence Center (JAIC), which was later absorbed into the Chief Digital and Artificial Intelligence Office, with the goal of accelerating the adoption of AI across the military branches.
The difference now is that the military appears to be using the same type of AI tools that ordinary consumers use – but in a completely different context.
We're familiar with typing prompts into chatbots to write emails, summarize documents, or look up information. Because of this familiarity, many people easily envision a simplified scenario: someone in the Department of Defense simply enters commands into a chatbot to support a military operation.
Of course, the reality is far more complex. When directly questioned about its role, Claude denied any combat involvement, emphasizing that it lacked the capacity to act in the real world and was not involved in geopolitical activities.
However, according to former defense officials and technology personnel, Claude has been integrated into several military systems, possibly through intermediary platforms and classified environments. This is more likely related to data analysis, information synthesis, or decision support, rather than 'planning attacks' in the simplistic sense.
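To make the distinction concrete: in such integrations the model is typically reached through an API by the surrounding software, not by a person typing into a chat window. The sketch below is purely illustrative and hypothetical; the function and pipeline names are invented, and the model call is stubbed out (in practice it would be an authenticated request to a commercial provider's API inside a controlled environment).

```python
# Hypothetical sketch: a language model embedded in a document-analysis
# pipeline. The system, not a human operator, constructs the prompt,
# selects the inputs, and routes the output to downstream tools.

def call_model(prompt: str) -> str:
    """Stand-in for an API call to a hosted language model.

    Stubbed for illustration: it just reports how many lines of
    input it received instead of producing a real summary.
    """
    return f"[summary of {prompt.count(chr(10))} report lines]"

def summarize_reports(reports: list[str]) -> str:
    """Batch several documents into one prompt and request a summary."""
    prompt = "Summarize the key points of these reports:\n"
    prompt += "\n".join(f"- {r}" for r in reports) + "\n"
    return call_model(prompt)

print(summarize_reports(["Report A", "Report B"]))
```

The point of the sketch is the division of labor: human judgment enters when the pipeline is designed and when outputs are reviewed, not at a chat prompt for each query.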
The integration of commercial AI into defense systems is not limited to a single company. OpenAI and xAI have also signed major contracts with the U.S. Department of Defense, allowing their technology to be used in classified systems.
The U.S. Department of Defense even maintains a dedicated generative AI resource called GenAI.mil. This shows that the trend is not a temporary experiment, but a long-term strategy.
On a personal level, chatbots like Claude or ChatGPT help handle small research tasks, increasing productivity and reducing time spent on repetitive work. But this very convenience also makes people more inclined to "delegate their thinking" to machines.
When the same thing happens in a military context – where decisions can affect the fate of a nation and human lives – the level of gravity is entirely different. The familiarity of the chat interface makes the technology seem harmless, but the actual impact can be far greater.
The debate, therefore, revolves not only around whether AI should be used, but also how it should be integrated, monitored, and controlled in the most sensitive environments.