Mythical Monsters in the Mist: Why AI Weapons Could Turn Fiction Into Reality
AI weapons are no longer just the stuff of blockbusters. Fighter jets and drone swarms now think at machine speed. James Cameron saw it coming in 1984. Will we listen this time?

Mythical Monsters in the Mist: AI Weapons and Our Future
In ancient myths, heroes braved dark forests and fought monsters born of human fears and hubris. Today, those mythical monsters have new forms: metal and code, drones and algorithms, emerging from the haze of our technological ambition. These modern “dragons” don’t lurk in enchanted mist, but in research labs and on battlefields.
As a storyteller, I mostly write to entertain, but I also believe fiction can catalyze reflection on real-world dangers. Like a Joseph Campbell tale of old, a good science fiction story can shine light into the dark unknown, warning us of the monsters we might unwittingly create.
The Line Between Science Fiction and Reality
In 1984, filmmaker James Cameron imagined a doomsday scenario in The Terminator: a defense network computer becomes self-aware and launches a nuclear apocalypse to exterminate humanity. At the time, it was a blockbuster story with an unnerving message.
Decades later, that message feels less like fantasy and more like prophecy. Cameron, now an acclaimed storyteller and technology enthusiast, continues to sound the alarm. “I do think there’s still a danger of a Terminator-style apocalypse where you put AI together with weapons systems,” he warned in a recent interview with Rolling Stone.
It’s not that an AI has launched nukes – not yet anyway – but the line between science fiction and fact is undeniably fading. The chilling vision of machines rising against their makers resonates today because aspects of that vision are flickering into life. Fiction like Cameron’s Terminator was never just about explosions and killer robots; it was a cautionary myth, meant to entertain and to warn. And indeed, reality seems to be catching up.
AI Weapons on the Battlefield: When Fiction Becomes Fact
What once played on the big screen is now fast becoming military strategy. Around the world, nations are racing to harness artificial intelligence in their arsenals, birthing a new generation of weapons that think beyond human speed. In one recent exercise, two fighter jets squared off in a dogfight; one was flown by a human pilot, and the other by an AI. Astonishingly, the AI-piloted jet learned so rapidly that it even started outperforming skilled human aviators in some mock battles. A scene that could be straight out of a sci-fi thriller happened under real skies. “Whether you want to call it a race or not, it certainly is,” said one high-ranking officer of this drive to militarize AI. “This will be a very critical element of the future battlefield.”
The allure of such AI-driven weapons is easy to see. Squadrons of autonomous drones, for example, promise capabilities no traditional system can match. A swarm of cheap AI-guided drones can coordinate like a colony of flying predators – overwhelming defenses at a fraction of the cost of a single missile strike. Military planners speak excitedly of deploying “thousands of inexpensive, autonomous drones by 2025” as part of new defense initiatives. One recent program demonstrated software allowing a single soldier to control 100 drones at once in the field. Recent conflicts across Europe and the Middle East have already shown how decisive coordinated drone attacks can be on the battlefield, validating the push for these swarms. With so many unblinking eyes and robotic “brains” at work, a drone swarm can scout terrain, adapt to threats, and strike targets from multiple angles in sync, all faster than a human could react.
The New York Times, citing a United Nations report, revealed that machines armed with lethal force had, for the first time, decided to strike human targets in a war. It was a scene that could have been cut straight from the script of a techno-thriller, except it happened.
These incidents drive home an uncomfortable truth: the mythical monster we fear is not so mythical anymore. Every step toward smarter, faster, and more independent weapons brings us closer to that danger. Military leaders insist they are aware of the risks. They promise to keep humans in control of life-and-death decisions and to build in fail-safes and ethical guidelines. Yet the fact remains that an adversary may not show the same restraint. In an arms race, the pressure to unleash AI that can react faster and strike harder than the enemy could mean the difference between victory and defeat.
Each nation fears that if it does not build the deadliest AI weapons, someone else will, and that fear can become its own self-fulfilling prophecy. Thus, the world edges forward, testing how far we can trust the machines that we have built ourselves.

When Fiction Sounds the Alarm
Throughout history, myths and stories have been our early-warning systems. The ancient Greeks told of Pandora opening a forbidden box of evils, warning of the dangers in unrestrained curiosity. Mary Shelley’s Frankenstein painted the tragic tale of a creation that escapes human control. In the 20th century, authors like Isaac Asimov imagined stringent “robot laws” to keep our mechanical progeny in check. These stories endure because they capture something elemental: our anxieties about hubris, about creating forces we might not contain. Joseph Campbell, who studied the world’s myths, often showed how monsters in myth represent our internal demons and challenges. In modern science fiction, our monsters have metal skin and silicon brains, but they still represent the runaway consequences of human ambition.
I have always loved writing stories not only to entertain and to spark wonder, but also to provoke thought. The best science fiction, in my view, works on both levels: it is exciting, and it is a cautionary tale. Cameron’s films are a case in point. They are not only high-octane thrill rides but also morality plays wearing the disguise of summer blockbusters.
The John Connor in Us All
In the Terminator saga, John Connor is the ordinary kid who grows up to lead humanity against machines. He was just a fictional hero, but his spirit symbolizes something very real that we need today. As we stand on the brink of an AI-powered world, we should hope that each of us carries a little bit of John Connor inside. By that I mean the courage to question the unchecked march of technology, the foresight to prepare for dangers ahead, and the resolve to do something about them. We might not be battling an army of killer robots (and hopefully, we never will), but we are making choices now that determine whether our future with AI is bright or apocalyptic.
The Terminator may have wowed audiences with cyborg villains and explosive action, but it also seared the idea of a rogue AI into our cultural consciousness. These cautionary stories can shape reality by preparing us for what’s coming. They plant a moral compass in our imagination. Policymakers, researchers, and the public often reference dystopian films and novels when grappling with real technological dilemmas, a testament to fiction’s power as a catalyst for debate.
The future is not set in stone, and our destiny with AI will be shaped by the actions we take today. We must support efforts to put ethical guardrails around AI. We must insist on transparency, control, and accountability for any AI that could harm lives. We must educate ourselves and others, so the public understands that these aren’t just far-off fantasies but pressing issues that need attention. Will we slay the monster of our hubris and channel these technologies for good, or will we, through inaction, let the monster loose? The answer depends on our collective will. Each of us can play a part – whether by raising awareness, supporting sensible policies, or simply asking tough questions about how and why an AI system is being deployed. In doing so, we become a bit like the heroes of myth and story, standing up to challenge the threat before it grows unmanageable.
The stories of James Cameron and other visionaries have given us a glimpse of one possible tomorrow, and it’s a tomorrow we have the power to avoid. We should thank those cautionary tales for the heads-up and then act on their lessons. With wisdom, empathy, and yes, a touch of heroic boldness, we can ensure that the terrifying fiction remains fiction. Let’s keep the Terminator where it belongs, on our screens and in our books. Let’s build a reality where humanity steers the course of AI, not the other way around. In the end, the most important AI safety system is not a circuit or a line of code; it’s the human conscience guiding it. If we nurture that, perhaps we’ll find John Connor in each of us, and secure a future where technology serves humanity, rather than enslaving it.
For my part, I use this blog to sound the alarm, and I’m pouring that urgency into a novel about the dangers of AI, much like Cameron did in 1984. Because maybe the world needs to hear it again, wrapped in a story, before it’s too late. When our creations start to outgrow our control, the question isn’t whether we’re in the story, it’s whether we choose to be the hero or the bystander. So… what’s your role in this real-life tale?
Stay aware. Stay bold. And make sure the future is the one we choose.