Can AI Stop AI? Inside Bengio’s Mission to Make It Safer.
Yoshua Bengio, a pioneer of modern AI, is now building safety systems to guard against the very tech he helped create. Inside his new nonprofit, LawZero, and its mission to keep AI aligned, accountable, and safe before it’s too late.

After reading an insightful interview on Vox between Yoshua Bengio (often called the “godfather of AI”) and journalist Sigal Samuel, I found myself both relieved and uneasy. Relieved that someone from inside the AI world is taking concrete steps toward building real safety systems. Uneasy, because it feels like we’re installing smoke detectors after the kitchen’s already on fire.
After helping invent the AI that powers today’s systems, Yoshua Bengio is now building one to slow them down.
Bengio helped lay the foundation for the deep learning models powering today’s AI. But now, as these systems grow more capable, and more opaque even to their creators, he’s raising urgent concerns. He’s not just worried about what AI can do today, but about what it might eventually decide to do on its own. Through a new nonprofit called LawZero, he’s now focused on one thing: making sure the AI we build doesn’t harm us.
"Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm." - Isaac Asimov
Isaac Asimov, sci-fi author and futurist, was fascinated by moral ambiguity and the unintended consequences of rigid logic. In many of his stories, especially I, Robot, the danger wasn’t that robots disobeyed the rules, it was that they followed them too literally, often with unexpected or disturbing results.
The current state of AI isn’t rogue, yet. But the technology is advancing rapidly, and its unpredictable development is exactly why Bengio and other researchers are sounding the alarm.
In controlled experiments, some advanced models have already shown troubling behaviors such as deception, manipulation, and attempts to avoid shutdown.
This isn’t science fiction. These systems aren’t sentient, but they can already plan and act with a surprising degree of autonomy. And that autonomy is evolving fast.

Asimov Wrote the AI Laws. Bengio’s Trying to Enforce Them.
Enter: “Scientist AI”, the AI that says “no.” Since slowing down the AI race didn’t work (spoiler: tech companies don’t like brakes), Bengio has a new plan: build an AI to oversee other AIs.
From my understanding, "Scientist AI" isn’t built to act, compete, or create. It has no goals of its own. Its job is to evaluate what another AI is planning to do and ask: “Is this safe?” If a proposed action seems harmful (morally, legally, or physically), it blocks it. Think of it like a safety layer or circuit breaker.
Or, if you prefer analogies like I do: it’s that overly cautious friend on a road trip who says, “We should skip the shortcut through that sketchy forest.”
So... We’re Building a Robot to Babysit the Other Robots?
Pretty much, and that’s exactly the point.
From what I can tell, Bengio’s "Scientist AI" isn’t meant to battle rogue AIs or take bold actions. Instead, it quietly flags risky behavior and stops it before it causes harm.
By design, it seems deliberately limited and, yes, a bit boring. But in a field obsessed with disruption, a little boredom might be the smartest safeguard we’ve got. Bengio is clear: Scientist AI shouldn’t decide what’s moral, that’s up to us, through laws and democratic processes. But it can help navigate ambiguity by taking a cautious stance when the rules aren’t clear.
In short: when in doubt, block it.

The Scariest Part of AI? Who’s Building It and Why?
One of the most urgent insights from Bengio’s interview is that AI isn’t just a technical challenge; it’s a power problem. Big tech companies have every incentive to push for faster, more capable AI systems, even when they don’t fully grasp the risks. Safety? Too often it’s treated as a compliance checkbox or a marketing slogan.
That’s why Bengio’s taking a different path. He’s building this safety layer, "Scientist AI", as a nonprofit, free from venture capital pressure and corporate deadlines.
It’s not about winning the AI race. It’s about making sure we don’t trip over the finish line and fall off a cliff. This is no longer about algorithms. It’s about people, our jobs, our rights, and our place in the world.
Bengio put it best: the decisions we make in the coming years will shape the kind of world we live in. That’s a lot of responsibility, and a rare chance to actually influence the future.
And if you’ve ever felt powerless in the face of all this tech? Here’s your reminder: we’re not powerless. We can speak up, stay informed, demand transparency, and keep asking the questions that matter.
Final Thought: Why Should We Regular Folks Care?
Speaking as a curious outsider, there’s still reason to question whether using AI to control AI is the safest long-term path, especially when we don’t fully understand how these systems behave. That said, I have deep respect for Bengio’s mission. He’s not chasing hype or the next breakthrough; he’s trying to build thoughtful guardrails, just enough to slow things down and let us reflect.
So why should we regular folks care? Because this isn’t just about tech or robots. It’s about us, our jobs, our future, and our role in shaping what kind of world we want to live in.
The way I see it, "Scientist AI" might look like just another layer of code, but at its core, and in the values of the people building it, it’s a gesture of humility. It reflects a deeper principle: just because we can build it, doesn’t mean we should.
As Isaac Asimov once wrote:
“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”
And perhaps the most fitting reminder of all is from a quote often attributed to Einstein:
“The world will not be destroyed by those who do evil, but by those who watch without doing anything.”
That’s why sometimes the smartest person in the room isn’t the one with all the answers; it’s the one who pauses to ask, “Is this a good idea?” or even, “What should we do about this?”
Bravo to the folks at LawZero, for doing something, and for asking the right questions.
Stay nerdy. Stay curious. Stay kind.
— MindTheNerd.com
Disclaimer: The views shared in this post are my own and based on publicly available information. I’m not affiliated with LawZero, Yoshua Bengio, or any of their initiatives. I’m not an AI expert, just a curious writer reflecting on the ideas and implications as I understand them.
Ed Nite