Balancing AI speed with human vigilance

Headlines involving industry titan Amazon made the team at Mondas pause a beat. Reports have linked service disruptions at Amazon Web Services (AWS) and within the company's e-commerce platform to agentic AI tools; in one instance, an agent autonomously decided to delete and recreate a production environment, causing a 13-hour interruption.

Amazon’s official stance may strike some as controversial: the company attributes these issues not to AI autonomy but to user error and misconfigured access controls. Could this distinction become the norm in modern cyber security?

As businesses race to hit aggressive AI adoption targets, with some reports suggesting targets as high as 80% for developers, the gap between rapid deployment and robust oversight is widening. For the modern enterprise, the Amazon case raises a vital question: how can you save time with automation without losing control of the kill switch?

Human-in-the-Loop Guardrails

Amazon’s response to these incidents has been a pivot toward more stringent human-in-the-loop (HITL) policies. Despite significant corporate restructuring and workforce reductions, the company is mandating that senior engineers sign off on AI-assisted changes.

At Mondas, we view this as a necessary evolution of the SDLC (Software Development Life Cycle). Relying on AI to write and deploy code without a human sanity check invites exactly this failure mode. The OWASP Top 10 for LLM Applications now ranks Insecure Output Handling and Excessive Agency as critical risks: an AI is granted too many permissions and acts on a prompt in a way that leads to destructive system changes.

AI Governance

The challenge for most businesses isn’t whether to use AI, but how to wrap it in a security framework that doesn’t stifle productivity. We focus on three pillars of AI governance:

(1) Identity & Access Management (IAM) for Agents

If an AI tool has the same permissions as a Senior Architect, it needs to be subject to the same, or even stricter, multi-person approval workflows.
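A multi-person approval workflow for agents can be sketched in a few lines. The action names, agent IDs, and approval threshold below are illustrative assumptions, not any specific platform's API; the point is that a privileged action proposed by an agent simply does not execute until enough distinct humans have signed off.

```python
from dataclasses import dataclass, field

# Illustrative only: actions we treat as privileged for an AI agent.
PRIVILEGED_ACTIONS = {"delete_environment", "rotate_keys", "modify_iam"}

@dataclass
class ActionRequest:
    """A single action an agent wants to perform, pending human sign-off."""
    agent_id: str
    action: str
    approvals: set = field(default_factory=set)

    def approve(self, engineer: str) -> None:
        # A set means the same engineer approving twice still counts once.
        self.approvals.add(engineer)

    def is_authorised(self, required_approvers: int = 2) -> bool:
        # Non-privileged actions pass through; privileged ones need
        # sign-off from the required number of distinct humans.
        if self.action not in PRIVILEGED_ACTIONS:
            return True
        return len(self.approvals) >= required_approvers

req = ActionRequest(agent_id="deploy-bot", action="delete_environment")
assert not req.is_authorised()   # blocked until humans sign off
req.approve("alice")
req.approve("bob")
assert req.is_authorised()       # two distinct approvals: allowed
```

The design choice worth noting is that authorisation is evaluated at execution time, outside the agent itself, so the agent cannot grant its own request.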

(2) Deterministic Guardrails

Moving beyond probabilistic AI (where the tool thinks it’s doing the right thing) to deterministic controls that categorically prevent an agent from executing certain commands, such as deleting a database.
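A minimal sketch of such a deterministic guardrail, assuming agent-proposed shell commands are checked against a hard deny-list before they run. The patterns below are examples we chose for illustration; the key property is that the check is the same every time and cannot be talked around by a clever prompt.

```python
import re

# Illustrative deny-list of destructive command patterns. In practice
# this would be maintained as policy, not hard-coded.
DENY_PATTERNS = [
    re.compile(r"\bdrop\s+database\b", re.IGNORECASE),
    re.compile(r"\brm\s+-rf\s+/"),
    re.compile(r"\bterminate-instances\b"),
]

def guard(command: str) -> bool:
    """Return True only if the command matches no deny pattern."""
    return not any(p.search(command) for p in DENY_PATTERNS)

assert guard("aws s3 ls my-bucket")                 # harmless: allowed
assert not guard("mysql -e 'DROP DATABASE prod'")   # destructive: blocked
assert not guard("rm -rf /var/lib/app")             # destructive: blocked
```

Unlike asking a model to judge whether a command is safe, this check either matches or it doesn't, so the worst case is a false refusal rather than a deleted production database.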

(3) Closing the Oversight Deficit

With fewer staff, the human team must be better informed. We help teams use best-in-class software to audit AI-generated code, ensuring that the time saved in writing isn’t lost in disaster recovery.

Leadership through Guardrails

The outages at Amazon prove that even arguably the world’s most sophisticated infrastructure is susceptible to the friction between AI speed and human oversight. True leadership in this space isn’t about being first to automate; it’s about being the most resilient while doing so.

If you’re navigating the complexities of AI integration or concerned about user access control within your cloud infrastructure, Mondas is here to help. We provide the expertise and tools to ensure your journey into automation is secure, compliant, and backed by expert human oversight. Get in touch with us today.

We can put a Free Dark Web Leak Report in your inbox within 30 minutes; click here to get in touch.

Article First Published 17/03/2026