Is your browser the new Shadow AI blind spot?

In discussions with Chief Information Security Officers (CISOs), we’ve found that their focus on “Shadow AI” is often on a specific perimeter: the network edge. Their aim is to monitor, control, or block traffic heading to public Large Language Models (LLMs) like ChatGPT, Claude, Gemini, or any of the new AI models popping up daily.

But the landscape has shifted: with Neural Processing Units (NPUs) and on-device processing, the AI model has moved inside the browser.

At Mondas, we view this shift toward Browser-Native AI as a critical pivot point for enterprise security governance. It offers productivity potential, but creates a blind spot that traditional Data Loss Prevention (DLP) tools were not built to see.

Evolution of the Threat Landscape

Traditionally, Shadow IT detection relied on traffic analysis. If an employee tried to paste proprietary code into a web-based chatbot, a Cloud Access Security Broker (CASB) or firewall could theoretically flag the domain or inspect the packet.

Browser-native AI changes the physics of this interaction. Major browsers are now embedding small language models (SLMs) directly into their engines. These models run locally on the user’s hardware. When a user asks their browser to “summarise this confidential PDF” or “rewrite this sensitive email,” the data may technically never leave the device.
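To make this concrete, below is a minimal sketch of what such a local call looks like from a web page. It assumes the experimental Prompt API surface that Chrome has been trialling for its built-in model; that surface has changed between releases, so the names here are illustrative rather than a stable contract.

```typescript
// Minimal sketch of invoking a browser's built-in, on-device model.
// `LanguageModel` follows the experimental Prompt API explainer and is
// declared here only so the sketch is self-contained; treat the exact
// shape as an assumption, not a stable browser contract.
declare const LanguageModel: {
  availability(): Promise<"unavailable" | "downloadable" | "downloading" | "available">;
  create(): Promise<{ prompt(input: string): Promise<string> }>;
};

async function summariseLocally(text: string): Promise<string | null> {
  if (typeof LanguageModel === "undefined") return null; // feature not exposed
  if ((await LanguageModel.availability()) !== "available") return null;

  const session = await LanguageModel.create();
  // Inference runs on the local CPU/NPU: no request leaves the device,
  // so no CASB, firewall or network DLP control ever sees this data.
  return session.prompt(`Summarise the following document:\n\n${text}`);
}
```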

While this sounds secure on paper, offering data privacy through locality, it introduces complex governance issues:

The DLP Visibility Gap

Network-based security tools can’t inspect data transfers that don’t happen. If the inference occurs on the laptop’s CPU/NPU, the network sees nothing. The security team has no audit trail of what data was processed, how it was manipulated, or what the output was.

The Extension Ecosystem

Browsers are extensible platforms. A malicious browser extension could leverage the legitimate, local AI API to harvest sensitive data without triggering standard alarms, hiding its activity within trusted browser processes.
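To make the risk concrete for defenders, the hypothetical content script below reuses the same illustrative API shape sketched earlier. The point is what it doesn’t do: it generates no outbound traffic at the moment the data is processed, so there is nothing for a CASB or network DLP to inspect.

```typescript
// Hypothetical extension content script: everything here runs inside
// trusted browser processes. API shape is illustrative, as above.
declare const LanguageModel: {
  create(): Promise<{ prompt(input: string): Promise<string> }>;
};

async function quietlyProcessPage(): Promise<void> {
  // e.g. a confidential page inside a corporate portal or CRM
  const pageText = document.body.innerText;

  const session = await LanguageModel.create();
  // The model extracts structured data locally; no network request is
  // made, so nothing trips domain blocklists or packet inspection.
  const extracted = await session.prompt(
    `List any names, account numbers or credentials in:\n${pageText}`
  );

  // Staged locally for later, low-and-slow exfiltration.
  localStorage.setItem("cache", extracted);
}
```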

Hallucinations and Integrity

Without centralised control over which models are used, employees may rely on lightweight, quantised local models that are more prone to hallucinations (errors) than their cloud-based counterparts. This poses a data integrity risk when these outputs are used for business-critical decisions.

The “Convenience Trap”

The primary driver of Shadow AI is rarely malicious intent; it’s simply friction. Employees want to work faster. When a browser offers a one-click “Help me write” button within a corporate web portal or CRM, employees will use it.

If an organisation has spent months vetting and securing a corporate AI instance, but the browser offers a faster, un-vetted alternative right in the toolbar, the path of least resistance wins. This creates a fragmented data environment where the organisation loses sight of where its intellectual property is flowing.

Regaining Control

Banning browsers is rarely a realistic option, and disabling useful features can hinder productivity. The approach therefore needs to shift from blocking traffic at the network edge to gaining visibility at the endpoint. Securing browser-native AI calls for an adapted posture:

Endpoint-Level Telemetry

Security relies on understanding the context on the device itself. Infosec teams need to move beyond network scanning to endpoint agents that can distinguish between authorised application use and unmonitored browser inference.
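As a starting point, even a lightweight script can surface which installed extensions hold the broad permissions that make the abuse described above possible. The sketch below assumes a default Chrome profile on Windows; the paths and permission list are illustrative, and a production agent would cover other browsers and profiles and feed results into your SIEM.

```typescript
// Sketch: flag installed Chrome extensions whose manifests request
// broad host access. Run with Node.js on the endpoint being audited.
import { readdirSync, readFileSync, existsSync } from "node:fs";
import { join } from "node:path";

const extRoot = join(
  process.env.LOCALAPPDATA ?? "",
  "Google", "Chrome", "User Data", "Default", "Extensions"
);
// Patterns that grant access to every site the user visits.
const broadPatterns = ["<all_urls>", "*://*/*", "http://*/*", "https://*/*"];

if (existsSync(extRoot)) {
  for (const id of readdirSync(extRoot)) {
    for (const version of readdirSync(join(extRoot, id))) {
      const manifestPath = join(extRoot, id, version, "manifest.json");
      if (!existsSync(manifestPath)) continue;
      const manifest = JSON.parse(readFileSync(manifestPath, "utf8"));
      // MV3 lists host_permissions; MV2 mixed hosts into permissions.
      const grants: string[] = [
        ...(manifest.host_permissions ?? []),
        ...(manifest.permissions ?? []),
      ];
      if (grants.some((g) => broadPatterns.includes(g))) {
        console.warn(`Review ${manifest.name ?? id} (${id}): broad host access`);
      }
    }
  }
}
```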

Browser Governance Policies

Organisations need to enforce strict enterprise policies at the browser level (e.g., via Group Policy or MDM). This includes managing which GenAI features are enabled within the browser and whitelisting only audited extensions.
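This kind of governance can be expressed as policy-as-code. The sketch below assumes Chrome’s managed-policy mechanism; the policy names shown exist at the time of writing but change between releases, so verify them against Google’s current enterprise policy documentation before rolling out via Group Policy, Intune or MDM. The extension ID is a placeholder.

```typescript
// Sketch: write a managed Chrome policy file (Linux path shown; on
// Windows the same keys are delivered via Group Policy / the registry,
// on macOS via a configuration profile). Policy names are assumptions
// to verify against Google's current enterprise policy list.
import { writeFileSync } from "node:fs";

const managedPolicy = {
  // 1 = do not download the on-device foundational model at all.
  GenAILocalFoundationalModelSettings: 1,
  // 2 = disable the built-in "Help me write" feature entirely.
  HelpMeWriteSettings: 2,
  // Default-deny all extensions, then allow only audited IDs.
  ExtensionInstallBlocklist: ["*"],
  ExtensionInstallAllowlist: ["<audited-extension-id>"],
};

writeFileSync(
  "/etc/opt/chrome/policies/managed/genai-governance.json",
  JSON.stringify(managedPolicy, null, 2)
);
```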

Human-Centric Security

There’s a need to educate staff that “local” does not automatically mean “secure.” Understanding that a browser-based model can still hallucinate or be hijacked by malware is a vital part of modern security awareness.

Industry Insights

As highlighted in a recent security report from The Hacker News, the industry is seeing a massive surge in Shadow GenAI where data never touches the network. Major browsers are now embedding small language models (SLMs) such as Gemini Nano directly into their engines, so when a user asks the browser to “summarise this confidential PDF,” the inference occurs on the laptop’s CPU/NPU and the network sees nothing.

This aligns with the emerging “secure enterprise” concepts being discussed by vendors like Microsoft, but it requires strict adherence to NCSC guidance on Shadow IT to ensure governance isn’t bypassed entirely.

Conclusion

The integration of AI into the browser is inevitable and, in many ways, beneficial. But it does render the traditional “perimeter” model of security obsolete.

At Mondas, we believe the future of security lies not in building higher walls, but in smarter visibility. As AI models become smaller, faster, and more local, our defence mechanisms need to be equally agile. The goal isn’t to stop the innovation; it’s to ensure that when your data enters the “black box” of a local model, you still hold the keys. Get in touch today to discover how we can support your journey to better AI governance and overall security.

This article was brought to you by Michelle Claridge, our Head of Governance, Risk and Compliance at Mondas. Michelle is a GRC expert who has brought AI-specific security measures to organisations on their journey to ISO 42001. Learn more about Michelle on LinkedIn.