Large Language Models (LLMs) are becoming indispensable tools for financial institutions. From automating customer service and analysing market trends to detecting fraud and optimising investment strategies, their capabilities are vast and transformative. But, with great power comes great responsibility. We explore the potential that AI systems fall into the wrong hands. A breach involving LLMs in financial institutions is a cybersecurity issue that demands immediate attention.
An LLM Under Attack
If we create an imaginary financial institution, say ‘BobsBank’, that relies on an advanced LLM to process sensitive client data, manage internal communications, and even execute low-value transactions based on specific prompts. One morning, an astute cybercriminal group, ‘ScatteredPig’, discovers a subtle vulnerability within BobsBank’s LLM API, allowing them to inject malicious prompts without detection.
Initially, ScatteredPig test the waters by manipulating customer service responses, subtly redirecting clients to phishing sites disguised as BobsBank’s legitimate portal. The LLM inadvertently becomes an accomplice to cyber crime.
The stakes escalate. With deeper access, ScatteredPig begin to extract proprietary information. They prompt the LLM to summarise confidential internal memos, reveal details of upcoming merger and acquisition talks, or even list the most frequent and common vulnerabilities identified in recent internal security audits. The LLM, trained on this internal data, obliges, seeing these as legitimate information retrieval requests.
The ultimate blow comes when ScatteredPig exploit the LLM’s transaction capabilities. By crafting highly specific and complex prompts, they bypass traditional security protocols that the LLM was designed to assist, not override. They instruct the LLM to initiate small, undetectable micro-transactions across hundreds of dormant accounts, aggregating these funds into untraceable cryptocurrency wallets. The LLM, acting on what it interprets as valid instructions, executes these commands, effectively orchestrating a digital heist that goes unnoticed for days, even weeks.
The aftermath is catastrophic. BobsBank suffers immense financial losses, a devastating blow to its reputation, and a complete erosion of customer trust. Regulatory bodies impose colossal fines, and the institution’s stock plummets. The incident serves as a warning that those tools designed for efficiency and security can become weapons if not properly protected.
Is ISO 42001 as an AI Management System Standard, the answer?
This made-up scenario does demonstrate the need for a robust framework to manage AI risks. At Mondas, we are advocates of working towards compliance – known also as Artificial Intelligence Management System (AIMS). This international standard provides a set of requirements for establishing, implementing, maintaining, and continually improving an AI management system. For financial institutions, adopting ISO 42001 could be the way to govern the LLMs. Learn more about 🔗ISO 42001:2023 on ISO.org.
How does ISO 42001 address the risks highlighted in our scenario?
- Risk Assessment and Treatment: ISO 42001 mandates a thorough identification and assessment of AI-specific risks, including those related to LLM vulnerabilities, data poisoning, prompt injection, and unauthorised access. This proactive approach allows institutions to implement controls before a breach occurs.
- AI System Design and Development: The standard promotes the integration of security and ethical considerations from the very outset of AI system development. This means designing LLMs with built-in safeguards against malicious prompts, ensuring data integrity, and establishing clear boundaries for their operational scope.
- Data Governance: Given that LLMs are only as good, and as secure, as the data they are trained on, ISO 42001 places a strong emphasis on data governance. This includes ensuring data quality, privacy, and protection against manipulation, which could otherwise lead to biased or insecure LLM outputs.
- Transparency and Explainability: While LLMs can be black boxes, ISO 42001 encourages mechanisms for understanding and interpreting AI decisions. This is crucial for auditing LLM actions, identifying anomalous behaviour, and ensuring accountability in financial operations.
- Monitoring and Review: Continuous monitoring of AI system performance and security is a core component. This would enable institutions like GlobalTrust to detect unusual LLM activities, such as attempts to access sensitive data outside its normal parameters or execute unauthorised transactions, far more quickly.
- Incident Response and Recovery: In the event of an AI-related security incident, ISO 42001 provides guidelines for a structured response, containment, and recovery process, minimising damage and ensuring business continuity.
Partner in AI Security and Compliance
At Mondas, we understand the intricate challenges that LLMs present to financial institutions, or indeed any industry that handles transactions and customer data. Drawing from experience advising financial institutions on AI governance, our team has worked with many organisations to get their cybersecurity posture positioned ready for ISO 42001 compliance and certification. We help financial organisations implement robust AI management systems, conduct comprehensive risk assessments, and develop secure AI architectures that protect against sophisticated cyber threats.
Don’t wait for a digital heist to expose your vulnerabilities. Proactive compliance with ISO 42001 ensures your LLMs are not just powerful tools but secure assets, safeguarding your institution’s future in the digital age. If you’d like to find out more about our approach to ISO 42001 certification, get in touch today.
Today’s article was authored by Lance Nevill a seasoned cybersecurity specialist at Mondas, who works in AI governance and ISO 42001 compliance.


