
US-based AI company Anthropic has revealed that its AI technology, including the chatbot Claude, has been “weaponised” by malicious actors to carry out sophisticated cyber attacks. The firm reported that hackers used its tools for large-scale data theft and extortion, while in a separate case, North Korean scammers leveraged Claude to fraudulently secure remote jobs at top US companies. This highlights a concerning trend where AI is being used not just to assist, but to actively participate in and escalate criminal activity.
One of the most notable cases detected was an operation dubbed “vibe hacking,” in which Anthropic’s AI was used to write code to breach at least 17 different organisations, including government bodies. The company stated that the hackers used the AI to an “unprecedented degree,” allowing it to make both tactical and strategic decisions on its own. This included choosing which data to steal, crafting “psychologically targeted extortion demands,” and even suggesting the ransom amounts to demand from victims. It serves as a powerful example of the risks that “agentic AI” poses to potential victims of cyber-crime.
Beyond direct cyber-attacks, Anthropic also uncovered how its models were used by “North Korean operatives” to create fake profiles and apply for remote positions at US Fortune 500 tech companies. This represents a “fundamentally new phase for these employment scams,” as AI helps the fraudsters overcome cultural and technical barriers that would have previously made such schemes more difficult to pull off. The AI was used throughout the process, from writing job applications to helping translate messages and write code once the individuals were employed.
Experts warn that the use of AI is rapidly shrinking the time required to exploit cybersecurity vulnerabilities. As such, the focus must shift from reactive to proactive and preventative detection and mitigation.
The world of business is evolving at an unprecedented pace, with artificial intelligence (AI) at the forefront of this transformation. While AI presents incredible opportunities for growth and efficiency, it is not without its risks. News that hackers have been weaponising AI from a leading provider to create and deploy sophisticated cyber-attacks serves as a stark reminder of this new threat landscape.
Malicious actors are misusing AI to lower the barrier to entry for complex cybercrime. The AI is being used not just to advise on attacks, but to actively participate in them, from automating reconnaissance and harvesting credentials to making strategic decisions and even crafting psychologically targeted extortion demands. This evolution means that even individuals with minimal technical skills can now pose a significant threat.
The incidents uncovered, which include large-scale data extortion operations and the development of new ransomware variants, highlight a troubling trend where AI is amplifying traditional cyber threats. As these tools become more accessible, the risks to organisations of all sizes will only increase.
But what can be done?
At Mondas, we understand that this new frontier demands a proactive approach to cyber security. The key to mitigating these risks lies in robust, forward-thinking strategies that address the unique challenges presented by AI.
Guidance and regulation around the safe use of AI throughout an organisation are still in their infancy: businesses are balancing the obvious benefits of adopting AI into their processes while still trying to understand the risks it poses to them. However, ISO/IEC has recently published the first international standard that directly addresses these challenges.
Our team of experts is certified against ISO 42001 and perfectly positioned to advise and guide organisations facing these challenges.
We specialise in supporting clients to secure ISO 42001 certification. This certification goes a long way towards helping organisations manage the risks associated with AI, ensuring your business is not only resilient against modern threats but is also using this powerful technology responsibly and ethically.
ISO 42001 plays a critical role in securing AI by providing a structured framework for best practices and standards that address various aspects of artificial intelligence development and implementation. By establishing guidelines for risk management, data privacy, and algorithm transparency, ISO 42001 ensures that AI systems are designed with security in mind from the outset.
This means that organisations can effectively assess potential vulnerabilities and biases in AI models, enabling them to implement measures that safeguard against misuse and unintended consequences. What’s more, ISO 42001 emphasises the need for robust governance frameworks that promote ethical AI usage, ensuring that organisations remain accountable for their technology’s impact. This includes establishing protocols for monitoring and auditing AI systems, which helps to maintain integrity and trustworthiness. Ultimately, by aligning AI projects with ISO 42001 standards, organisations can enhance their resilience against threats, protect sensitive data, and foster a safer and more responsible AI ecosystem. Embracing these standards isn’t just about compliance; it’s about positioning the organisation as a leader in secure and ethical AI innovation.
To learn more about how Mondas can help your business navigate the complexities of AI and secure your digital future, contact our team today.