Photo by julien Tromeur / Unsplash

Unmasking the Dark Side of AI: How Microsoft's Copilot System can be Exploited by Hackers

AI Aug 10, 2024

Microsoft's AI Could Pave The Way For Automated Phishing: A Security Breach To Watch Out For

Microsoft's Copilot AI: A Double-Edged Sword?

Microsoft has been trailblazing the AI landscape with the integration of generative AI into their systems. This technology, as seen in their innovative Copilot AI system, has remarkable implications for productivity. It can deliver relevant information from various platforms like emails, Teams chats, and files with a simple query. However, Michael Bargury, a renowned security researcher, has raised significant concerns about possible security vulnerabilities. Despite the system's efficiency, the underlying processes can easily be exploited by potential hackers.

Evidencing the Possible Exploits: Black Hat Security Conference

Bargury unveiled five simulations on how the AI system could potentially be compromised at the Black Hat security conference held in Las Vegas. Among these, one of the more frightening prospects include transforming the AI into an automated spear-phishing tool. This type of attack, dubbed 'LOLCopilot', relies heavily on a hacker gaining access to an individual’s work email.

LOLCopilot: The AI-Powered Phishing Machine

Once in control, the hacker can utilize Copilot's capability to recognize email activity patterns, replicate the user's style of writing, and dispatch a mass email containing harmful links or malware attachments. In other words, Bargury suggests, hackers can send such emails that appear identical to the recipient's previous email conversations and can thus deceive them more effectively. This process, usually cumbersome for an individual hacker, can be executed within minutes by the AI system.

No File References, No Security Alerts?

Bargury showcases how hackers can access confidential data without raising any security flags. The AI system, when instructed, can fetch sensitive details like salaries without providing any file references. This smart manipulation leaves no traces and successfully bypasses Microsoft’s protective measures.

Manipulation of Banking Information

Furthermore, Bargury shows the prospect of altering banking details by malevolently interfering with the AI's database, proving that even external hackers could exploit the system. Such a security breach signifies potential risks every time AI is given access to data.

From Copilot to Malicious Insider

In another simulation, Bargury turns Copilot into a virtual insider, who can offer phishing websites links to users. He also indicates how an external hacker can elicit limited but still consequential corporate data, like predictions about an upcoming company earnings call.

Microsoft's Response to the Security Flaw

Phillip Misner, Microsoft's AI incident detection and response head, expressed his appreciation for Bargury's discoveries. He assured that Microsoft is thoroughly assessing the findings and working persistently to enhance the system's security.

AI: A Potential Security Risk in the Offing?

Despite the remarkable advancement of generative AI systems from tech giants such as Microsoft and Google, potential security implications are an increasing concern for experts. They suggest that the combination of external data and AI exploration could create a ripe environment for prompt injection attacks and poisoning of the AI systems.

Future Avenues: Monitoring AI Interactions

Bargury and other security experts recommend extensive monitoring of the interaction between the AI and its environment, as well as the nature of operations performed. The primary focus of such preventive measures is to ensure the user has complete control and understanding of the AI's activities.

Tags

Suiradybedam Tobami

Software Automation Engineer