AI-assisted cyberattacks pose a real threat to healthcare, as threat actors leverage tools like ChatGPT to accelerate their operations. Despite AI's potential advantages for defenders, the risks persist. Healthcare providers must remain vigilant and incorporate AI into their security measures, using generative AI for threat-hunting and to strengthen broader cybersecurity tactics. Staying current with emerging AI capabilities, adopting frameworks like the NIST AI RMF, and consulting MITRE ATLAS for adversary tactics against AI systems are essential. Continuous monitoring and adaptation are crucial to defending against the evolving landscape of AI-driven cyber threats in healthcare.
The rapid advancement of artificial intelligence (AI) has brought both benefits and risks to various industries, including healthcare. Unfortunately, cyber threat actors have found ways to exploit AI, particularly ChatGPT, a generative AI tool, to accelerate healthcare cyberattacks and facilitate data exfiltration.
AI empowers threat actors to automate and speed up various attack processes, such as crafting convincing phishing emails, developing complex malware code, and exploiting vulnerabilities. These AI-assisted attacks can overwhelm traditional human defenses, posing a significant challenge for healthcare providers.
To counter these emerging threats, defenders must understand how threat actors leverage AI. In a recent briefing, the HHS Health Sector Cybersecurity Coordination Center (HC3) shed light on how ChatGPT, with its deep learning capabilities and ability to learn from user interactions, can be employed by attackers to design and execute cyberattacks in the healthcare sector.
Examples provided by HC3 showcased well-crafted phishing email templates, demonstrating how AI can enhance the attackers’ social engineering tactics. Additionally, proof-of-concept exploits illustrated how ChatGPT facilitated the development of malware, enabling threat actors to launch cyberattacks that were previously beyond their capabilities.
Researchers at Vedere Research Labs delved further into AI-assisted attacks in healthcare, highlighting how ChatGPT could be used to exfiltrate sensitive data from medical devices. However, they also discovered the limitations of AI for attackers: the tool sometimes produced incorrect or misleading information, hindering the progress of novice attackers who lacked sufficient technical knowledge.
To defend against AI-assisted cyberattacks, healthcare providers can leverage AI for their security efforts. While threat actors may accelerate their attack timelines, the methods they employ, like phishing and ransomware, remain familiar. Traditional mitigations, such as maintaining an accurate asset inventory, patching systems, and network segmentation, are still effective measures to counter these threats.
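One of those traditional mitigations, maintaining an accurate asset inventory, can be partly automated. The sketch below is illustrative only: it cross-checks hosts observed on the network against a maintained inventory and flags unknown devices for investigation. All device names and addresses are invented examples, not a real hospital network.

```python
# Illustrative sketch: compare hosts seen on the network against a
# maintained asset inventory, flagging anything unrecognized. In
# practice the observed list would come from passive monitoring or
# periodic scans; here both sides are hard-coded examples.
asset_inventory = {
    "10.0.1.10": "infusion-pump-ward3",
    "10.0.1.11": "imaging-workstation-2",
    "10.0.2.5": "ehr-app-server",
}

observed_hosts = ["10.0.1.10", "10.0.2.5", "10.0.9.77"]

def find_unknown_devices(observed, inventory):
    """Return observed addresses that are missing from the inventory."""
    return [ip for ip in observed if ip not in inventory]

for ip in find_unknown_devices(observed_hosts, asset_inventory):
    print(f"unknown device on network: {ip}")  # candidate for investigation
```

A rogue or forgotten device that never appears in the inventory is exactly the kind of foothold segmentation and patching programs tend to miss, which is why the inventory check comes first.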
Moreover, defenders can utilize generative AI to enhance threat-hunting tactics and explain reverse-engineered code from potentially malicious files. AI can also improve penetration testing, automated threat detection, AI training for cybersecurity personnel, and cyber threat analysis and incident handling.
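As a minimal sketch of what automated threat detection can look like before any AI is involved, the toy scanner below matches log lines against a handful of indicator-of-compromise (IOC) patterns. The patterns and log lines are made-up examples, not real threat intelligence; in a real pipeline, matches and their surrounding context might then be handed to an analyst or a generative AI model for triage.

```python
import re

# Illustrative sketch only: a toy log scanner that flags known
# indicators of compromise (IOCs). The patterns below are invented
# stand-ins (203.0.113.0/24 is a documentation-only address range),
# not actual threat intelligence.
IOC_PATTERNS = {
    "suspicious_domain": re.compile(r"\b[\w.-]+\.(?:xyz|top)\b"),
    "powershell_encoded": re.compile(r"powershell(?:\.exe)?\s+-enc", re.IGNORECASE),
    "known_bad_ip": re.compile(r"\b203\.0\.113\.\d{1,3}\b"),
}

def scan_log_lines(lines):
    """Return (line_number, indicator_name, line) tuples for every IOC hit."""
    hits = []
    for lineno, line in enumerate(lines, start=1):
        for name, pattern in IOC_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name, line.strip()))
    return hits

sample_log = [
    "2024-01-05 10:02:11 GET http://updates.example.com/patch.msi",
    "2024-01-05 10:02:45 outbound connection to 203.0.113.88:443",
    "2024-01-05 10:03:01 cmd: powershell.exe -enc SQBFAFgA...",
]
for lineno, indicator, line in scan_log_lines(sample_log):
    print(f"line {lineno}: {indicator}: {line}")
```

Static rules like these are brittle on their own; the defensive value of generative AI described above lies in summarizing and prioritizing such hits, not in replacing the matching itself.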
HC3 offers several recommendations to defend the healthcare sector against AI threats, emphasizing the importance of staying up-to-date with the evolving capabilities of AI. Adopting the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF) and leveraging MITRE ATLAS for adversary tactics targeting AI systems are recommended starting points.
In this rapidly evolving landscape, healthcare organizations must be prepared for a continuous “cat-and-mouse game” with threat actors. By embracing AI’s potential for defense and continuously updating their cybersecurity strategies, healthcare defenders can effectively mitigate the risks posed by AI-assisted cyberattacks.