The application of artificial intelligence (AI) to predictive modeling is foreseen in almost every field, and cybersecurity is no exception. MIT researchers have demonstrated high accuracy in detecting suspicious web activity where a relatively large data set is available and an "analyst-in-the-loop" method is used to enhance the AI system's learning.
People have started to explore AI for medical device cybersecurity. The goal would be to monitor data traffic at a network node to which medical devices are connected. The data flowing through a node could carry three types of information: (a) clinical data, (b) patient data, and (c) operational machine parameters. A learning algorithm would use the data flowing through a monitored node to build a model of normal behavior, a Normality Model. Because this is a learning system, as new operational conditions are introduced and labeled "safe," the system tunes itself to treat those conditions as Normal. When the data pattern at the node deviates from Normal, say during a cyber-attack, the system detects and flags it as an anomaly. A subsequent system would tie the anomaly to potential root cause(s). The final output should be an accurate identification of a cyber-attack!
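The learning loop above can be sketched in a few lines. This is a toy illustration, not a real product design: it models a single hypothetical traffic feature (say, readings per minute at a monitored node) and uses an assumed 3-sigma rule to decide what counts as an anomaly.

```python
import statistics

class NormalityModel:
    """Toy "Normality Model" for one traffic feature at a monitored node."""

    def __init__(self, z_threshold=3.0):
        # 3-sigma cutoff is an illustrative assumption.
        self.z_threshold = z_threshold
        self.baseline = []
        self.mean = None
        self.stdev = None

    def fit(self, readings):
        # Learn what "Normal" looks like from data labeled safe.
        self.baseline = list(readings)
        self.mean = statistics.mean(self.baseline)
        self.stdev = statistics.stdev(self.baseline)

    def label_safe(self, reading):
        # As new operating conditions are labeled "safe," the model
        # retunes to accommodate them as Normal.
        self.fit(self.baseline + [reading])

    def is_anomaly(self, reading):
        # Flag readings that deviate sharply from the learned Normal.
        z = abs(reading - self.mean) / self.stdev
        return z > self.z_threshold

model = NormalityModel()
model.fit([98, 102, 101, 99, 100, 103, 97, 100])
print(model.is_anomaly(101))   # ordinary traffic -> False
print(model.is_anomaly(250))   # sudden spike -> True
```

A real system would track many features at once and learn their joint distribution, but the shape is the same: fit on safe data, retune as safe conditions are added, flag sharp deviations.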
If all worked well, hospitals would benefit from early detection of cyber-attacks. They could quickly respond by either isolating the impacted device(s)/system(s) or implementing other means to minimize potential consequences of such attacks.
While it may sound straightforward, one needs to consider many things before applying AI to medical device cybersecurity. The key question revolves around accuracy in detecting cyber-attacks most of the time, if not every time: efficacy!
Data for Normality Model:
A large set of data is required for the AI system to develop a tight normality model per node. Data gathered from medical devices is typically sparse and noisy; most medical devices do not even operate in a continuous, routine fashion. The challenge compounds when multiple such devices are connected to a single node, since an even larger data set is needed to cover the various scenarios of normal behavior. In the absence of such data, the probability of false positives and false negatives increases substantially, hindering the accuracy of detecting an attack!
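A toy simulation makes the sparse-data problem concrete. The distributions here are assumed for illustration, not drawn from real device traffic: a 3-sigma detector fit on only a handful of baseline readings flags genuinely normal traffic far more often than the same detector fit on a rich baseline, because its estimates of "Normal" are themselves noisy.

```python
import random
import statistics

random.seed(42)

def false_positive_rate(baseline_size, trials=2000, z_threshold=3.0):
    # Draw "normal" traffic from a known distribution, fit the
    # normality model on a baseline of the given size, then count
    # how often a genuinely normal reading gets flagged.
    flagged = 0
    for _ in range(trials):
        baseline = [random.gauss(100, 10) for _ in range(baseline_size)]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        reading = random.gauss(100, 10)  # normal traffic, no attack
        if abs(reading - mean) > z_threshold * stdev:
            flagged += 1
    return flagged / trials

print(false_positive_rate(5))    # sparse baseline: noticeably inflated
print(false_positive_rate(200))  # rich baseline: near the ideal ~0.3%
```

In this simulation the sparse-baseline detector typically misfires more than ten times as often, and that is with clean, stationary data; real device traffic is noisier still.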
Profiles for Root Cause Analysis:
Just knowing that an unusual data pattern was observed at a node isn't enough; it quickly needs to be tied to an actual root cause before action can be taken. This can happen either manually, with analysts online, or in an automated manner. In either case, a clear understanding of the relationship between an anomaly (or symptom) and its potential cause(s) is important for a fast mitigative response. Typically, this information isn't available, and it would take a long time to build such profiles. Ever-changing attack models make these profiles difficult to create and maintain.
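In its simplest automated form, such a profile is just a lookup from anomaly signature to candidate causes. The signatures and causes below are hypothetical examples; the hard part the text describes is building and maintaining this table, not querying it.

```python
# Hypothetical anomaly-to-cause profiles for triage. Every entry here
# is an illustrative assumption; real profiles would be curated by
# security analysts and would need constant maintenance as attack
# models evolve.
PROFILES = {
    "outbound_traffic_spike": ["data exfiltration", "misconfigured backup job"],
    "repeated_auth_failures": ["credential brute force", "expired service account"],
    "unexpected_port_activity": ["lateral movement", "rogue update service"],
}

def candidate_causes(anomaly_signature):
    # Return possible root causes for an observed anomaly.
    # An empty list is the hard case described above: an anomaly
    # that matches no known profile and must be triaged from scratch.
    return PROFILES.get(anomaly_signature, [])

print(candidate_causes("outbound_traffic_spike"))
print(candidate_causes("never_seen_pattern"))   # -> []
```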
Different Attack Mechanisms:
It is easier for an AI system to recognize an attack it has seen before; the anomaly is readily detected and tied to a root cause based on prior experience. The efficacy of an AI system suffers when a new attack mechanism is used, because the data pattern observed at a node may or may not be identifiably different. With cyber-attacks, it isn't uncommon to see a different attack vector each time.
Research suggests that attackers spend increasingly more effort on R&D to discover new attack mechanisms. Malicious actors are growing more sophisticated in their attack models, which makes attacks harder to catch through anomaly detection. A focused attacker can keep detection difficult by simulating device behavior closely, thereby evading anomaly-based tools.
Organizational Fatigue from False Alarms:
Operational leaders know how important it is for their teams to trust an application if it is to be used. If an AI system generates many false alarms, an organization will quickly learn to ignore its alerts, significantly reducing the organization's ability to respond to a genuine cyber-attack.
Not Fully Preventive:
Monitoring a device or a node via an AI system enables a faster response once an attack is accurately detected. It does not, however, currently prevent an attack. Organizations must also take preventive steps, including a holistic approach to cybersecurity.
When considering AI for medical devices or any other Internet of Things (IoT) devices, one should weigh both the efficacy and the associated ROI. While evaluating efficacy, it helps to look at accuracy in an uncontrolled environment (i.e., not a test lab with pre-determined parameters).
There are, however, use cases where the value of AI-based monitoring for cybersecurity is immediate. For example, a legacy device with continuous data flow and known vulnerabilities would be an ideal candidate. When some of these capabilities are built into a device rather than applied at a network node, the accuracy of detecting attacks on that device would likely increase; the implementation cost would also drop, yielding a better ROI.
MediTechSafe has developed a proprietary solution to help hospitals manage risks related to their cybersecurity, medical devices, and clinical networks. If you are a healthcare provider (or a biomed services provider) interested in learning more about MediTechSafe's solution, you can reach us at firstname.lastname@example.org.