Filling the Cybersecurity Skills Gap with Artificial Intelligence



By Ryan Stolte

 

You may have read headlines about the cybersecurity skills shortage. The Global Information Security Workforce Study reveals that the shortage of cybersecurity professionals will hit 1.8 million by 2022. The report also notes that hiring is on the rise, with 70 percent of employers globally wanting to increase their cybersecurity staff. Healthcare, manufacturing, and retail hiring managers are particularly interested in expansion, with nearly 40 percent in each sector wishing to increase their workforce by 15 percent or more. However, finding seasoned cybersecurity professionals isn’t easy, and neither is finding less experienced “green” ones. New Enterprise Strategy Group (ESG) research shows only nine percent of millennials are interested in cybersecurity careers.

With highly sensitive patient data to protect, and more vectors through which to access that data than ever before, the shortage should be of utmost concern to healthcare organizations. Some may have a portfolio of cyber tools in place, but those tools most likely sit in silos, collecting vast amounts of disjointed data without the security analysts needed to piece the data together and see the full threat picture. Think of it like a Magic Eye picture. Each tool produces one pixel, but that pixel needs to be combined with the rest of the pixels for a human to understand the full picture, and the human needs to allot time to stare at the picture to figure out what it is really showing. In many healthcare organizations today, the tools are producing pixels, but the pixels never come together, and there are not enough humans to stare at them even if they did. As a result, critical threats are missed, and investigators waste time chasing false-positive or minimal-risk alerts.
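
To make the “pixels” idea concrete, here is a minimal sketch in Python of how alerts from siloed tools might be joined by user so the fragments form one picture. The tool names, field names, and data are invented for illustration; they do not describe any particular product.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical alerts from three siloed tools; every name and value here is invented.
alerts = [
    {"tool": "dlp",       "user": "ryan", "time": datetime(2018, 5, 1, 9, 15), "detail": "bulk record export"},
    {"tool": "ehr_audit", "user": "ryan", "time": datetime(2018, 5, 1, 9, 40), "detail": "VIP record access"},
    {"tool": "vpn",       "user": "dana", "time": datetime(2018, 5, 1, 2, 5),  "detail": "login from new location"},
]

def correlate_by_user(alerts, window=timedelta(hours=2)):
    """Group alerts by user and surface users flagged by more than one
    tool within the time window -- one simple way to join the 'pixels'."""
    by_user = defaultdict(list)
    for alert in alerts:
        by_user[alert["user"]].append(alert)

    correlated = {}
    for user, items in by_user.items():
        items.sort(key=lambda a: a["time"])
        tools = {a["tool"] for a in items}
        span = items[-1]["time"] - items[0]["time"]
        if len(tools) > 1 and span <= window:
            correlated[user] = items
    return correlated

for user, items in correlate_by_user(alerts).items():
    print(user, "->", [(a["tool"], a["detail"]) for a in items])
```

Real correlation logic is far richer than this toy version, but the point stands: no individual alert tells the story on its own.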

Fortunately, organizations can make human analysts more productive and effective through governance efforts and new innovations. Within enterprise organizations, healthcare included, there are Security Operations Centers (SOCs) where security analysts receive threat alerts from their cyber tools and decide whether a threat is suspicious and should be escalated to security investigators for mitigation. However, security analysts who are already spread thin do not have the time to search through and analyze that vast mountain of data. That’s where artificial intelligence (AI) comes in. Yes, AI is a buzzword; it has become so popular that it may come up in conversations well beyond cybersecurity. So, before I explain how AI applies to cybersecurity, let’s define it.

The purpose of AI in cybersecurity is to have a machine come to as accurate a conclusion as a human would if given the same input of information. AI simulates a human’s thought process. In today’s state of technology, the focus of AI in cybersecurity is force multiplication of human analysts. AI can complement security investigators in the SOC by serving as the security analyst. For example, an alert comes in showing that “Ryan,” who has access to patient records, has started paying significantly more attention to the records of celebrities. Without AI, a human analyst combing through all that log data would struggle to recognize whether the behavior is unusual. AI would see the alert, compare the behavior to Ryan’s own history, his peers’, and his overall team’s, and determine that the behavior is unusual and affects a high-value asset (patient records). AI would then escalate the alert to investigators. All of this would happen in a matter of minutes, rather than the much longer time it would take a human analyst to reach the same conclusion.
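
As a rough illustration of that comparison, the sketch below scores a week of celebrity-record accesses against Ryan’s own history and against his peer group. All of the numbers, field names, and thresholds are assumptions made up for the example, not values from any real system.

```python
import statistics

# Illustrative weekly counts of celebrity-record accesses; the data and the
# threshold below are invented for this example.
ryan_history = [0, 1, 0, 0, 1, 0]      # Ryan's own past weeks
peer_counts  = [0, 0, 1, 0, 2, 0, 1]   # this week's counts across his peer group
ryan_this_week = 14

def z_score(value, sample):
    """How many standard deviations 'value' sits above the sample's mean."""
    mean = statistics.mean(sample)
    stdev = statistics.pstdev(sample) or 1.0   # avoid division by zero
    return (value - mean) / stdev

# Compare the new behavior to Ryan's own baseline and to his peers.
self_score = z_score(ryan_this_week, ryan_history)
peer_score = z_score(ryan_this_week, peer_counts)

# Escalate only when the behavior is unusual on both axes and it
# touches a high-value asset (patient records).
HIGH_VALUE_ASSET = True
if HIGH_VALUE_ASSET and self_score > 3 and peer_score > 3:
    print(f"Escalate: self z={self_score:.1f}, peer z={peer_score:.1f}")
else:
    print("Deprioritize")
```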

AI also gets smarter as it analyzes more alerts. Part of the data the AI analyzes is the record of prior actions taken by human analysts in similar situations. Using the example above, if human investigators previously closed similar incidents as acceptable, the AI can use that history to automatically deprioritize the new occurrence. The next time a similar alert surfaces, the AI knows it is a minimal-risk event and does not send it to investigators. As it analyzes more events, the AI learns the indicators of real threats versus those that are false positives or pose minimal risk. AI can also make use of all the historical data that already exists in most organizations, allowing it to build baselines immediately and be productive from day one.
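
A minimal sketch of such a feedback loop might look like the following; the alert “profile,” the threshold, and the sample data are assumptions for illustration rather than a description of any specific product.

```python
from collections import Counter

def profile(alert):
    """Reduce an alert to the features used to judge 'similarity' (an assumption)."""
    return (alert["type"], alert["asset"], alert["role"])

class FeedbackTriage:
    def __init__(self, benign_threshold=3):
        # How often analysts have closed each alert profile as benign.
        self.dispositions = Counter()
        self.benign_threshold = benign_threshold

    def record_analyst_decision(self, alert, closed_as_benign):
        if closed_as_benign:
            self.dispositions[profile(alert)] += 1

    def triage(self, alert):
        """Deprioritize profiles analysts have repeatedly closed as benign."""
        if self.dispositions[profile(alert)] >= self.benign_threshold:
            return "deprioritize"
        return "escalate"

triage = FeedbackTriage()
past = {"type": "after_hours_access", "asset": "scheduling_db", "role": "nurse"}
for _ in range(3):
    triage.record_analyst_decision(past, closed_as_benign=True)

new_alert = {"type": "after_hours_access", "asset": "scheduling_db", "role": "nurse"}
print(triage.triage(new_alert))   # -> "deprioritize"
```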

It’s important to note that even with AI, we still need humans. Human investigators play a key role in deciding how to act on a threat. AI can cut through the noise and present investigators with the most critical threats to focus on, along with a body of evidence showing why. At that point, investigators must pass judgment on the events and decide how to mitigate them. Think of the human investigator as the “supervisor” of the AI. The AI only needs the supervisor to pass judgment on the really important matters.

Even if we had a plethora of security analysts to go around, they still would not be able to catch active threats quickly enough. There is only so much data a human can digest. If a human looked through thousands of lines of data in a spreadsheet, she might not be able to find a pattern, and even if she did, it would take many days to find it. An AI can search through all that data and come to a conclusion within minutes. That’s why, in healthcare and every other industry, AI is a key to detecting and stopping threats.

 

Ryan Stolte is co-founder and CTO at Bay Dynamics.

 

Editor’s Note: Privacy and Security is a competency that falls within the Information Governance Adoption Model (IGAM™). The IG program should address ways to meet the challenges presented by cybersecurity and consider new innovations such as AI to enhance security efforts. Integrating solid processes, enabling technologies, and new innovations into the privacy, security, and cybersecurity plan will help identify potential risks and areas of opportunity in a more proactive manner. IG will allow organizations to detect these risks faster and protect their sensitive information more expeditiously.
