This monthly blog highlights and discusses emerging trends and challenges related to healthcare data and its ever-changing life cycle.
By Nathan Patrick Taylor, MS, MPH, CHDA, CPHIMS
“The Robots are coming, the Robots are coming!” This modified version of Paul Revere’s warning call seems to be popping up more and more often over the last few months. Artificial intelligence (AI) is such a hot topic that it has seemingly supplanted data science as the new subject of conversation on everyone’s mind. I can already imagine the new headlines as AI’s workplace arrival grows nearer, akin to Thomas H. Davenport and D.J. Patil’s 2012 Harvard Business Review article “Data Scientist: The Sexiest Job of the 21st Century.” Davenport and Patil were certainly onto something; data science has only become more prevalent and relevant in the years since. So when it comes to AI, is it hype or is it reality?
Well, I’m a details kind of guy and, as with most things in life, the devil truly is in the details. To start, we need to make a distinction between machine learning and AI. The best presentation I’ve heard on the topic so far was given by Jeremy Achin last month (watch the video here, from timestamp 5:40 to 9:09). If you’re too busy to watch the video, the main takeaway for me was that AI is designed to automate and support human activities. In the realm of analytics, that means AI can help us become more efficient by performing some of the time-consuming and repetitive tasks in our work. I’ll add that AI should also help detect problems within our data and proactively identify areas of concern, rather than relying on manual alerts. Part of the idea behind this post was sparked by last month’s blog post by Kimberly, Angela, and Sandra. In that post, they mentioned how their previous dashboard development was a “labor-intensive process and was subject to errors and inconsistency.” For analytics to take the next leap forward, we need a little AI in our lives to optimize our work and reduce errors. That’s where ‘workflow automation’ comes in.
One way to think about workflow automation is to imagine a digital version of an assembly line used to create physical products. The workflow is consistent, repeatable, and efficient, and has few defects. These are the same traits we want in our digital processes. On the machine learning and predictive analytics side, similar automation tools exist that speed up the process of creating and deploying (a key feature) machine learning models. Developing a machine learning model with hand-written code can take several months before it is even deployed to users; automation can reduce that to just a few days. We are also facing a shortage of high-quality data analytics staff, and semi-automated tools can help boost productivity on teams where resources are limited. Several months ago, I attended a presentation by a large health network and was shocked to learn that their data team was only slightly larger than mine, even though their organization was about four times the size of the one I work for (measured in bed count and employees).
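To make the assembly-line analogy concrete, here is a minimal sketch of what a repeatable digital workflow might look like in Python. The step names, fields, and sample records are all hypothetical illustrations, not any particular vendor’s tool; the point is simply that each station does one small job, in the same order, every time.

```python
# A minimal sketch of an assembly-line-style data workflow: each step is a
# small, repeatable function, and the pipeline runs them in a fixed order.
# All names and sample data here are hypothetical illustrations.

def extract(raw_rows):
    """Step 1: pull the raw records (here, just an in-memory list)."""
    return list(raw_rows)

def validate(rows):
    """Step 2: drop defective records, e.g. those missing a patient ID."""
    return [r for r in rows if r.get("patient_id") is not None]

def transform(rows):
    """Step 3: normalize fields so downstream reports are consistent."""
    for r in rows:
        r["unit"] = r["unit"].strip().upper()
    return rows

def run_pipeline(raw_rows):
    """Run every station in order -- same sequence every time, few defects."""
    return transform(validate(extract(raw_rows)))

sample = [
    {"patient_id": 101, "unit": " icu "},
    {"patient_id": None, "unit": "med-surg"},  # defective record
    {"patient_id": 102, "unit": "ed"},
]
clean = run_pipeline(sample)
print(clean)
```

Because the sequence is fixed in code rather than repeated by hand, the output is the same on every run, which is exactly the consistency that manual dashboard-building struggles to deliver.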
So, where do the “Luddites” come into all this? I’ve noticed two groups that are most resistant to the wave of data automation. The first group is the traditional Luddites who are stuck in Excel purgatory and refuse to learn a new tool. The warning is clear: the robots are coming, and you had better learn to work with them. On the other side of the spectrum are the more sophisticated programmers and developers who are dead set against any tool that automatically creates reusable code. I fully understand that innovation comes from writing custom code, but that’s not my focus; I merely want to take routine data tasks and automate them. Another rebuttal to the custom-code argument is that most workflow automation tools let you write your own code when needed. Several of the tools I use can accept both Python and R code in the workflow, so if you want to develop an innovative task, you can. My belief about the programmers’ ‘rage’ against automation is that they fear analysts will no longer rely on IT. As more self-service tools become available, we won’t need full IT support, and I see that as another positive takeaway: fewer bottlenecks and faster insights. It’s time to look ahead to what these new technologies will bring and invest some resources in reviewing tools that provide semi-automated data workflows, aiming to increase efficiency and reduce errors.
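As an illustration of both points above, here is the kind of short, custom Python step one might embed in a workflow tool: an automated check that proactively flags days whose record volume deviates sharply from the recent average, instead of waiting for someone to notice a broken feed. The function, thresholds, and daily counts are my own hypothetical example, using only the standard library.

```python
# A hypothetical custom step of the kind a workflow tool could accept as
# embedded Python: it proactively flags days whose record volume deviates
# sharply from the recent norm, instead of relying on manual alerts.
from statistics import mean, stdev

def flag_anomalies(daily_counts, threshold=3.0):
    """Return the days whose count sits more than `threshold` sample
    standard deviations from the mean -- a simple automated quality check."""
    values = list(daily_counts.values())
    avg = mean(values)
    sd = stdev(values)
    if sd == 0:
        return []  # perfectly flat volume: nothing to flag
    return [day for day, n in daily_counts.items()
            if abs(n - avg) / sd > threshold]

# Hypothetical daily admission counts; the collapse on 2019-01-05 suggests
# a broken extract or missing feed, not a real census change.
counts = {
    "2019-01-01": 480, "2019-01-02": 505, "2019-01-03": 492,
    "2019-01-04": 510, "2019-01-05": 12,  "2019-01-06": 498,
}
print(flag_anomalies(counts, threshold=2.0))
```

A check like this runs on every refresh at zero marginal cost, which is precisely the routine, repetitive task worth handing to the robots.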
Nathan Patrick Taylor (TaylorNathan@msn.com) is director of data science and analytics at Symphony Post Acute Network.