HI Professionals Should Explore Differences and Connections between Human and Artificial Intelligence
In The Atomic Human: Understanding Ourselves in the Age of AI, author Neil David Lawrence asserts that humans need to understand how their own intelligence is different from artificial intelligence to make the most of both.
The Journal of AHIMA spoke with Lawrence, who serves as the DeepMind Professor of Machine Learning in the Department of Computer Science and Technology at the University of Cambridge, a senior fellow at the Alan Turing Institute, and a visiting professor at the University of Sheffield. In this transcribed interview, Lawrence shares his belief that this deeper understanding of intelligence is particularly needed in the healthcare industry.
Question: How is human intelligence different from artificial intelligence?
Answer: “Comparing modes of intelligence is a difficult thing to do because intelligence is a word like beauty that isn’t defined from first principles. But comparing ‘information flows’ is much easier. In terms of communication rates, humans can share information with each other at around 2,000 bits per minute. A bit is the equivalent of a coin toss, or the outcome of any 50/50 event. Machines, on the other hand, communicate at around 600 billion bits per minute. That’s 300 million times faster. This is the equivalent of the difference between walking pace and light speed. The machine’s ‘intelligence’ is based on access to more information than we can possibly imagine.”
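The 300-million figure follows directly from the two rates Lawrence quotes. A minimal sketch in Python (illustrative only, not part of the interview; the rates are the round numbers above) reproduces the arithmetic:

    # Round communication rates quoted by Lawrence (bits per minute).
    human_rate = 2_000                  # human-to-human conversation
    machine_rate = 600_000_000_000      # machine-to-machine transfer

    # 600 billion divided by 2,000 is 300 million.
    ratio = machine_rate / human_rate
    print(f"Machines communicate roughly {ratio:,.0f} times faster.")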
Q: What specifically do healthcare professionals need to understand about their own intelligence to fully utilize artificial intelligence?
A: “Despite the advantages of the machine, as humans, we understand our own context better. And our intelligence evolved to accommodate other humans. We also have ‘skin in the game.’ For example, clinicians are humans themselves and share vulnerabilities with their patients. In addition, clinicians and health information (HI) professionals are part of wider institutions: sets of individuals who hold them to standards. This gives them a perspective on the patient that the machine, despite its vast access to data, can never have. So the challenge is to balance the information coming from the machine with that coming from the human. This is an evolving area. It’s hard for healthcare professionals to correctly weigh information coming from the machine until they’ve had some experience working with it.
“The challenges we face are akin to those we face with GPS [global positioning system] for navigation. When we use GPS in a landscape we’re familiar with, it operates as an augmentation. We can choose to ignore it when it makes obviously faulty suggestions. But if we use GPS in a landscape we’re unfamiliar with, then we become dependent on it. There are also challenges around how we learn such landscapes if we are dependent on GPS to navigate.”
Q: What would you say to healthcare professionals who have concerns about AI?
A: “It’s less about easing concerns and more about ensuring that the concerns are correctly calibrated and focused. The quality of the national and international debate in this space has been extremely poor over the last couple of years.
“One big challenge I perceive is the gap in the innovation economy between the solutions we need (in health, education, civil administration, etc.) and the ones the tech industry provides us. This gap is large and widening. While I’m not concerned about technical ‘existential threats’ from this technology or the elimination of human labor, I am worried about disruption to our existing systems, our sets of institutions, and our ways of working. Arguably, this presents a socio-technical existential threat. To address it, we need clinicians and HI professionals (as well as teachers, civil administrators, lawyers, accountants, and regular citizens) to be more closely engaged in steering the technology. As I recently described in a Financial Times op-ed and at the Bennett Public Policy Lecture, the solutions we get from companies are rarely properly adapted to the challenges we face. I see this as a severe problem.”
Q: Part of AI’s potential lies in its ability to take care of lower-level cognitive tasks, freeing up doctors, health information management professionals, and others to concentrate on higher-level tasks. While this promises a great efficiency benefit, are there any drawbacks? If so, can you describe the negative impact?
A: “Yes, the low-level tasks help us understand the intellectual landscape we’re working in. For example, a doctor’s notetaking during a patient encounter could help when it comes time to make a clinical decision. An HI professional’s manual review of clinical data could enable more advanced analysis of evidence-based practices. That’s why the emphasis has to be on better understanding our intelligence and how the many professionals in our society are applying it. These are transformative technologies that can bring great benefit, but without careful deployment, we run the risk of the intellectual equivalent of driving into a harbor because the GPS said to do so.”
Q: Is it more difficult to conduct high-level tasks such as clinical decision-making without having done the low-level documentation/notetaking?
A: “Yes, this makes total sense to me, and it is why we need to work closely with professionals on how these technologies are deployed. I have a philosophy for how to do this that I refer to as the ‘attention reinvestment cycle.’
“The attention reinvestment cycle is a radical rethinking of how humans harness technological progress. Drawing on lessons from Data Science Africa (DSA), the Accelerate Programme for Scientific Discovery, the Data Trusts Initiative, and ai@cam, this model proposes a virtuous cycle in which time saved through automation is deliberately reinvested in solving pressing challenges. Four key principles emerge from these initiatives:
• Agility in institutional infrastructure
• Sharing credit
• Scaling human capital through ‘see one, teach one, organize one’
• Bidirectional learning between technologists and domain experts
“We need all citizens to be active participants in this technology; otherwise, we’ll fall into a form of autocracy (what I call the ‘digital oligarchy’ in my book). So far, digital technologies have undermined the human decision-making side of our professionals, but it’s that essence of humanity that makes them professionals.”
John McCormack is a Riverside, IL-based freelance writer who specializes in healthcare IT, clinical, and policy issues.