We are not ready for AI

Artificial intelligence (AI) is transforming healthcare at an astounding pace. Vantage Market Research, an American firm specializing in emerging markets, estimates the global AI market in health will climb from US$6.6 billion in 2021 to US$95.7 billion by 2028 – a remarkable 46.1% compound annual growth rate. What does this mean for the healthcare consumer?

Henry Ford advised, “Before everything else, getting ready is the secret of success.” He lived in a different time, when his assembly lines operated in a simple sequence, one workstation after another. Today, getting ready for anything doesn’t seem to be an option. In healthcare, the pace at which AI technologies are reshaping the sector is both exciting and inscrutable.

On the bright side, diagnostics are already undeniably improved. As pathology adopts AI tools, earlier detection of cancer is possible. Medical errors in diagnosis are likely to decline.

AI is changing the cost structure of new drug development. Biopharmaceutical companies can more efficiently identify promising drug candidates, reducing the costly and time-consuming clinical trials that don’t lead to marketable drugs – currently about 90% of drugs in development.

Robotic-assisted surgery is another area of profoundly improved outcomes for patients. According to the Mayo Clinic, “robots help doctors perform complex procedures with a precision, flexibility and control that goes beyond human capabilities.”

Some companies have developed the potential to create your digital twin. Lifestyle changes and adjustments to medications can be modeled to help forecast the effect on chronic disease conditions like type 2 diabetes. It will be interesting to learn whether meeting one’s future self will motivate people to lose weight and eliminate other risk factors for lifestyle illnesses.

Graphic pictures of diseased lungs and toothless gums on cigarette packages are effective in reducing tobacco usage. Maybe an image of one’s own fatty liver or amputated leg will have the same effect in helping people choose healthier food.

On the darker side, the explosion of AI technologies carries risks that have yet to be well studied or managed.

One concern is the introduction of systemic bias into decision making. For example, when AI models use data limited by ethnicity or gender, computers and doctors alike may arrive at substandard results for some patients.

Another concern is personal privacy. The irony is that protecting privacy stifles AI, which depends on vast amounts of personal data to learn. But the real challenge is that society has not yet figured out how to protect privacy in a world where cameras capture images everywhere. Most individuals have no idea where, when, and how to offer or withdraw consent for the collection of their private data.

Legal experts acknowledge that regulation of AI lags behind. The slow pace of lawmaking is problematic, but it can’t bear all the blame. Instead, innovators, businesses, governments, and consumers need to think through their own responsibilities, seeking to understand risks, identify ethical questions, and invite discourse on social or moral consequences.

Data scientists need closer scrutiny. Their “black box” algorithms offer little transparency. How many are trained in, or take even a small interest in, the implications of their work? Do buyers of their services do any better?

Imagine the company that adopts technologies to compare individual employees’ mental health data with large population datasets, then uses machine learning to match people with specialists or make health appointments for them. Care to talk with a chatbot about how you are feeling?

Maybe welcome? Maybe ill-advised? Like it or not, these new AI services and products are in the marketplace and probably already part of your healthcare.

Asking whether you are ready is moot.

Sign up at www.docgiff.com to receive our weekly e-newsletter. For comments, contact-us@docgiff.com. Follow us on Instagram @docgiff and @diana_gifford_jones
