A recent study published in The Lancet Gastroenterology & Hepatology found that physicians who used AI to help detect cancers became about 20% worse at doing that work on their own, without AI assistance, after just three months of use, compared to their skill levels before AI support. It appears that they may have gotten “lazier and worse at their jobs as a result of the technology’s assistance.”
Granted, this was a study of only 19 physicians from four endoscopy practices, but these were also experienced doctors who had each performed at least 2,000 endoscopies before the study. Imagine if they had been newer practitioners, with their knowledge less deeply embedded.
It shouldn’t be surprising. After all, how many cell phone numbers of friends and family members can you remember? How quickly can you do long division or any other calculation you’ve long relied on a calculator (i.e., your phone) to do? How comfortable are you getting around without GPS? But is this trend toward relying on AI helping us or harming us?
AI Can Be Helpful

More than 2.3 million people are diagnosed with breast cancer each year, according to the World Health Organization (WHO), and we know that early detection saves lives. So when two studies recently showed that AI could perform similarly to highly trained radiologists, that is genuinely helpful, particularly where access to skilled professionals is limited or workloads need to be reduced.
Radiology is one of the areas where AI has been most widely implemented and tested, but it is also being tried in drug design and development, and even in patient consultations and assessments.
AI can help healthcare professionals see more patients, as the AI helps screen, chart, flag issues, and create treatment plans. Some of you may even have seen a healthcare professional who is using AI to record your consultation (note that they NEED to ask for your consent to do this first!) and write your chart notes. All of this can be helpful in improving efficiencies, and who doesn’t want to be more efficient?
AI Can Be Harmful

I’m not an early adopter. I used MapQuest printouts, paper agenda books, and paper files for longer than many of my friends, though I’m more than happy to have GPS, my phone calendar, and the Jane app now (SOOOOO much better!). Still, I’m cautious, so I have to say that I land more on the AI-can-be-harmful side than the helpful side, at least for now.
As mentioned at the start of this blog, AI can make us lazy. Well, okay, it doesn’t “make” us lazy. We all have some tendency toward laziness (I think), so if there’s something we don’t really like to do and someone or something else can do it, it’s nice to rely on that. What happens when we lose our skills? Use it or lose it, we say. I never had a sense of direction, so I never lost any ability to get from one place to another on my own once I started using my phone for directions. But I’m less thrilled with the idea that my doctor or other healthcare provider might be relying on a medical equivalent of ChatGPT (or, even worse, on ChatGPT) to figure out my treatments.
As it is, too many of my patients tell me that their doctors say there’s nothing to be done when they feel unwell because all the tests came back “normal.” Did those doctors use their diagnostic skills? Ask questions? Dig deeper?
On top of that, there is what’s widely known as AI “hallucination”: AI making things up. Random things. That can be a big deal when it comes to patient charting. If it charts completely random words like “unicorn hair is magic,” then anyone reading that patient’s chart can easily chalk it up to weird AI. But it sometimes makes up things that are plausible, like a high blood pressure reading, or content that “can include racial commentary, violent rhetoric and even imagined medical treatments.” Now that’s terrifying. Healthcare providers are responsible for reading what AI writes and editing it thoroughly, but what happens when they don’t because they’re too busy or too lazy, or they simply miss an error?
Like many relationships, it’s complicated. Studies have shown that AI improves some clinicians’ performance but worsens it for others, and we don’t yet know the what, why, or how.
Traditional and Modern
I’m not a traditionalist. I may practice a form of medicine with a very long history that uses the word “traditional” in its name, but it’s also a modern medicine. It has changed, grown, and evolved over the years. For that, we should all be thankful, as sharpened stones are much more painful and less sanitary than the sterile surgical stainless steel acupuncture needles we have today. I also prefer silicone cups (safer) over glass fire cupping (though that’s visually more spectacular), and especially over the bamboo cups (impossible to clean well) we used when I studied in China.
However, I’m not yet ready to use the new AI tools that are out there for TCM diagnosis. How is AI going to truly do a TCM assessment? In TCM, we have the “four pillars” of gathering information for a diagnosis:

1. Look (at their eyes, their complexion, their joints, their hair and nails, their demeanor and energy, etc.);
2. Listen (to their voice, cough, breath, etc.) and smell (harder to do now, with so many things we use to mask scent, but it can still be useful sometimes);
3. Feel (for temperature, texture, bumps, muscles and joints, etc.); and
4. Ask (this is the biggest one most of us use, and it’s important to ask the right questions, but it’s even more important to listen).

I worry that people will lose these skills. Maybe AI will help some practitioners fine-tune and strengthen their skills; others may never fully learn them, or may lose them.
I think we should keep what works, try to keep our brains and skills sharp (like an acupuncture needle!), and trust that there are some things we living beings can still do better than the technology we’ve created. That may change in the future (I sure hope not), but for now, we can use our intuition, emotions, and living Qi to best connect with, understand, share with, and heal others.
What do you think?