Last updated: 30.12.2021
The use of artificial intelligence is rapidly growing in medical practice and research. The technology holds enormous promise, but its drawbacks should not be dismissed.
Table of Contents:
- Your medical care may already be fueled by AI
- The promise of AI in medicine
- The challenges of using AI in healthcare
Reading time: 2 minutes.
You may not realize it, but your doctor could already be using AI in their daily practice. Physicians use the technology, for instance, to dictate patient reports into an automated transcription program. This saves them time that they can instead spend focusing on the patient, whether through a more in-depth conversation or a closer examination. Medical providers also use AI to analyze large amounts of patient data, to help with administration, and to fix inefficiencies in patient care. During the Covid-19 pandemic, researchers designed an algorithm to predict patients’ oxygen needs.
Researchers are studying how AI can help diagnose all kinds of conditions, including cancer. A computer, trained on massive numbers of images of diseased cells, learns to identify them. A recent study from New York University showed that AI combined with the trained eye of a specialist is better at recognizing breast cancer than either method on its own.
In general, experts say that AI will be particularly transformative in the field of radiology.
Device makers are incorporating AI to make their diagnostic imaging better — and faster. The technology powers the construction of 3-D images, which can help pinpoint the exact spot that needs to be biopsied in a cancer patient, or provide a precise model of a face for a patient who needs facial reconstructive surgery.
There are still significant challenges to the adoption of AI in medicine. The technology is far from perfect, riddled with biases and fed bad or inconsistent data. It needs to be rigorously tested and examined, but that kind of scrutiny will be difficult. Governments don’t yet have blueprints for regulating algorithms, which are notoriously difficult to understand and are closely guarded by the companies that create them.