Hardly a day passes without a report of some new, startling application of artificial intelligence. Two recent articles in the journal Nature described its application to weather forecasting, which is currently difficult and time-consuming because meteorologists individually analyze weather variables such as temperature, precipitation, pressure, wind, humidity, and cloudiness. However, new AI applications can significantly speed up the process.
The first article describes how a new AI model, Pangu-Weather, can predict worldwide weekly weather patterns much more rapidly than traditional forecasting methods, but with comparable accuracy. The second demonstrates how a deep-learning algorithm was able to predict extreme rainfall more accurately and more quickly than other methods.
A July 5 article in Technology Review magazine offered some examples of how AI is advancing various scientific disciplines:
Scientists at McMaster and MIT, for example, used an AI model to identify an antibiotic to combat a pathogen that the World Health Organization labeled one of the world’s most dangerous antibiotic-resistant bacteria for hospital patients. A Google DeepMind model can control plasma in nuclear fusion reactions, bringing us closer to a clean-energy revolution. Within health care, the U.S. Food and Drug Administration has already cleared 523 devices that use AI — 75% of them for use in radiology.
Less momentous, but fascinating, was a recent article by Ethan Mollick, a professor at the Wharton School at the University of Pennsylvania, which described the ability of the newest, “multimodal” version of chatbot GPT-4 to “see,” “hear,” and “understand” what is being presented to it. In an experiment in which the chatbot is asked to design a new, trendy women’s shoe, it offers several possible alternatives and then, when prompted, serially and skillfully refines the design.
The chatbot can’t (yet) do everything perfectly, however. I love this passage from Mollick’s article: “I also gave it the challenge of coming up with creative ideas for foods in my fridge based on an original photo (it identified the items correctly, though the creative recipe suggestions were mildly horrifying).”
AI is also being applied to military intelligence and strategy. As early as fall 2021, when experts were still undecided about Russian President Vladimir Putin’s intentions toward Ukraine, AI accurately predicted the invasion. As described in an article in Foreign Policy, analysts used AI to aggregate small but significant pieces of data that together made possible an accurate prediction. The details included such things as the observation that weapons systems that had been moved to the border regions in 2021 for what Russia claimed were military drills remained there, as if pre-positioned for future forward advances. Even Russian officers’ spending patterns were factored in: Their purchasing goods “at local businesses made it obvious they weren’t planning on returning to barracks, let alone home, anytime soon.”
Scripps Research cardiologist Eric Topol recently reviewed several promising medical applications of AI, from the ability to make never-before-possible diagnoses from chest X-rays to replacing human scribes who summarize patients’ office visits. The influential Mayo Clinic, the largest integrated, nonprofit medical practice in the world, has created more than 160 AI algorithms in cardiology, neurology, radiology and other specialties, 40 of which are already being employed in patient care.
I’ve become an AI fan, especially after my own interaction with it last month. Let me explain …
Like most physicians, I’m a great believer in preventive medicine. Along with being strongly pro-vaccine and a believer in regular mammograms and monitoring of blood pressure and certain blood tests, I also endorse colonoscopies to detect early cancers in the colon or rectum. Recently, I came face to face (so to speak) with AI during my own colonoscopy.
And yes, I know that colonoscopies are a hard sell, because the procedure, or, to be more accurate, the required preparation for it – the “cleanout” – isn’t pleasant. However, it’s important; it could save your life.
Colorectal cancer is the third most common cancer in the United States, according to the NIH. It is insidious, because it can progress for a long time before it becomes symptomatic, by which point it is harder to treat. Colorectal cancer usually starts from polyps or other precancerous growths in the rectum or the colon. As part of screening, clinicians perform colonoscopies to detect changes or abnormalities in the lining of the colon and rectum. A colonoscopy involves threading an endoscope – a thin, flexible tube with a camera at the end – through the rectum and throughout the entire length of the colon, allowing the doctor to see signs of cancer or precancerous lesions. (The patient is anesthetized, so it’s not painful, or even uncomfortable.)
Although colorectal cancer now mostly occurs in people over age 50, incidence rates are rising among young adults. Incidence and death rates in that group are projected to double by 2030. By then, it is estimated that more than one in 10 colon cancers will be diagnosed in people younger than 50. Colonoscopy screenings, which should begin at age 45, may reduce colorectal cancer mortality by 60-70%.
My recent routine screening colonoscopy was notable in two ways. First, the gastroenterologist – the specialist who does the procedure – prescribed a new cleanout regimen, called SUTAB, which consists in part of a large number of tablets, instead of the old one, which required drinking vast amounts of a disgusting liquid. It was still no picnic but was somewhat more palatable, literally and figuratively. The less said about that, the better …
The second notable aspect of the experience was something I learned from chatting with the gastroenterologist. While we were discussing the new frontier of artificial intelligence’s contributions to medicine, he mentioned that he and his colleagues had begun to use a new AI tool called “GI Genius” to assist with colonoscopies – specifically to help detect abnormalities, such as polyps or adenomas (precancerous lesions), in the colon.
Here’s how it works, according to the FDA:
The GI Genius is composed of hardware and software designed to highlight portions of the colon where the device detects a potential lesion. The software uses artificial intelligence algorithms to identify regions of interest. During a colonoscopy, when the GI Genius system identifies a potential lesion, it generates markers – green squares accompanied by a short, low-volume sound – and superimposes them on the video from the endoscope camera. These signs alert the clinician that further assessment may be needed, such as a closer visual inspection, tissue sampling, testing or removal, or ablation (burning) of the lesion.
The FDA’s Center for Devices and Radiological Health authorized it in April 2021 on the basis of a multicenter, prospective, randomized, controlled study in Italy with 700 subjects 40-80 years old who were undergoing a colonoscopy for colorectal cancer screening. In the study, colonoscopy plus GI Genius was able to identify lab-confirmed adenomas or carcinomas in 55.1% of patients compared to 42.0% of patients with standard colonoscopy, an improvement of 13 percentage points.
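For readers who like to check the arithmetic, the gap between the two detection rates can be expressed two ways: as an absolute difference in percentage points, or as a relative improvement over the standard-colonoscopy rate. A minimal sketch, using only the 55.1% and 42.0% figures from the study:

```python
# Detection rates from the GI Genius study described above
ai_rate = 55.1        # colonoscopy plus GI Genius (% of patients)
standard_rate = 42.0  # standard colonoscopy (% of patients)

# Absolute difference, expressed in percentage points
absolute_pp = ai_rate - standard_rate

# Relative improvement over the standard rate
relative_pct = absolute_pp / standard_rate * 100

print(f"Absolute improvement: {absolute_pp:.1f} percentage points")
print(f"Relative improvement: {relative_pct:.1f}%")
```

The absolute gap is about 13 percentage points, while the relative improvement over standard colonoscopy is roughly 31% – a distinction worth keeping in mind when reading headline figures about AI-assisted screening.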
In subsequent clinical studies, the module showed sensitivity of 99.7% with fewer than 1% false positives. My doc said that he occasionally found polyps that GI Genius missed, and vice versa, but that the module was getting smarter and more accurate as more examples of colonoscopies were being fed into its database.
Welcome to the Brave New World of AI.
Henry I. Miller, a physician and molecular biologist, is the Glenn Swogger Distinguished Fellow at the American Council on Science and Health. He was the founding director of the FDA’s Office of Biotechnology.