Artificial intelligence, or AI, is everywhere—from virtual assistants to facial recognition software. Studies suggest that AI can perform as well as or better than humans at key healthcare tasks, such as diagnosing disease. So what exactly is AI? Learn about AI and how it can help doctors diagnose and treat medical conditions, deliver care, and more.
Artificial Intelligence at the NIH
Image by NIH
Artificial Intelligence
Artificial Intelligence, Brain, Think
Image by geralt/Pixabay
Artificial Intelligence (AI)
What are the types of AI and how do they differ?
Artificial Intelligence: The capability of machines to learn to perform tasks, rather than simply carrying out computations that are input by human users.
Early applications of AI included machines that could play games such as checkers and chess, and programs that could reproduce language.
Machine Learning: An approach to AI in which a computer algorithm (a set of rules and procedures) is developed to analyze and make predictions from data that is fed into the system.
Machine learning-based technologies are routinely used every day, such as personalized news feeds and traffic prediction maps.
Neural Networks: A machine learning approach modeled after the brain in which algorithms process signals via interconnected nodes called artificial neurons.
Mimicking biological nervous systems, artificial neural networks have been used successfully to recognize and predict patterns of neural signals involved in brain function.
Deep Learning: A form of machine learning that uses many layers of computation to form what is described as a deep neural network, capable of learning from large amounts of complex, unstructured data.
Deep neural networks are responsible for voice-controlled virtual assistants as well as self-driving vehicles, which learn to recognize traffic signs.
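To make these terms more concrete, here is a minimal Python sketch (not taken from any NIH project; the data are synthetic and the settings are arbitrary). It trains a small neural network with two hidden layers, a miniature version of the layered, deep models described above, using the widely available scikit-learn library.

```python
# Minimal illustrative sketch only (synthetic data, arbitrary settings).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data: 200 samples, 10 numeric features, 2 classes.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Deep learning" in miniature: a neural network with two hidden layers
# of artificial neurons that learns from the training data.
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)                 # machine learning: learn from examples
print("accuracy on unseen data:", model.score(X_test, y_test))
```

Real medical AI systems follow the same train-then-evaluate pattern in principle, just with far larger datasets, many more layers, and extensive validation.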
How is AI being used to improve medical care and biomedical research?
Radiology: The ability of AI to interpret imaging results may aid in detecting minute changes in an image that a clinician might otherwise miss.
Imaging: One example is the use of AI to evaluate how an individual will look after facial and cleft palate surgery.
Telehealth: Wearable devices allow for constant monitoring of a patient and the detection of physiological changes that may provide early warning signs of an event such as an asthma attack.
Clinical care: A large focus of AI in the healthcare sector is in clinical decision support systems, which use health observations and case knowledge to assist with treatment decisions.
How are NIBIB-funded researchers using AI in their biomedical research?
Early diagnosis of Alzheimer’s disease (AD) using analysis of brain networks
AD-related neurological degeneration begins long before the appearance of clinical symptoms. Information provided by functional MRI (fMRI) neuroimaging data, which can detect changes in brain tissue during the early phases of AD, holds potential for early detection and treatment. The researchers are combining the ability of fMRI to detect subtle brain changes with the ability of machine learning to analyze multiple brain changes over time. This approach aims to improve early detection of AD, as well as other neurological disorders including schizophrenia, autism, and multiple sclerosis.
Prediction of blood glucose levels using wearable sensors
NIBIB-funded researchers are building machine learning models to better manage blood glucose levels by using data obtained from wearable sensors. New portable sensing technologies provide continuous measurements that include heart rate, skin conductance, temperature, and body movements. The data will be used to train an artificial intelligence network to help predict changes in blood glucose levels before they occur. Anticipating and preventing blood glucose control problems will enhance patient safety and reduce costly complications.
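As a rough illustration of the general approach (a hedged sketch only: the feature names, values, and model below are hypothetical stand-ins, not the NIBIB team's actual data or algorithm), a regression model in Python could map wearable-sensor readings to a future glucose value:

```python
# Hypothetical sketch: made-up sensor features and a synthetic glucose target.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
heart_rate       = rng.normal(75, 10, n)     # beats per minute (synthetic)
skin_conductance = rng.normal(5, 1, n)       # microsiemens (synthetic)
temperature      = rng.normal(36.6, 0.3, n)  # degrees Celsius (synthetic)
movement         = rng.normal(1.0, 0.5, n)   # arbitrary activity units (synthetic)
X = np.column_stack([heart_rate, skin_conductance, temperature, movement])

# Synthetic "future glucose" values (mg/dL), loosely tied to the features
# purely so the example has something to learn.
y = 110 + 0.3 * heart_rate - 2.0 * movement + rng.normal(0, 8, n)

model = RandomForestRegressor(random_state=0).fit(X[:400], y[:400])
print("predicted glucose (mg/dL):", model.predict(X[400:405]).round(1))
```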
Enhanced image analysis for improved colorectal cancer screening
This project aims to develop an advanced image scanning system with high detection sensitivity and specificity for colon cancers. The researchers will develop deep neural networks that can analyze a wider field on the radiographic images obtained during surgery. The wider scans will include the suspected lesion areas and more surrounding tissue. The neural networks will compare patient images with images of past diagnosed cases. The system is expected to outperform current computer-aided systems in the diagnosis of colorectal lesions. Broad adoption could advance the prevention and early diagnosis of cancer.
Smart clothing to reduce low back pain
Smart, cyber-physically assistive clothing (CPAC) is being developed in an effort to reduce the high prevalence of low back pain. Forces on back muscles and discs that occur during daily tasks are major risk factors for back pain and injury. The researchers are gathering a public data set of more than 500 movements measured from each subject to inform a machine learning algorithm. The information will be used to develop assistive clothing that can detect unsafe conditions and intervene to protect low back health. The long-term vision is to create smart clothing that can monitor lumbar loading; train safe movement patterns; directly assist wearers to reduce incidence of low back pain; and reduce costs related to health care expenses and missed work.
Source: National Institute of Biomedical Imaging and Bioengineering (NIBIB)
Additional Materials (19)
Artificial intelligence
Androrithms = humanistic
Image by Gerd Leonhard/Wikimedia
Automated Worm Sorter - 4
Georgia Tech associate professor Hang Lu holds a microfluidic chip that is part of a system that uses artificial intelligence and cutting-edge image processing to automatically examine large numbers of nematodes used for genetic research.
Image by Georgia Tech/Gary Meek
Artificial Intelligence & AI & Machine Learning
Image via www.vpnsrus.com
Image by mikemacmarketing/Wikimedia
AI Could Transform Eye Care Screening, but Are the Tools up to the Task?
Diabetic retinopathy is the leading cause of vision loss among people with diabetes and the leading cause of blindness among working-age adults in the US. It results from diabetes-driven changes in the blood vessels of the retina, the light-sensitive lining at the back of the eye that translates light into images.
Swelling, bleeding, or fluid leakage can occur, which can scar and damage the retina, resulting in impaired vision.
Early screening is vital: it can mean the difference between preserving sight and suffering vision loss or even blindness.
Image by StoryMD
Using artificial intelligence to diagnose diabetic retinopathy before it causes blindness
Video by University of Iowa Health Care/YouTube
The 7 Types of AI - And Why We Talk (Mostly) About 3 of Them
Video by IBM Technology/YouTube
What is Machine Learning?
Video by IBM Technology/YouTube
The 5 Types of AI
Video by AI Uncovered/YouTube
AI vs Machine Learning
Video by IBM Technology/YouTube
How does artificial intelligence learn? - Briana Brownell
Video by TED-Ed/YouTube
Using artificial intelligence to read kidney biopsies: Mayo Clinic Radio
Video by Mayo Clinic/YouTube
Artificial Intelligence, 'Virtual Biopsies' May Be The Future Of Understanding Brain Tumors
Video by CBS Boston/YouTube
The Future of Artificial Intelligence and Its Impact on Anesthesiology
Video by International Anesthesia Research Society/YouTube
Inventor Builds Robotic Stuffed Animals for Kids with Cancer | Extraordinary People | New York Post
Video by New York Post/YouTube
A.I. Reveals Autism-Linked Changes in "Junk" DNA | SciShow News
Video by SciShow/YouTube
Artificial Intelligence In 5 Minutes | What Is Artificial Intelligence? | AI Explained | Simplilearn
Video by Simplilearn/YouTube
Machine Learning: Living in the Age of AI | A WIRED Film
Video by WIRED/YouTube
The danger of AI is weirder than you think | Janelle Shane
Video by TED/YouTube
What Is Artificial Intelligence? Crash Course AI #1
Video by CrashCourse/YouTube
AI for Health
Spectrum of Medical Imaging
This composite image was created in recognition of The Society of Photo-Optical Instrumentation Engineers (SPIE) designation of 2015 as the “International Year of Light.” Medical imaging scientists and engineers contribute to the quest for the improved human condition by providing imaging that spans the entire usable electro-magnetic spectrum (seen at bottom). From the humblest of beginnings with simple magnification of objects and focusing with lenses, to the biological observation of cells with rudimentary microscopes in the visible spectrum, medical imaging today has both extended away from the visible spectrum towards longer and shorter wavelengths (low and high energies) and exploited the hyperspectral nature of the visible which was once considered “commonplace” or rudimentary. Considering yet a different cross-section through the various imaging modalities across the spectrum, the medically useful information gained spans anatomic, physiologic, and molecular regimes. Systems and algorithms have been developed as single-modality acquisition systems, and also as multiple-modalities with conjoint reconstruction, each informing the other in the quest to optimize image quality and information content. In addition, the role of light-based technologies in enabling advanced computations in tomographic reconstructions, computer-aided diagnosis, machine learning (neural networks, deep learning, etc.) , and 3D visualizations as well as in data-enriched storage of medical images has further extended the potential knowledge “seen” through imaging. The breadth of the electromagnetic spectrum offers limitless opportunities for improvement and inquiry, in our quest to answer human life’s difficult questions. This montage was first published on the cover of the SPIE’s Journal of Medical Imaging (July-Sept. 2015, vol. 3, no. 3) and is but a small tribute to the multitude of medical imaging scientists who have contributed to the wealth of new knowledge as we interrogate the human condition. The gray-scale whole body images (from Left to Right) are acquired from longer wavelength modalities to shorter wavelength modalities spanning the (currently) usable electromagnetic spectrum. The background images include: a coronal Magnetic Resonance Image (MRI) T2 weighted image slice, a posterior surface Infra Red (IR) heat map, a coronal non-contrast Computed Tomography (CT) slice, and a coronal 18F-fluoro-deoxyglucose Positron Emission Tomography (FDG-PET) slice. 
The selection of color inset images includes (from Top-to-Bottom, then Left to Right): [1] diffusion tensor magnetic resonance imaging (MRI) used to image long white matter tracts within and around the human brain illustrating neuronal connectivity (coloring refers to bluish=vertical direction; green=horizontal A-P direction; red=horizontal lateral direction) (courtesy of Arthur Toga, PhD); [2] registered and superimposed coronal proton-MRI (gray-scale) and a hyper polarized 129Xe ventilation MRI scan (color scale) within the same patient’s lungs, prior to their receiving a bronchial stent (courtesy of Bastiaan Driehuys, PhD); [3] coronal slice through a microwave-based image reconstruction of the Debye parameter ε (epsilon) in a heterogeneously dense human breast phantom (courtesy of Susan Hagness, PhD); [4] juxtaposed optical image of the retina and vertical and horizontal Optical Coherence Tomograph (OCT) slices showing pigmentosa retinopathy (courtesy of Jean-Michel Muratet, MD); [5] functional 3-D photoacoustic imaging of melanoma (gray scale) in vivo, surrounded by highly optically absorptive blood vessels (red) (courtesy of Lihong Wang, PhD); [6] optical micrograph of HeLa cells stained for microtubules (blue) and co-stained with DAPI (red); HeLa cells are the world's first immortalized human cell line, originally derived from the aggressive cervical cancer cells of Henrietta Lacks (courtesy Tom Deerinck/NCMIR); [7] visualization of the pelvis highlighting the colon, which is imaged with contrast x-ray computed tomography (CT), and after digital surface rendering can be used in virtual colonoscopy, thus avoiding a physical visible light-based colonoscopy; [8] registered and fused sagittal 99mTc-sestamibi dedicated molecular single photon emission computed tomography (SPECT, heat color scale) and low-dose CT slice (gray scale) of a patient’s uncompressed, pendant breast containing two surgically confirmed DCIS loci in the posterior breast (courtesy of Martin Tornai, PhD); [9] classic, early x-radiograph of Wilhelm Röntgen’s wife’s hand (adapted from http://en.wikipedia.org/wiki/100_Photographs_that_Changed_the_World); [10] transverse pelvic CT (gray scale) with multiple superimposed simulated x-ray photon radiotherapy beams, and their cumulative 20Gy (red) deposited dose isocontours at the focus of a tumor (courtesy of Paul Read, MD, PhD); [11] false-colored scanning electron micrograph of human immunodeficiency virus (HIV) particles (yellow) infecting a human H9 T-cell (turquoise) (adapted from http://biosingularity.com/category/infection/); [12] transverse 18F-dopa positron emission tomography (PET) slice through a Parkinson patient’s brain showing decreased in vivo quantitative uptake in the right caudate and putamen (courtesy of Christaan Schiepers, MD, PhD); [13] registered and fused transverse head CT (gray scale) and registered superimposed PET image (color) of a patient immediately after radio-therapeutic proton irradiation showing endogenously created 15O-positron emitter (2 min half-life) within the nasopharyngeal tumor focus (courtesy of Kira Grogg, PhD); [14] transverse (long axis) cardiac 99mTc-sestamibi SPECT slice illustrating thinned myocardial apical wall in a procedure performed thousands of times daily around the world (adapted from www.medscape.com).
These images are but a small sampling of the breadth of the spectrum of light that is currently being investigated the world over by passionate scientists and engineers on a quest for the betterment of the human condition. All contributors for these images used herein have given their permission for their use, and are graciously thanked.
Image by Martin Tornai
Artificial Intelligence and Your Health: How Computers Are Helping Medicine
There’s a lot of talk about artificial intelligence, or AI, these days. AI is everywhere—from virtual assistants to facial recognition software. The technology is even assisting doctors and scientists. So what exactly is AI? And how is it helping advance scientific research?
“AI is basically trying to teach computers to ‘think’ in the same way as the human brain,” says Dr. Despina Kontos, an AI researcher at Columbia University.
One approach to AI uses a process called machine learning. In machine learning, a computer model is built to predict what may happen in the real world. The model is taught to analyze and recognize patterns in a data set. This training enables the model to then make predictions about new data. Some AI programs can also teach themselves to ask new questions and make novel connections between pieces of information.
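For readers curious what that training-and-prediction workflow looks like in code, here is a minimal Python sketch using scikit-learn and one of its built-in toy datasets (flower measurements, not medical data); it is purely illustrative and is not any of the NIH models described in this article.

```python
# Illustrative sketch only: a toy dataset, not medical data or an NIH model.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)        # the model is "taught" patterns in the training data
print(model.predict(X_new[:5]))    # it then makes predictions about data it has not seen
print("accuracy on new data:", model.score(X_new, y_new))
```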
“Computer models and humans can really work well together to improve human health,” explains Dr. Grace C.Y. Peng, an NIH expert on AI in medicine. “Computers are very good at doing calculations at a large scale, but they don’t have the intuitive capability that we have. They’re powerful, but how helpful they’re going to be lies in our hands.”
Researchers are exploring ways to harness the power of AI to improve health care. These include assisting with diagnosing and treating medical conditions and delivering care.
Mining Medical Images
One area where AI is already being used daily is medical imaging. Computers help doctors comb through CT and MRI scans for signs of problems like heart disease and cancer.
“AI can look at images very closely, in a way that’s much more detailed than we can do with the human eye,” Kontos says. That means that the computer may be able to pick up on subtleties that a person might miss.
In medicine, catching early signs of certain diseases can be the difference between life and death. Kontos and her team are testing ways AI can be used to identify women who are at high risk for developing breast cancer. They’re using AI to analyze different features in mammograms—X-ray pictures of the breast—such as breast density. Women who have a higher risk of breast cancer can take preventative steps, like more frequent screenings. This approach could help lead to earlier diagnosis and more successful treatment.
The team is also testing whether they can use AI to individualize breast cancer treatment based on imaging results that show how breast tumors are responding. AI may better reveal who needs more intensive treatment, like chemotherapy, and who can safely skip it.
“That way, we could spare women who don’t need intensive treatment from unnecessary side effects,” Kontos explains.
Connecting People With Care
Over the last year, advanced “chatbots,” like ChatGPT, have burst on the scene. These AI programs are designed to have realistic conversations with people. People are starting to use the technology to find health information.
Many chatbots are a form of “generative AI.” This type of AI can create new content based on what it learns from analyzing existing data. Such chatbots use what are called large language models, which are trained on huge data sets gathered from across the internet. The training teaches them to predict which words are most likely to come next.
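A toy example can make the idea of next-word prediction concrete. The Python sketch below builds a tiny "bigram" model that only counts which word follows which in a couple of made-up sentences; real large language models learn from vastly more text and far richer context, but the underlying task of predicting a likely next word is the same.

```python
# Toy "next word" predictor: counts which word follows which (a bigram model).
from collections import Counter, defaultdict

text = ("ask your doctor about side effects . "
        "ask your pharmacist about side effects .").split()

counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1                  # tally what follows each word

def predict_next(word):
    following = counts[word]
    return following.most_common(1)[0][0] if following else None

print(predict_next("ask"))      # -> 'your'
print(predict_next("about"))    # -> 'side'
```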
It may be tempting to ask these tools to answer medical questions. “But these chatbots don’t actually understand what you’re asking,” Peng says. “They’re just looking at the phrases and making predictions about what comes next.” So it’s important to use caution if you’re using them to seek health advice.
“These tools also don’t have a lot of context,” explains Dr. Ellen Fitzsimmons-Craft, a mental health researcher at Washington University in St. Louis. “They may be able to state something that’s the right medical advice in a general sense. But that may not be the right medical advice for you personally.”
“We don’t always know what information these tools like ChatGPT are trained on,” Fitzsimmons-Craft adds. “We don’t know if they’re getting information from reputable sources or not.”
Still, the idea of using chatbots in medicine has promise, explains Fitzsimmons-Craft. Right now, there is a shortage of health care providers in many fields, including mental health. Chatbots may be able to fill in some gaps.
“Not many people follow through with recommendations provided after a mental health screening,” Fitzsimmons-Craft says. “And we don’t have enough providers to connect with every one of those people.”
Fitzsimmons-Craft and her team are developing a chatbot to help guide people with eating disorders toward seeking care. Their chatbot is made using a rule-based model, with limited AI. Rule-based means that human experts write entire conversations beforehand. Then, the chatbot picks what to say based on the responses of the person using it.
“Nothing this chatbot says should come as a surprise,” Fitzsimmons-Craft explains. “That’s in contrast to generative AI, where you have to work a lot harder to build in guardrails.”
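In code, a rule-based chatbot can be as simple as a lookup table of pre-written replies. The Python sketch below is only an illustration with hypothetical messages; the actual chatbot's content was scripted by clinical experts.

```python
# Hypothetical rule-based chatbot: every possible reply is written in advance,
# and the program only chooses among them based on the user's answer.
RULES = {
    "yes": "Thanks for sharing. Would you like help finding a provider near you?",
    "no": "That's okay. Would it help to hear what treatment usually involves?",
}
DEFAULT = "I'm not sure I understood. Could you answer yes or no?"

def reply(user_message: str) -> str:
    return RULES.get(user_message.strip().lower(), DEFAULT)

print(reply("Yes"))           # scripted response for "yes"
print(reply("maybe later"))   # falls back to the scripted default
```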
Her team is now testing their chatbot to see which conversation pieces are the most effective. They’ll then test whether it can increase the number of people seeking help after a screening for eating disorders.
Other NIH-funded researchers are studying whether chatbots can help in additional areas, like suicide prevention and encouraging heart-healthy diet changes.
Looking Toward the Future
As AI moves into more areas of health care, many ethical issues will need to be addressed, explains Kontos. “These systems learn from human data, so they may learn our biases,” she says.
For example, in the past, research studies have included far more men than women. This means that the resulting data may not be as accurate for guiding women’s health care. So, if this information is fed into AI models, that bias will follow.
Recognizing biases before they reach the machines may provide a chance to break this cycle. “Can we end up training the machines better because we learned from the mistakes that we have in our own society about training people?” Peng asks.
NIH-funded researchers are working on these issues, as well as many other ways to use AI in medicine. These include modeling the ways a virus might spread between countries and predicting if new drugs will be safe.
All of these projects need human imagination and computing power. So AI is not a replacement for people, says Fitzsimmons-Craft. “AI is just another tool in the toolbox, that’s offering another form of help.”
Testing AI for Health
NIH funds studies to test AI in many areas of health, including:
Predicting who’s at high risk for breast cancer.
Connecting people with quality medical information via chatbots.
Modeling disease spread across countries.
Identifying new drug candidates.
Diagnosing Alzheimer’s disease before symptoms develop.
Predicting changes in blood sugar levels before they occur in people with diabetes.
Creating “smart clothing” that can reduce back pain by warning the wearer about unsafe movements.
Improving colonoscopies so colon cancers can be detected and treated at earlier stages.
Source: NIH News in Health
AI and Cancer
Lung cancer cells driven by the KRAS oncogene, which is highlighted in purple.
In this image from a genetically engineered mouse model, lung cancer driven by the Kras oncogene shows up in purple. As a key driver in many types of cancer, the Kras gene makes a promising target for new cancer therapies.
Image by National Cancer Institute / Huntsman Cancer Institute at the Univ. of Utah / Eric Snyder
Can Artificial Intelligence Help See Cancer in New, and Better, Ways?
March 22, 2022, by Nadia Jaber
Two identical black and white pictures of murky shapes sit side-by-side on a computer screen. On the left side, Ismail Baris Turkbey, M.D., a radiologist with 15 years of experience, has outlined an area where the fuzzy shapes represent what he believes is a creeping, growing prostate cancer. On the other side of the screen, an artificial intelligence (AI) computer program has done the same—and the results are nearly identical.
The black and white image is an MRI scan from someone with prostate cancer, and the AI program has analyzed thousands of them.
“The [AI] model finds the prostate and outlines cancer-suspicious areas without any human supervision,” Dr. Turkbey explains. His hope is that the AI will help less experienced radiologists find prostate cancer when it’s present and dismiss anything that may be mistaken for cancer.
This model is just the tip of the iceberg when it comes to the intersection of artificial intelligence and cancer research. While the potential applications seem endless, a lot of that progress has centered around tools for cancer imaging.
From x-rays of whole organs to microscope pictures of cancer cells, doctors use imaging tests in many ways: finding cancer at its earliest stages, determining the stage of a tumor, seeing if treatment is working, and monitoring whether cancer has returned after treatment.
Over the past several years, researchers have developed AI tools that have the potential to make cancer imaging faster, more accurate, and even more informative. And that’s generated a lot of excitement.
“There’s a lot of hype [around AI], but there’s a lot of research that’s going into it as well,” said Stephanie Harmon, Ph.D., a data scientist in NCI’s Molecular Imaging Branch.
That research, experts say, includes addressing questions about whether these tools are ready to leave research labs and enter doctors’ offices, whether they will actually help patients, and whether that benefit will reach all—or only some—patients.
What is artificial intelligence?
Artificial intelligence refers to computer programs, or algorithms, that use data to make decisions or predictions. To build an algorithm, scientists might create a set of rules, or instructions, for the computer to follow so it can analyze data and make a decision.
For example, Dr. Turkbey and his colleagues used existing rules about how prostate cancer appears on an MRI scan. They then trained their algorithm using thousands of MRI studies—some from people known to have prostate cancer, and some from people who did not.
With other artificial intelligence approaches, like machine learning, the algorithm teaches itself how to analyze and interpret data. As such, machine learning algorithms may pick up on patterns that are not readily discernable to the human eye or brain. And as these algorithms are exposed to more new data, their ability to learn and interpret the data improves.
Researchers have also used deep learning, a type of machine learning, in cancer imaging applications. Deep learning refers to algorithms that classify information in ways much like the human brain does. Deep learning tools use “artificial neural networks” that mimic how our brain cells take in, process, and react to signals from the rest of our body.
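The sketch below shows, in a few lines of Python with NumPy, what one layer of artificial neurons does: it takes in signals, weighs them, and passes the result through an activation function. The numbers are arbitrary, and this is not any particular research model.

```python
# Arbitrary numbers; a single layer of "artificial neurons" in NumPy.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, bias):
    # Weighted sum of incoming signals, then a simple activation (ReLU).
    return np.maximum(0, inputs @ weights + bias)

signals = np.array([0.2, 0.9, 0.4])              # incoming signals (e.g., image features)
hidden = layer(signals, rng.normal(size=(3, 4)), np.zeros(4))
output = hidden @ rng.normal(size=(4, 1))        # combine into a final output
print("network output:", output.item())
```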
Research on AI for cancer imaging
Doctors use cancer imaging tests to answer a range of questions, like: Is it cancer or a harmless lump? If it is cancer, how fast is it growing? How far has it spread? Is it growing back after treatment? Studies suggest that AI has the potential to improve the speed, accuracy, and reliability with which doctors answer those questions.
“AI can automate assessments and tasks that humans currently can do but take a lot of time,” said Hugo Aerts, Ph.D., of Harvard Medical School. After the AI gives a result, “a radiologist simply needs to review what the AI has done—did it make the correct assessment?” Dr. Aerts continued. That automation is expected to save time and costs, but that still needs to be proven, he added.
In addition, AI could make image interpretation—a highly subjective task—more straightforward and reliable, Dr. Aerts noted.
Complex tasks that rely on “a human making an interpretation of an image—say, a radiologist, a dermatologist, a pathologist —that’s where we see enormous breakthroughs being made with deep learning,” he said.
But what scientists are most excited about is the potential for AI to go beyond what humans can currently do themselves. AI can “see” things that we humans can’t, and can find complex patterns and relationships between very different kinds of data.
“AI is great at doing this—at going beyond human performance for a lot of tasks,” Dr. Aerts said. But, in this case, it is often unclear how the AI reaches its conclusion, so it’s difficult for doctors and researchers to check if the tool is performing correctly.
Finding cancer early
Tests like mammograms and Pap tests are used to regularly check people for signs of cancer or precancerous cells that can turn into cancer. The goal is to catch and treat cancer early, before it spreads or even before it forms at all.
Scientists have developed AI tools to aid screening tests for several kinds of cancer, including breast cancer. AI-based computer programs have been used to help doctors interpret mammograms for more than 20 years, but research in this area is quickly evolving.
One group created an AI algorithm that can help determine how often someone should get screened for breast cancer. The model uses a person’s mammogram images to predict their risk of developing breast cancer in the next 5 years. In various tests, the model was more accurate than the current tools used to predict breast cancer risk.
NCI researchers have built and tested a deep learning algorithm that can identify cervical precancers that should be removed or treated. In some low-resource settings, health workers screen for cervical precancer by inspecting the cervix with a small camera. Although this method is simple and sustainable, it is not very reliable or accurate.
Mark Schiffman, M.D., M.P.H., of NCI’s Division of Cancer Epidemiology and Genetics, and his colleagues designed an algorithm to improve the ability to find cervical precancers with the visual inspection method. In a 2019 study, the algorithm performed better than trained experts.
For colon cancer, several AI tools have been shown in clinical trials to improve the detection of precancerous growths called adenomas. However, because only a small percentage of adenomas turn into cancer, some experts are concerned that such AI tools could lead to unnecessary treatments and extra tests for many patients.
Detecting cancer
AI has also shown the potential to improve cancer detection in people who have symptoms. The AI model developed by Dr. Turkbey and his colleagues in NCI’s Center for Cancer Research, for instance, could make it easier for radiologists to pick out potentially aggressive prostate cancer on a relatively new kind of prostate MRI scan, called multiparametric MRI.
Although multiparametric MRI generates a more detailed picture of the prostate than a regular MRI, radiologists typically need years of practice to read these scans accurately, leading to disagreements between radiologists looking at the same scan.
The NCI team’s AI model “can make [the learning] curve easier for practicing radiologists and can minimize the error rate,” Dr. Turkbey said. The AI model could serve as “a virtual expert” to guide less-experienced radiologists learning to use multiparametric MRI, he added.
For lung cancer, several deep learning AI models have been developed to help doctors find lung cancer on CT scans. Some noncancerous changes in the lungs look a lot like cancer on CT scans, leading to a high rate of false-positive test results that indicate a person has lung cancer when they really don’t.
Experts think that AI may better distinguish lung cancer from noncancerous changes on CT scans, potentially cutting the number of false positives and sparing some people from unneeded stress, follow-up tests, and procedures.
For example, a team of researchers trained a deep learning algorithm to find lung cancer and to specifically avoid other changes that look like cancer. In lab tests, the algorithm was very good at ignoring noncancerous changes that look like cancer and good at finding cancer.
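The trade-off being described is usually summarized with sensitivity (how many true cancers are caught) and specificity (how many benign changes are correctly ignored). The short Python sketch below walks through that arithmetic using hypothetical counts, not results from any actual study.

```python
# Hypothetical screening counts (not study results), just to show the arithmetic.
true_positives  = 90    # cancers the tool correctly flagged
false_negatives = 10    # cancers the tool missed
true_negatives  = 940   # benign changes correctly left alone
false_positives = 60    # benign changes wrongly flagged as cancer

sensitivity = true_positives / (true_positives + false_negatives)   # 0.90
specificity = true_negatives / (true_negatives + false_positives)   # 0.94
false_positive_rate = 1 - specificity                               # 0.06

print(f"sensitivity: {sensitivity:.2f}")
print(f"specificity: {specificity:.2f}")
print(f"false-positive rate: {false_positive_rate:.2f}")
```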
Choosing cancer treatment
Doctors also use imaging tests to get important information about cancer, such as how fast it is growing, whether it has spread, and whether it is likely to come back after treatment. This information can help doctors choose the most appropriate treatment for their patients.
A number of studies suggest that AI has the potential to gather such prognostic information—and maybe even more—from imaging scans, and with greater precision than humans currently can. For example, Dr. Harmon and her colleagues created a deep learning model that can determine the likelihood that a patient with bladder cancer might need other treatments in addition to surgery.
Doctors estimate that around 50% of people with tumors in the bladder muscle (muscle-invasive bladder cancer) have clusters of cancer cells that have spread beyond the bladder but are too small to detect with traditional tools. If these hidden cells aren’t removed, they can continue growing after surgery, causing a relapse.
Chemotherapy can kill these microscopic clusters and prevent the cancer from coming back after surgery. But clinical trials have shown that it’s hard to determine which patients need chemotherapy in addition to surgery, Dr. Harmon said.
“What we would like to do is use this model before patients undergo any sort of treatment, to tell which patients have cancer with a high likelihood of spreading, so doctors can make informed decisions,” she explained.
The model looks at digital images of primary tumor tissue to predict whether there are microscopic clusters of cancer in nearby lymph nodes. In a 2020 study, the deep learning model proved to be more accurate than the standard way of predicting whether bladder cancer has spread, which is based on a combination of factors including the patient’s age and certain characteristics of the tumor.
More and more, genetic information about the patients’ cancer is being used to help select the most appropriate treatment. Scientists in China created a deep learning tool to predict the presence of key gene mutations from images of liver cancer tissue—something pathologists can’t do by just looking at the images.
Their tool is an example of AI that works in mysterious ways: The scientists who built the algorithm don’t know how it senses which gene mutations are present in the tumor.
Are AI tools for cancer imaging ready for the real world?
Although scientists are churning out AI tools for cancer imaging, the field is still nascent and many questions about the practical applications of these tools remain unanswered.
While hundreds of algorithms have been proven accurate in early tests, most haven’t reached the next phase of testing that ensures they are ready for the real world, Dr. Harmon said.
That testing, known as external or independent validation, “tells us how generalizable our algorithm is. Meaning, how useful is it on a totally new patient? How does it perform on patients from different [medical] centers or different scanners?” Dr. Harmon explained. In other words, does the AI tool work accurately beyond the data it was trained on?
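Conceptually, external validation means training on data from one source and measuring performance on data from a completely different source. The Python sketch below illustrates the idea with synthetic data standing in for two hypothetical medical centers; it is not a real validation study.

```python
# Synthetic data standing in for two hypothetical centers; not a real study.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# "Center A": data used to develop the model.
X_a, y_a = make_classification(n_samples=400, n_features=8, random_state=1)
# "Center B": an independent dataset the model never saw during development.
X_b, y_b = make_classification(n_samples=400, n_features=8, random_state=2)

model = LogisticRegression(max_iter=1000).fit(X_a, y_a)
auc_internal = roc_auc_score(y_a, model.predict_proba(X_a)[:, 1])
auc_external = roc_auc_score(y_b, model.predict_proba(X_b)[:, 1])
print(f"performance on development data: {auc_internal:.2f}")
print(f"performance on external data:    {auc_external:.2f}")
```

Because the two synthetic datasets are generated differently, the external number typically comes out noticeably worse, which is exactly the kind of gap external validation is designed to expose.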
AI algorithms that pass rigorous validation testing in diverse groups of people from various areas of the world could be used more widely, and therefore help more people, she added.
In addition to validation, Dr. Turkbey noted, clinical studies also need to show that AI tools actually help patients, either by preventing people from getting cancer, helping them live longer or have a better quality of life, or saving them time or money.
But even after that, Dr. Aerts said, a major question about AI is: “How do we make sure that these algorithms keep on working and performing well for years and years?” For example, new scanners could change features of the image that an AI tool relies on to make predictions or interpretations, he explained. And that could change their performance.
There are also questions about how AI tools will be regulated. Upwards of 60 AI-based medical devices or algorithms have earned FDA approval as of 2020. But even after they are approved, some machine learning algorithms shift as they are exposed to new data. In 2021, FDA issued a framework for monitoring AI technologies that have the ability to adapt.
There are also concerns about the transparency of some AI tools. With some algorithms, like the one that can predict gene mutations in liver tumors, scientists don’t know how it reaches its conclusion—a conundrum known as the “black box problem.” Experts say this lack of transparency prohibits critical checks for biases and inaccuracies.
A recent study, for example, showed that a machine learning algorithm trained to predict cancer outcomes zeroed in on the hospital where the tumor image was taken, rather than the patient’s tumor biology. Although that algorithm isn’t used in any medical settings, other tools trained in the same way could have the same inaccuracy, the researchers warned.
There are also worries that AI could worsen gaps in health outcomes between privileged and disadvantaged groups by exacerbating biases that are already baked into our medical system and research processes, said Irene Dankwa-Mullan, M.D., M.P.H., deputy chief health equity officer of IBM Watson Health.
These biases are deeply embedded in the data used to create AI models, she explained at the 2021 American Association for Cancer Research Science of Cancer Health Disparities conference.
For instance, a handful of medical algorithms have recently been shown to be less accurate for Black people than for White people. These potentially dangerous shortcomings stem from the fact that the algorithms were mainly trained and validated on data from White patients, experts have noted.
On the other hand, some experts think AI could improve access to cancer care by bringing expert-level care to hospitals that lack specialists.
“What [AI] can do is, in a setting where there are physicians who maybe don’t have as much expertise, potentially it can bring their performance up to an expert level,” explained Dr. Harmon.
Some AI tools could even bypass the need for sophisticated equipment. The deep learning algorithm for cervical cancer screening developed by Dr. Schiffman, for example, relies on cell phones or digital cameras and low-cost materials.
Despite these concerns, most researchers are optimistic for the future of AI in cancer care. Dr. Aerts, for example, believes these hurdles are surmountable with more work and collaboration between experts in science, medicine, government, and community implementation.
“I think [AI technologies] will eventually be introduced into the clinic because the performance is just too good and it’s a waste if we don’t,” he said.
Source: Cancer Currents: An NCI Cancer Research Blog. March 22, 2022, by Nadia Jaber.
Additional Materials (7)
Artificial intelligence aids in lung cancer detection
Video by WPLG Local 10/YouTube
Using AI to detect lung cancer before it develops
Video by 23 ABC News | KERO/YouTube
AI Technology to Detect Lung Cancer Early | Jenifer Marks, MD
Video by HealthONE/YouTube
AI in Medicine: Possible Applications and Potentials
Video by KI-Campus/YouTube
High-tech hospital uses artificial intelligence in patient care
Video by NBC News/YouTube
AI could revolutionize cancer detection, according to MIT, Mass General research
Video by NBC News/YouTube
Leveraging AI to Improve Cancer Detection
This infographic illustrates how researchers are leveraging artificial intelligence (AI) to improve the detection of cervical precancer. In an NCI study in Costa Rica, an AI-based screening approach called automated visual evaluation was more sensitive in detecting cervical precancers than visual evaluation and Pap smear.
Image by National Cancer Institute (NCI)
Artificial Intelligence, Machine Learning and Genomics
With increasing complexity in genomic data, researchers are turning to artificial intelligence and machine learning as ways to identify meaningful patterns for healthcare and research purposes.
The Big Picture
The genomics field continues to expand the use of computational methods such as artificial intelligence and machine learning to improve our understanding of hidden patterns in large and complex genomics data sets from basic and clinical research projects.
Machine learning analyses could benefit disease research and genomic tools like CRISPR.
NHGRI is identifying and shaping its unique role in the convergence of genomic and machine learning research.
What is artificial intelligence?
There are many definitions for artificial intelligence (AI). One such definition is: “AI is a science and a set of computational technologies that are inspired by — but typically operate quite differently from — the ways people use their nervous systems and bodies to sense, learn, reason, and take action.” AI can be created as software or tools that are able to mimic human intelligence in certain contexts or even exceed it in others.
To be able to build AI, scientists need large, well-explained datasets that first help them understand the techniques and processes used by humans to analyze and interpret complex scenarios. The field of AI is a dynamic one, and researchers are consistently developing new techniques and tools.
What are machine learning and deep learning?
Machine learning (ML) and deep learning are fields of study frequently mentioned in the context of AI. Both kinds of learning are subfields of AI. Machine learning is a process by which machines can be given the capability to learn about a given dataset without being explicitly programmed on what to learn.
Machines can usually learn in either a supervised or unsupervised manner. Under supervised learning, scientists provide machines with separate training and test data sets. The training data has defined categories (e.g., people with coronary heart disease and those without) that the machine can use to infer hidden qualities of the data and distinguish the categories from each other. It is then able to use this knowledge to work on the test data and make informed predictions (e.g., which people in a population are likely to develop coronary heart disease).
In an unsupervised learning setting, machines can recognize patterns in large datasets and make predictions about the real world without requiring any additional help from humans.
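The contrast between the two settings is easy to see in code. The Python sketch below uses a scikit-learn toy dataset rather than real genomic data: a supervised classifier learns from provided labels, while an unsupervised clustering algorithm groups the same samples without any labels.

```python
# Toy dataset (flower measurements), not genomic data; illustration only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the labels y (the "defined categories") guide the learning.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised predictions:", clf.predict(X_test[:5]))

# Unsupervised: no labels are provided; the algorithm finds groupings on its own.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("unsupervised cluster labels:", clusters[:10])
```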
Deep learning is a relatively modern technique for implementing machine learning, and it can operate in either a supervised or an unsupervised setting. A deep learning algorithm takes a dataset and finds patterns and critical information by imitating how a human brain’s neurons interact with each other. The algorithms are artificial neural networks — computing systems that simulate the brain’s ability to weigh the importance of some data versus others, and to handle bias.
Why is there a need for AI/ML in genomics?
As of 2021, 20 years have passed since the landmark completion of the draft human genome sequence. This milestone has led to the generation of an extraordinary amount of genomic data. Estimates predict that genomics research will generate between 2 and 40 exabytes of data within the next decade.
DNA sequencing and other biological techniques will continue to increase the number and complexity of such data sets. This is why genomics researchers need AI/ML-based computational tools that can handle, extract and interpret the valuable information hidden within this large trove of data.
Infographic illustrating the concept of large numbers and their prefixes (from left to right):
Thousand: Kilo
Million: Mega
Billion: Giga
Trillion: Tera
Quadrillion: Peta
Quintillion: Exa
Sextillion: Zetta
Septillion: Yotta
What are some ways in which AI/ML are being used in genomics?
Although the use of AI/ML tools in genomics is still at an early stage, researchers have already benefited from developing programs that assist in specific ways.
Some examples include:
Examining people’s faces with facial analysis AI programs to accurately identify genetic disorders.
Using machine learning techniques to identify the primary kind of cancer from a liquid biopsy.
Predicting how a certain kind of cancer will progress in a patient.
Using machine learning to distinguish disease-causing genomic variants from benign variants.
Using deep learning to improve the function of gene editing tools such as CRISPR.
These are just a few ways by which AI/ML methods are helping predict and identify hidden patterns in genomic data. Scientists are also using AI/ML to predict future variations in the genomes of the influenza and SARS-CoV-2 viruses to assist public health efforts.
Source: National Human Genome Research Institute (NHGRI)
Additional Materials (4)
Curing Disease With Genetics And AI
Video by Forbes/YouTube
Illumina AI - ChatGPT for your genome...
Video by Wes Roth/YouTube
Genomics Sequencing: Fighting Disease With Artificial Intelligence | Intel
Video by Intel/YouTube
CRISPR.ML - Machine learning meets gene editing
Video by Microsoft Research/YouTube
AI and Heart Disease
CT of the blood vessels and heart
Image by Bác sĩ Nguyễn Minh Đức
Artificial Intelligence Predicts Heart Disease Risk From CT Scans
Cardiovascular disease remains the leading cause of death in the United States. If people at high risk of heart attack, stroke, and related conditions could be identified, medications and lifestyle changes could help reduce their risk of disease and death.
Traditionally, heart disease risk has been assessed using clinical measurements. These include body mass index (BMI)—a ratio of weight to height. Another commonly used measurement is the Framingham risk score (FRS), which incorporates age, sex, blood pressure, blood cholesterol, and related information. However, these tools are not precise. They can miss people at high risk and misidentify others who are not.
A research team led by Drs. Perry J. Pickhardt of the University of Wisconsin and Ronald Summers from the NIH Clinical Center has been developing computer programs to estimate disease risk from CT scans taken for other purposes. Tens of millions of people undergo such scans every year for reasons ranging from accidents to surgical planning.
The researchers previously showed that CT scans could be re-used to diagnose osteoporosis. In their new study, they tested whether artificial intelligence (AI) algorithms they’d developed to re-analyze CT scans could predict the risk of heart disease better than BMI or the FRS.
The team used CT scans of the abdomen previously taken for colorectal cancer screening, from more than 9,200 men and women without symptomatic heart disease. Participants had an average age of 57. The AI programs measured calcification in the aortic artery, muscle density, the ratio of fat deep in the body to that under the skin, liver fat, and bone-mineral density as seen on the scans.
The researchers collected follow-up information for all participants for an average of almost 9 years. They then assessed whether their AI measures correlated with later development of heart disease or death. Results were published on March 2, 2020, in Lancet Digital Health.
Over the follow-up period, 20% of study participants experienced a heart attack or stroke, developed heart failure, or died. All five body-composition measures assessed by AI differed substantially between people who had and had not developed heart disease.
AI scores of calcification in the aortic artery alone were better than the FRS at predicting heart disease risk. All five measures alone were more predictive than BMI taken at the start of the study. In general, combining more than one of the AI measurements increased the ability to predict later heart disease risk from an abdominal scan. Adding the FRS to the AI measurements did not improve their predictive performance.
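As a purely illustrative sketch of the general idea of combining several automated measurements into one risk prediction (the feature names, values, and outcome below are hypothetical, not the study's data or model), a simple logistic regression in Python might look like this:

```python
# Hypothetical feature names and synthetic values; not the study's data or model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
aortic_calcification = rng.gamma(2.0, 50.0, n)   # arbitrary units (synthetic)
muscle_density       = rng.normal(40, 8, n)      # synthetic values
liver_fat            = rng.normal(5, 3, n)       # synthetic values
X = np.column_stack([aortic_calcification, muscle_density, liver_fat])

# Synthetic outcome loosely tied to calcification, purely for illustration.
events = (aortic_calcification + rng.normal(0, 60, n) > 150).astype(int)

risk_model = LogisticRegression(max_iter=1000).fit(X, events)
print("predicted event probability, first 3 people:",
      risk_model.predict_proba(X[:3])[:, 1].round(2))
```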
“We found that automated measures provided more accurate risk assessments than established clinical biomarkers,” Summers explains.
But because CT imaging comes with some risks, including exposure to small amounts of radiation, the researchers don’t propose taking CT scans solely for heart-disease risk assessment.
“This opportunistic use of additional CT-based biomarkers provides objective value to what doctors are already doing,” Pickhardt says. “This automated process requires no additional time, effort, or radiation exposure to patients.”
Source: NIH Research Matters
Additional Materials (7)
How artificial intelligence can help fight heart disease
Video by CNBC Television/YouTube
How artificial intelligence is helping to detect heart disease | ABC News
Video by ABC News (Australia)/YouTube
Can Mayo Clinic use AI to diagnose heart disease?
Video by Mayo Clinic/YouTube
Coronary Calcium Scan
Coronary Calcium Scan. Figure A shows the position of the heart in the body and the location and angle of the coronary calcium scan image. Figure B is a coronary calcium scan image showing calcifications in a coronary artery.
Image by NHLBI/NIH
Can artificial intelligence identify a human heart problem?
Video by Mayo Clinic/YouTube
Novel AI platform matches cardiologists in detecting rheumatic heart disease
Video by Children's National Hospital/YouTube
Google A.I. Can Detect Risk of Heart Disease Through Eye Scan
Video by Atlanta Journal-Constitution/YouTube
AI and Pregnancy
Pregnancy
Image by Blausen.com staff (2014). "Medical gallery of Blausen Medical 2014"
The New Role of Artificial Intelligence in NIH Pregnancy Research
Studies spotlight health disparities, heart health, and obesity
Pregnancy can be an exciting experience for many women. But it also can be difficult and confusing, especially for women with a high-risk pregnancy.
Factors like a woman's age, her lifestyle, and preexisting conditions can all contribute to a high-risk pregnancy. During a high-risk pregnancy, a woman and her unborn child are at increased risk of problems during pregnancy or labor, like early birth or miscarriage.
"In a wealthy nation like the U.S., a healthy pregnancy and childbirth should be the norm, but every 12 hours, a woman dies from complications from pregnancy or giving birth," says Diana Bianchi, M.D., director of the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD). "Black, American Indian, and Alaska Native women are about three times as likely to die from a pregnancy-related cause, compared to white women. Research also shows that up to 60% of these deaths are preventable."
Dr. Bianchi and her colleagues at NICHD have worked to increase the focus on maternal health research and address these health disparities. One promising approach is to incorporate more data to better understand and address the pregnancy health issues of women of color.
"Black, American Indian, and Alaska Native women are about three times as likely to die from a pregnancy-related cause."
- Diana Bianchi, M.D.
"I am particularly excited about emerging technologies and big data analytic methods, such as artificial intelligence," says NICHD researcher Nahida Chakhtoura, M.D. "I hope this will help integrate genome data [and] nutritional, social, and behavioral data so that we can learn more and inform prevention efforts and address health disparities."
NICHD also wants to better support pregnant women who have high-risk pregnancy conditions like gestational diabetes and obesity.
Recently, NICHD researchers launched a study to see whether drugs that treat heart disease might be effective for preventing preeclampsia. Preeclampsia is a condition in which a woman develops high blood pressure after the 20th week of pregnancy. The condition has many of the same risk factors as cardiovascular diseases, which could offer clues for treating pregnant women. NICHD-supported research is also testing whether drugs such as pravastatin—normally used to treat high cholesterol—can help prevent preeclampsia.
"One of the goals of our 2020 strategic plan is to improve pregnancy outcomes to maximize the lifelong health of women and their children," Dr. Chakhtoura says. "NICHD recently launched the Pregnancy for Every Body Initiative, since plus-size women are at a higher risk for pregnancy complications."
This online resource educates plus-size women and their providers on how to have open, nonjudgmental conversations about obesity during pregnancy. Its ultimate goal is to make sure that plus-size women know that pregnancy might affect them differently, so they can have healthy, safe pregnancies.
Source: NIH MedlinePlus Magazine
Additional Materials (4)
S2E6 | Improving Maternal Health through AI and Biomedical Science | Health Pulse Podcast
Video by SAS Software/YouTube
Maternal Mortality in the U.S. - AI Can Help
Video by PeriGen/YouTube
How AI can make health care better
Video by The Economist/YouTube
The Advent of AI in Healthcare
Video by Cleveland Clinic/YouTube