
ECE’s Hien Van Nguyen Leads Team Launching New Era in Radiology Training
By Laurie Fickman
To train the next generation of specialists and speed up hospital workflows, Hien Van Nguyen, associate professor of electrical and computer engineering, is jumping in front of a radiologist's gaze, predicting where the doctor will next look.

New AI System Predicts a Radiologist’s Next Glance — Before the Doctor Makes It

Each day, a radiologist in the U.S. (the physician who specializes in reading and interpreting medical images) pores over 150 to 200 X-rays. Their expertise is so valuable that researchers are now trying to anticipate where they will look next, which area of the image they will scrutinize, to open a powerful window into how they think and to discover the origins of diagnostic errors.

Understanding this gaze behavior is not just about their next move; it’s about replicating their expert attention to train the next generation of specialists.

“We’re not just trying to guess what a radiologist will do next; we’re helping teach machines and future radiologists how to think more like experts by seeing the world as they do,” said Hien Van Nguyen, associate professor of electrical and computer engineering, whose team published the work in the journal Scientific Reports.

Nguyen’s newly developed AI system, MedGaze, is designed to emulate how radiologists visually interpret chest X-rays. By replicating radiologists’ expert attention, MedGaze can help hospitals manage workflow, revealing which cases demand more time and mental effort. It can also improve the accuracy of AI-powered diagnostic systems by focusing them on the image areas that human experts would prioritize.

“MedGaze is a non-invasive, non-interfering software system trained to mimic how expert doctors visually examine chest X-rays, including where they look, how long they look there and what sequence they follow,” Nguyen explained. “We call this a ‘Digital Gaze Twin,’ and it works by analyzing both the images and the radiology reports doctors write.”

MedGaze learns patterns from thousands of previous eye-tracking sessions where radiologists’ gaze paths were recorded while interpreting X-rays. It then uses this knowledge to predict where a radiologist is likely to look next when reading a new X-ray.
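
To make the idea concrete, here is a minimal, hypothetical sketch (in PyTorch, not the authors' published code) of the kind of sequence model this describes: each recorded fixation is treated as a triple (x, y, duration), and a recurrent network trained on recorded gaze paths learns to regress the next fixation from the ones seen so far. A real system like MedGaze would also condition on the X-ray image and the radiology report, which this toy deliberately omits.

```python
# A toy next-fixation predictor (illustrative only; not the authors' code).
# Each fixation is a triple (x, y, duration) in normalized units. A GRU reads
# the fixations recorded so far and regresses the next one.
import torch
import torch.nn as nn

class NextFixationModel(nn.Module):
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.rnn = nn.GRU(input_size=3, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 3)  # predicts the next (x, y, duration)

    def forward(self, fixations: torch.Tensor) -> torch.Tensor:
        # fixations: (batch, seq_len, 3); output: a prediction at every step
        hidden, _ = self.rnn(fixations)
        return self.head(hidden)

# Train on synthetic gaze paths standing in for recorded eye-tracking sessions.
model = NextFixationModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

paths = torch.rand(32, 20, 3)              # 32 sessions, 20 fixations each
inputs, targets = paths[:, :-1], paths[:, 1:]
for _ in range(200):                       # teacher-forced next-step regression
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
```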

Modeling human gaze behavior is a critical problem in computer vision, with significant implications for designing interactive systems that can anticipate a user’s attention. In medical imaging, particularly with chest X-rays, predicting scan paths plays a pivotal role in enhancing diagnostic accuracy and efficiency.

“Unlike previous computer vision efforts that focus on predicting scan paths based on specific objects or categories, our approach addresses a broader context of modeling scan path sequences for searching multiple abnormalities in chest X-ray images. Specifically, the key technical innovation of MedGaze is its capability to model fixation sequences that are an order of magnitude longer than those handled by current state-of-the-art methods,” Nguyen said.
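
To illustrate why long sequences are the hard part, here is a hypothetical continuation of the sketch above: an autoregressive rollout that feeds each predicted fixation back into the model to extend the scan path step by step. The rollout loop, starting point, and length are illustrative assumptions, not MedGaze’s actual inference procedure; prediction errors compound over long rollouts, which is part of what makes modeling fixation sequences of hundreds of steps difficult.

```python
# Continuing the toy NextFixationModel above: roll the predictor out
# autoregressively, appending each predicted fixation to the input sequence.
# The center-of-image start and the path length are assumptions.
import torch

def generate_scanpath(model, num_fixations: int = 100) -> torch.Tensor:
    path = torch.tensor([[[0.5, 0.5, 0.2]]])   # (batch=1, seq=1, (x, y, dur))
    model.eval()
    with torch.no_grad():
        for _ in range(num_fixations - 1):
            next_fix = model(path)[:, -1:, :]  # prediction at the last step
            path = torch.cat([path, next_fix], dim=1)
    return path.squeeze(0)                     # (num_fixations, 3)

scanpath = generate_scanpath(model, num_fixations=100)
```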

According to Nguyen, the methodology behind MedGaze is designed to be broadly applicable. While the current focus is on chest X-rays, future research will adapt this framework to model expert gaze and scanning behavior in other imaging modalities such as MRI and CT scans.

“This opens the door to a unified, AI-driven approach for understanding and replicating clinical expertise across the full spectrum of medical imaging,” he said.

The research team includes UH graduate students Akash Awasthi and Mai-Anh Vu, and professors Carol Wu and Rishi Agrawal from MD Anderson Cancer Center.
