
Lee's research focuses on decision-making preferences
August 9, 2021
By Stephen Greenwell
A new paper from Taewoo Lee, an assistant professor in the Industrial Engineering Department at the Cullen College of Engineering, examines decision-making preferences inferred from past decision data via a novel, data-driven inverse optimization method.

“Quantile Inverse Optimization: Improving Stability in Inverse Linear Programming” is scheduled to appear in an upcoming issue of Operations Research, which is published by the Institute for Operations Research and the Management Sciences (INFORMS). The preprint version has already been cited six times since its online publication in 2021.

“Operations Research is read not only by industrial engineering professionals, but perhaps more so by researchers in business schools and mathematicians,” Lee said. “The journal is on The Financial Times top 50 journal list of management and economics (also known as the FT50) and is considered one of the most prestigious journals in the field of operations research and management science (OR/MS), in which I do research. I hope this work might be of interest to a broader audience beyond IE.”

According to Lee, the paper is built on his previous research on inverse optimization and machine learning, using human decision data to infer decision models. His co-author for the paper is Zahed Shahmoradi, a recent UH doctoral graduate in Industrial Engineering, who is joining the University of Texas Health Science Center at Houston as a post-doctoral researcher.

Lee noted that with recent advances in machine learning and artificial intelligence and the growing availability of data, there are more opportunities to automate decision-making using data. However, doing so still typically relies on a human to set parameters and to decide which criteria are and are not important.

“This study focuses on learning decision-making preferences from past decision data—instead of directly predicting the decisions themselves—and using the learned preferences to derive automated decision models,” Lee said. “This is particularly important when the decision data have been generated by rational decision-makers who make decisions through some kind of logical yet autonomous decision-making processes. For example, consider learning the task of 'driving.' When we learn how to drive, instead of naively mimicking the turn-by-turn driving actions of the 'expert,' we consider many important criteria, such as not driving too fast, maintaining a reasonable distance from nearby cars, and so on. As we develop our own driving behaviors, each driver displays a unique trade-off between some of these criteria. Now, if we want to develop a driving decision support tool or even an autonomous vehicle that closely mimics a certain driver’s behavior, it is these latent preferences of this driver that we want to learn.”

Lee said the challenge is that human decisions are inherently noisy and inconsistent, which undermines the reliability of the preference-learning process. The new preference-learning method via inverse optimization accommodates any type of decision data and attempts to prune out outliers and errors in the data. Further, the method can adapt to changes in the decision-maker’s preferences over time.

“A novel contribution of this work is that it explicitly accounts for data noise and inconsistency, which is a common issue in data involving human decisions,” Lee said. “By integrating recent advances in robust machine learning techniques with the inverse optimization framework, this new method can automatically detect noise in the decision data, such as errors and outliers, while learning a decision model, and hence it remains stable under data imperfections.”
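To give a flavor of the idea, here is a minimal toy sketch in Python (not the paper's actual quantile inverse optimization formulation; all options, weights, and observations below are invented for illustration). A decision-maker picks, from a finite set of options, the one that maximizes a weighted sum of two criteria. The latent weight is recovered from observed choices, and aggregating the per-observation suboptimality losses by a quantile rather than a sum keeps a single outlier decision from skewing the estimate:

```python
import numpy as np

# Each row is an option's value on two criteria (f1, f2); the decision-maker
# is assumed to maximize w*f1 + (1-w)*f2 for some latent weight w.
options = np.array([[1.0, 0.0], [0.8, 0.5], [0.5, 0.8], [0.0, 1.0]])

def loss(w, chosen_idx):
    # Suboptimality of the chosen option under candidate weight w.
    scores = w * options[:, 0] + (1 - w) * options[:, 1]
    return scores.max() - scores[chosen_idx]

# Observed choices (indices into `options`); the last one is an outlier.
observations = [1, 1, 1, 3]

def fit(observations, q=0.5):
    # Grid search over candidate weights, aggregating losses by quantile q
    # (the median here) instead of a sum, which down-weights outliers.
    best_w, best_val = None, np.inf
    for w in np.linspace(0, 1, 101):
        val = np.quantile([loss(w, i) for i in observations], q)
        if val < best_val:
            best_w, best_val = w, val
    return best_w

w_hat = fit(observations)
```

Because the median ignores the one inconsistent observation, the recovered weight still makes the majority choice optimal; a plain sum of losses would be pulled toward the outlier.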

The paper applies this preference-learning method to a diet recommendation application, where past diet data from each individual are used to build a personalized, adaptive diet recommendation system that adheres to nutritional guidelines while remaining consistent with the user’s preferences. The method is also applied to a transportation optimization problem, where past decision data are used to estimate how each decision-maker perceives the cost of each transportation route, which in turn is used to predict their future decisions.
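As a hypothetical sketch of the diet use case (all foods, nutrient values, guideline bounds, and the "learned" cost vector below are invented for illustration), once a preference cost vector has been inferred, a personalized plan can be chosen by minimizing that cost subject to nutritional guidelines:

```python
import numpy as np

# Made-up foods with per-serving (calories, protein) values.
foods = ["oatmeal", "chicken", "salad", "pasta"]
nutrients = np.array([[150, 5], [200, 30], [80, 3], [220, 8]])

# Hypothetical learned "dislike" cost per serving (lower = preferred).
learned_cost = np.array([0.2, 0.5, 0.9, 0.1])

# Enumerate serving counts 0..3 per food and keep the cheapest plan
# (by learned cost) that meets the nutritional guidelines.
best_plan, best_cost = None, np.inf
for plan in np.ndindex(4, 4, 4, 4):
    plan = np.array(plan)
    cal, protein = plan @ nutrients
    if 500 <= cal <= 800 and protein >= 40:  # guideline bounds (invented)
        cost = plan @ learned_cost
        if cost < best_cost:
            best_plan, best_cost = plan, cost
```

Brute-force enumeration is only for clarity here; at realistic scale this selection step would be a (integer) linear program with the learned cost vector as its objective.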

Lee noted that the research builds off his prior work in cancer therapy optimization, and also continues an interest he first had while completing his graduate study.

“While my advisor and I were working on the optimal design of radiation therapy plans for cancer treatment, we noticed that the design process needed to capture complex trade-offs over different criteria, such as delivering sufficient radiation to the patient’s prostate while sparing the nearby healthy bladder and rectum. To address this, I developed one of the first inverse optimization techniques for cancer therapy, which quantifies such complex trade-off preferences from our collaborating cancer center’s past successful treatments. Following the success of this method in cancer therapy, I have been expanding this line of research toward many other applications, such as diet recommendation and disease screening.”
