Dilranjan Wickramasuriya and Hamid Fekri Azgomi, doctoral students of Rose Faghih, assistant professor of electrical engineering at the UH Cullen College of Engineering, made several presentations at two different IEEE (Institute of Electrical and Electronics Engineers) conferences in November.
Three of these presentations were projects from the “State-Space Estimation with Physiological Applications” course, designed and taught by Faghih, for which the two Ph.D. students served as teaching assistants. The class lets students work on a real-world biomedical engineering problem, applying the tools learned in class. The problem is broken into smaller parts, with several milestones set for the students during the semester.
Azgomi used the Cullen Travel Fellowship Grant he won to travel to California and present the papers at the 53rd IEEE Asilomar Conference on Signals, Systems and Computers. Faghih also organized an invited session on “Signal Processing Advances in Neural Modeling” at the conference.
Wickramasuriya presented work at the IEEE-EMB Special Topics Conference on Healthcare Innovations and Point-of-Care Technologies (HI-POCT) held this year at the National Institutes of Health (NIH) in Bethesda.
Attending both conferences provided valuable experiences for both students, said Faghih. “They were able to network with other researchers in the same field, get a flavor for some of the hot topics in different areas and share experiences with other faculty and students alike,” she said. “The HI-POCT conference in particular was an opportunity to see first-hand the perspectives of the clinical community on different research topics, which are quite different to the way engineers see things.”
Likewise, the Asilomar conference afforded the opportunity to gain insights into advanced signal processing research perspectives, Faghih added.
Here are brief descriptions of their research publications:
Real-Time Seizure State Tracking Using Two Channels: A Mixed-Filter Approach (M. B. Ahmadi, A. Craik, H. F. Azgomi, J. T. Francis, Jose L. Contreras-Vidal and R. T. Faghih) - 53rd IEEE Asilomar Conference on Signals, Systems, and Computers
Epilepsy affects several million people worldwide. Unfortunately, for many patients the condition is resistant to medication. As a result, much research has focused on the automated detection of epileptic seizures from electroencephalography (EEG) signals. EEG measures the electrical activity from networks of neurons firing within the brain. When an epileptic seizure occurs, the neurons fire abnormally. The occurrence of a seizure can be preceded by a pre-seizure phase and followed by a post-seizure phase.
In this research, the authors modeled a seizure state variable as being related to both a binary and continuous-valued EEG feature. The two features were chosen to maximize the chance of detecting seizures for each patient. By using a control-theoretic formulation and appropriate statistical tools, the seizure state was tracked using just these two features. The data was separated into different segments for training, validation and testing. Since the method estimates the occurrence of a seizure using a continuous-valued state variable, the intensity of the seizure could also be determined. The method could eventually be used to anticipate the occurrence of an epileptic seizure and apply corrective control before it happens.
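As an illustration of this kind of state-space tracking, the sketch below filters a hidden state from one continuous-valued (Gaussian) and one binary (Bernoulli) feature. The random-walk state model and all parameters here are illustrative assumptions, not the authors' actual formulation:

```python
import numpy as np

# Hypothetical mixed-filter sketch (illustrative parameters, not the paper's model):
# a hidden state x_k follows a random walk and is observed through a continuous
# Gaussian feature r_k and a binary Bernoulli feature n_k.
rng = np.random.default_rng(0)
T = 200
sigma_e = 0.05   # process noise std (assumed)
sigma_v = 0.3    # continuous-observation noise std (assumed)
gamma = 1.0      # continuous-observation gain (assumed)

# simulate a slowly varying hidden "seizure state" and the two features
x_true = np.cumsum(rng.normal(0, sigma_e, T))
r = gamma * x_true + rng.normal(0, sigma_v, T)   # continuous-valued feature
p = 1 / (1 + np.exp(-x_true))                    # probability of the binary feature
n = rng.binomial(1, p)                           # binary feature

# recursive filter: Gaussian predict step, then Newton iterations to find the
# mode of the posterior combining both observation likelihoods
x_est = np.zeros(T)
x_prev, v_prev = 0.0, 1.0
for k in range(T):
    x_pred = x_prev                 # random-walk prediction
    v_pred = v_prev + sigma_e**2
    x_post = x_pred
    for _ in range(10):             # Newton steps on the log-posterior
        p_k = 1 / (1 + np.exp(-x_post))
        grad = ((x_pred - x_post) / v_pred
                + gamma * (r[k] - gamma * x_post) / sigma_v**2
                + (n[k] - p_k))
        hess = -1 / v_pred - gamma**2 / sigma_v**2 - p_k * (1 - p_k)
        x_post -= grad / hess
    x_est[k] = x_post
    x_prev, v_prev = x_post, -1 / hess   # Laplace approximation of posterior variance

rmse = np.sqrt(np.mean((x_est - x_true) ** 2))
print(f"RMSE of state estimate: {rmse:.3f}")
```

Because the filtered state is continuous-valued, its magnitude can also serve as a measure of intensity, which is the property the article highlights.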
Emotional Valence Tracking and Classification via State-Space Analysis of Facial Electromyography (T. Yadav, M. M. U. Atique, H. F. Azgomi, J. T. Francis and R. T. Faghih) - 53rd IEEE Asilomar Conference on Signals, Systems, and Computers.
Human emotion can be categorized along two axes: valence and arousal. Valence denotes the pleasure-displeasure axis of emotion, while arousal captures the accompanying activation or excitement.
Changes in physiological signals accompany different emotions. For instance, subtle variations can occur in heart rate, breathing and facial muscles with emotion. In this research, the authors developed a model that related an internal unobserved emotional valence state to a binary and continuous-valued feature extracted from a facial electromyography (EMG) signal. An EMG signal captures the electrical activity associated with a particular muscle.
The method was tested on a dataset in which subjects were shown music videos to elicit different emotions. The music videos were chosen from a variety of genres for this purpose. The formulated model was able to accurately predict emotional valence across a number of trials. The method could eventually be used in a smart living space where different types of music are automatically played to a person depending on his or her emotions and mood.
The ability to recognize a person’s emotions automatically and correctly can have far-reaching impacts on long-term monitoring for patients with neuropsychiatric disorders such as Post-Traumatic Stress Disorder (PTSD) and major depression. Automated emotion recognition could also help develop the next generation of living spaces and learning environments that are sensitive to emotion and mood.
Emotion Recognition by Point Process Characterization of Heartbeat Dynamics (A. S. Ravindran, S. Nakagome, D. S. Wickramasuriya, J. L. Contreras-Vidal and R. T. Faghih) – IEEE-EMB Special Topics Conference on Healthcare Innovations and Point-of-Care Technologies (HI-POCT).
As noted above, the valence and arousal axes can be used to account for variations in human emotion. A third axis, known as dominance, relates to the degree of control that is felt.
In this work, the authors developed a method to classify high and low levels of valence, arousal and dominance based on heart rate variations alone. A person’s heart beats at about 72 beats per minute on average. This can be measured using electrocardiography (EKG) or by taking someone’s pulse. The heartbeats can be modeled as a stream of binary events in which a ‘1’ marks each heartbeat and a ‘0’ occurs elsewhere.
By modeling the inter-arrival times between the ‘1’s (i.e., the timings between heartbeats) as a binary point process, different features of heart rate were extracted, such as different heart rate statistics as well as how fast beat-to-beat changes were occurring (i.e., frequency-domain features).
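A few of the time-domain heart rate statistics mentioned above can be computed directly from the inter-beat intervals. The beat times below are made-up illustrative values (in practice they would come from R-peak detection on the EKG), and frequency-domain features would additionally require a spectral estimate over a longer recording:

```python
import numpy as np

# Hypothetical beat times in seconds (illustrative values only; real pipelines
# would obtain these from R-peak detection on an EKG recording).
beat_times = np.array([0.0, 0.81, 1.60, 2.42, 3.20, 4.05, 4.83, 5.66, 6.44, 7.30])
rr = np.diff(beat_times)                   # inter-beat (RR) intervals, seconds

mean_hr = 60.0 / rr.mean()                 # mean heart rate in beats per minute
sdnn = rr.std(ddof=1)                      # standard deviation of RR intervals
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) # root mean square of successive differences

print(f"mean HR = {mean_hr:.1f} bpm, "
      f"SDNN = {sdnn * 1000:.1f} ms, RMSSD = {rmssd * 1000:.1f} ms")
```

Feature vectors like these, one per analysis window, are what a downstream classifier would consume.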
The researchers classified these heart rate features into the different categories of emotion using state-of-the-art deep learning methodologies in a group of subjects who viewed a series of music videos meant to elicit different emotions.
In addition, the following papers were accepted to and presented at the same conferences. Again, Faghih served as the senior author.
Wearable Brain Machine Interface Architecture for Regulation of Energy in Hypercortisolism (H. F. Azgomi and R. T. Faghih) - 53rd IEEE Asilomar Conference on Signals, Systems, and Computers
Cortisol is the body’s main stress hormone. Its primary purpose is to raise blood glucose levels in response to external stressors. It is categorized among the class of hormones known as glucocorticoids. Disorders of cortisol typically involve the secretion of too much cortisol (hypercortisolism) or too little cortisol (hypocortisolism). Cushing’s disease is a type of hypercortisolism.
Cortisol secretion follows a 24-hour rhythm, known as a circadian rhythm. In this research, the authors used a control-theoretic model relating an unobserved energy state in the body to binary and continuous-valued blood cortisol measurements.
The researchers designed the control necessary to reinstate circadian rhythmicity in simulated blood cortisol measurements from patients with Cushing’s disease. The control design also accounted for the dynamics of the drugs commonly used in treatment. Based on the control signal, a drug dose for infusing cortisol in the morning and a similar dose for clearing cortisol at night were recommended. These suggested doses could help resolve daytime energy drops and nighttime sleeping difficulties in Cushing’s patients.
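The flavor of such a design can be sketched with a deliberately simplified model. The first-order clearance dynamics, half-life, target circadian profile, and inversion-based controller below are all illustrative assumptions, not the authors' model:

```python
import numpy as np

# Hypothetical first-order cortisol model: concentration x decays with clearance
# rate lam and rises with an infusion input u. All parameters are illustrative.
lam = np.log(2) / 66.0                    # clearance rate from an approx. 66-min half-life
dt = 1.0                                  # time step in minutes
t = np.arange(0.0, 24 * 60, dt)           # one simulated day

# target circadian profile (illustrative): peaks in the morning, dips at night
target = 10 + 8 * np.cos(2 * np.pi * (t / 60 - 8) / 24)

x = np.zeros_like(t)                      # simulated cortisol concentration
u = np.zeros_like(t)                      # infusion rate (control signal)
x[0] = target[0]
for k in range(len(t) - 1):
    # invert the discretized model x[k+1] = x[k] + dt*(-lam*x[k] + u[k]) to get
    # the infusion rate that tracks the target; clip at zero because an
    # infusion alone cannot remove cortisol (that would need a clearing drug)
    u[k] = max(0.0, (target[k + 1] - x[k]) / dt + lam * x[k])
    x[k + 1] = x[k] + dt * (-lam * x[k] + u[k])

print(f"peak infusion rate {u.max():.3f} at hour {t[u.argmax()] / 60:.1f}")
```

Integrating the control signal over the morning hours would yield a total morning dose, which is the kind of dosing recommendation the article describes.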
Facial Expression-Based Emotion Classification Using Electrocardiogram and Respiration Signals (D. S. Wickramasuriya, M. K. Tessmer and R. T. Faghih) - IEEE-EMB Special Topics Conference on Healthcare Innovations and Point-of-Care Technologies (HI-POCT).
Many methods have been developed to automatically recognize emotion from different physiological signals. A number of these methods rely on neural signal recordings that are relatively inconvenient to monitor in the long-term. Moreover, in a number of studies, the subjects self-report their emotions on different scales. Unfortunately, due to inter-subject variability, not all the subjects use the same scale in a consistent manner.
Facial expressions provide a more reliable ground truth for a subject’s emotions. In this work, the authors investigated whether the extremes of emotional valence (as labeled using facial expressions) could be captured using simple heart rate and breathing measurements. Based on features extracted from heart rate and breathing at points where subjects laughed or visibly displayed aversion to the movie clips shown to them, an accurate classification of high and low valence proved possible.
This last work began as a National Science Foundation REU (Research Experience for Undergraduates) project during the summer of 2018 and expanded into a conference paper.
All these publications are based on research funded in part by the NSF.