Eunjung Yeo 여은정 /ɨnd͡zʌŋ jʌ/
Let's build a world where everyone can share their message!
Gates-Dell Complex
2317 Speedway
Austin, TX 78712
Welcome!
My name is Eunjung Yeo. I am currently a visiting scholar at the SALT Lab in the Department of Computer Science at the University of Texas at Austin, advised by Prof. David Harwath. I received my M.A. and Ph.D. in Linguistics from Seoul National University, where I worked in the Spoken Language Processing Lab. Before that, I completed my bachelor's degree in Korean Language and Literature, with a double major in Psychology, at Yonsei University.
I was previously a visiting scholar at the ChangeLing Lab in the Language Technologies Institute at Carnegie Mellon University, advised by Prof. David Mortensen, with whom I continue to collaborate. I also work closely with Prof. Julie Liss and Prof. Visar Berisha at Arizona State University.
My research centers on computational phonetics and phonology, with a particular focus on clinical speech analysis and its applications. I am passionate about building tools and insights that make it possible for everyone, regardless of speech or language differences, to share their message. I believe that linguistic knowledge, combined with computational methods, can open new pathways toward inclusive communication.
news
| Date | News |
|---|---|
| Dec 23, 2025 | Our team has been awarded a $30k grant from the Texas Health Catalyst! |
| Oct 16, 2025 | Our proposal for the Texas Health Catalyst has been selected as a finalist for the Fall 2025 cycle! 🤠 |
| Oct 11, 2025 | Our proposal for the UT Proof of Concept Awards has been selected as a finalist for the Fall 2025 cycle! 🤘 |
| Jul 28, 2025 | Our paper on cross-language intelligibility assessment leveraging AI has been accepted to Perspectives of the ASHA SIG 19! 🚀 |
| Jun 20, 2025 | I have started my new academic journey as a Visiting Scholar in the Department of Computer Science at the University of Texas at Austin! 🤠🤘 |
selected publications
- Applications of Artificial Intelligence for Cross-language Intelligibility Assessment of Dysarthric Speech. Perspectives of the ASHA SIG 19, 2025.
- Towards Inclusive ASR: Investigating Voice Conversion for Dysarthric Speech Recognition in Low-Resource Languages. In Interspeech, 2025.
- Leveraging Allophony in Self-Supervised Speech Models for Atypical Pronunciation Assessment. In NAACL, 2025.
- Speech Intelligibility Assessment of Dysarthric Speech by using Goodness of Pronunciation with Uncertainty Quantification. In Interspeech, 2023.
- Automatic severity classification of dysarthric speech by using self-supervised model with multi-task learning. In ICASSP, 2023.
- Comparison of L2 Korean pronunciation error patterns from five L1 backgrounds by using automatic phonetic transcription. In ICPhS, 2023.
- Cross-lingual Dysarthria Severity Classification for English, Korean, and Tamil. In APSIPA ASC, 2022.