Eunjung Yeo 여은정 /ɨnd͡zʌŋ jʌ/

Let's make a world where everyone can share their message with the world!


Gates-Dell Complex

2317 Speedway

Austin, TX 78712

Welcome!

My name is Eunjung Yeo. I am currently a visiting scholar at SALT Lab, in the Department of Computer Science at the University of Texas at Austin, advised by Prof. David Harwath. I received my M.A. and Ph.D. in Linguistics from Seoul National University, where I worked in the Spoken Language Processing Lab. Before that, I completed my bachelor’s degree in Korean Language and Literature, with a double major in Psychology, at Yonsei University.

I was previously a visiting scholar at ChangeLing Lab in the Language Technologies Institute at Carnegie Mellon University, advised by Prof. David Mortensen, with whom I maintain strong research collaborations. I also collaborate closely with Prof. Julie Liss and Prof. Visar Berisha at Arizona State University.

My research focuses on computational phonetics and phonology, with particular emphasis on clinical speech analysis and its applications. I am passionate about developing tools and insights that make it possible for everyone—regardless of speech or language differences—to share their messages. I believe that linguistic knowledge, combined with computational methods, can open new pathways toward inclusive communication.

news

Jul 28, 2025 Our paper on cross-language intelligibility assessment leveraging AI has been accepted to Perspectives of the ASHA SIG 19! 🚀
Jun 20, 2025 I have started my new academic journey as a Visiting Scholar in the Department of Computer Science at the University of Texas at Austin! 🤠
May 20, 2025 Two of our papers have been accepted to Interspeech 2025! [1][2]🌟
Jan 22, 2025 Our paper has been accepted to NAACL (main)! 🎉
Jan 22, 2025 Dynamic-SUPERB Phase-2 has been accepted to ICLR 2025! 🎯

selected publications

  1. ASHA SIG 19
    Applications of Artificial Intelligence for Cross-language Intelligibility Assessment of Dysarthric Speech
    Eunjung Yeo, Julie Liss, Visar Berisha, and 1 more author
    Perspectives of the ASHA SIG 19, 2025
  2. Interspeech
    Towards Inclusive ASR: Investigating Voice Conversion for Dysarthric Speech Recognition in Low-Resource Languages
    Chin-Jou Li, Eunjung Yeo, Kwanghee Choi, and 7 more authors
    In Interspeech, 2025
  3. NAACL main
    Leveraging Allophony in Self-Supervised Speech Models for Atypical Pronunciation Assessment
    Kwanghee Choi, Eunjung Yeo, Kalvin Chang, and 2 more authors
    In NAACL, 2025
  4. Interspeech
    Speech Intelligibility Assessment of Dysarthric Speech by using Goodness of Pronunciation with Uncertainty Quantification
    Eunjung Yeo*, Kwanghee Choi*, Sunhee Kim, and 1 more author
    In Interspeech, 2023
  5. ICASSP
    Automatic severity classification of dysarthric speech by using self-supervised model with multi-task learning
    Eunjung Yeo*, Kwanghee Choi*, Sunhee Kim, and 1 more author
    In ICASSP, 2023
  6. ICPhS
    Comparison of L2 Korean pronunciation error patterns from five L1 backgrounds by using automatic phonetic transcription
    Eunjung Yeo*, Hyungshin Ryu*, Jooyoung Lee, and 2 more authors
    In ICPhS, 2023
  7. APSIPA ASC
    Cross-lingual Dysarthria Severity Classification for English, Korean, and Tamil
    Eunjung Yeo, Kwanghee Choi, Sunhee Kim, and 1 more author
    In APSIPA ASC, 2022