Cross-Language Intelligibility Assessment

Intelligibility assessment applicable across languages

Since the beginning of my Ph.D. journey, I have dedicated my research to advancing cross-language intelligibility assessment for dysarthric speech. Despite significant strides in speech AI, much of the technology remains heavily biased toward English, leaving speakers of low-resource languages underserved. By developing cross-language intelligibility assessment tools, I aim to bridge this gap, ensuring equitable access to speech AI for individuals with speech pathologies across diverse linguistic backgrounds.

I am currently continuing this research advised by Prof. David Mortensen at CMU LTI, Prof. Julie Liss at Arizona State University’s College of Health Solutions, and Prof. Visar Berisha, who holds joint appointments at ASU’s College of Engineering and College of Health Solutions.

In (Hernandez et al., 2020), I helped my colleague Abner Hernandez perform a comparative analysis of rhythm-based metrics between English and Korean dysarthric speakers.

In (Yeo et al., 2022, Oriental COCOSDA), we expanded the feature set with phoneme-level pronunciation features and added Tamil dysarthric speech to increase linguistic diversity.

In (Yeo et al., 2022, APSIPA ASC), we proposed a classification method that integrates both language-independent and language-dependent features, providing a more robust framework for cross-language intelligibility assessment.
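The paper's exact model and features are not reproduced here; as a minimal sketch of the general early-fusion idea, the two feature sets can be standardized and concatenated before classification. The feature names and the toy nearest-centroid classifier below are illustrative placeholders, not the method from the paper:

```python
import numpy as np

def fuse_features(x_indep, x_dep):
    """Early fusion: z-score each feature set, then concatenate.
    x_indep: language-independent features (e.g., acoustic/voice quality)
    x_dep:   language-dependent features (e.g., phoneme-level pronunciation)
    Both are (n_samples, n_features) arrays; the names are illustrative."""
    def zscore(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return np.concatenate([zscore(x_indep), zscore(x_dep)], axis=1)

def nearest_centroid_fit(X, y):
    """Toy stand-in classifier: one centroid per severity class."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(X, classes, centroids):
    """Assign each sample to the class with the closest centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]
```

Standardizing each feature set separately before concatenation keeps one set from dominating the distance computation purely because of its scale.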

In (Yeo* et al., 2023), we introduced an improved Goodness of Pronunciation (GoP) score designed specifically for pathological speech analysis. By utilizing the Common Phone dataset, this approach demonstrated applicability across multiple languages. Furthermore, we examined the relative importance of individual phonemes in intelligibility scoring. The significance of specific phonemes varies across languages, which highlights the need for language-sensitive intelligibility metrics.
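For context, the standard GoP formulation (not the paper's improved variant) scores a phone segment by the acoustic model's average log posterior of the canonical phone over its force-aligned frames. A minimal sketch, assuming frame-level log posteriors and alignments have already been computed:

```python
import numpy as np

def goodness_of_pronunciation(log_posteriors, segment, canonical_phone):
    """Standard GoP: mean log posterior of the canonical phone over
    its force-aligned frames.
    log_posteriors: (T, P) frame-level log phone posteriors from an
        acoustic model (assumed precomputed)
    segment: (start, end) frame indices from forced alignment
    canonical_phone: index of the phone the speaker intended
    Higher (less negative) values indicate better pronunciation."""
    start, end = segment
    return log_posteriors[start:end, canonical_phone].mean()
```

When the realized phone matches the canonical one, its posterior dominates the aligned frames and the score is high; substitutions and distortions, common in dysarthric speech, pull the score down.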

References

2023

  1. Interspeech
    Speech Intelligibility Assessment of Dysarthric Speech by using Goodness of Pronunciation with Uncertainty Quantification
    Eunjung Yeo*, Kwanghee Choi*, Sunhee Kim, and 1 more author
    In Interspeech, 2023

2022

  1. Oriental COCOSDA
    Multilingual analysis of intelligibility classification using English, Korean, and Tamil dysarthric speech datasets
    Eunjung Yeo, Sunhee Kim, and Minhwa Chung
    In Oriental COCOSDA, 2022
  2. APSIPA ASC
    Cross-lingual Dysarthria Severity Classification for English, Korean, and Tamil
    Eunjung Yeo, Kwanghee Choi, Sunhee Kim, and 1 more author
    In APSIPA ASC, 2022

2020

  1. Interspeech
    Dysarthria Detection and Severity Assessment Using Rhythm-Based Metrics
    Abner Hernandez, Eunjung Yeo, Sunhee Kim, and 1 more author
    In Interspeech, 2020