Rachael Tatman

Leveraging insights from human speech perception for speaker dialect adaptation

Adjusting to a talker's dialect is easy for humans, but accent adaptation continues to limit automatic speech recognition. One key difference is that humans make use of more than just the acoustic signal; social cues as subtle as a sweatshirt or stuffed animal can drastically impact how we hear speech sounds. I will present a behavioral model showing that including social information about the speaker can result in much more human-like classification. This is of particular interest for virtual assistants designed for single speakers, which could improve out-of-the-box performance by leveraging demographic information.
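As a rough illustration of the idea in the abstract (not the model presented in the talk), the Python sketch below conditions a sound-category classifier on a social cue by concatenating a one-hot demographic feature with the acoustic features. All data, category names, and feature choices here are invented for the example.

```python
# Illustrative sketch only: conditioning a classifier on social information
# about the talker, alongside the acoustic signal. Everything below is
# hypothetical and exists only to show the general technique.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: 200 vowel tokens, each with 13 MFCC-like acoustic features.
acoustic = rng.normal(size=(200, 13))

# Hypothetical social cue about each talker (e.g., apparent dialect region),
# one-hot encoded across three made-up categories.
dialect_idx = rng.integers(0, 3, size=200)
social = np.eye(3)[dialect_idx]

# Hypothetical labels: which vowel category a listener reported hearing.
labels = rng.integers(0, 2, size=200)

# Acoustic-only baseline classifier.
baseline = LogisticRegression(max_iter=1000).fit(acoustic, labels)

# Classifier that also sees the social cue -- the analogue of a listener
# shifting category boundaries based on who they think is talking.
combined = LogisticRegression(max_iter=1000).fit(
    np.hstack([acoustic, social]), labels
)
```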

Rachael is a PhD Candidate and National Science Foundation Graduate Research Fellow in the Department of Linguistics at the University of Washington. She investigates the sublexical units of language across modalities using experimental, statistical, and computational methods. Her training and research have mainly been in phonetics, phonology, sociolinguistics, and sign linguistics.
