Using Custom Model Ensembles to Improve Multi-domain Conversations
Conversations often cover multiple domains. Multilingual exchanges, sales calls that shift into data entry, and conversational AI handling a multi-use-case flow are just a few examples where a single conversation can swing between very different domains. Customizing a single model helps, but an ensemble of targeted models can perform even better.
• What it takes to train several models
• Simple code for combining several models
• Results comparing a single trained model against an ensemble of trained models
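To make the ensemble idea concrete, here is a minimal sketch of one common way to combine several domain models: run each one and keep the most confident hypothesis. The model interface, the toy domain models, and the confidence heuristic are all assumptions for illustration, not Deepgram's actual API or the code shown in the talk.

```python
from typing import Callable, List, Tuple

# Assumed interface: each model maps an input (a stand-in string here,
# audio in a real system) to a (transcript, confidence) pair.
Model = Callable[[str], Tuple[str, float]]

def ensemble_transcribe(audio: str, models: List[Model]) -> str:
    """Run every domain model and keep the most confident transcript."""
    results = [model(audio) for model in models]
    best_transcript, _best_conf = max(results, key=lambda r: r[1])
    return best_transcript

# Toy stand-ins for domain-targeted models: each reports high confidence
# only when the input looks like its domain.
def medical_model(audio: str) -> Tuple[str, float]:
    confidence = 0.9 if "patient" in audio else 0.2
    return (f"[medical] {audio}", confidence)

def sales_model(audio: str) -> Tuple[str, float]:
    confidence = 0.9 if "order" in audio else 0.3
    return (f"[sales] {audio}", confidence)

print(ensemble_transcribe("confirm the order total", [medical_model, sales_model]))
```

Production ensembles use subtler combination schemes (per-word voting, lattice rescoring, or a learned router), but selection by per-utterance confidence is the simplest baseline to compare against a single model.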
Jeff "Susan" Ward is a Research Engineer at Deepgram, where for the past four years he has built and refined automatic speech recognition systems. His work focuses on automating the entire training pipeline to enable rapid customization across a variety of ASR use cases. He also has experience in automatic alignment, transcript cleaning, large-scale data management, automated training, and model design. Before joining Deepgram, Susan earned his master's from the University of Edinburgh and his pilot wings from the US Navy.