The Future of Voice Computing is in the Ear
With the proliferation of voice assistants (e.g. Apple's Siri, Google Now, Amazon's Alexa, Microsoft's Cortana) and voice assistant speakers (e.g. Amazon's Echo, Google's Home), people are realizing that "voice is the next frontier of computing". Voice allows for efficient, hands-free communication. However, one of the biggest problems facing voice assistants is that they work well in quiet environments but poorly in noisy ones (e.g. factories, hospitals, construction sites). A device situated in the ear helps to solve this problem.

Kinuko will talk through a complete end-to-end solution for voice-enabling enterprise messaging applications. This platform comprises a tightly integrated state-of-the-art ear device, an "eOS" (ear Operating System), and an AI engine, which together allow SmartEar to control the full user experience with its smart assistant. The ear device can be worn comfortably all day long because it uses technology originally designed and developed for completely-in-canal hearing aids. The eOS will eventually allow any third-party application to provide a completely hands-free, always-on, voice-only user interface. SmartEar has also developed its own proprietary software framework for a natural language dialogue engine built on deep learning. Unlike any existing voice assistant product on the market, this engine understands the user's intent from their utterances and uses context and the audio environment to track that intent across multiple dialogue turns.
Kinuko Masaki is the CEO and co-founder of SmartEar Inc. Dr. Masaki earned bachelor's and master's degrees in EECS from MIT and a Ph.D. in biomedical engineering from Harvard/MIT, and completed postdoctoral training in stem cell engineering at Stanford Medical School. After leaving academia, she led R&D efforts at Advanced Bionics and EarLens and created intellectual property for several companies seeking to further hearing and audio technology. She believes the ear will house the next computing platform and that the time is ripe for the integration of AI, speech recognition, and hearing technologies.