Welcome to a New Era of Machine Emotional Intelligence

By Sophie Curtis on March 23, 2015

Emotions are hardwired into our brains at birth and manifest as facial expressions. How can machines be taught to read emotion as efficiently as humans?

Eyeris develops deep-learning-based vision technology that enables everyday devices to understand how we feel, who we are and how we perceive the environment around us. Its holistic facial expression recognition methodology imitates human vision, allowing its algorithm to learn prototypic expressions directly from the face instead of relying on decomposed action units.

By bridging the gap between emotion recognition, face recognition, age and gender identification, eye tracking, gaze estimation and everything else in between, Eyeris’s software provides incredibly comprehensive analytics extracted from a human face.

Modar Alaoui, founder and CEO of Eyeris, is a serial entrepreneur and expert in AI-based vision software development. At the Deep Learning Summit in Boston, Modar will reveal how convolutional neural networks (CNNs) can be used as a deep learning architecture to build a facial expression recognition vocabulary. His presentation will cover how this new approach allows vision software algorithms to read micro-expressions in real time with a high level of accuracy, speed and customization. He will also highlight a number of industry verticals that already benefit from integrating emotion recognition technology into their commercial applications to amplify context awareness and, in turn, enhance users' experiences through better ambient intelligence.

We caught up with Modar ahead of the summit to hear his thoughts on the latest advancements in deep learning:

What are the key factors that have enabled recent advancements in deep learning?

The combination of recent advancements in microprocessors, the commoditization of GPU-based supercomputers, and today's massive datasets available for training algorithms, both supervised and unsupervised, has clearly enabled the (r)evolution of deep learning.

As is the case with most emerging technologies, those who leverage the aforementioned enablers ride the early waves of deep learning. Not only do they get to discover its early challenges, but they also create great disruption by solving those challenges to the benefit of their respective technologies, whether in speech, text or image recognition.

In the image analysis area, we come at it from personal experience, which allows us to use deep learning as a medium to continuously push the boundaries of facial expression recognition.

What are the main types of problems now being addressed in the deep learning space?

Image recognition, speech recognition and text recognition are three areas where deep learning is being applied today.

Popular deep learning architectures such as convolutional neural networks particularly address image and speech recognition applications. CNNs tend to be easier to train than other regular, deep, feed-forward neural networks since they can be trained with standard backpropagation. They also have many fewer parameters to estimate, making them a highly attractive architecture for image analysis, especially in our case of emotion tracking through facial micro-expression recognition.
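The "fewer parameters" point comes from weight sharing: a convolutional kernel is reused at every spatial position, whereas a dense layer learns a separate weight for every input-output pair. A rough back-of-the-envelope sketch (our illustration with hypothetical layer sizes, not Eyeris's code) makes the gap concrete:

```python
# Compare trainable parameter counts for a convolutional layer vs. a fully
# connected layer over the same input. Sizes are hypothetical examples.

H, W, C_in = 64, 64, 1      # a small grayscale face crop
C_out, K = 32, 3            # 32 feature maps, 3x3 kernels

# Convolutional layer: one K x K x C_in kernel (plus bias) per output map,
# shared across all spatial positions.
conv_params = C_out * (K * K * C_in) + C_out          # kernels + biases

# Fully connected layer producing the same number of output units
# (one per position of each output map, assuming 'same' padding).
fc_out = H * W * C_out
fc_params = (H * W * C_in) * fc_out + fc_out          # dense weights + biases

print(f"conv layer parameters:  {conv_params:,}")     # 320
print(f"dense layer parameters: {fc_params:,}")       # 537,001,984
```

Six orders of magnitude fewer parameters for the convolutional layer, which is precisely what makes CNNs tractable to train on images.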

What are the practical applications of your work and what sectors are most likely to be affected?

While there are a number of applications that can benefit from emotion recognition today, we have purposely chosen our industry verticals to solve harder problems by leveraging unique technology differentiators, including the integration of Deep Learning architectures into our expression recognition algorithms for continuous and improved learning in relatively short timeframes.

Our mission to advance Ambient Intelligence (AmI) allows us to enable a new era of Human Machine Interaction (HMI), in which embedded systems, including everyday devices and machines, can understand and predict users' emotions and respond accordingly in time-critical situations to enhance user experiences.

Predictability and improved accuracy through rapid adaptation are key areas that affect user and environment personalization and delivery.

What developments can we expect to see in deep learning in the next 5 years?

While there are a large number of variants of deep architectures, most remain branches of a few original parent architectures. Since these architectures are not all implemented on the same datasets, it is not always possible today to compare their performance directly.

Deep learning, however, is a fast-growing field, so new architectures, variants and algorithms are expected to keep branching out, each targeting either a broad set of problems or a specific one in its respective area. Industries like healthcare, for drug discovery and toxicology, or automotive, for scene recognition and camera view interpretation, are all poised for further development with deep learning architectures in the coming years.

What advancements excite you most in the field?

Deep learning is a tool that allows algorithms to be trained through one of its architectures using supervised (labeled) data, unsupervised (unlabeled) data or reinforcement learning. In each case, the data, in both quality and quantity, represents the “raw material” that, via the deep learning “tool”, enables algorithm training and is often a crucial determinant of accuracy.
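The supervised case can be sketched with a toy example (our illustration, assuming nothing about Eyeris's actual models): labeled pairs are the raw material, and gradient descent is the tool that fits a model parameter to them.

```python
# Minimal supervised-learning sketch: fit y = w * x to labeled pairs
# by gradient descent on the squared error.

data = [(x, 2.0 * x) for x in range(1, 6)]   # labeled pairs; true w is 2.0

w = 0.0                                      # model parameter to learn
lr = 0.01                                    # learning rate
for _ in range(200):                         # training epochs
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x            # d/dw of (pred - y)^2
        w -= lr * grad

print(round(w, 3))                           # converges toward 2.0
```

The same loop scaled up to millions of labeled images and millions of parameters is, conceptually, what training a deep network involves, which is why the quantity and quality of the data so strongly determine the resulting accuracy.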

Both the large number of datasets available today and those being implicitly amassed by companies of all sizes, from startups to large corporations, are shaping the future of deep learning. Being part of and contributing to that future with our own proprietary datasets and artificially intelligent algorithms is certainly exciting.

This raw-material data, together with the deep learning tool, is enabling the “fruit” of advanced decision-making algorithms, some of which, including our technology, already outperform humans in logic, speed and overall performance. And this is what excites us the most.

The Deep Learning Summit is taking place in Boston on 26-27 May. For more information and to register, please visit the event website.
Join the conversation with the event hashtag #reworkDL


