Helping Machines Understand the World to Connect People and Things

By Sophie Curtis on January 16, 2015


Loop AI Labs builds the Digital Genome: a deep profile of any person or any thing, generated on the fly. By combining deep learning techniques with classical symbolic reasoning, the company creates a machine intelligence that continually learns about people and things in the world, acquiring new concepts from experience the way a person would. This technology was recently released as an API that enables developers to build the next generation of personalized apps and services without needing AI experience.

Patrick Ehlen, PhD, is a cognitive scientist and Head of Deep Learning at Loop AI Labs. He specializes in representation learning for semantics, pragmatics, and concept acquisition. We caught up with him ahead of his participation at the Deep Learning Summit, to hear his thoughts on the future of deep learning and artificial intelligence technologies.

What do you feel are the leading factors enabling recent advancements in deep learning?

The obvious “new” things are faster processors, more data, and better weight initialization for deep networks, along with a lot of optimizations. More interestingly, many old methods (by “old” I mean 1990s) are making a comeback and being used in new ways that are surprisingly effective. One such method is the use of Long Short-Term Memory (LSTM) recurrent networks for language understanding and machine translation; I see a lot of renewed promise there. Another is the family of sparsity-enforcement methods that let you use a very large number of parameters. Dropout, for instance, is not exactly a new method and certainly has its theoretical detractors, but it has proved quite useful in solving problems and winning competitions.
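
To make the dropout idea concrete, here is a minimal NumPy sketch of inverted dropout (an illustration of the general technique, not any specific published implementation):

```python
import numpy as np

def dropout(activations, keep_prob=0.5, train=True):
    """Inverted dropout: randomly zero hidden units during training,
    rescaling the survivors so the expected activation is unchanged."""
    if not train:
        return activations  # the full network is used at test time
    mask = np.random.rand(*activations.shape) < keep_prob
    return activations * mask / keep_prob

# Toy hidden-layer activations
h = np.array([0.2, 1.5, -0.3, 0.8])
print(dropout(h, keep_prob=0.5))  # roughly half the units are zeroed
```

Because each training pass samples a different thinned network, dropout acts as a cheap ensemble over exponentially many sub-networks, which is one intuition for why it helps with very large parameter counts.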

Which industries do you think will be disrupted by deep learning in the future? And how?

The real legacy of deep learning will be to let us move beyond today’s primarily classification-oriented machine learning tasks to a world where we have much better command of high-dimensional state spaces in which dependencies among data points and features are far less straightforward than in most current analytical problems. Language understanding and concept learning are prime examples, and I would point to these as essential to future progress in my own field. But this will soon extend to medicine and the hard sciences, and to anything with messy, high-dimensional spaces that you want to make sense of but that don’t yield cleanly to standard dimensionality reduction techniques like PCA.
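
To see why PCA falls short on such spaces, here is a toy illustration of our own (not from the interview): PCA is linear, so data with a nonlinear dependency between features can be intrinsically low-dimensional yet incompressible by PCA.

```python
import numpy as np

# Points on a circle: intrinsically one-dimensional (the angle),
# but the dependency between the two features is nonlinear.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])
X -= X.mean(axis=0)

# PCA reduces to the covariance eigendecomposition; here both
# eigenvalues come out near 0.5, so no single linear projection
# captures structure a nonlinear method could recover.
print(np.linalg.eigvalsh(np.cov(X, rowvar=False)))
```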

What is currently being developed in your field that will be essential to future progress?

When it comes to language processing, unsupervised learning of high-dimensional embedding spaces has provided effective solutions to some tough problems, like speech recognition and machine translation. But a giant problem for which we still don’t have a good solution is handling compositionality in language along with these continuous representations. For example, I might have continuous representations of “John” and “sharks” and “eating,” but how do I represent the meaning of “John eats sharks” in a way that is distinct from “Sharks eat John”? Various methods have been proposed that involve matrix arithmetic. Another promising avenue comes from Oxford folks using Lambek’s pregroup calculus. Aside from this potential advance, I would also add that we still lack a consistently tractable approach to sequence learning, and to learning networks of association in continuous spaces.
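
A deliberately simple NumPy sketch (a toy of our own construction, not any particular published model) shows why plain additive composition fails here, and how treating the verb as a matrix, in the spirit of the matrix-arithmetic approaches mentioned above, restores order sensitivity:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
john = rng.normal(size=dim)
sharks = rng.normal(size=dim)
eats = rng.normal(size=dim)

# Additive composition is order-blind: both sentences collapse to
# the same point in the embedding space.
s1 = john + eats + sharks      # "John eats sharks"
s2 = sharks + eats + john      # "Sharks eat John"
print(np.allclose(s1, s2))     # True: the two meanings are conflated

# One matrix-arithmetic fix: represent the verb as a matrix applied
# to subject and object, so argument order matters. (Reducing a
# sentence to a scalar is a gross simplification, but it exposes
# the order sensitivity.)
eats_matrix = rng.normal(size=(dim, dim))
t1 = john @ eats_matrix @ sharks    # subject * verb * object
t2 = sharks @ eats_matrix @ john
print(np.isclose(t1, t2))           # False: the sentences now differ
```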

Which areas do you feel could benefit from cross-industry collaboration?

Companies throughout the world, in almost every industry, have loads of unorganized and unanalyzed data, much of it in text form, sitting on desktop machines and in data centers. Even when these data are analyzed, as in medicine, finance, and law, it is by teams of research experts, which is costly and time-consuming. But there is a barrier to entry in many of these industries: the short-term cost of disrupting business by changing the status quo is too great, and too much is at stake for companies to take risks. Our science will slowly make inroads into these areas, but it will take time and many small steps.

What developments can we expect to see in deep learning in the next 5 years? 

If you listen to some people, we may be cowering in caves hiding from our superintelligent creations that now want to exterminate us! More realistically, aside from the three promising areas I listed above (compositionality, sequences, and association networks), we will see much wider application of deep learning to existing data sources. Deep learning can appear almost magical in the way it uncovers meaningful patterns in raw data. So far, however, little has been done to connect this power with the many crowd-sourced knowledge bases developed over the last decade. We’ve started down this path, and others will surely join. Turning these data sources into generalized feature spaces will significantly expand the range of problems to which deep learning can be applied.

What advancements excite you most in this field? 

I am most excited by the prospect of developing machine intelligence that understands the world the way humans do. Right now, machines are so far from understanding what we humans are actually about that the results are often comical. But that is about to change in a big way. I don’t mean the recent clickbait about “Facebook understands you better than your spouse”, which is obvious rubbish. But the data and methods now exist for creating systems that can model and understand our world far better than ever before, and that is very exciting to me. This advance will finally let us move beyond the one-size-fits-all interfaces and advertisements we endure today, along with myriad other applications that will change the world as we know it.

What do you see in the future for Loop AI Labs?

At Loop AI Labs, we’ll soon be releasing a new API that will enable other companies to understand their customers, employees, products, records, and other “things” that are crucial to their business. The API will take unstructured text, learn a conceptual model that is specific to that data, and output feature vectors that can be used as a deep profile. This Thing Digital Genome API is unique in two ways. First, its conceptual model is tailored to the domain of the data, rather than trying to fit data to some existing, general domain. Second, the API can be used effectively without a data science or AI background. We’ve already seen a lot of interest in this API, so we’re excited to get it out and see what people do with it.
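
The API itself is not shown here, so as a purely hypothetical sketch (the vectors, names, and numbers below are invented for illustration), this is the kind of downstream use such deep-profile feature vectors could support:

```python
import numpy as np

# Hypothetical output of a profiling API: one feature vector per
# profiled "thing" (customers here). All values are invented.
profiles = {
    "customer_a": np.array([0.12, 0.80, 0.05, 0.40]),
    "customer_b": np.array([0.10, 0.75, 0.07, 0.44]),
    "customer_c": np.array([0.90, 0.02, 0.60, 0.01]),
}

def cosine(u, v):
    """Cosine similarity: how closely two deep profiles align."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Nearby vectors suggest similar concepts learned from each
# customer's unstructured text; distant ones suggest different needs.
print(cosine(profiles["customer_a"], profiles["customer_b"]))  # high
print(cosine(profiles["customer_a"], profiles["customer_c"]))  # low
```

Once profiles live in a common feature space like this, standard tools (clustering, nearest-neighbor search, recommendation) apply directly, which is presumably what makes such an API usable without a data science background.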

The Deep Learning Summit is taking place in San Francisco on 29-30 January. You can get more information and register here.


