Forough Arabshahi

Neuro-Symbolic Learning Algorithms for Automated Reasoning

Humans possess impressive problem-solving and reasoning capabilities, be it mathematical, logical, or commonsense reasoning. Computer scientists have long dreamed of building machines with similar reasoning and problem-solving abilities. Currently, there are three main challenges in realizing this dream. First, the system should be able to extrapolate in a zero-shot way, reasoning in scenarios that are much harder than those it has seen before. Second, the system's decisions and actions should be interpretable, so that humans can easily verify whether the decisions stem from genuine reasoning skills or from artifacts and sparsity in the data. Finally, even when the decisions are easily interpretable, the system should give the user an efficient way to teach it the correct reasoning whenever it makes an incorrect decision. We discuss how to address these challenges using instructable neuro-symbolic reasoning systems. Neuro-symbolic systems bridge the gap between two major directions in artificial intelligence research: symbolic systems and neural networks. We will see how these hybrid models exploit the interpretability of symbolic systems to achieve explainability. Moreover, combined with our developed neural networks, they extrapolate to harder reasoning problems. Finally, these systems can be instructed directly by humans in natural language, resulting in sample-efficient learning in data-sparse scenarios.

Forough Arabshahi is a Senior Research Scientist at Meta Platforms (Facebook), Inc. Her research focuses on developing sample-efficient, robust, explainable, and instructable machine learning algorithms for automated reasoning. Prior to joining Meta, she was a postdoctoral researcher working with Tom Mitchell at Carnegie Mellon University, where she developed an explainable neuro-symbolic commonsense reasoning engine for the Learning by Instruction Agent (LIA). During her PhD, advised by Animashree Anandkumar and Sameer Singh, she developed sample-efficient, provably consistent latent-variable graphical models, as well as deep learning models that extrapolate to harder problems by extracting hierarchical structures from data. The grand goal of her research is to build a reasoning system that learns problem-solving strategies by incorporating real-world examples, symbolic knowledge from the problem domain, and natural-language instructions and demonstrations from humans.
