Arrival & Champagne Reception
Susan Fahringer - Perkins Coie
With over 26 years of litigation experience, Susan Fahringer counsels and defends some of the world’s leading innovators in privacy, IP and complex commercial litigation. She has extensive experience defending consumer class actions, especially in areas involving privacy and biometrics, and she defends companies in matters brought by state and government agencies in the privacy arena. Susan also represents companies across a broad array of industries in complex and strategic business disputes, from contract disputes to claims for unfair competition, misrepresentation and other business torts. She regularly counsels and represents clients in IP litigation matters as well, including trade secret and copyright disputes, and she provides strategic advice on litigation risk mitigation.
Susan has served on the firm’s Management Committee and Executive Committee and as co-chair of the firm's Intellectual Property Litigation practice. She currently serves as co-chair of the firm’s Artificial Intelligence and Machine Learning and Robotics industry group.
Negin Nejati - Airbnb
Are We Solving The Right Problem?
We are living in an exciting time where machine learning theory for common applications is maturing, open-source tools are plentiful, and computation is cheap. While this enables us to move faster than ever, it also makes it easy to throw the latest technology at any given problem with little preparation. This can lead to overly complex solutions, suboptimal processes, and wasted time. In this talk I’ll draw on examples from real applications to show the necessity of spending time defining the problem accurately before diving into solutions.
Negin Nejati received her Ph.D. in Electrical Engineering from Stanford University. Her thesis focused on machine learning and cognitive science. She joined Apple Maps in 2013, where she led the geosearch effort, building a query-understanding machine learning model and a geosearch backend. She joined Airbnb in 2016 and is currently focused on improving customer support through machine learning. She enjoys building end-to-end products, and her focus is on natural language understanding.
Devi Parikh - Facebook AI Research & Georgia Tech
Towards Agents That Can See, Talk, Act, and Reason
Wouldn't it be nice if machines could understand content in images and communicate this understanding as effectively as humans? Such technology would be immensely powerful, be it for aiding a visually impaired user in navigating a world built by the sighted, assisting an analyst in extracting relevant information from a surveillance feed, educating a child playing a game on a touch screen, providing information to a spectator at an art gallery, or interacting with a robot. As computer vision and natural language processing techniques mature, we are closer to achieving this dream than we have ever been. In this talk, I will discuss our efforts towards building agents that can see, talk, act, and reason. Given an image and a natural language question about the image (e.g., “What kind of store is this?”, “How many people are waiting in the queue?”, “Is it safe to cross the street?”), can we build agents that produce an accurate natural language answer (“bakery”, “5”, “Yes”)? Instead of answering individual questions about an image in isolation, can we build machines that can hold a sequential natural language conversation with humans about visual content? Instead of just passively answering questions, can agents navigate in an environment to gather the information necessary to answer those questions? And finally, how can we teach machines common sense so their interactions with humans are natural and seamless?
Devi Parikh is an Assistant Professor in the School of Interactive Computing at Georgia Tech, and a Research Scientist at Facebook AI Research (FAIR). Her research interests include computer vision and AI in general and visual recognition problems in particular. Her recent work involves exploring problems at the intersection of vision and language, and leveraging human-machine collaboration for building smarter machines. She received her Ph.D. from Carnegie Mellon University in 2009. She is a recipient of an NSF CAREER award, an IJCAI Computers and Thought award, a Sloan Research Fellowship, an Office of Naval Research (ONR) Young Investigator Program (YIP) award, an Army Research Office (ARO) Young Investigator Program (YIP) award, an Allen Distinguished Investigator Award in Artificial Intelligence from the Paul G. Allen Family Foundation, four Google Faculty Research Awards, an Amazon Academic Research Award, an Outstanding New Assistant Professor award from the College of Engineering at Virginia Tech, a Rowan University Medal of Excellence for Alumni Achievement, Rowan University's 40 under 40 recognition, a Forbes' list of 20 "Incredible Women Advancing A.I. Research" recognition, and a Marr Best Paper Prize awarded at the International Conference on Computer Vision (ICCV).
Rosanne Liu - Uber AI Labs
Intrinsic Dimension of Objective Landscapes in Deep Neural Networks
Many deep neural networks that solve amazing tasks employ large numbers of parameters. One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem. But how accurate is such a measure? How many parameters are really needed? One way to answer this question is by training networks not in their native parameter space, but instead in a smaller, randomly oriented subspace. We can slowly increase the dimension of this subspace, note at which dimension solutions first appear, and define this to be the intrinsic dimension of the objective landscape.
Many problems have smaller intrinsic dimensions than one might suspect, and the intrinsic dimension for a given dataset varies little across a family of models with vastly different sizes. This latter finding has the profound implication that once a parameter space is large enough to solve a problem, extra parameters serve directly to increase the dimensionality of the solution manifold. Intrinsic dimension allows some quantitative comparison of problem difficulty across supervised, reinforcement, and other types of learning where we conclude, for example, that solving the inverted pendulum problem is 100 times easier than classifying digits from MNIST, and playing Atari Pong from pixels is about as hard as classifying CIFAR-10. In addition to providing new cartography of the objective landscapes wandered by parameterized models, this method is a simple technique for constructively obtaining an upper bound on the minimum description length of a solution. A byproduct of this construction is a simple approach for compressing networks, in some cases by more than 100 times.
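The core mechanic of the talk — optimizing only a low-dimensional vector of subspace coordinates that is mapped into the native parameter space through a fixed random projection — can be sketched on a toy model. The following is a minimal illustration (not the authors' implementation): a logistic-regression classifier with D native parameters is trained entirely inside a d-dimensional random subspace via θ = θ₀ + P·θ_d, where P is a fixed random orthonormal projection and only θ_d is updated. The dataset, d = 3, and the learning-rate schedule are all hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification: two well-separated Gaussian blobs in 10 dimensions.
n, D_in = 200, 10
X = np.vstack([rng.normal(-2.0, 1.0, (n // 2, D_in)),
               rng.normal(+2.0, 1.0, (n // 2, D_in))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

D = D_in + 1   # native parameter count (weights + bias)
d = 3          # dimension of the random subspace (hypothetical choice for the demo)

theta0 = np.zeros(D)                          # fixed starting point in native space
P, _ = np.linalg.qr(rng.normal(size=(D, d)))  # fixed random orthonormal D x d projection

def loss_and_grad(theta_d):
    """Cross-entropy loss and its gradient w.r.t. the subspace coordinates."""
    theta = theta0 + P @ theta_d              # map subspace coords to native params
    w, b = theta[:-1], theta[-1]
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid predictions
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    g_native = np.concatenate([X.T @ (p - y) / n, [np.mean(p - y)]])
    return loss, P.T @ g_native               # chain rule: pull gradient back through P

# Plain gradient descent on the d subspace coordinates only.
theta_d = np.zeros(d)
for _ in range(1000):
    _, g = loss_and_grad(theta_d)
    theta_d -= 0.1 * g

theta = theta0 + P @ theta_d
acc = np.mean((X @ theta[:-1] + theta[-1] > 0) == y)
print(f"subspace dim d={d}, accuracy={acc:.2f}")
```

Sweeping d upward from 1 and recording the smallest value at which the task is solved (to some accuracy threshold) yields the intrinsic dimension for this problem; the abstract's compression claim follows because storing θ_d plus the random seed for P suffices to reconstruct the solution.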
Dr. Rosanne Liu is a Research Scientist and a founding member of Uber AI Labs. She received her PhD degree in Computer Science from Northwestern University. Her research interests include neural network interpretability, object recognition and detection, generative models, and adversarial attacks and defense in neural networks.