Arrival & Champagne Reception
Susan Fahringer - Perkins Coie
With over 26 years of litigation experience, Susan Fahringer counsels and defends some of the world’s leading innovators in privacy, IP and complex commercial litigation. She has extensive experience defending consumer class actions, especially in areas involving privacy and biometrics, and she defends companies in matters brought by state and government agencies in the privacy arena. Susan also represents companies across a broad array of industries in complex and strategic business disputes, from contract disputes to claims for unfair competition, misrepresentation and other business torts. She regularly counsels and represents clients in IP litigation matters as well, including trade secret and copyright disputes, and she provides strategic advice on litigation risk mitigation.
Susan has served on the firm’s Management Committee and Executive Committee and as co-chair of the firm’s Intellectual Property Litigation practice. She currently serves as co-chair of the firm’s Artificial Intelligence, Machine Learning, and Robotics industry group.
Devi Parikh - Facebook AI Research & Georgia Tech
Towards Agents That Can See, Talk, Act, and Reason
Wouldn't it be nice if machines could understand content in images and communicate this understanding as effectively as humans? Such technology would be immensely powerful, be it for aiding a visually impaired user in navigating a world built by the sighted, assisting an analyst in extracting relevant information from a surveillance feed, educating a child playing a game on a touch screen, providing information to a spectator at an art gallery, or interacting with a robot. As computer vision and natural language processing techniques mature, we are closer to achieving this dream than we have ever been. In this talk, I will discuss our efforts towards building agents that can see, talk, act, and reason. Given an image and a natural language question about the image (e.g., “What kind of store is this?”, “How many people are waiting in the queue?”, “Is it safe to cross the street?”), can we build agents that produce an accurate natural language answer (“bakery”, “5”, “yes”)? Instead of answering individual questions about an image in isolation, can we build machines that can hold a sequential natural language conversation with humans about visual content? Instead of just passively answering questions, can agents navigate an environment to gather the information needed to answer them? And finally, how can we teach machines common sense so that their interactions with humans are natural and seamless?
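As a toy illustration of the visual question answering setup described above (not the speaker's actual models), a common baseline fuses an image feature vector with a question encoding and scores a fixed set of candidate answers. All dimensions and weights below are made-up stand-ins; a real system would learn them from data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 2048-d image feature (e.g. from a CNN),
# a 512-d question encoding (e.g. from an RNN), 1000 candidate answers.
img_feat = rng.standard_normal(2048)
q_feat = rng.standard_normal(512)

# Random stand-in weights; a trained model would learn these.
W_img = rng.standard_normal((512, 2048)) * 0.01   # project image into question space
W_ans = rng.standard_normal((1000, 512)) * 0.01   # score candidate answers

# Late fusion: combine the two modalities elementwise, then classify.
fused = np.tanh(W_img @ img_feat) * np.tanh(q_feat)
scores = W_ans @ fused
answer_id = int(np.argmax(scores))  # index of the predicted answer, e.g. "bakery"
```

Treating the answer as a classification over a fixed vocabulary is what lets such systems return short answers like "bakery", "5", or "yes" without generating free-form text.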
Devi Parikh is an Assistant Professor in the School of Interactive Computing at Georgia Tech, and a Research Scientist at Facebook AI Research (FAIR). Her research interests include computer vision and AI in general and visual recognition problems in particular. Her recent work involves exploring problems at the intersection of vision and language, and leveraging human-machine collaboration for building smarter machines. She received her Ph.D. from Carnegie Mellon University in 2009. She is a recipient of an NSF CAREER award, an IJCAI Computers and Thought award, a Sloan Research Fellowship, an Office of Naval Research (ONR) Young Investigator Program (YIP) award, an Army Research Office (ARO) Young Investigator Program (YIP) award, an Allen Distinguished Investigator Award in Artificial Intelligence from the Paul G. Allen Family Foundation, four Google Faculty Research Awards, an Amazon Academic Research Award, an Outstanding New Assistant Professor award from the College of Engineering at Virginia Tech, a Rowan University Medal of Excellence for Alumni Achievement, Rowan University's 40 under 40 recognition, inclusion on Forbes' list of 20 "Incredible Women Advancing A.I. Research", and a Marr Best Paper Prize awarded at the International Conference on Computer Vision (ICCV).
Rosanne Liu - Uber AI Labs
Generative Models and the Imperative Incompetence of Deconvolution
Generative models -- models that generate entirely new data that "look like" training examples -- are one of the most promising approaches to understanding the world in an unsupervised, or weakly supervised, manner. The popularity of Generative Adversarial Networks (GANs) has motivated a body of work that successfully generates realistic natural scenes, human faces, and artistic images. However, we recently found that they struggle to represent disjoint, discrete object sets and, worse, to create structure among them. This can be attributed to a crucial but often-overlooked incompetence of deconvolution, which is widely adopted in GANs. Towards understanding how generative processes work, we take a deep look into the deconvolution process and the curse of discreteness.
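One well-known weakness of deconvolution (transposed convolution) can be seen even in one dimension; the following is an illustrative sketch, not code from the talk. With kernel size 3 and stride 2, output positions receive contributions from different numbers of input elements, so even a uniform input and uniform kernel produce an uneven, checkerboard-like output:

```python
import numpy as np

def conv_transpose_1d(x, k, stride):
    # Naive 1-D transposed convolution: each input element "paints"
    # a scaled copy of the kernel into the output at stride intervals.
    out = np.zeros(stride * (len(x) - 1) + len(k))
    for i, v in enumerate(x):
        out[i * stride : i * stride + len(k)] += v * k
    return out

# Uniform input, uniform kernel, kernel size 3, stride 2 -> uneven overlap.
y = conv_transpose_1d(np.ones(5), np.ones(3), stride=2)
print(y)  # interior alternates 2, 1, 2, 1, ...: the checkerboard pattern
```

Because kernel size 3 is not divisible by stride 2, overlapping kernel copies cover some output positions twice and others once, which is one mechanism behind the artifacts generative models exhibit.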
Dr. Rosanne Liu is a Research Scientist and a founding member of Uber AI Labs. She received her PhD degree in Computer Science from Northwestern University. Her research interests include neural network interpretability, object recognition and detection, generative models, and adversarial attacks and defenses in neural networks.
Probability & Uncertainty in Deep Learning