• THIS SCHEDULE TAKES PLACE ON DAY 2

  • 08:00
    Jibin Liu

    WELCOME & OPENING REMARKS - 8am PST | 11am EST | 4pm GMT

    Jibin Liu - Software Engineer - Amazon

    Jibin Liu is a Software Engineer at Amazon, focusing on building large-scale annotation systems that use Machine Learning techniques to enhance classification tasks. Previously, as a Software Engineer at eBay, he worked on using Reinforcement Learning to improve the efficiency of web crawling. Prior to eBay, he worked at Esri, a pioneer in geospatial information systems, where he applied Deep Learning to imagery analysis. Before that, he was an Environmental Consultant at AKRF, Inc. in NYC.

    Having transitioned from Environmental Engineering to Machine Learning, Jibin is passionate about applying Machine Learning and Deep Learning to automation, in both the digital and physical worlds.

  • TOOLS & TECHNIQUES

  • 08:10
    Aditya Grover

    Label-Free Bias Mitigation For Fair Generative Modeling

    Aditya Grover - Research Scientist - Facebook AI Research

    Large-scale generative models for both language and vision domains, such as GPT, are trained on a variety of data sources scraped from the internet. Unsurprisingly, these data sources are often biased with respect to key demographic factors such as gender and race. Due to the latent nature of the underlying factors, detecting and mitigating bias is especially challenging for unsupervised machine learning. In this talk, I will present a model-agnostic and label-free approach for mitigating the bias of deep generative models based on importance weighting. Empirically, we demonstrate the efficacy of our approach, which reduces bias with respect to latent factors by up to 34.6% over baselines while maintaining comparable image generation quality with generative adversarial networks.

    Key Takeaways:

    - Generative models can be trained on large, Internet-scale datasets. Many compelling examples show that in doing so, they can amplify harmful biases in the dataset.

    - Fixing these biases is hard because the datasets used for training are unlabelled.

    - Fortunately, we do not need expensive labels for fixing the dataset bias and can instead rely on weaker forms of supervision.
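
    The importance-weighting idea can be sketched in a few lines. The snippet below is a minimal illustration, not the speaker's implementation: it assumes, purely for illustration, that a small unlabelled reference set provides the weak supervision and that `ratio_classifier` is a binary classifier already trained to separate reference data (label 1) from biased data (label 0).

```python
# Minimal sketch of importance-weighted GAN training for bias mitigation.
# All names here (disc, gen, ratio_classifier) are illustrative placeholders.

import torch
import torch.nn as nn

def density_ratio_weights(x_biased, ratio_classifier, max_weight=10.0):
    """Importance weights w(x) ~ p_ref(x) / p_biased(x), recovered from the
    classifier probability c(x) = P(reference | x) as w(x) = c / (1 - c)."""
    with torch.no_grad():
        c = torch.sigmoid(ratio_classifier(x_biased))
        return (c / (1.0 - c)).clamp(max=max_weight)  # clip for stability

def weighted_discriminator_loss(disc, gen, x_biased, weights, z_dim=128):
    """Non-saturating GAN discriminator loss with the real-data term
    reweighted, so the effective training distribution is debiased."""
    bce = nn.functional.binary_cross_entropy_with_logits
    real_logits = disc(x_biased)
    real_loss = (weights * bce(real_logits, torch.ones_like(real_logits),
                               reduction="none")).mean()
    z = torch.randn(x_biased.size(0), z_dim, device=x_biased.device)
    fake_logits = disc(gen(z).detach())
    fake_loss = bce(fake_logits, torch.zeros_like(fake_logits))
    return real_loss + fake_loss
```

    The generator objective is left unchanged; only the discriminator's view of the real data is reweighted.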

    Aditya Grover is a research scientist at Facebook AI Research, a visiting postdoctoral researcher at UC Berkeley, and an incoming assistant professor of computer science at UCLA (starting Fall 2021). His research focuses on probabilistic modeling for representation learning and reasoning in high dimensions, and is grounded in applications in science and sustainability, such as weather forecasting and electric batteries. Aditya’s research has been published in top machine learning and scientific venues including Nature, covered by various media outlets, included in widely used open-source software, and deployed into production at major technology companies. He has won several awards, including a best paper award (StarAI), a best undergraduate thesis award, a Stanford Centennial Teaching Award, a Stanford Data Science Scholarship, a Lieberman Fellowship, and a Microsoft Research Ph.D. Fellowship. Aditya received his Ph.D. and master's from Stanford University in 2020 and his bachelor's from IIT Delhi in 2015, all in computer science.

  • 08:35
    Changyou Chen

    Heat-Kernel Empowered Deep Generative Models

    Changyou Chen - Assistant Professor - University at Buffalo

    Deep generative models (DGMs) have been an influential topic in deep learning, with great success in various applications. Although there have been many works on generative adversarial networks, many of them do not take the manifold information of the training data into consideration during training. In this talk, I will present our recent work on incorporating such manifold information by simultaneously learning an intrinsic heat kernel of the manifold. The heat kernel encodes extensive geometric information of a manifold in an implicit way. In our work, we propose a way to incorporate manifold information into kernel-based DGMs by substituting the kernel in the DGM with the learned heat kernel. Our experimental results on image synthesis demonstrate the superiority of the proposed method, obtaining better generation quality relative to strong baselines.

    Key Takeaways:

    1. A deep generative model that learns to incorporate manifold information from the training data.

    2. This is achieved by simultaneously learning the associated heat kernel of the manifold.

    3. Well-justified theoretical guarantees and improved performance on several image generation tasks.
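
    As a rough illustration of how a learned heat kernel could slot into a kernel-based DGM, the sketch below replaces the fixed kernel of an MMD-style objective with a truncated spectral approximation whose "eigenfunctions" come from a small network. This is an assumption-laden sketch for intuition only, not the authors' construction.

```python
# Rough sketch: an MMD-style generative loss whose kernel is a *learned*
# heat-kernel approximation, k_t(x, y) = sum_i exp(-lambda_i * t) phi_i(x) phi_i(y).
# `phi` is a placeholder network standing in for learned eigenfunctions.

import torch
import torch.nn as nn

class LearnedHeatKernel(nn.Module):
    def __init__(self, in_dim, n_eigen=64, t=1.0):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_eigen))   # "eigenfunctions"
        self.log_lam = nn.Parameter(torch.zeros(n_eigen))   # "eigenvalues"
        self.t = t

    def forward(self, x, y):
        # Pairwise kernel matrix between two batches of (flattened) samples.
        decay = torch.exp(-nn.functional.softplus(self.log_lam) * self.t)
        fx = self.phi(x) * decay.sqrt()
        fy = self.phi(y) * decay.sqrt()
        return fx @ fy.t()

def mmd2(kernel, real, fake):
    """(Biased) squared MMD between real and generated batches."""
    return (kernel(real, real).mean() + kernel(fake, fake).mean()
            - 2.0 * kernel(real, fake).mean())
```

    In a kernel-based DGM such as an MMD-GAN, the generator would minimise this quantity while the kernel parameters are trained to reflect the geometry of the data manifold.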

    Changyou Chen is an Assistant Professor in the Department of Computer Science and Engineering at the University at Buffalo, State University of New York. His research interests include Bayesian machine learning, deep learning, and deep reinforcement learning. Previously, Dr. Chen was a Research Assistant Professor and a Postdoctoral Associate in the Department of Electrical and Computer Engineering at Duke University. He received his PhD from the College of Engineering and Computer Science at the Australian National University.

  • 09:00
    Mihaela van der Schaar

    Synthetic Data: Breaking the Data Logjam in Machine Learning

    Mihaela van der Schaar - Professor - University of Cambridge

    Machine learning has the potential to catalyze a complete transformation in many domains, including healthcare, but researchers in our field are still hamstrung by a lack of access to high-quality data, which is the result of perfectly valid concerns regarding privacy.

    In this talk, I will examine how synthetic data techniques could offer a powerful solution to this problem by revolutionizing how we access and interact with various datasets. Our lab is one of a small handful of groups cutting a path through this largely uncharted territory. We also designed and ran the first international synthetic data competition at the premier machine learning conference, NeurIPS 2020. To read more about our research on this topic, see https://www.vanderschaar-lab.com/synthetic-data-breaking-the-data-logjam-in-machine-learning-for-healthcare/

    Mihaela van der Schaar is the John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge, a Fellow at The Alan Turing Institute in London, and a Chancellor’s Professor at UCLA. Mihaela was elected IEEE Fellow in 2009. She has received numerous awards, including the Oon Prize on Preventative Medicine from the University of Cambridge (2018), a National Science Foundation CAREER Award (2004), 3 IBM Faculty Awards, the IBM Exploratory Stream Analytics Innovation Award, the Philips Make a Difference Award and several best paper awards, including the IEEE Darlington Award. Mihaela’s work has also led to 35 US patents (many widely cited and adopted in standards) and 45+ contributions to international standards, for which she received 3 International ISO (International Organization for Standardization) Awards. In 2019, she was identified by the National Endowment for Science, Technology and the Arts as the most-cited female AI researcher in the UK. She was also elected as a 2019 “Star in Computer Networking and Communications” by N²Women. Her research expertise spans signal and image processing, communication networks, network science, multimedia, game theory, distributed systems, machine learning and AI.

  • 09:25

    COFFEE & NETWORKING BREAK

  • APPLICATIONS OF GANS

  • 09:35
    Swaroop Ghosh

    Small Molecule Drug Discovery Using Quantum Machine Learning

    Swaroop Ghosh - Associate Professor - Pennsylvania State University

    Existing drug discovery pipelines take 5-10 years and cost billions of dollars. Computational approaches such as Generative Adversarial Networks (GANs) discover drug candidates by generating molecular structures that obey chemical and physical properties and show affinity towards the target receptor. However, classical GANs are inefficient and suffer from the curse of dimensionality. A fully quantum GAN may require more than 90 qubits even to generate QM9-like small molecules. We propose a qubit-efficient quantum GAN with a hybrid generator (QGAN-HG) that learns a richer representation of molecules by searching the exponentially large chemical space with only a few qubits, more efficiently than a classical GAN.

    Key Takeaways:

    1) QGAN-HG requires only a fraction of the parameters to learn the molecular distribution as efficiently as its classical counterpart.

    2) QGAN-HG with patched circuits accelerates the standard QGAN-HG training process.

    3) QGAN-HG with patched circuits avoids the potential gradient vanishing issue of deep neural networks.
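
    To make the hybrid-generator idea concrete, here is an illustrative sketch written with PennyLane and PyTorch: a small parameterized quantum circuit produces Pauli-Z expectation values that a classical network expands into flat molecular-graph features. The qubit and layer counts and the output sizes (roughly QM9-like) are assumptions for illustration, not the authors' settings.

```python
# Illustrative hybrid quantum-classical generator sketch (not the QGAN-HG code).

import pennylane as qml
import torch
import torch.nn as nn

N_QUBITS, N_LAYERS = 8, 3
dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev, interface="torch")
def quantum_circuit(inputs, weights):
    # Encode a classical noise vector, then apply entangling variational layers.
    qml.AngleEmbedding(inputs, wires=range(N_QUBITS))
    qml.BasicEntanglerLayers(weights, wires=range(N_QUBITS))
    return [qml.expval(qml.PauliZ(w)) for w in range(N_QUBITS)]

class HybridGenerator(nn.Module):
    """Parameterized quantum circuit -> classical net -> flat molecule features."""
    def __init__(self, n_atoms=9, n_atom_types=5, n_bond_types=4):
        super().__init__()
        self.qlayer = qml.qnn.TorchLayer(quantum_circuit,
                                         {"weights": (N_LAYERS, N_QUBITS)})
        out_dim = n_atoms * n_atom_types + n_atoms * n_atoms * n_bond_types
        self.classical = nn.Sequential(nn.Linear(N_QUBITS, 128), nn.ReLU(),
                                       nn.Linear(128, out_dim))

    def forward(self, noise):               # noise: (batch, N_QUBITS)
        return self.classical(self.qlayer(noise))

# Example: fake = HybridGenerator()(torch.rand(16, N_QUBITS))
```

    The discriminator and the rest of the training loop stay purely classical, so only a handful of qubits need to be simulated per forward pass.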

    Swaroop Ghosh received his Ph.D. from Purdue University. He is the Monkowski Associate Professor at Penn State. Overall, he has delivered 40+ keynotes, invited talks, and tutorials and published 130+ papers on various aspects of electronic design automation. Recently, he has been leading an NSF accelerator project on drug discovery using quantum artificial intelligence. He is a Senior Member of IEEE and NAI, and a Distinguished Speaker of ACM.

  • Junde Li

    Small Molecule Drug Discovery Using Quantum Machine Learning

    Junde Li - Doctoral Student - Pennsylvania State University

    Junde Li has been a doctoral student in the Department of Computer Science and Engineering at The Pennsylvania State University since 2019. His research interests include quantum computing, machine learning, and hybrid quantum-classical machine learning and optimization for drug discovery, as well as robust perception for autonomous vehicles.

  • 10:00
    Robin Kips

    Realistic Cosmetics Virtual Try-On Using GANs

    Robin Kips - Research Scientist - L'Oréal

    The ability of generative models to synthesize realistic images offers new perspectives for cosmetics virtual try-on applications. We propose a new formulation of the makeup style transfer task, with the objective of learning a color-controllable makeup style synthesis. We introduce CA-GAN, a generative model that learns to modify the color of specific objects (e.g. lips or eyes) in an image to an arbitrary target color while preserving the background. Since color labels are rare and costly to acquire, our method leverages weakly supervised learning for conditional GANs. This enables us to realistically simulate and transfer various makeup styles.

    Robin Kips is a Research Scientist in the Artificial Intelligence department of L’Oréal Research and Innovation in France. His research focuses on GANs, neural rendering, and color-based computer vision problems. Robin is currently pursuing a Ph.D. at Télécom Paris, working on how to bring new perspectives to virtual try-on technologies using generative models.

    Key Takeaways:

    o Realistic and controllable generative models can be trained without labelled data using weak supervision.

    o Generative models can implicitly learn to process complex phenomena such as specularities in a realistic way.

    o Controllable generative models are good candidates for the future of AR applications.
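
    To make the weak-supervision idea concrete, here is a minimal sketch (not the CA-GAN implementation): the colour "label" of an object is estimated directly from the pixels as the mean colour inside its mask, so no human colour annotations are needed, and the generator is penalised when the recoloured object misses the requested target colour or when the background changes. The `generator` module and the object masks are placeholder inputs.

```python
# Illustrative sketch of weakly supervised colour control for a conditional GAN.

import torch
import torch.nn.functional as F

def region_color(image, mask):
    """Weak colour label: mean RGB inside the object mask.
    image: (B, 3, H, W) in [0, 1]; mask: (B, 1, H, W) with values in {0, 1}."""
    return (image * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1.0)

def color_control_losses(generator, image, mask, target_color):
    """target_color: (B, 3) arbitrary RGB target for the masked object."""
    recoloured = generator(image, target_color)
    color_loss = F.l1_loss(region_color(recoloured, mask), target_color)
    background_loss = F.l1_loss(recoloured * (1 - mask), image * (1 - mask))
    return color_loss, background_loss
```

    These two terms would be combined with the usual adversarial loss so that the recoloured images remain realistic.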

  • 10:25
    Roundtable Discussions & Demos with Speakers

    BREAKOUT SESSIONS

    Roundtable Discussions & Demos with Speakers - AI Experts

    Join a roundtable discussion hosted by AI experts to get your questions answered on a variety of topics.

    You are free to come in and out of all sessions to ask your questions, share your thoughts, and learn more from the speakers and other attendees.

    Roundtable Discussions 28th January:

    • ‘From Open-Endedness to AI’ hosted by Kenneth Stanley, Research Manager, OpenAI

    • ‘Cost Optimize Your Machine Learning with Multi-Cloud’ hosted by Leon Kuperman, Co-Founder & CTO, CAST AI

    • ‘Creating Data Products with Visual Intelligence’ hosted by Daniel Gifford, Senior Data Scientist, Getty Images

    • ‘Swapping Autoencoder for Deep Image Manipulation’ hosted by Richard Zhang, Research Scientist, Adobe

    • 'Conversational AI: Human-Like or All Too Human’ hosted by Mark Jancola, CTO & VP of Engineering, Conversica

    • 'Supercharge Your Data Quality: Automated QA' hosted by Aurelie Drouet, Product Marketing Manager & Shaashwat Saraf, Customer Success Engineer, Sama

    Roundtable Discussions 29th January:

    • ‘Curriculum Generation for Reinforcement Learning’ hosted by Natasha Jaques, Research Scientist, Google Brain

    • ‘The AI Economist’ hosted by Stephan Zheng, Lead Research Scientist, Salesforce Research

    • ‘A Win-Win in Precision Ag’ hosted by Jennifer Hobbs, Director of Machine Learning, IntelinAir

    • ‘Delivering responsible AI: via the carrot or the stick?’ hosted by Myrna MacGregor, BBC Lead, Responsible AI+ML, BBC

  • 10:45

    COFFEE & NETWORKING BREAK

  • 10:55

    PANEL: How Can We Best Harness The Potential of Generative Models & Overcome Challenges for Innovative Applications

  • Nikolay Jetchev

    PANELIST

    Nikolay Jetchev - Senior Research Scientist - Zalando

    Nikolay Jetchev studied Mathematics and Computer Science at the Technical University of Darmstadt, receiving his diploma in 2007. He completed his Ph.D. in Machine Learning and Robotics at the Free University of Berlin in 2012 and then taught there as a postdoctoral fellow. For the last several years, he has been part of the Computer Vision team at Zalando Research. His research focuses on deep discriminative and generative neural networks, GANs, texture synthesis, digital image stylisation techniques, and creative AI art experiments.

  • Krishna Kumar Singh

    PANELIST

    Krishna Kumar Singh - Research Scientist - Adobe

    Krishna is a Research Scientist at Adobe Research, working in the area of computer vision and deep learning. His research focuses on developing visual recognition and image generation models with minimal human supervision. Recently, he has been working on unsupervised image generation and disentanglement by providing explicit control over the different fine-grained properties of an image.

    He did his Ph.D. in Computer Science at the University of California, Davis under the supervision of Prof. Yong Jae Lee. Previously, he finished his master's in Robotics at Carnegie Mellon University, advised by Prof. Alexei Efros and Prof. Kayvon Fatahalian. He did his undergrad in Computer Science and Engineering at IIIT Hyderabad. More info can be found on his webpage: kkanshul.github.io

  • Alexia Jolicoeur-Martineau

    PANELIST

    Alexia Jolicoeur-Martineau - Research Scientist - MILA

    Alexia is a research scientist in statistics and artificial intelligence (AI). Her main research interests are Generative Adversarial Networks (GANs), deep learning, and large-scale gene-by-environment models. Her academic and professional background is in statistics, and she began studying AI on her own in 2017. That year, she released the Meow Generator, a model that generates pictures of cats 🐈. In 2018, she wrote two sole-author papers on GANs, one of which has become highly influential (see “The relativistic discriminator: a key element missing from standard GAN”). In 2019, she wrote another sole-author paper on GANs, entered the highly competitive PhD program at MILA, and received the Borealis AI Fellowship. Her ultimate goal is to push GANs beyond their current capabilities so that one day we can generate media content (such as movies, music, video games, and comics) through artificial intelligence.

  • Jiangbo Yuan

    PANELIST

    Jiangbo Yuan - Applied Researcher III - eBay

    Jiangbo joined eBay in 2019 and is currently a senior applied researcher on the CoreAI computer vision research team. He received his PhD from Florida State University in 2014. Over the last 10 years, he has worked on projects and delivered various models to production in areas including image recognition and retrieval, object detection, fashion outfit recommendation, virtual try-on, and OCR. He is currently focused on research and development for large-scale product image/text retrieval benchmarks, multimodal learning, self-supervised learning, and generative models.

  • 11:45

    MAKE CONNECTIONS: Meet with Attendees Virtually for 1:1 Conversations and Group Discussions over Similar Topics and Interests

  • 12:00

    END OF SUMMIT
