• THIS SCHEDULE TAKES PLACE ON DAY 2

  • 08:00

    WELCOME & OPENING REMARKS - 8am PST | 11am EST | 4pm GMT

  • ETHICAL AI IN PRACTICE

  • 08:10
    Deborah Raji

    AI Evaluation, Accountability and Auditing

    Deborah Raji - Fellow - Mozilla

    Deborah is an incoming Mozilla Fellow interested in algorithmic auditing and evaluation. She has worked closely with the Algorithmic Justice League initiative on several award-winning projects to highlight cases of bias in computer vision. She has also worked with Google’s Ethical AI team and has been a research fellow at the Partnership on AI and the AI Now Institute at New York University, working on various projects to operationalize ethical considerations in ML engineering practice.

  • 08:35
    Alice Xiang

    Operationalizing AI Ethics

    Alice Xiang - Senior Research Scientist - Sony AI

    In this talk, Alice Xiang, Senior Research Scientist at Sony AI, will discuss some of the steps Sony AI is taking to operationalize AI ethics and some of the key research questions it will be exploring around fairness, privacy, explainability, and causality. Given that AI ethics is an area with many competing values and a need for highly contextualized solutions, research will play a key role in bridging the gap between the goals of AI ethics and their operationalization.

    Alice Xiang is a Senior Research Scientist at Sony AI, where she leads research on responsible AI. Alice previously worked as the Head of Fairness, Transparency, and Accountability Research at the Partnership on AI. Core areas of Alice's research include algorithmic bias mitigation, explainability, causal inference, and algorithmic governance. Alice is both a statistician and lawyer and has previously developed machine learning models and served as legal counsel for technology companies. Alice holds a Juris Doctor from Yale Law School, a Master’s in Development Economics from Oxford, a Master’s in Statistics from Harvard, and a Bachelor’s in Economics from Harvard.

  • BUILDING ETHICAL AI

  • 09:00
    Anna Rohrbach

    Explainable AI for Addressing Bias and Improving User Trust

    Anna Rohrbach - Research Scientist - UC Berkeley

    Explainable Artificial Intelligence (XAI) has a long history of research and has recently re-emerged as an active area of investigation in the Deep Learning community. With the growing popularity and success of Machine Learning and, especially, Deep Learning techniques, the community has been striving to open up these “black boxes”. From the engineering perspective, interpretability can provide ways to better understand, debug, and improve models by exposing unwanted behavior such as reliance on spurious correlations in the data. From the user perspective, explainability may address a crucial condition for wider adoption: trust. In this talk I will first show how visual explanations can help expose harmful biases encoded by humans in the training data or model design, in the context of gender prediction in visual captioning (an illustrative code sketch of a visual-explanation technique follows the key takeaways below). Next, I will talk about our work on explainable and advisable driving models. Here, we develop models that can both generate textual explanations of their actions and incorporate user advice in the form of observation-action rules.

    3 Key Takeaways: 1) Our proposed approach to visual captioning achieves a lower error rate in gender prediction while encouraging the model to “look” at people rather than rely on spurious contextual cues.

    2) Making deep models explain their decisions in natural language does not lead to performance degradation and may in fact improve performance.

    3) Incorporating human knowledge (or advice) in deep models leads to better performing, more interpretable models that gain higher human trust.
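
    For a concrete sense of what a visual explanation looks like in practice, here is a minimal Grad-CAM-style sketch, assuming PyTorch and torchvision are available. Grad-CAM is a standard saliency technique used purely as an illustration; it is an assumption of this sketch, not necessarily the method used in the work above, and the model and layer choices (resnet18, layer4) are arbitrary stand-ins.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Any pretrained classifier works for the illustration; resnet18 is a
    # stand-in, not the model from the talk.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    def grad_cam(image):
        # image: a normalized tensor of shape (1, 3, H, W).
        feats = {}
        hook = model.layer4.register_forward_hook(
            lambda mod, inp, out: feats.update(x=out))
        logits = model(image)
        hook.remove()
        cls = logits.argmax(dim=1).item()  # explain the predicted class
        # Gradient of the class score w.r.t. the last conv feature map.
        grads = torch.autograd.grad(logits[0, cls], feats["x"])[0]
        # Weight each channel by its average gradient, combine, and clamp.
        weights = grads.mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * feats["x"]).sum(dim=1, keepdim=True))
        # Upsample to input resolution and scale to [0, 1] for overlay.
        cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                            align_corners=False)
        return cam / (cam.max() + 1e-8)

    heatmap = grad_cam(torch.rand(1, 3, 224, 224))  # dummy image tensor

    Overlaying the returned heatmap on the input shows which regions drove the prediction; this is the kind of evidence used to reveal a model relying on context (e.g., background scenery) rather than the person when predicting gender.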

    I am a Research Scientist at UC Berkeley, working with Prof. Trevor Darrell. I completed my PhD at the Max Planck Institute for Informatics under the supervision of Prof. Bernt Schiele. My research is at the intersection of vision and language. I am interested in a variety of tasks, including image and video description, visual grounding, and visual question answering. Recently, I have been focusing on building explainable models and addressing bias in existing vision and language models.

  • 09:25

    COFFEE & NETWORKING BREAK

  • AI FOR SOCIAL GOOD

  • 09:35
    Ciira wa Maina

    Building Environmental Conservation Technology in Kenya

    Ciira wa Maina - Senior Lecturer - Dedan Kimathi University of Technology

    Ecosystems around the world face a number of threats as a result of unsustainable exploitation of natural resources. In this talk I will describe efforts to develop systems capable of monitoring vulnerable ecosystems by leveraging machine learning and the Internet of Things. Our efforts have involved deployments in ecosystems in Kenya, a country blessed with rich biodiversity that we would like to help preserve for future generations.

    1. Conservation of natural resources is a key challenge of our time
    2. We can leverage machine learning in environmental conservation
    3. Technology is not a silver bullet - sustainable consumption needs to be emphasised.

    I am a lecturer at Dedan Kimathi University of Technology in Nyeri, Kenya where I also conduct research in a number of areas including bioacoustics, IoT, machine learning and data science.

    Prior to joining DeKUT in 2013, I was a postdoctoral researcher at the University of Sheffield between 2011 and 2013, a PhD student at Drexel University in Philadelphia, USA between 2007 and 2011, and a BSc student at the University of Nairobi between 2002 and 2007.

  • 10:00
    Jennifer Hobbs

    Precision Agriculture and AI

    Jennifer Hobbs - Director of Machine Learning - IntelinAir

    A Win-Win in Precision Ag

    Deep learning techniques for precision agriculture enable the optimization of management practices, including water and chemical applications, benefiting both the farmer and the environment. We’ll explore two use cases. First, we collect high-resolution aerial imagery and use a deep-learning-based density-estimation approach to count and localize flowering pineapple plants across a field, enabling precise application of chemicals and reducing waste (a minimal code sketch of this counting approach follows the key takeaways below). Second, we use longitudinal aerial imagery of corn and soy fields to detect and predict nutrient deficiency stress. By leveraging a U-Net and a Convolutional LSTM, we are able to detect and predict stress up to three weeks earlier.

    Key Takeaways: 1) Advances in deep learning and high-resolution image acquisition are revolutionizing precision agriculture.

    2) Deep density-estimation techniques enable us to count millions of plants from aerial images in just a few seconds.

    3) Incorporating the temporal element of our data enables better detection and prediction of key issues in the field, like identifying plants under stress.
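
    As a rough illustration of the density-estimation counting mentioned above, here is a minimal sketch, assuming PyTorch is available. The architecture and names are illustrative assumptions, not IntelinAir's actual model; the idea is that a network predicts a per-pixel density map whose sum over a tile is the estimated plant count.

    import torch
    import torch.nn as nn

    class DensityCounter(nn.Module):
        """Toy density-map regressor; a stand-in, not IntelinAir's model."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 1, 1),  # single density channel
                nn.ReLU(),            # densities are non-negative
            )

        def forward(self, x):   # x: (B, 3, H, W) aerial image tiles
            return self.net(x)  # (B, 1, H, W) per-pixel density map

    model = DensityCounter()
    tile = torch.rand(1, 3, 256, 256)  # dummy aerial tile
    density = model(tile)
    count = density.sum().item()  # estimated number of plants in the tile
    # Training would regress `density` against point annotations blurred
    # with a small Gaussian (one blob per plant), e.g. with an MSE loss,
    # so that summing the predicted map recovers the annotated count.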

    Jennifer Hobbs is the Director of Machine Learning at IntelinAir, an ag-tech startup using computer vision and machine learning to deliver intelligence and insights to the agriculture industry. Her team is responsible for the development and delivery of models that identify relevant patterns in the field and alert users to them. She completed her PhD in Physics and Astronomy at Northwestern University. Throughout her career she has been involved in all phases of the machine learning lifecycle, transforming raw data into compelling technology products through data modeling and architecture, pipeline design and management, machine learning, and visualization.

  • 10:25
    Roundtable Discussions & Demos with Speakers

    BREAKOUT SESSIONS

    Roundtable Discussions & Demos with Speakers - AI Experts

    Join a roundtable discussion hosted by AI experts to get your questions answered on a variety of topics.

    You are free to come in and out of all sessions to ask your questions, share your thoughts, and learn more from the speakers and other attendees.

  • 10:45

    COFFEE & NETWORKING BREAK

  • 10:55

    PANEL: Ensuring Responsible AI

  • Mayank Kejriwal

    MODERATOR

    Mayank Kejriwal - Research Assistant Professor - University of Southern California

    Dr. Mayank Kejriwal is a research scientist at the University of Southern California's Information Sciences Institute (ISI) and a research assistant professor in the Department of Industrial and Systems Engineering. He received his Ph.D. from the University of Texas at Austin. His dissertation on Web-scale data linking was recently recognized with an international Best Dissertation award in his field. His research is highly applied, with a specific focus on using data and AI for social good. He has contributed to systems used by both DARPA and law enforcement, and has active collaborations across academia and industry. He is also the co-author of an upcoming textbook on knowledge graphs (MIT Press, 2020), and has delivered tutorials and demonstrations at numerous conferences and venues. In 2019, he was named a Forbes Under 30 Scholar and was shortlisted for the Forbes 30 Under 30 (Science).

  • Myrna MacGregor

    PANELIST

    Myrna MacGregor - Lead, Responsible AI+ML - BBC

    Myrna MacGregor leads BBC thinking on responsible AI/Machine Learning. She is focused on developing the right tools and resources to incorporate the BBC’s values and mission into the technology it builds. As a public policy specialist, she is particularly interested in work on AI/ML fairness, transparency, and accountability.

    Myrna started her career in the British foreign service, serving in Brussels, Pristina (Kosovo), Berlin and Tel Aviv, and working on diverse issues like governance, electoral reform, climate change and the Middle East Peace Process. She speaks French, German and Albanian.

  • Daniel Gifford

    PANELIST

    Daniel Gifford - Senior Data Scientist - Getty Images

    Dan Gifford is a Senior Data Scientist responsible for creating data products at Getty Images in Seattle, Washington. Dan works at the intersection of science and creativity and builds products that improve the workflows of both Getty Images photographers and customers. Currently, he is the lead researcher on visual intelligence at Getty Images and is developing innovative new ways for customers to discover content. Prior to this, he worked as a Data Scientist on the Ecommerce Analytics team at Getty Images, where he modernized the testing frameworks and analysis tools used by Getty Images analysts, in addition to modeling content relationships for the Creative Research team. Dan earned a Ph.D. in Astronomy and Astrophysics from the University of Michigan in 2015, where he developed new algorithms for estimating the size of galaxy clusters and studying the cosmology of the universe. He also engineered a new image analysis pipeline for an instrument on a telescope at Kitt Peak National Observatory used by the department.

  • Engin Bozdag

    PANELIST

    Engin Bozdag - Senior Privacy Architect II - Uber

    Engin is a senior privacy architect at Uber, where he leads the technical privacy review process; Uber's privacy reviews ensure that privacy is embedded into products and services as early as possible. Prior to Uber, Engin worked for health tech leader Philips and led their technical GDPR implementation program. Engin holds a Ph.D. on algorithmic bias and technology ethics and an M.S. in software engineering, both from Delft University of Technology, one of the leading engineering schools in the world. He is a member of the ISO/PC 317 working group, which is creating a global standard on Privacy by Design. He is also affiliated with the 4TU Centre for Ethics & Technology (the leading research centre in the Netherlands on technology ethics) and is a regular guest lecturer at Delft University of Technology.

  • Maria Luciana Axente

    PANELIST

    Maria Luciana Axente - Responsible AI Lead - PwC

    In her role as Responsible AI and AI for Good Lead at PwC, Maria leads the implementation of ethics in AI for the firm while partnering with industry, academia, governments, NGOs, and civil society to harness the power of AI in an ethical and responsible manner, acknowledging its benefits and risks in many walks of life. She has played a crucial part in the development and set-up of PwC’s UK AI Center of Excellence, the firm’s AI strategy, and most recently PwC’s Responsible AI toolkit, the firm’s methodology for embedding ethics in AI. Maria is a globally recognised AI ethics expert, an Advisory Board member of the UK All-Party Parliamentary Group on AI, a member of BSI/ISO and IEEE AI standards groups, a Fellow of the RSA, and an advocate for gender diversity and for children’s and youth rights in the age of AI.

  • 11:45

    MAKE CONNECTIONS: Meet with Attendees Virtually for 1:1 Conversations and Group Discussions over Similar Topics and Interests

  • 12:00

    END OF SUMMIT
