The World's Biggest Deep Learning Summit - Day 2 Highlights


Day 2 of RE•WORK’s Deep Learning Summit returned today with discussions on robots in space, progress in self-driving cars, the future of e-commerce, creating ethically sound AI, and much more.

Did you enjoy the summit? Or maybe you missed out? Register for next year's event with the discount code SF2020 to save 50% by getting a 2-4-1 pass.

Yesterday, we launched the first half of our Impact Stages with sessions taking place on AI Assistants, Education, Environment & Sustainability, Industry Applications, and Connect (check out the highlights here). Today saw several new Impact Stages hosting sessions on Ethics & Social Responsibility, Technical Labs, Startups & Investors, and Futurescaping.

At each summit, we invite press attendees to interview some of our expert speakers for our Video Platform and Women in AI Podcast. Yesterday and today we hosted these interviews in the interactive area at the summit, where attendees had the opportunity to watch the live recordings as well as ask questions in a short Q&A after each interview. Over the two days we’ve spoken with Jeff Clune from Uber AI Labs, Chelsea Finn from Google Brain, Anna Bethke from Intel, Deborah Harrison from Microsoft, and Cathy Pearl from Google, among many others.


“This is my first time at the event, it was recommended by my boss and I will be recommending it to all my colleagues. The quality of speakers is amazing and you actually get to meet them here. I really appreciate everything you have done, the speakers, talks and organisation has been amazing.” - Tony Szedlak, Auto-Owners Insurance

At the networking drinks session last night, it was great to hear attendees’ feedback about the event so far:

“This is very different to most conferences. There are both strong academics and really great companies” - Matt Cowell, Quanthub

“This is my third RE•WORK summit, I'm now a VIP which is nice, it's a great mix of attendees and having Ian Goodfellow here is amazing, he's a superstar” - Jacob Miller, CCR

“It’s been incredible to meet people who I’ve read and researched about. Actually speaking to these people is amazing. All of the speakers have time for the attendees, which is different to other summits” - Martin Beyer, FING

Once attendees had grabbed coffee and breakfast, the crowd began to disperse as everyone made their way to the first sessions of the day.

Opening the Technical Labs stage, Yuandong Tian from Facebook AI began with an overview of the AI landscape. He explained that huge milestones have been reached in recent years and mass implementation has come a long way, however “the areas of AI in need of the most improvement include common sense (chatbots and question answering), high-order reasoning in text generation, and complicated environments where there is little supervised data (autonomous driving).” He went on to explain that “games are a great test bed for reinforcement learning, because there is an infinite supply of fully labelled data, it’s low cost per sample, and faster than real-time.”
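The properties Yuandong lists are easy to see in even a tiny game. The sketch below is an illustrative toy, not anything from the talk: tabular Q-learning on a five-cell corridor, where the simulator hands out unlimited, perfectly labelled transitions at essentially zero cost.

```python
import numpy as np

rng = np.random.default_rng(0)

# A five-cell corridor "game": the agent starts somewhere on the left and
# earns +1 for reaching the rightmost cell. The simulator supplies unlimited,
# perfectly labelled (s, a, r, s') samples at essentially zero cost and much
# faster than real time -- the properties that make games a good RL test bed.
N_STATES, N_ACTIONS = 5, 2  # actions: 0 = left, 1 = right

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma = 0.5, 0.9
for episode in range(200):
    s = int(rng.integers(N_STATES - 1))   # random non-terminal start state
    for _ in range(20):
        a = int(rng.integers(N_ACTIONS))  # fully random behaviour policy
        s2, r, done = step(s, a)
        # off-policy Q-learning update
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
        s = s2
        if done:
            break
```

After a few thousand cheaply simulated transitions, the greedy policy argmax(Q[s]) is "go right" in every non-terminal cell, with no hand-labelling involved.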

Introducing the Ethics & Social Responsibility stage for the first time, Fiona McEvoy kicked off by outlining the main concerns in ethics and AI:

“Outside of critical, high-profile areas like system bias and data privacy, we’re still trying to anticipate problems that could manifest down the line. It’s important to remember that there has been a huge amount of development over an incredibly short period. Fortunately, for years now great thinkers have been considering the sorts of ethical dilemmas we’re now encountering, and there are long-established fields like medical ethics from which AI ethics can learn a lot.” The morning’s sessions focused on responsible AI, using AI to empower those with disabilities, tackling fake news, and looking into the limitations of AI.


Timnit Gebru, Research Scientist in the Ethical AI Team at Google, spoke about how we can understand the limitations of AI systems and what to do when they fail. She opened by discussing diversity, bias and other aspects:

“I had some sort of activist streak, mostly by necessity because of my background, but I separated this from my technical skills. I didn’t want to be known as the black woman talking about black woman things in tech - I wanted to be known for my work. Around 2016 a few things happened which made me realise that the lack of black voices in the field was an issue. At another conference, I counted 5 black people and there were around 5,000 people there. There’s all this conversation about bias, but the people most affected by it aren’t at the table involved in the conversation. I started to work on Black in AI.”

Timnit continued to explain the concept of an AI datasheet to standardize information for datasets and pre-trained models, in order to push the field as a whole towards transparency and accountability.
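The datasheet idea can be made concrete in a few lines of code. The sketch below is a loose illustration, with invented field names and an invented example dataset; the actual datasheet proposal specifies a much richer set of questions about motivation, composition, collection, and recommended use.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetDatasheet:
    """A minimal datasheet recording provenance and intended use of a dataset.
    Field names are illustrative, not the official datasheet question list."""
    name: str
    motivation: str           # why the dataset was created
    collection_process: str   # how the data was gathered
    known_biases: list = field(default_factory=list)
    recommended_uses: list = field(default_factory=list)
    discouraged_uses: list = field(default_factory=list)

    def summary(self) -> str:
        biases = ", ".join(self.known_biases) or "none documented"
        return f"{self.name}: {self.motivation} (known biases: {biases})"

# Hypothetical dataset, purely for illustration
sheet = DatasetDatasheet(
    name="FaceSet-v1",
    motivation="Benchmark face detection across demographic groups",
    collection_process="Scraped from public photo archives, 2015-2017",
    known_biases=["under-represents darker skin tones"],
    discouraged_uses=["surveillance", "identity verification"],
)
print(sheet.summary())
```

The point is that downstream users get the limitations and intended scope alongside the data itself, rather than discovering them after deployment.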

Meanwhile, on the Deep Learning stage, Sergey Levine from UC Berkeley shared his groundbreaking work in deep robotic learning. He explained how, back in October, he was able to train simulated robots to imitate human activities seen in unlabeled YouTube videos, a breakthrough in this area.

Also working with Sergey, and presenting later this morning on the Deep Learning stage, was Chelsea Finn from UC Berkeley and Google Brain. Chelsea works on meta-learning deep networks and explained that although deep learning has enabled significant advances in a variety of domains, it relies heavily on large labelled datasets.


“If you had a friend who was the world champion of Go, or who could speak 10 different languages, they’d probably be pretty good at lots of other things too. This isn’t the case with machines: if a machine can win at Go, that might be all it can do. So how can we build machines that are more generalized? Take humans: to start, we do simple but general tasks like stacking items. It turns out these simpler, broader skills are very hard for machines to learn. They’re so intuitive to us that recreating them is very hard, because we don’t understand how they work. So how can we try to do the unimpressive things?”
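One family of approaches to this generalization gap is meta-learning: instead of training for a single task, train an initialization that adapts to a new task from a handful of examples. The toy below is a one-parameter sketch loosely in the spirit of MAML; the task family (lines y = a·x), the scalar model, and the learning rates are all invented for illustration, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
inner_lr, meta_lr = 0.1, 0.05

def adapted(w, a, x):
    """One inner-loop gradient step on the task loss L = mean((w*x - a*x)^2)."""
    grad = np.mean(2 * (w - a) * x * x)
    return w - inner_lr * grad

w = 0.0  # meta-learned initialization (a single scalar weight)
for it in range(500):
    slopes = rng.uniform(1.0, 3.0, 8)     # a batch of tasks y = a*x
    xs = rng.uniform(-1.0, 1.0, (8, 10))  # 10 support points per task
    meta_grad = 0.0
    for a, x in zip(slopes, xs):
        w_prime = adapted(w, a, x)
        c = np.mean(x * x)
        # chain rule: d(post-adaptation loss)/dw, differentiating through
        # the inner gradient step (dw'/dw = 1 - 2 * inner_lr * c)
        meta_grad += 2 * (w_prime - a) * c * (1 - 2 * inner_lr * c)
    w -= meta_lr * meta_grad / len(slopes)
```

After training, w sits near the centre of the task distribution, so a single adaptation step on a handful of points moves it most of the way toward any new slope.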

She went on to discuss how meta-learning helps adapt deep models to new tasks with tiny amounts of data by leveraging data from other tasks.

Outside of the technical presentations, a Quick Pitch session was taking place on the Startups & Investors stage. The session gave startups the chance to present or demo their work, with 3 minutes to pitch and 2 minutes for Q&A. We heard from several companies keen to share their work:

  • QuantHub: “A bad hire can cost your company 100k in hard costs. The countless hours of technical vetting can cost 25k alone. In the future, we are going to be doing more of this, by 2020 there will be 2.8 million openings in those roles."

  • Bonsai Tech: "We develop enterprise chatbots for customer support & scheduling info. Also use DL for computer vision (people detection, customer demographics).”

  • VideoKen: “You’re able to see what topics are covered in a video and the algorithm will take you exactly to the point in the video that contains the topic you’re interested in. Videos account for more than 75% of internet traffic and we want to create additional value using AI.”

The Futurescaping stage focused on the current and future impact of technology in shaping a collaborative workforce and society. Within this, we learned about interpreting and adjusting to human needs in human-machine collaboration, navigating international expansion for AI companies, and how to adopt a machine learning mindset.

Discussing the challenges companies face when expanding overseas, the Canadian Trade Commissioner Service hosted the discussion 'Growth Across Borders: Navigating International Expansion for AI Companies'. Jon French, Senior Director of Global Recruitment, Community & Alumni at NEXT Canada, asked: “As you grow internationally, how do you maintain culture?”

One co-founder on the panel responded by explaining that “culture is the most important thing. One of the keys is finding people that others like working with, as well as getting the right skill set.” Some of the other interesting questions raised by the audience and discussed in the session included: What is the biggest hurdle for international expansion? What is the primary motivator to expand internationally? Which Canadian AI/ML supercluster are you interested in hearing more about? And what type of information would be most helpful to better navigate international expansion?


On the Ethics and Social Responsibility stage, an exciting panel took place: Ethical AI - Harnessing Automation for a Just World. This discussion featured Shubha Nabar from Salesforce Einstein, Anna Bethke from Intel, Chandra Khatri from Uber AI Labs, and Londa Schiebinger from Stanford University. The panellists discussed the potential pitfalls of automation, processes that businesses should implement to avoid these pitfalls, definitions of fairness, and the incredible opportunity we have as a society to instil fairness as a first-class construct into the systems we build.

“You can de-bias some of your algorithms by pulling the sexist associations out of the embeddings. One technique I like is the ‘counterfactual’: Google’s search shows men high-paying jobs five times more often, so if you counter-test it you can see whether it’s fair, and if it is, your platform is ‘safe’ to launch.” - Londa Schiebinger, Stanford University

“Similar to counterfactuals, you can look at other tools. One example is LIME (a transparency toolkit): researchers used it on an image classifier that was identifying Husky dogs, to see what made the model classify them that way. It turned out the model was looking at the snow. So what happens now if we have a Labrador in the snow, or a Husky on a beach? We need to make sure the training is more generalized so the model is unbiased.” - Anna Bethke, Intel
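The husky-and-snow example rests on perturbation-based explanation. Below is a minimal numpy sketch of the idea behind LIME, not the actual `lime` library: perturb the input, weight the samples by proximity, and fit a weighted linear surrogate whose coefficients act as local feature importances. The `black_box` model and two-feature layout are invented stand-ins, with feature 0 playing the role of the snow.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    """Toy 'classifier': its score depends almost entirely on feature 0,
    a stand-in for a model that latched onto snow rather than the dog."""
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] + 0.1 * X[:, 1])))

def explain(instance, n_samples=2000, kernel_width=1.0):
    """LIME-style local explanation: perturb the instance, weight samples
    by proximity, and fit a weighted linear surrogate model."""
    Z = instance + rng.normal(scale=1.0, size=(n_samples, instance.size))
    y = black_box(Z)
    dists = np.linalg.norm(Z - instance, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    A = np.hstack([np.ones((n_samples, 1)), Z])  # add an intercept column
    # weighted least squares: solve (A' W A) beta = A' W y
    beta = np.linalg.solve(A.T @ (weights[:, None] * A), A.T @ (weights * y))
    return beta[1:]  # per-feature local importance (intercept dropped)

coefs = explain(np.array([0.5, 0.5]))
# |coefs[0]| dwarfs |coefs[1]|: the surrogate reveals the model is really
# looking at the "snow" feature, just as in Anna's example
```

Spotting a dominant spurious feature this way is what prompts the follow-up question in the quote: would the model still work for a Labrador in the snow?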

Before leaving, attendees were taking pictures at the photo wall and with the RE•WORK balloons, and leaving their feedback on the interactive question walls. There were some interesting findings:

48% of attendees think that AI should be taught in schools from a young age, and another 34% think it should be taught from high school, leaving only 18% of attendees thinking that AI shouldn’t be taught until college level.

Opinions were split between healthcare and education when attendees were asked where AI will make the largest impact in the coming years: 48% voted healthcare, while 52% estimated that education will be most transformed.

Register for next year's summit with the discount code SF2020 to save 50% by getting a 2-4-1 pass.


Tags: Deep Learning, Dangerous Environments, Future of Education, AI, Deep Learning Summit, Ethics


