Last month at the Deep Learning Summit and Deep Learning in Healthcare Summit, RE•WORK introduced an extensive Workshop track, offering complimentary interactive sessions for all attendees. Run by AI experts, these practical sessions sparked creativity, experimentation and new ideas that everyone could take away after the summit and apply in their own work. Alongside technical sessions homing in on the details of deep learning in different scenarios, we explored designing human-centric AI and encouraged attendees to meet with and pose questions to VCs and other experts.
The Mapping: Cansu Canca & Laura Haaber Ihle - AI Ethics Lab
When searching for “professor” or “CEO” on Google Images, the results show overwhelmingly pictures of white men. While these jobs are indeed held by white male professionals more often, the image search results are extremely skewed against representing women and people of colour. This has been pointed out as an ethical problem in various outlets; however, the problem persists. The workshop used this case as an example of how to structure the ethical problem at hand and its underlying principles before moving on to solve it.
Explore with the Experts: Danielle Belgrave, Machine Learning Researcher, Microsoft Research, Sarah Culkin, Strategic Data Lead, NHS England, Daniel Leightley, Post-Doctoral Research Associate, King's Centre for Military Health Research
These experts, who are using AI and deep learning to solve challenges in the healthcare sector, came together to answer questions on both their research and its applications. Attendees shared the challenges they are facing and received feedback on them. Some of the questions covered included:
- How do we incorporate expert knowledge into deep learning?
- Who is responsible when the ‘machine’ makes an incorrect decision that costs lives?
- How do we build and develop trust in AI technology, especially within the healthcare sector?
Predictive and Generative Deep Learning for Graphs, Amir Saffari, BenevolentAI
“Graphs are more complex than trees, for example, as they lack the natural order - this poses the challenge of where to begin?” Amir Saffari, BenevolentAI
This technical session took the format of a presentation, followed by an extensive Q&A. Amir spoke about how graphs are a natural way to model many real-world complex objects and shared BenevolentAI’s advances in using deep learning for graphs. The workshop covered their recent approaches to modelling and generating graphs with optimal properties using reinforcement learning, as well as an architecture for conditional generative graph models.
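Amir’s point that graphs “lack the natural order” of trees or sequences can be made concrete with a small sketch (purely illustrative, not BenevolentAI’s code): the same graph yields different adjacency matrices depending on how its nodes are numbered, so a model that consumes the matrix directly sees different inputs for one underlying object.

```python
import numpy as np

# A small undirected graph on 3 nodes with edges (0,1) and (1,2).
edges = [(0, 1), (1, 2)]

def adjacency(edges, order, n=3):
    """Build the adjacency matrix under a given node ordering."""
    pos = {node: i for i, node in enumerate(order)}
    A = np.zeros((n, n), dtype=int)
    for u, v in edges:
        A[pos[u], pos[v]] = A[pos[v], pos[u]] = 1
    return A

A1 = adjacency(edges, [0, 1, 2])  # one arbitrary ordering
A2 = adjacency(edges, [2, 0, 1])  # another ordering of the same graph

# Same graph, same number of edges, but different matrices:
# there is no canonical "first" node to start generating from.
print(np.array_equal(A1, A2))  # False
```

This ordering ambiguity is one reason generative models for graphs (unlike those for sequences or trees) must either pick a node ordering heuristically or be designed to be permutation-invariant.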
How to Design AI with Human Centricity, Caryn Tan, Accenture
Whether we like it or not, we are inevitably moving into a world where more decisions are made autonomously by AI. Done right, this could lead to a really exciting future; “done right” is the key. This workshop encouraged attendees from both technical and non-technical backgrounds to come together and explore the unintended consequences of AI, covering not only consumer problems but also legal issues, regulation and policy.
Investing in Startups: A Founder’s Story and Meet the Investors
Parker Moss, Entrepreneur-in-Residence at F-Prime Capital, chaired this new session exploring investing in startups. Eduardo Jorgensen, Founder of MedicSen, kicked off with his ‘Founder’s Story’ about his healthcare startup, which has seen significant growth over the past year since launching GlycSen for intelligent treatment of insulin-dependent diabetes. Leading investors interested in deep learning across all industries then shared industry insights and tips in a panel discussion. The investors were:
- Dmitry Kaminskiy, Managing Partner, Deep Knowledge Ventures
- John Spindler, General Partner, AI Seed
- Frederic Lardieg, Partner, Octopus Ventures
The ethical implications of AI are at the forefront of everyone’s minds, from social media giants handling users’ personal information such as addresses, photos and interests, to healthcare companies handling sensitive medical data. We’re going to take a closer look at a couple of the workshop sessions from the summit that encouraged attendees to consider these issues throughout the entire design process of AI:
Covering ethical issues within artificial intelligence, the workshop was intended to allow individuals to be open about what AI means for the future of our society, and what it should become in order to realise the best possible outcome. The main thrust of the workshop was the hurdles standing in the way of beneficial AI, and how they could be overcome.
- Discussions centred on the results returned by search engines, and whether or not they were fair or useful
- Cansu and Laura delved into the biased results search engines return for the query “professor”, where the pictures are overwhelmingly of white men
- Debate ensued over whether search results should be changed with the intention of fixing a social issue, or whether they should be accurate and show existing disparities
For example, should search engines show fewer white men as professors? There is both a gender and a racial gap in the profile of professors, but how should we best represent this? Should the results remain as they are, ignoring the moral implications and potential influence of such a portrayal?
The 2-hour workshop used a game-like structure of the Mapping method, which engaged participants and helped them develop essential tools for deciding on ethical solutions that are technically feasible. Collaborating with each other, participants tested the strength of their ideas and progressed gradually towards creating solutions to this real-life problem, as well as analysing how their solutions would hold up in other relevant cases such as voice assistant responses and other search result categories. “The Mapping helps bring abstract ethical arguments to the ground—in a very literal sense, since the Mapping takes the form of a physical ground game.” - Cansu Canca, AI Ethics Lab
Throughout the session, the following questions were answered:
- Should search engines monitor variables such as sex, age and location to modify results?
- Should search engines track who is tagging pictures, in order to gauge potential bias?
- Should search results be based on the prominence of an individual, or on the number of results? I.e. should the results be more egalitarian?
- Who gets to decide who monitors the search results? Is this censorship, and could it cause damage?
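One way to make the fairness question behind these debates concrete is to compare the demographic share of returned images against some reference distribution for the profession. The sketch below is purely illustrative: the labels, result counts and reference numbers are invented for the example, not real search or demographic data.

```python
from collections import Counter

# Hypothetical demographic labels for the top 20 image results of a query;
# in practice these would come from annotated search results.
results = (["white_male"] * 16 + ["white_female"] * 2
           + ["poc_male"] * 1 + ["poc_female"] * 1)

counts = Counter(results)
total = len(results)
shares = {group: n / total for group, n in counts.items()}

# An invented reference distribution for the profession itself.
reference = {"white_male": 0.55, "white_female": 0.25,
             "poc_male": 0.12, "poc_female": 0.08}

# Positive gap = the group is over-represented in the results
# relative to the reference; negative = under-represented.
gap = {g: shares.get(g, 0.0) - reference[g] for g in reference}
print(gap)
```

Even such a crude measure exposes the workshop’s central tension: choosing the reference distribution (current reality? an egalitarian ideal?) is itself the ethical decision.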
How do we ensure that we build AI that is not marginalising groups within society? How do we prevent negative unintended consequences such as creating larger social disparity? This workshop encouraged both business professionals and data scientists to think about how to ensure their AI is designed with human centricity in mind.
“AI and Humans are far superior to AI or Humans alone. This is why they must be designed to work together.” Caryn Tan, Accenture
There are problems with AI marginalising people, and this stems from the initial problem of hiring. Caryn explained that data science brings together various fields to learn and develop, yet many candidates within science, physicists for example, don’t realise that they could be data scientists. Data science also has a huge gender imbalance: men are statistically more inclined to apply for jobs they are not qualified for, whereas women often don’t apply even when they are qualified. What is important is to look at how we can eliminate bias in both human hiring and human promotion; Caryn suggested the possibility of a democratic system in which organisations vote on hiring and promotions.
The session then opened up to attendees to come up with their own ideas on how, in this example, the hiring process could be transformed with AI to solve the gender imbalance.
Teams were formed and topics were selected to discuss how AI could affect the HR industry (hiring, employee satisfaction and promotion), and each team then designed a prototype for how to improve HR.
A variety of ideas surfaced, from the obscure, such as dream monitoring and emotional monitoring to decide whether or not employees were happy in their roles, to the more practical, such as gamifying work situations to ascertain whether or not employees trusted their company’s management. The groups were then encouraged to reverse-brainstorm, breaking down what was wrong with their prototypes, before presenting their ideas to the rest of the room.
Caryn explained how AI can be used to ensure that individuals are selected for the right jobs and are, as a result, happier in their work, which is naturally cheaper due to lower turnover. Ultimately, in this HR-focused example, AI could be used to solve hiring issues, eliminate human bias, and save money in the long term through reduced turnover.
Are you keen to get involved in practical sessions to further your experience and knowledge within AI? Join RE•WORK at the Deep Learning Summit in San Francisco, where we will be hosting the first ever standalone workshop track with two full days of interactive sessions, ranging in levels of technicality and practical applications. Register now to guarantee your place in San Francisco this January.