AI holds great promise but also significant threats. As AI capabilities continue to advance at a rapid pace, so do the risks to both companies and consumers. In the plenary session at the Deep Learning in Finance Summit, Deep Learning in Retail & Advertising Summit, and the AI Assistant Summit in London last week, we were joined by experts in AI security as well as those facing significant risks when advancing AI in their businesses.
Aditya Kaul from Tractica kicked off the session by explaining how AI is constantly being re-invented and reminding us that 'in the next 5 years we might not even call it AI, it'll just be the way things run.'
The panel was made up of Shahar Avin, Postdoctoral Researcher at the Centre for the Study of Existential Risk (CSER) at the University of Cambridge; Catherine Flick, Senior Lecturer in Computing and Social Responsibility at De Montfort University; Bianca Furtuna, Freelance Data Scientist; and Jochen L. Leidner, Professor of Data Analytics at the University of Sheffield.
Catherine kicked off: I'm currently working on updating the ACM's Code of Ethics, and my work is on responsible research and innovation. We need to think not only about what the problem and solution are, but about the unintended consequences, like misuse cases. Thinking about ethics and social impacts is really important; that means bringing in diverse use cases to test against, to make sure there aren't unintended consequences. People who aren't technical will probably have a different perspective on what's important when we're thinking about privacy.
Shahar: Definitely by encouraging more dialogue between policymakers and the government. Malicious consequences are no longer unforeseen, so we have no excuse for being unprepared - we need to look forward and make a plan for how to make sure these technologies are used responsibly.
Jochen: I volunteer to teach because I think it's important to upskill the next generation in technology and ethics. We need transparency in machine learning, and people need to know why a decision is being made. However, the best-performing ML methods aren't transparent, while customers often prefer lower-performing models that are. There are opportunities for research here, but there are often also tough decisions to make. People are naive about how they share their data, and sometimes volunteer it in ways that might lead to unintended consequences.
Catherine: Of course we’d love full transparency and security, but you can’t have it all, so we need to decide what the most important priorities are and find the balance. There’s no flow chart to tell you what’s most important; it’s about context, so it’s a difficult thing to set in stone. You need to know the values behind it to determine what you need to focus on. Once you’ve given your data away, you can’t get it back.

To hear more from the Privacy & Security panel, register for video access here.
Ansgar: We’re talking about unjustified bias, because any decision involves bias of some sort; what we want is for the bias to be based on justified and appropriate criteria. You have to understand the criteria behind the decision being made, in a transparent way, so we can make sure the justification is an acceptable one.
Lucy: That's an interesting question, and we currently use a lot of London-based real-world data. Of course different countries and cultures have different road laws and etiquette, and even within London the behaviours of drivers vary. For instance, a driver in suburban North London might behave differently from someone navigating a highly populated area in South London, and the time of day also has an impact. Then there are cyclists, who are challenging for autonomous vehicles, and again they behave differently - a guy in a suit on a Boris bike, a Deliveroo driver, or a drop-handlebar lycra cyclist - they're not going to behave the same.

To hear more from the Ethics panel, register for video access here.