Ansgar Koene

Given the rapid growth of AI deployment, successful implementation of risk-based AI regulations will require AI risk assessments to be conducted at a scale that will be difficult to achieve without some level of automation. The need for automated AI risk assessments is further emphasised by the need to perform post-deployment monitoring.

In this talk I will present the findings of a survey of AI risk assessment methodologies, outlining commonly identified assessment factors. Based on these survey results, I will discuss key challenges and potential approaches to automating the AI risk assessments that risk-based AI regulations will require.

Key takeaways:

Risk-based AI regulations will require large-scale risk assessments of AI applications; AI risk assessment involves the evaluation of multiple technical and non-technical risk factors; AI itself can play an important role in automating AI risk monitoring.

Dr. Ansgar Koene is an AI Regulatory Advisor at EY Global, where he supports the AI Lab's policy activities on Trusted AI. He is also a Senior Research Fellow at the Horizon Institute for Digital Economy Research (University of Nottingham). Ansgar chairs the IEEE P7003 Standard for Algorithmic Bias Considerations working group, leads the Bias Focus Group of the IEEE Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS), and is a trustee of the 5Rights Foundation for the Rights of Young People Online. Ansgar has a multi-disciplinary research background, having worked and published on topics ranging from policy and governance of algorithmic systems (AI), data privacy, AI ethics, AI standards, bio-inspired robotics, AI, and computational neuroscience to experimental human behaviour/perception studies. He holds an MSc in Electrical Engineering and a PhD in Computational Neuroscience.
