When you hear the term ‘responsible AI’, it’s understandable to interpret it as meaning the model itself is responsible for the consequences of its actions. But if AI is indeed artificial, surely it cannot assume responsibility? In this context, ‘responsible AI’ refers to the overarching responsibility for AI from the very early stages of creation right through to deployment. So whose responsibility is it to ensure that models act safely and for the greater good of society? Many candidates spring to mind: governments, businesses, independent regulators, or someone else entirely.
Topics: AI, Business Applications, Ethics, AI for Good