The Hidden Risk of Blind Trust in AI's 'Black Box'
For the field of AI to reach any measurable sense of maturity, we'll need methods to debug, error-check, and understand the decision-making process of machines. A lack of trust has been at the heart of many failures in even the best-known AI efforts. Artificial intelligence (AI) is a transformational $15 trillion opportunity, but without explainability it will not reach meaningful deployment. Explainability is an essential element of a future in which we will have artificially intelligent machine partners. In this session, I will cover why AI needs to be explainable, what that means, the state of the art of explainable AI, and various approaches to building it.
Former Head of Architecture/Engineering of the Worldwide Corporate Network at Google, Ajay is a technologist, business futurist, and prolific inventor with about 90 patents pending or issued, specializing in artificial intelligence, Wi-Fi networking, quantum computing, and real-time location. He is the author of “RTLS For Dummies,” “Augmented Reality for Dummies,” and “Artificial Intelligence for Wireless Networking.” Ajay Malik is Head of Artificial Intelligence at View, Inc. Prior to that, Ajay was the CEO and founder of Oro Networks, a company developing a smart-building AI assistant. Before starting Oro, Ajay was head of architecture and engineering for the worldwide corporate network at Google. Ajay has also held executive leadership positions at Meru Networks, Hewlett-Packard, Cisco, and Motorola. He earned a B.E. in Computer Science & Technology from IIT Roorkee, India.