AI Community Should Learn from Global Warming


The European Commission states that 'humans are increasingly influencing the climate and the earth's temperature by burning fossil fuels, cutting down rain forests and farming livestock.' We call this effect on our climate, unintended and unrecognized in the early days of the Industrial Revolution, Global Warming. I would suggest that AI will bring a similar awakening, when we might state that humans are increasingly losing control over the growing influence of AI technologies on their social, economic and political climate. We could call this Global Artificial Intelligence Swarming, or Global Swarming for short, where swarming describes being overrun by AI technologies.

So what is happening with Global Warming?

Well, after a century of 'burning fossil fuels, cutting down rain forests and farming livestock', there is now a realization that the threat is real and that only a collaborative global effort can mitigate or slow its adverse effects. This realization is what led to the Paris Agreement on climate change.

The Paris Agreement 'brings all nations into a common cause to undertake ambitious efforts to combat climate change and adapt to its effects, with enhanced support to assist developing countries to do so.  As such it charts a new course in global climate effort.'

This raises another question. What if the Parties to the Paris Agreement had reached such an accord ten years before its adoption in 2015? Or thirty years before? Or fifty? First, it is well known that prevention is generally cheaper than reactive safety measures. Second, the longer a safety concern goes unaddressed, the fewer risk management options remain.

We think the global AI community can learn from what is happening with climate change and not wait until it is too late to collaborate within an effective global framework. For this reason, the Consortium for Safer AI is unique in asking its members to take a pledge to share information, without jeopardizing their business well-being, that would elevate the overall understanding of the risks associated with the fast-growing penetration of AI technology into our world. Max Tegmark has noted that as a technology becomes more powerful, it becomes riskier to learn from mistakes. Learning from mistakes has long been one of the ways we manage the risks of our products and processes, but with the exponential growth in the power of AI technology, we cannot afford to keep learning from the failures of commercialized products much longer. The pledge by Consortium members is modeled on a feature of the Paris Agreement called nationally determined contributions (NDCs): each Party to the Agreement takes actions and reports them so that the other Parties can benefit. The NDCs are not binding, so their effectiveness rests on the 'common cause' that 'charts a new course in global climate effort'.

The Consortium for Safer AI was formed to help create a Paris Agreement-like environment for funding research and sharing learnings while we are still early in the AI revolution. We do not want to find ourselves in the same position with the AI revolution as we are in now, dealing with the consequences of the Industrial Revolution. As the Paris Agreement notes, because we have let too much time pass, we must make 'ambitious efforts' and must learn to 'adapt' to the effects of climate change. For the AI revolution, there is still time to use our human intelligence collaboratively and be proactive about the safety risks ahead.

Join Mahmood Tabaddor, Founder of the Consortium for Safer AI, in San Francisco this January 25 & 26 for a panel discussion on practical safety issues of AI products. Four speakers will discuss these issues in a 20-minute panel, followed by interactive roundtable discussions on key questions and an open-floor Q&A.

Topics explored will include the safety of machine learning operating systems, the consequences of poor design, aligning AI design and methods with human values, and more.

Can't make it? Register now for the Deep Learning in Robotics Summit in San Francisco this June 28 - 29.