Gretchen Greene

How Machine Vision Fails: Adversarial Attacks and Other Problems

In recent years, we’ve seen remarkable computer vision successes using neural networks. OCR enables automated mail routing. Facial recognition identifies suspects on security video and our friends on social media. Scene segmentation, object detection and classification are used in autonomous vehicle navigation, medical diagnosis and robotic manufacturing.

But we’ve also seen notable failures. A black person is misclassified as a gorilla in Google Photos, recalling centuries of racial slurs. In one fatal crash, a Tesla can’t see cross traffic. In another, an Uber misclassifies a pedestrian as an unknown object, then as a vehicle and then as a bicycle, each with a different predicted path. Snow on the road might be mistaken for lane markings. Worn markings and asphalt smoothly transitioning to dirt might not be seen at all. A stop sign can be changed to a speed limit sign with a few pieces of tape or a bit of graffiti. A person can make her face disappear or turn a toy turtle into a rifle. Changing a single pixel can make a classifier mislabel an image.
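To make that fragility concrete, here is a minimal sketch of a gradient-based adversarial perturbation in the spirit of the fast gradient sign method (FGSM). The pretrained ResNet, the random stand-in image, and the epsilon value are illustrative assumptions, not the specific attacks described above; with a real photo and a tuned epsilon, a perturbation this small is typically invisible to a person yet can flip the model's prediction.

```python
# Sketch of an FGSM-style adversarial perturbation (assumes torch/torchvision).
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-in input: a random "image" in [0, 1]; in practice this would be a
# real photo preprocessed for the model.
x = torch.rand(1, 3, 224, 224, requires_grad=True)
original_label = model(x).argmax(dim=1)

# Gradient of the loss with respect to the input pixels.
loss = torch.nn.functional.cross_entropy(model(x), original_label)
loss.backward()

# Nudge every pixel a tiny step in the direction that increases the loss.
epsilon = 0.05
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)

# With a random input the label may or may not change; on real images
# such perturbations routinely cause misclassification.
new_label = model(x_adv).argmax(dim=1)
print(f"original prediction: {original_label.item()}, "
      f"after perturbation: {new_label.item()}")
```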

How fragile are the machine vision systems you rely on, how will they fail, and what can you do about it?

An AI policy researcher, lawyer and computer vision scientist, Gretchen Greene advises government leaders on AI strategy, use and policy and works with Cambridge startups on everything from autonomous vehicles to QR codes in wearables. Greene has worked as a mathematician for the U.S. Departments of Defense, Energy and Homeland Security, has published in machine learning, science and policy journals, and has been interviewed by The Economist, Forbes China and the BBC. An affiliate researcher at MIT’s Media Lab, Greene has a CPhil and MS in math from UCLA and a JD from Yale.
