Wired
A study by MIT researchers examining adversarial images finds that AI systems pick up on tiny image features that are imperceptible to the human eye, and that relying on these features can lead the systems to misidentify objects, reports Louise Matsakis for Wired. “It’s not something that the model is doing weird, it’s just that you don’t see these things that are really predictive,” says graduate student Shibani Santurkar.
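
The article itself contains no code, but as background: adversarial images of the kind the study examines are typically produced by nudging each pixel by an amount too small for a person to notice. Below is a minimal sketch using the fast gradient sign method, one standard way to craft such perturbations (not necessarily the method used in the study); model, image, label, and epsilon are illustrative placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        # Copy the input and track gradients with respect to its pixels.
        image = image.clone().detach().requires_grad_(True)
        # Loss of the model's prediction against the true label.
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Shift every pixel by at most epsilon in the direction that
        # increases the loss; the change is invisible to a human
        # but can flip the model's prediction.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()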