Concept Recursive Activation FacTorization for Explainability

New tool explains how AI 'sees' images and why it might mistake an astronaut for a shovel

Why is it that artificial intelligence systems can outperform humans on some visual tasks, like facial recognition, but make egregious errors on others—such as classifying an image of an astronaut as a shovel?

Like the human brain, AI systems rely on strategies for processing and classifying images. And as with the human brain, little is known about the precise nature of those processes. Scientists at Brown University’s Carney Institute for Brain Science are making strides in understanding both systems, publishing a recent paper that helps to explain computer vision in a way the researchers say is both accessible and more useful than previous models.

“Both the human brain and the deep neural networks that power AI systems are referred to as black boxes because we don’t know exa...
