AI pareidolia: Can machines spot faces in inanimate objects?

New dataset of “illusory” faces reveals differences between human and algorithmic face detection, links to animal face recognition, and a formula predicting where people most often perceive faces.
Image: The “Faces in Things” dataset is a comprehensive, human-labeled collection of over 5,000 pareidolic images. The research team trained face-detection algorithms to see faces in these pictures, giving insight into how humans learned to recognize faces within their surroundings. (Credit: Alex Shipps/MIT CSAIL)

In 1994, Florida jewelry designer Diana Duyser discovered what she believed to be the Virgin Mary’s image in a grilled cheese sandwich, which she preserved and later auctioned for $28,000. But how much do we really understand about pareidolia, the phenomenon of seeing faces and patterns in objects when they aren’t really there? 

A new study from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) delves into this phenomenon, introducing an extensive, human-labeled dataset of 5,000 pareidolic images, far surpassing previous collections. Using this dataset, the team discovered several surprising results about the differences between human and machine perception, and how the ability to see faces in a slice of toast might have saved your distant relatives’ lives.

“Face pareidolia has long fascinated psychologists, but it’s been largely unexplored in the computer vision community,” says Mark Hamilton, MIT PhD student in electrical engineering and computer science, CSAIL affiliate, and lead researcher on the work. “We wanted to create a resource that could help us understand how both humans and AI systems process these illusory faces.”

So what did all of these fake faces reveal? For one, AI models don’t seem to recognize pareidolic faces like we do. Surprisingly, the team found that it wasn’t until they trained algorithms to recognize animal faces that they became significantly better at detecting pareidolic faces. This unexpected connection hints at a possible evolutionary link between our ability to spot animal faces — crucial for survival — and our tendency to see faces in inanimate objects. “A result like this seems to suggest that pareidolia might not arise from human social behavior, but from something deeper: like quickly spotting a lurking tiger, or identifying which way a deer is looking so our primordial ancestors could hunt,” says Hamilton.

Image: A row of five photos of animal faces atop five photos of inanimate objects that look like faces.

Another intriguing discovery is what the researchers call the “Goldilocks Zone of Pareidolia,” a class of images where pareidolia is most likely to occur. “There’s a specific range of visual complexity where both humans and machines are most likely to perceive faces in non-face objects,” says William T. Freeman, MIT professor of electrical engineering and computer science and principal investigator of the project. “Too simple, and there’s not enough detail to form a face. Too complex, and it becomes visual noise.”

To uncover this, the team developed an equation that models how people and algorithms detect illusory faces.  When analyzing this equation, they found a clear “pareidolic peak” where the likelihood of seeing faces is highest, corresponding to images that have “just the right amount” of complexity. This predicted “Goldilocks zone” was then validated in tests with both real human subjects and AI face detection systems.
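To make the idea of a “pareidolic peak” concrete, the sketch below plots a toy likelihood curve that is highest at intermediate visual complexity. The Gaussian functional form, the peak location, and the width here are illustrative assumptions chosen for this article, not the equation derived in the study.

```python
import numpy as np

def pareidolia_likelihood(complexity, peak=0.5, width=0.18):
    """Toy model: the likelihood of perceiving an illusory face peaks at an
    intermediate level of visual complexity (the "Goldilocks zone").
    The Gaussian bump, its peak location, and its width are illustrative
    assumptions, not the equation from the paper."""
    return np.exp(-((complexity - peak) ** 2) / (2.0 * width ** 2))

# Sweep normalized complexity from "too simple" (0) to "too complex" (1)
# and report where the modeled pareidolic peak falls.
complexity = np.linspace(0.0, 1.0, 201)
likelihood = pareidolia_likelihood(complexity)
print("pareidolic peak at complexity ~", complexity[np.argmax(likelihood)])
```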

Image: Three photos of clouds above three photos of a fruit tart. In each row, the left photo is “Too Simple” to perceive a face, the middle photo is “Just Right,” and the right photo is “Too Complex.”

This new dataset, “Faces in Things,” dwarfs those of previous studies, which typically used only 20 to 30 stimuli. This scale allowed the researchers to explore how state-of-the-art face detection algorithms behaved after fine-tuning on pareidolic faces, showing not only that these algorithms could learn to detect such faces, but also that they could act as a silicon stand-in for our own brain, letting the team ask and answer questions about the origins of pareidolic face detection that are impossible to pose in humans.
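As a rough illustration of what fine-tuning a face detector on human-drawn face boxes might look like, here is a minimal sketch built on an off-the-shelf torchvision Faster R-CNN. The detector choice, the two-class head, the optimizer settings, and the synthetic training example are all assumptions for illustration, not the study’s actual training recipe.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Minimal sketch (not the paper's recipe): start from a pretrained detector
# and fine-tune it to predict a single "face" class on pareidolic images.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)  # background + face

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

# One synthetic training step with a dummy image and a single labeled box,
# standing in for real human-annotated pareidolic face boxes.
images = [torch.rand(3, 256, 256)]
targets = [{
    "boxes": torch.tensor([[40.0, 60.0, 180.0, 200.0]]),  # (x_min, y_min, x_max, y_max)
    "labels": torch.tensor([1]),                           # 1 = "face"
}]

loss_dict = model(images, targets)   # dict of classification/regression losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```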

To build this dataset, the team curated approximately 20,000 candidate images from the LAION-5B dataset, which were then meticulously labeled and judged by human annotators. This process involved drawing bounding boxes around perceived faces and answering detailed questions about each face, such as the perceived emotion, age, and whether the face was accidental or intentional. “Gathering and annotating thousands of images was a monumental task,” says Hamilton. “Much of the dataset owes its existence to my mom,” a retired banker, “who spent countless hours lovingly labeling images for our analysis.”
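To make the labeling step concrete, the sketch below shows one way such an annotation record could be represented in code. The class and field names (and the example URL) are hypothetical and do not reflect the released dataset’s actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FaceAnnotation:
    """One human-labeled face in an image. Field names are hypothetical."""
    box: Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in pixels
    emotion: str                            # annotator's perceived emotion, e.g. "happy"
    apparent_age: str                       # annotator's perceived age group, e.g. "adult"
    accidental: bool                        # True for pareidolic faces, False for intentional ones

@dataclass
class AnnotatedImage:
    """A candidate image (e.g., drawn from LAION-5B) with its labeled faces."""
    image_url: str
    faces: List[FaceAnnotation] = field(default_factory=list)

# Example record for a single pareidolic face.
example = AnnotatedImage(
    image_url="https://example.com/toast.jpg",
    faces=[FaceAnnotation(box=(12.0, 30.0, 140.0, 160.0),
                          emotion="surprised", apparent_age="adult", accidental=True)],
)
```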

The study also has potential applications in improving face detection systems by reducing false positives, which could have implications for fields like self-driving cars, human-computer interaction, and robotics. The dataset and models could also help areas like product design, where understanding and controlling pareidolia could create better products. “Imagine being able to automatically tweak the design of a car or a child’s toy so it looks friendlier, or ensuring a medical device doesn’t inadvertently appear threatening,” says Hamilton.

“It’s fascinating how humans instinctively interpret inanimate objects with human-like traits. For instance, when you glance at an electrical socket, you might immediately envision it singing, and you can even imagine how it would ‘move its lips.’ Algorithms, however, don’t naturally recognize these cartoonish faces in the same way we do,” says Hamilton. “This raises intriguing questions: What accounts for this difference between human perception and algorithmic interpretation? Is pareidolia beneficial or detrimental? Why don’t algorithms experience this effect as we do? These questions sparked our investigation, as this classic psychological phenomenon in humans had not been thoroughly explored in algorithms.”

As the researchers prepare to share their dataset with the scientific community, they’re already looking ahead. Future work may involve training vision-language models to understand and describe pareidolic faces, potentially leading to AI systems that can engage with visual stimuli in more human-like ways.

“This is a delightful paper! It is fun to read and it makes me think. Hamilton et al. propose a tantalizing question: Why do we see faces in things?” says Pietro Perona, the Allen E. Puckett Professor of Electrical Engineering at Caltech, who was not involved in the work. “As they point out, learning from examples, including animal faces, goes only half-way to explaining the phenomenon. I bet that thinking about this question will teach us something important about how our visual system generalizes beyond the training it receives through life.”

Hamilton and Freeman’s co-authors include Simon Stent, staff research scientist at the Toyota Research Institute; Ruth Rosenholtz, principal research scientist in the Department of Brain and Cognitive Sciences, NVIDIA research scientist, and former CSAIL member; and CSAIL affiliates: postdoc Vasha DuTell, Anne Harrington MEng ’23, and research scientist Jennifer Corbett. Their work was supported, in part, by the National Science Foundation and the CSAIL MEnTorEd Opportunities in Research (METEOR) Fellowship, and was sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator. The MIT SuperCloud and Lincoln Laboratory Supercomputing Center provided high-performance computing resources for the researchers’ results.

This work is being presented this week at the European Conference on Computer Vision.
