Algorithms modeled loosely on the brain have helped artificial intelligence take a giant leap forward in recent years. Those algorithms, in turn, have advanced our understanding of human intelligence while fueling discoveries in a range of other fields.
MIT founded the Quest for Intelligence to apply new breakthroughs in the science of human intelligence to AI, and to use advances in AI to push research on human intelligence even further. This fall, nearly 50 undergraduates joined MIT’s human-machine intelligence quest under the Undergraduate Research Opportunities Program (UROP). Students worked on a mix of projects focused on the brain, computing, and connecting computing to disciplines across MIT.
Picking the right word with a click
Nicholas Bonaker, a sophomore, is working on a software program called Nomon that helps people with nearly complete paralysis communicate by pressing a button. Nomon was created more than a decade ago by Tamara Broderick as a master’s thesis, and soon found a following on the web. Now a computer science professor at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Broderick handed Nomon off to Bonaker last summer for an update at a user’s request.
The program allows the user to select from more than 80 words or characters on a screen; the user presses a button when a clock corresponding to the desired word or character reaches noon. The hands of each clock move slightly out of phase, helping Nomon to figure out which word or character to choose. The program automatically adapts to a user’s clicking style, giving those with less precise motor control more time to pick their word.
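The idea behind that selection step can be sketched in a few lines of Python. The snippet below is only an illustration: the clock period, the number of options, and the Gaussian model of click timing are assumptions made for this example, not Nomon’s actual parameters or code.

```python
import numpy as np

# Illustrative sketch (not the actual Nomon code): each on-screen option has a
# clock whose hand is offset in phase. A click is most likely intended for the
# option whose hand was closest to "noon" at the moment of the click, weighted
# by a per-user model of click timing.

PERIOD = 2.0          # seconds per full rotation of a clock hand (assumed)
N_OPTIONS = 80        # roughly the number of words/characters on screen

# Evenly spaced phase offsets keep the clocks out of sync with one another.
phase_offsets = np.linspace(0.0, PERIOD, N_OPTIONS, endpoint=False)

def click_log_likelihoods(click_time, click_std=0.15):
    """Log-likelihood that a click at `click_time` was aimed at each option.

    `click_std` stands in for the user's click-timing precision; Nomon adapts
    to each user's clicking style, so a less precise clicker would get a
    wider window.
    """
    # Time until each clock's hand next reaches noon.
    time_to_noon = (phase_offsets - click_time) % PERIOD
    # Circular error: clicking slightly early or slightly late both count.
    error = np.minimum(time_to_noon, PERIOD - time_to_noon)
    return -0.5 * (error / click_std) ** 2

# Accumulate evidence over successive clicks until one option is clearly best.
scores = np.zeros(N_OPTIONS)
for t in [0.52, 2.49]:                       # two example click times
    scores += click_log_likelihoods(t)
best = int(np.argmax(scores))
print(f"Most likely intended option: {best}")
```

In a sketch like this, widening `click_std` simply spreads the likelihood over more options, which is one way to give a user with less precise timing more room for error.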
“Nick has made Nomon much easier for a user to install and run, including directly from Windows,” Broderick says. “He has dramatically improved the user interface, and refactored the code to make it easier to incorporate future improvements.”
Bonaker’s next step is to test Nomon on able-bodied and motor-impaired users to see how it compares to traditional row-column scanner software. “It’s been fun knowing this could have a big impact on someone’s life,” he says.
Predicting how materials respond to 3-D printing
3-D printers are now mainstream, but industrial molds are still better at turning out items like high-quality car parts or replacement hips and knees. Senior Alexander Denmark chose a project in the lab of Elsa Olivetti, a professor in the Department of Materials Science and Engineering, to understand how 3-D printing methods can be made more consistent.
Working with graduate students in Olivetti’s lab, Denmark used machine-learning algorithms to explore how the printer’s laser speed and the layering of different types of materials influence the properties of the finished product. He helped build a framework for relating 3-D printing parameters to the final product’s mechanical properties.
“We hope to use it as a guide in printing experiments,” he says. “Say I want our final product to be really strong or relatively lightweight; this approach could help tell us at what power to set the laser or how thick each layer of material should be.”
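In broad strokes, that kind of guidance comes from a supervised model fit to data from past prints. The sketch below illustrates the pattern with made-up numbers: the parameter ranges, the synthetic strength values, and the choice of a random-forest regressor are assumptions for the example, not the lab’s actual framework.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Illustrative sketch of the general approach: fit a model mapping printing
# parameters to a measured mechanical property, then query it to guide new
# printing experiments. All values below are hypothetical.

rng = np.random.default_rng(0)
n = 200
laser_power = rng.uniform(150, 400, n)        # watts (hypothetical range)
scan_speed = rng.uniform(500, 1500, n)        # mm/s
layer_thickness = rng.uniform(0.02, 0.1, n)   # mm
X = np.column_stack([laser_power, scan_speed, layer_thickness])

# Stand-in for measured tensile strength of printed test coupons.
strength = (0.8 * laser_power - 0.1 * scan_speed
            - 900 * layer_thickness + rng.normal(0, 20, n))

X_train, X_test, y_train, y_test = train_test_split(X, strength, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))

# Ask the model what a new, untried parameter combination might produce.
candidate = np.array([[300.0, 900.0, 0.04]])
print("predicted strength:", model.predict(candidate)[0])
```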
Denmark says the project helped bring his coding skills to the next level. He also appreciated the mentoring he received from his graduate student colleagues. “They gave me a lot of advice on improving my approach,” he says.
A faster way to find new drugs
Developing new drugs is expensive because of the vast number of possible chemical combinations. Second-year student Alexandra Dima chose to work on a project in the lab of Rafael Gomez-Bombarelli, a professor of materials science and engineering. Gomez-Bombarelli is using machine-learning tools to narrow the search for promising drug candidates by predicting which molecules are most likely to bind with a target protein in the body.
So far, Dima has helped to build a database of hundreds of thousands of small molecules and proteins, detailing their chemical structures and binding properties. She has also worked on the deep learning framework aimed at predicting which molecule-protein pairs have the strongest binding affinity, and thus represent the most promising drug candidates. Specifically, she helped to optimize the parameters of a message-passing neural network in the framework.
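A message-passing network treats a molecule as a graph of atoms connected by bonds, with each atom repeatedly updating its features based on those of its neighbors. The toy step below shows the basic mechanic in plain NumPy; the graph, feature sizes, and random weights are placeholders, and the framework Dima worked on is considerably more elaborate.

```python
import numpy as np

# Minimal sketch of one message-passing step on a toy molecular graph; this is
# not the lab's code. Nodes are atoms and edges are bonds; the adjacency
# matrix and feature dimensions are arbitrary placeholders.

rng = np.random.default_rng(1)

n_atoms, feat_dim, hidden_dim = 5, 8, 16
atom_features = rng.normal(size=(n_atoms, feat_dim))

# Symmetric adjacency for a toy five-atom molecule (no self-loops).
adjacency = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 0, 0, 0],
    [0, 1, 0, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

W_message = rng.normal(scale=0.1, size=(feat_dim, hidden_dim))
W_update = rng.normal(scale=0.1, size=(feat_dim + hidden_dim, feat_dim))

def message_passing_step(h):
    # Each atom gathers transformed features from its bonded neighbors...
    messages = adjacency @ (h @ W_message)
    # ...and updates its own representation from the aggregated message.
    combined = np.concatenate([h, messages], axis=1)
    return np.tanh(combined @ W_update)

h = message_passing_step(atom_features)
# A whole-molecule embedding (here, a simple sum over atoms) would then be
# combined with a protein representation to predict binding affinity.
molecule_embedding = h.sum(axis=0)
print(molecule_embedding.shape)   # (8,)
```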
Among the challenges she overcame, she says, was learning to extract massive amounts of data from the web and standardize it. She also enjoyed the deep dive into bioinformatics, and as a computer science and biology major, being able to work on a real-world application. “I feel so lucky that I got to start using my coding skills to build tools that have a real life-sciences application,” she says.
Improving face-recognition models
Neeraj Prasad, a sophomore, is using machine learning tools to test ideas about how the brain organizes visual information. His project in the lab of Pawan Sinha, a neuroscience professor in the Department of Brain and Cognitive Sciences (BCS), started with a puzzle: Why are children treated for cataracts unable to recognize faces later in life? The retina matures faster in newborns with cataracts, leading researchers to hypothesize that these newborns, by missing out on seeing faces through blurry eyes, fail to learn to identify faces by their overall configuration.
With researchers in Sinha’s lab, Prasad tested the idea on computer models based on convolutional neural networks, a form of deep learning that mimics the human visual system. When the researchers trained the networks on pictures of blurred, filtered, or discolored faces, the networks were able to generalize what they had learned to new faces, suggesting that the blurry vision we have as babies helps us learn to recognize faces. The results offer insight into how the visual system develops, and suggest a new method for improving face-recognition software.
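The training idea is simple to mock up. In the hedged sketch below, random arrays stand in for face photographs, and the blur level, image size, and small Keras network are placeholder choices rather than the models used in the lab.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
import tensorflow as tf

# Illustrative sketch of the training idea only: expose a convolutional
# network to blurred images, then test how well what it learned transfers to
# the sharp originals. Random arrays stand in for face photographs here.

rng = np.random.default_rng(0)
n_images, size, n_identities = 64, 64, 8
images = rng.random((n_images, size, size, 1)).astype("float32")
labels = rng.integers(0, n_identities, n_images)

# Simulate early "newborn" vision by low-pass filtering each image.
blurred = np.stack(
    [gaussian_filter(img, sigma=(3, 3, 0)) for img in images]
).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(size, size, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(n_identities, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on the blurred images, then evaluate on the sharp originals.
model.fit(blurred, labels, epochs=3, verbose=0)
print(model.evaluate(images, labels, verbose=0))
```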
Prasad says he learned new computational techniques and how to use the machine-learning library TensorFlow. Patience was also required. “It took a lot of time to train the neural nets — the networks are so large that we often had to wait several days, even on a supercomputer, for results,” he says.
Tracking language comprehension in real time
Language underlies much of what we think of as intelligence: It lets us represent concepts and ideas, think and reason about the world, and communicate and coordinate with others. To understand how the brain pulls it all off, psychologists have developed methods for tracking how quickly people grasp what they read and hear, in so-called sentence-processing experiments. Longer reading times can indicate that a word, in a given context, is harder to comprehend, helping researchers fill out a general model of how language comprehension works.
Veronica Boyce, a senior majoring in brain and cognitive sciences, has been working in the lab of BCS computational psycholinguistics professor Roger Levy to adapt a sentence-processing experimental method for the web, where more participants can be recruited. The method is powerful but requires labor-intensive hand-crafting of experimental materials. This fall, she showed that deep-learning language models could automatically generate experimental materials and, remarkably, produce higher-quality experiments than hand-crafted materials do.
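One way a language model plugs into this kind of work is through per-word surprisal, the negative log-probability of a word in its context, which sentence-processing studies routinely compare against reading times. The sketch below scores a classic garden-path sentence with an off-the-shelf GPT-2 model; it illustrates the measure only and is not the project’s actual pipeline.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Illustrative only: a pretrained language model scores each token of a
# sentence by its surprisal (negative log-probability in context), the
# quantity that reading-time studies are usually compared against.

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(sentence):
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(ids).logits
    # Each position predicts the next token, so shift targets by one.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nats = -log_probs[torch.arange(targets.size(0)), targets]
    bits = nats / torch.log(torch.tensor(2.0))
    tokens = tokenizer.convert_ids_to_tokens(targets.tolist())
    return list(zip(tokens, bits.tolist()))

# Garden-path sentences like this one tend to show high surprisal at the
# disambiguating word ("fell"), the kind of difficulty signal these
# experiments are designed to elicit.
for token, s in token_surprisals("The horse raced past the barn fell."):
    print(f"{token:>10}  {s:5.2f} bits")
```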
Boyce presents her results next month at the CUNY Conference on Sentence Processing, and will try to improve on her method by building in grammatical structures as part of a related project under the MIT-IBM Watson AI Lab. Current deep-learning language models have no explicit representation of grammar; the patterns they learn in text and speech are based on statistical calculations rather than a set of symbolic rules governing nouns, verbs and other parts of speech.
“Our work is showing that these hybrid symbolic-deep learning models often do better than traditional models in capturing grammar in language,” says Levy. “This is exciting for Veronica’s project, and future sentence-processing work. It has the potential to advance research in both human and machine intelligence.”
A conversational calorie counter
A computer science major and a triple jumper on the MIT Track and Field team, third-year student Elizabeth Weeks had the chance this fall to combine her interests in technology and healthy eating by working on a voice-controlled nutrition app in the lab of James Glass, a senior research scientist at CSAIL.
Coco Nutritionist lets users log their meals by talking into their phone rather than manually typing in the information. A collaboration between computer scientists at MIT and nutritionists at Tufts University, the app is meant to make it easier for people to track what they eat, and thus avoid empty calories and mindless eating.
Weeks helped develop the user interface and, on the back end, built a new feature for adding recipes and homemade meals, making meal data stored in the cloud accessible through a call to the server. “Lots of users had requested that we add this feature, and Elizabeth really pulled it off,” says Mandy Korpusik, a graduate student in CSAIL who led the project. Coco Nutritionist made its debut in Apple’s App Store last month and has already racked up nearly 900 downloads.
The Quest for Intelligence UROP projects were funded by former Alphabet executive chairman Eric Schmidt and his wife, Wendy; the MIT–IBM Watson AI Lab; and the MIT-SenseTime Alliance on Artificial Intelligence.