Expanding robot perception
Associate Professor Luca Carlone is working to give robots a more human-like awareness of their environment.
Assistant Professor Manish Raghavan wants computational techniques to help solve societal problems.
Using the island as a model, researchers demonstrate that the “DyMonDS” framework can improve resiliency to extreme weather and ease the integration of new resources.
The neuroscientist turned entrepreneur will be hosted by the MIT Schwarzman College of Computing and focus on advancing the intersection of behavioral science and AI across MIT.
Five MIT faculty members and two additional alumni are honored with fellowships to advance research on beneficial AI.
A new technique identifies and removes the training examples that contribute most to a machine-learning model’s failures.
Using LLMs to convert machine-learning explanations into readable narratives could help users make better decisions about when to trust a model.
MIT engineers show how detailed mapping of weather conditions and energy demand can guide decisions about where to site renewable energy installations.
Marzyeh Ghassemi works to ensure health-care models are trained to be robust and fair.
The technique could make AI systems better at complex tasks that involve variability.
By sidestepping the need for costly interventions, a new method could potentially reveal gene regulatory programs, paving the way for targeted treatments.
Researchers show that even the best-performing large language models don’t form a true model of the world and its rules, and can thus fail unexpectedly on similar tasks.
A new method called Clio enables robots to quickly map a scene and identify the items they need to complete a given set of tasks.
Researchers argue that in health care settings, “responsible use” labels could ensure AI systems are deployed appropriately.
Researchers find large language models make inconsistent decisions about whether to call the police when analyzing surveillance videos.