Automated system teaches users when to collaborate with an AI assistant
MIT researchers develop a customized onboarding process that helps a human learn when a model’s advice is trustworthy.
Human Guided Exploration (HuGE) enables AI agents to learn quickly with some help from humans, even if the humans make mistakes.
Twelve teams of students and postdocs across the MIT community presented innovative startup ideas with potential for real-world impact.
With the PockEngine training method, machine-learning models can efficiently and continuously learn from user data on edge devices like smartphones.
AI models that prioritize similarity falter when asked to design something completely new.
Researchers coaxed a family of generative AI models to work together to solve multistep robot manipulation problems.
By focusing on causal relationships in genome regulation, a new AI method could help scientists identify new immunotherapy techniques or regenerative therapies.
MIT researchers investigate the causes of health care disparities among underrepresented groups.
A new study bridging neuroscience and machine learning offers insights into the potential role of astrocytes in the human brain.
Researchers discover how to control the anomalous Hall effect and Berry curvature to create flexible quantum magnets for use in computers, robotics, and sensors.
MIT Sloan Associate Professor Rahul Mazumder finds ways to create and refine statistical models with an array of applications.
This AI system only needs a small amount of data to predict molecular properties, which could speed up drug discovery and material development.
Training artificial neural networks with data from real brains can make computer vision more robust.
MAGE merges the two key tasks of image generation and recognition, typically trained separately, into a single system.
The scientists used a natural language-based logical inference dataset to create smaller language models that outperformed much larger counterparts.