Quantum simulator could help uncover materials for high-performance electronics
By emulating a magnetic field on a superconducting quantum computer, researchers can probe complex properties of materials.
Inspired by large language models, researchers develop a training technique that pools diverse data to teach robots new skills.
“MouthIO” is an in-mouth device that users can digitally design and 3D print with integrated sensors and actuators to capture health data and interact with a computer or phone.
By allowing users to clearly see data referenced by a large language model, this tool speeds manual validation to help users spot AI errors.
A new method can train a neural network to sort through corrupted data while anticipating next steps. It can make flexible plans for robots, generate high-quality video, and help AI agents navigate digital environments.
A new study of bubbles on electrode surfaces could help improve the efficiency of electrochemical processes that produce fuels, chemicals, and materials.
Associate Professor Julian Shun develops high-performance algorithms and frameworks for large-scale graph processing.
MIT CSAIL researchers created an AI-powered method for low-discrepancy sampling, which uniformly distributes data points to boost simulation accuracy.
By enabling users to chat with an older version of themselves, Future You is aimed at reducing anxiety and guiding young people to make better choices.
New dataset of “illusory” faces reveals differences between human and algorithmic face detection, links to animal face recognition, and a formula predicting where people most often perceive faces.
The program will invite students to investigate new vistas at the intersection of music, computing, and technology.
Researchers argue that in health care settings, “responsible use” labels could ensure AI systems are deployed appropriately.
The undergraduate engineering program is No. 1; undergraduate business and computer science programs are No. 2.
Researchers find large language models make inconsistent decisions about whether to call the police when analyzing surveillance videos.
“Co-LLM” algorithm helps a general-purpose AI model collaborate with an expert large language model by combining the best parts of both answers, leading to more factual responses.