Using ideas from game theory to improve the reliability of language models
A new “consensus game,” developed by MIT CSAIL researchers, elevates AI’s text comprehension and generation skills.
More than a decade since its launch, App Inventor recently hosted its 100 millionth project and registered its 20 millionth user. Now hosted by MIT, the app also supports experimenting with AI.
A new algorithm learns to squish, bend, or stretch a robot’s entire body to accomplish diverse tasks like avoiding obstacles or retrieving items.
Ashutosh Kumar, a materials science and engineering PhD student and MathWorks Fellow, applies his eclectic skills to studying the relationship between bacteria and cancer.
The conversation in Kresge Auditorium touched on the promise and perils of the rapidly evolving technology.
Associate Professor Jonathan Ragan-Kelley optimizes how computer graphics and images are processed for the hardware of today and tomorrow.
Together, the Hasso Plattner Institute and MIT are working toward novel solutions to the world’s problems as part of the Designing for Sustainability research program.
MIT Department of Mechanical Engineering grad students are undertaking a broad range of innovative research projects.
Three neurosymbolic methods help language models find better abstractions within natural language, then use those representations to execute complex tasks.
TorNet, a public artificial intelligence dataset, could help models reveal when and why tornadoes form, improving forecasters' ability to issue warnings.
At MIT’s Festival of Learning 2024, panelists stressed the importance of developing critical thinking skills while leveraging technologies like generative AI.
An expert in robotics and AI, Shah succeeds Steven Barrett at AeroAstro.
For the first time, researchers use a combination of MEG and fMRI to map the spatio-temporal human brain dynamics of a visual image being recognized.
Researchers have developed a security solution for power-hungry AI models that offers protection against two common attacks.
A new technique can be used to predict the actions of human or AI agents who behave suboptimally while working toward unknown goals.