Can robots learn from machine dreams?
MIT CSAIL researchers used AI-generated images to train a robot dog in parkour, without real-world data. Their LucidSim system demonstrates generative AI's potential for creating robotics training data.
Yiming Chen ’24, Wilhem Hector, Anushka Nair, and David Oluigbo will start postgraduate studies at Oxford next fall.
An AI method developed by Professor Markus Buehler finds hidden links between science and art to suggest novel materials.
MIT and IBM researchers are creating linkage mechanisms that bring human-AI collaboration to kinematic engineering.
By sidestepping the need for costly interventions, a new method could reveal gene regulatory programs, paving the way for targeted treatments.
A new design tool uses UV and RGB lights to change the color and textures of everyday objects. The system could enable surfaces to display dynamic patterns, such as health data and fashion designs.
MIT engineers’ new model could help researchers glean insights from genomic data and other huge datasets.
Researchers show that even the best-performing large language models don’t form a true model of the world and its rules, and can thus fail unexpectedly on similar tasks.
Researchers are leveraging quantum mechanical properties to overcome the limits of silicon semiconductor technology.
As he invents programmable materials and self-organizing systems, Skylar Tibbits is pushing design boundaries while also solving real-world problems.
Aboard NASA’s Orion spacecraft, the terminal will beam data over laser links during the first crewed lunar mission since 1972.
By emulating a magnetic field on a superconducting quantum computer, researchers can probe complex properties of materials.
Inspired by large language models, researchers develop a training technique that pools diverse data to teach robots new skills.
“MouthIO” is an in-mouth device that users can digitally design and 3D print with integrated sensors and actuators to capture health data and interact with a computer or phone.
By allowing users to clearly see data referenced by a large language model, this tool speeds manual validation to help users spot AI errors.