A new vaccine approach could help combat future coronavirus pandemics
The nanoparticle-based vaccine shows promise against many variants of SARS-CoV-2, as well as related sarbecoviruses that could jump to humans.
Starting from a single frame of a simulation, a new system uses generative AI to emulate the dynamics of molecules, connecting static molecular structures and turning blurry snapshots into videos.
Using the Earth itself as a chemical reactor could reduce the need for fossil-fuel-powered chemical plants.
With a new design, the bug-sized bot was able to fly 100 times longer than prior versions.
With models like AlphaFold3 limited to academic research, the team built an equivalent alternative to encourage innovation more broadly.
Using high-powered lasers, this new method could help biologists study the body’s immune responses and develop new medicines.
A new technique identifies and removes the training examples that contribute most to a machine-learning model’s failures.
MIT chemical engineers designed an environmentally friendly alternative to the microbeads used in some health and beauty products.
Researchers propose a simple fix to an existing technique that could help artists, designers, and engineers create better 3D models.
This new device uses light to perform the key operations of a deep neural network on a chip, opening the door to high-speed processors that can learn in real time.
MIT students traveled to Washington to speak to representatives from federal executive agencies.
Report aims to “ensure that open science practices are sustainable and that they contribute to the highest quality research.”
The technique could make AI systems better at complex tasks that involve variability.
Physicists were surprised to discover that electrons in pentalayer graphene can exhibit fractional charge; a new study suggests how this could work.
Researchers show that even the best-performing large language models don’t form a true model of the world and its rules, and can thus fail unexpectedly on similar tasks.