Drones navigate unseen environments with liquid neural networks
MIT researchers demonstrate a new advance in autonomous drone navigation, using brain-inspired liquid neural networks that excel in out-of-distribution scenarios.
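For context, the "liquid" in liquid neural networks refers to liquid time-constant (LTC) neurons, whose effective time constants shift with their inputs, a property credited with the out-of-distribution robustness noted above. The sketch below is a minimal illustration of one LTC cell integrated with forward Euler; the class name, sizes, weights, and solver are assumptions for illustration, not the researchers' implementation.

import numpy as np

# Minimal, illustrative liquid time-constant (LTC) cell -- the building block
# of liquid neural networks. Names, sizes, and the plain forward-Euler solver
# are assumptions for illustration only.

class LTCCell:
    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_inputs))   # input weights
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # recurrent weights
        self.b = np.zeros(n_hidden)                              # bias
        self.tau = np.ones(n_hidden)                             # base time constants
        self.A = np.ones(n_hidden)                               # equilibrium targets

    def step(self, x, u, dt=0.05):
        # One Euler step of dx/dt = -(1/tau + f) * x + f * A.
        # f gates both the decay rate and the drive toward A, so each neuron's
        # effective time constant varies with its input -- the "liquid" behavior.
        f = np.tanh(self.W_in @ u + self.W_rec @ x + self.b)
        dxdt = -(1.0 / self.tau + f) * x + f * self.A
        return x + dt * dxdt


# Example: roll the cell over a short random input sequence.
cell = LTCCell(n_inputs=3, n_hidden=8)
x = np.zeros(8)
for u in np.random.default_rng(1).normal(size=(20, 3)):
    x = cell.step(x, u)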
Experts convene to peek under the hood of AI-generated code, language, and images, as well as the technology's capabilities, limitations, and future impact.
Martin Luther King Jr. Scholar Brian Nord trains machines to explore the cosmos and fights for equity in research.
“DribbleBot” can maneuver a soccer ball on landscapes such as sand, gravel, mud, and snow, using reinforcement learning to adapt to varying ball dynamics.
MIT researchers built DiffDock, a model that may one day be able to find new drugs faster than traditional methods and reduce the potential for adverse side effects.
With the right building blocks, machine-learning models can more accurately perform tasks like fraud detection or spam filtering.
With further development, the programmable system could be used in a range of applications including gene and cancer therapies.
New LiGO technique accelerates training of large machine-learning models, reducing the monetary and environmental cost of developing AI applications.
J-WAFS researchers are using remote sensing observations to build high-resolution systems to monitor drought.
Computational chemists develop better ways of discovering and designing materials for energy applications.
Researchers used machine learning to build faster and more efficient hash functions, which are a key component of databases.
Aleksander Mądry urges lawmakers to ask rigorous questions about how AI tools are being used by corporations.
The computer science and philosophy double-major aims to advance the field of AI ethics.
Aided by machine learning, scientists are working to develop a vaccine that would be effective against all SARS-CoV-2 strains.
MIT researchers uncover the structural properties and dynamics of deep classifiers, offering novel explanations for optimization, generalization, and approximation in deep networks.