MIT’s top research stories of 2024
Stories on tamper-proof ID tags, sound-suppressing silk, and generative AI’s understanding of the world were some of the most popular topics on MIT News.
Professor Jessika Trancik’s course helps students understand energy levers for addressing climate change at the macro and micro scales.
Biodiversity researchers tested vision systems on how well they could retrieve relevant nature images. More advanced models performed well on simple queries but struggled with more research-specific prompts.
Five MIT faculty and staff, along with 19 additional alumni, are honored for electrical engineering and computer science advances.
The neuroscientist turned entrepreneur will focus on advancing the intersection of behavioral science and AI across MIT.
Research could help improve motor rehabilitation programs and assistive robot control.
Deborah Liverman, executive director of MIT Career Advising and Professional Development, offers a window into undergraduate and graduate students’ post-graduation paths.
Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.
The MIT senior will pursue graduate studies in the UK at Cambridge University and Imperial College London.
Five MIT faculty members and two additional alumni are honored with fellowships to advance research on beneficial AI.
SERC Scholars from around the MIT community examine the electronic hardware waste life cycle and climate justice.
The “PRoC3S” method helps an LLM create a viable action plan by testing each step in a simulation. This strategy could eventually help in-home robots complete more ambiguous chore requests.
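To make the idea concrete, here is a minimal, hypothetical sketch of the general generate-then-verify-in-simulation pattern the blurb describes; the planner, simulator, and action names below are stand-ins for illustration only, not the actual PRoC3S implementation.

```python
# Hypothetical sketch: an LLM-style planner proposes a step sequence, each step is
# checked in a simulator, and any failure is fed back so the planner can revise.
from dataclasses import dataclass

@dataclass
class Step:
    action: str
    target: str

def llm_propose_plan(request: str, feedback: str = "") -> list[Step]:
    """Stand-in for an LLM call that turns a chore request (plus any simulator
    feedback) into a candidate sequence of robot actions."""
    plan = [Step("pick", "mug"), Step("place", "sink"), Step("wipe", "counter")]
    if "unreachable: sink" in feedback:
        plan[1] = Step("place", "dish rack")       # revise the step that failed
    return plan

def simulate_step(step: Step, state: set[str]) -> tuple[bool, str]:
    """Stand-in feasibility check: each step is tried in simulation before the
    whole plan is accepted."""
    if step.action == "place" and step.target == "sink" and "sink full" in state:
        return False, "unreachable: sink"
    return True, ""

def plan_with_verification(request: str, state: set[str], max_tries: int = 3) -> list[Step]:
    feedback = ""
    for _ in range(max_tries):
        plan = llm_propose_plan(request, feedback)
        for step in plan:
            ok, feedback = simulate_step(step, state)
            if not ok:
                break                              # replan using the failure message
        else:
            return plan                            # every step passed in simulation
    raise RuntimeError("no feasible plan found")

print(plan_with_verification("tidy the kitchen", state={"sink full"}))
```

In this toy run the first plan fails in simulation because the sink is full, and the revised plan that routes around the failure is the one returned for execution.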
In a recent commentary, a team from MIT, Equality AI, and Boston University highlights the gaps in regulation for AI models and non-AI algorithms in health care.
A new technique identifies and removes the training examples that contribute most to a machine-learning model’s failures.
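As an illustration of the underlying idea (not the MIT team's actual method), the sketch below scores each training example by how much a gradient step on it would raise the loss on a set of misclassified "failure" examples, drops the worst offenders, and retrains; the data, model, and 5 percent cutoff are all assumptions made for the example.

```python
# Hypothetical data-debugging sketch: rank training examples by a crude
# gradient-alignment "harm" score against a failure set, then prune and retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n):
    X = rng.normal(size=(n, 10))
    y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)  # noisy labels
    return X, y

X_train, y_train = make_data(500)
X_heldout, y_heldout = make_data(200)

model = LogisticRegression(fit_intercept=False).fit(X_train, y_train)

# Failure set: held-out examples the current model gets wrong.
wrong = model.predict(X_heldout) != y_heldout
X_fail, y_fail = X_heldout[wrong], y_heldout[wrong]

def per_example_grads(X, y, w):
    """Per-example gradient of the logistic loss with respect to the weights."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return (p - y)[:, None] * X

w = model.coef_.ravel()
g_train = per_example_grads(X_train, y_train, w)
g_fail = per_example_grads(X_fail, y_fail, w).mean(axis=0)

# A gradient step on example i moves the weights by -lr * g_i, changing the
# failure-set loss by roughly -lr * (g_i . g_fail); a negative dot product means
# the example pushes the model toward those failures, so -(g_i . g_fail) is "harm".
harm = -(g_train @ g_fail)
keep = harm < np.quantile(harm, 0.95)          # drop the top 5% most harmful examples

cleaned = LogisticRegression(fit_intercept=False).fit(X_train[keep], y_train[keep])
print("held-out accuracy before:", model.score(X_heldout, y_heldout))
print("held-out accuracy after: ", cleaned.score(X_heldout, y_heldout))
```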
Using LLMs to convert machine-learning explanations into readable narratives could help users make better decisions about when to trust a model.
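A minimal sketch of that pipeline, under assumed details: a standard feature-importance explanation is packed into a prompt asking an LLM to narrate it in plain language. The dataset, the three-feature cutoff, and the call_llm function are stand-ins for illustration, not any specific provider's API or the researchers' system.

```python
# Hypothetical sketch: turn a model explanation (top feature importances) into a
# prompt, then ask an LLM (stubbed here) to write a plain-language narrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

x = data.data[0]
pred = data.target_names[model.predict([x])[0]]

# A crude explanation: the globally most important features and this example's
# values for them (SHAP or LIME would be the usual choice in practice).
top = np.argsort(model.feature_importances_)[::-1][:3]
facts = [f"{data.feature_names[i]} = {x[i]:.2f} "
         f"(importance {model.feature_importances_[i]:.2f})" for i in top]

prompt = (
    "Explain the following prediction to a non-expert in two sentences, "
    "without jargon.\n"
    f"Prediction: {pred}\n"
    "Most influential features:\n- " + "\n- ".join(facts)
)

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call; returns a canned narrative here."""
    return ("The model predicts this tumor is likely " + str(pred) +
            ", mainly because of the measurements listed above.")

print(call_llm(prompt))
```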