3 Questions: Modeling adversarial intelligence to exploit AI’s security vulnerabilities
MIT CSAIL Principal Research Scientist Una-May O’Reilly discusses how she develops agents that reveal AI models’ security weaknesses before hackers do.