Generative AI imagines new protein structures
“FrameDiff” is a computational tool that uses generative AI to craft new protein structures, with the aim of accelerating drug development and improving gene therapy.
Prestigious awards recognize community support of MIT’s goals, values, and mission.
PhD student Will Sussman studies wireless networks while fostering community networks.
This AI system only needs a small amount of data to predict molecular properties, which could speed up drug discovery and material development.
A new computational method facilitates the dense placement of objects inside a rigid container.
Experts from MIT’s School of Engineering, Schwarzman College of Computing, and Sloan Executive Education educate national security leaders in AI fundamentals.
A new dataset can help scientists develop automatic systems that generate richer, more descriptive captions for online charts.
MAGE merges the two key tasks of image generation and recognition, typically trained separately, into a single system.
The system analyzes the likelihood that an attacker could thwart a certain security scheme to steal secret information.
Six teams conducting research in AI, data science, and machine learning receive funding for projects that have potential commercial applications.
MIT researchers characterize gene expression patterns for 22,500 brain vascular cells across 428 donors, revealing insights into Alzheimer’s onset and potential treatments.
The inaugural SERC Symposium convened experts from multiple disciplines to explore the challenges and opportunities that arise as computing becomes broadly applicable across many aspects of society.
By applying a language model to protein-drug interactions, researchers can quickly screen large libraries of potential drug compounds.
The scientists used a natural language-based logical inference dataset to create smaller language models that outperformed much larger counterparts.
A new multimodal technique blends major self-supervised learning methods to learn more similarly to humans.