Large language models are biased. Can logic help save them?
MIT researchers trained logic-aware language models to reduce harmful stereotypes like gender and racial biases.
The long-running programming competition encourages skills and friendships that last a lifetime.
A process that seeks feedback from human specialists proves more effective at optimization than automated systems working alone.
The program leverages MIT’s research expertise and Takeda’s industrial know-how for research in artificial intelligence and medicine.
The method enables a model to determine its confidence in a prediction, while using no additional data and far fewer computing resources than other methods.
MIT spinout Verta offers tools to help companies introduce, monitor, and manage machine-learning models safely and at scale.
The chatbot’s success on the medical licensing exam shows that the test — and medical education — are flawed, Celi says.
A new study shows how large language models like GPT-3 can learn a new task from just a few examples, without the need for any new training data.
A new tool brings the benefits of AI programming to a much broader class of problems.
More than $1 million in funding available to selected Solver teams and fellows.
Computer scientists want to know the exact limits of our ability to clean up and reconstruct partially blurred images.
Deep-learning model takes a personalized approach to assessing each patient’s risk of lung cancer based on CT scans.
A new experiential learning opportunity challenges undergraduates across the Greater Boston area to apply their AI skills to a range of industry projects.
New fellows are working on health records, robot control, pandemic preparedness, brain injuries, and more.
AeroAstro major and accomplished tuba player Frederick Ajisafe relishes the community he has found in the MIT Wind Ensemble.