3 Questions: Leo Anthony Celi on ChatGPT and medicine
The chatbot’s success on the medical licensing exam shows that the test — and medical education — are flawed, Celi says.