
Topic: Research


Space.com

Astronomers from MIT and other institutions have found that periodic eruptions from a supermassive black hole located in a galaxy about 800 million light-years from Earth could be caused by a “second, smaller black hole slamming into a disk of gas and dust, or ‘accretion disk,’ surrounding the supermassive black hole, causing it to repeatedly ‘hiccup’ out matter,” writes Rob Lea for Space.com.

The Economist

Research Scientist Robert Ajemian, graduate student Greta Tuckute and MIT Museum Exhibit Content and Experience Developer Lindsay Bartholomew appear on The Economist’s Babbage podcast to discuss the development of generative AI. “The way that current AI works, whether it is object recognition or large language models, it’s trained on tons and tons and tons of data and what it’s essentially doing is comparing something it’s seen before to something it’s seeing now,” says Ajemian.  

New Scientist

FutureTech researcher Tamay Besiroglu speaks with New Scientist reporter Chris Stokel-Walker about the rapid rate at which large language models (LLMs) are improving. “While Besiroglu believes that this increase in LLM performance is partly due to more efficient software coding, the researchers were unable to pinpoint precisely how those efficiencies were gained – in part because AI algorithms are often impenetrable black boxes,” writes Stokel-Walker. “He also points out that hardware improvements still play a big role in increased performance.”

Scientific American

Prof. Katharina Ribbeck speaks with Christopher Intagliata of Scientific American’s “Science Quickly” podcast about her research exploring how mucus can treat and prevent disease. “The basic building blocks of mucus that give mucus its gooey nature are these threadlike molecules—they look like tiny bottlebrushes—that display lots and lots of sugar molecules on their backbone,” explains Ribbeck. “And these sugar molecules—we call them glycans—interact with molecules from the immune system and microbes directly. And the exact configuration and density of these sugar molecules is really important for health.”

Nature

Prof. Long Ju and his colleagues observed the fractional quantum anomalous Hall effect (FQAHE) when five layers of graphene were sandwiched between sheets of boron nitride, reports Dan Garisto for Nature. The findings are “capturing physicists’ imagination because they are fundamentally new discoveries about how electrons behave,” writes Garisto.

Bloomberg

Prof. David Autor speaks with Bloomberg’s Odd Lots podcast hosts Joe Weisenthal and Tracy Alloway about how AI could be leveraged to address inequality, emphasizing the policy choices governments will need to make to ensure the technology is beneficial to humans. “Automation is not the primary source of how innovation improves our lives,” says Autor. “Many of the things we do with new tools is create new capabilities that we didn’t previously have.”

The New York Times

Prof. David Autor and Prof. Daron Acemoglu speak with New York Times columnist Peter Coy about the impact of AI on the workforce. Acemoglu and Autor are “optimistic about a continuing role for people in the labor market,” writes Coy. “An upper bound of the fraction of jobs that would be affected by A.I. and computer vision technologies within the next 10 years is less than 10 percent,” says Acemoglu.

Politico

MIT researchers have found that “when an AI tool for radiologists produced a wrong answer, doctors were more likely to come to the wrong conclusion in their diagnoses,” report Daniel Payne, Carmen Paun, Ruth Reader and Erin Schumaker for Politico. “The study explored the findings of 140 radiologists using AI to make diagnoses based on chest X-rays,” they write. “How AI affected care wasn’t dependent on the doctors’ levels of experience, specialty or performance. And lower-performing radiologists didn’t benefit more from AI assistance than their peers.”

Salon

Researchers from MIT and elsewhere have isolated a “protein in human sweat that protects against Lyme disease,” reports Matthew Rozsa for Salon. The researchers believe that if “properly harnessed the protein could form the basis of skin creams that either prevent the disease or treat especially persistent infections,” writes Rozsa.

The Economist

Research Scientists Karthik Srinivasan and Robert Ajemian speak with The Economist’s Babbage podcast about the role of big data and specialized computer chips in the development of artificial intelligence. “I think right now, actually, the goal should be just to harness big data as much as we can,” says Ajemian. “It’s kind of this new tool, a new toy, that humanity has to play with and obviously we have to play with it responsibly. The architectures that they built today are not that different than the ones that were built in the 60s and the 70s and the 80s. The difference is back then they did not have big data and tremendous compute.”

Boston Magazine

Boston Magazine spotlights MIT’s leading role in the AI revolution in the Greater Boston area. “With a $2 million grant from the Department of Defense, MIT’s Artificial Intelligence Lab combines with a new research group, Project MAC, to create what’s now known as the Computer Science and Artificial Intelligence Laboratory (CSAIL). Over the next three years, researchers lead groundbreaking machine-learning projects such as the creation of Eliza, a psychotherapy-based computer program that could process languages and establish emotional connections with users (a primordial chatbot, essentially).”

Scientific American

Researchers at MIT and elsewhere have found that high exposure to implausible and outlandish false claims can increase the belief in more ambiguous-seeming ones, reports Chris Stokel-Walker for Scientific American. The researchers “conducted five experiments with nearly 5,500 participants in all in which they asked these individuals to read or evaluate news headlines,” writes Stokel-Walker. “Across all the experiments, participants exposed to blatantly false claims were more likely to believe unrelated, more ambiguous falsehoods.”