Topic: Algorithms


Popular Science

Prof. Yoon Kim speaks with Popular Science reporter Charlotte Hu about how large language models like ChatGPT operate. “You can think of [chatbots] as algorithms with little knobs on them,” says Kim. “These knobs basically learn on data that you see out in the wild,” allowing the software to create “probabilities over the entire English vocab.”

Boston.com

MIT researchers have developed a new tool called “PhotoGuard” that can help protect images from AI manipulation, reports Ross Cristantiello for Boston.com. The tool “is designed to make real images resistant to advanced models that can generate new images, such as DALL-E and Midjourney,” writes Cristantiello.

CNN

Researchers at MIT have developed “PhotoGuard,” a tool that can be used to protect images from AI manipulation, reports Catherine Thorbecke for CNN. The tool “puts an invisible ‘immunization’ over images that stops AI models from being able to manipulate the picture,” writes Thorbecke.

Forbes

At CSAIL’s Imagination in Action event, Prof. Stefanie Jegelka’s presentation provided insight into “the failures and successes of neural networks and explored some crucial context that can help engineers and other human observers to focus in on how learning is happening,” reports research affiliate John Werner for Forbes.

Forbes

Prof. Jacob Andreas explored the concept of language guided program synthesis at CSAIL’s Imagination in Action event, reports research affiliate John Werner for Forbes. “Language is a tool,” said Andreas during his talk. “Not just for training models, but actually interpreting them and sometimes improving them directly, again, in domains, not just involving languages (or) inputs, but also these kinds of visual domains as well.”

Forbes

Prof. Daniela Rus, director of CSAIL, writes for Forbes about Prof. Dina Katabi’s work using insights from wireless systems to help glean information about patient health. “Incorporating continuous time data collection in healthcare using ambient WiFi detectable by machine learning promises an era where early and accurate diagnosis becomes the norm rather than the exception,” writes Rus.

ABC News

Researchers from MIT and Massachusetts General Hospital have developed “Sybil,” an AI tool that can detect the risk of a patient developing lung cancer within six years, reports Mary Kekatos for ABC News. “Sybil was trained on low-dose chest computed tomography scans, which is recommended for those between ages 50 and 80 who either have a significant history of smoking or currently smoke,” explains Kekatos.

Forbes

During her talk at CSAIL’s Imagination in Action event, Prof. Daniela Rus, director of CSAIL, explored the promise of using liquid neural networks “to solve some of AI’s notorious complexity problems,” writes research affiliate John Werner for Forbes. “Liquid networks are a new model for machine learning,” said Rus. “They're compact, interpretable and causal. And they have shown great promise in generalization under heavy distribution shifts.”

Forbes

In an article for Forbes, research affiliate John Werner spotlights Prof. Dina Katabi and her work showcasing how AI can boost the capabilities of clinical data. “We are going to collect data, clinical data from patients continuously in their homes, track the symptoms, the evolution of those symptoms, and process this data with machine learning so that we can get insights before problems occur,” says Katabi.

WCVB

Prof. Regina Barzilay speaks with Nicole Estephan of WCVB-TV’s Chronicle about her work developing new AI systems that could be used to help diagnose breast and lung cancer before the cancers are detectable to the human eye.

Science

In conversation with Matthew Huston at Science, Prof. John Horton discusses the possibility of using chatbots in research instead of humans. As he explains, a change like that would be similar to the transition from in-person to online surveys: “People were like, ‘How can you run experiments online? Who are these people?’ And now it’s like, ‘Oh, yeah, of course you do that.’”

Forbes

Researchers from MIT have found that using generative AI chatbots can improve the speed and quality of simple writing tasks, though the chatbots’ output often lacks factual accuracy, reports Richard Nieva for Forbes. “When we first started playing with ChatGPT, it was clear that it was a new breakthrough unlike anything we've seen before,” says graduate student Shakked Noy. “And it was pretty clear that it was going to have some kind of labor market impact.”

Yahoo! News

Prof. Marzyeh Ghassemi speaks with Yahoo News reporter Rebecca Corey about the benefits and risks posed by the use of AI tools in health care. “I think the problem is when you try to naively replace humans with AI in health care settings, you get really poor results,” says Ghassemi. “You should be looking at it as an augmentation tool, not as a replacement tool.”

The Conversation

Writing for The Conversation, postdoc Ziv Epstein SM ’19, PhD ’23, graduate student Robert Mahari and Jessica Fjeld of Harvard Law School explore how the use of generative AI will impact creative work. “The ways in which existing laws are interpreted or reformed – and whether generative AI is appropriately treated as the tool it is – will have real consequences for the future of creative expression,” the authors note.

Politico

Neil Thompson, director of the FutureTech research project at MIT CSAIL and a principal investigator at MIT’s Initiative on the Digital Economy, speaks with Politico reporter Mohar Chatterjee about generative AI, the pace of computer progress and the need for the U.S. to invest more in developing the future of computing. “We need to make sure we have good secure factories that can produce cutting-edge semiconductors,” says Thompson. “The CHIPS Act covers that. And people are starting to invest in some of these post-CMOS technologies — but it just needs to be much more. These are incredibly important technologies.”