Topic

Machine learning


TechCrunch

Researchers at MIT have found that large language models mimic intelligence using linear functions, reports Kyle Wiggers for TechCrunch. “Even though these models are really complicated, nonlinear functions that are trained on lots of data and are very hard to understand, there are sometimes really simple mechanisms working inside them,” writes Wiggers. 

Forbes

Forbes reporter Oludolapo Makinde spotlights research by Prof. Daron Acemoglu and Prof. Simon Johnson that explores the impact of AI on the workforce. “Instead of aiming to create artificial superintelligence or AI systems that outperform humans, [Acemoglu and Johnson] propose shifting the focus to supporting workers,” writes Makinde.

Fortune

A new report by researchers from MIT and Boston Consulting Group (BCG) has uncovered “how AI-based machine learning and predictive analytics are super-powering key performance indicators (KPIs),” reports Sheryl Estrada for Fortune. “I definitely see marketing, manufacturing, supply chain, and financial folks using these value-added formats to upgrade their existing KPIs and imagine new ones,” says visiting scholar Michael Schrage.

The Economist

Research Scientist Robert Ajemian, graduate student Greta Tuckute and MIT Museum Exhibit Content and Experience Developer Lindsay Bartholomew appear on The Economist’s Babbage podcast to discuss the development of generative AI. “The way that current AI works, whether it is object recognition or large language models, it’s trained on tons and tons and tons of data and what it’s essentially doing is comparing something it’s seen before to something it’s seeing now,” says Ajemian.  

New Scientist

FutureTech researcher Tamay Besiroglu speaks with New Scientist reporter Chris Stokel-Walker about the rapid rate at which large language models (LLMs) are improving. “While Besiroglu believes that this increase in LLM performance is partly due to more efficient software coding, the researchers were unable to pinpoint precisely how those efficiencies were gained – in part because AI algorithms are often impenetrable black boxes,” writes Stokel-Walker. “He also points out that hardware improvements still play a big role in increased performance.”

Boston Magazine

A number of MIT faculty and alumni – including Prof. Daniela Rus, Prof. Regina Barzilay, Research Affiliate Habib Haddad, Research Scientist Lex Fridman, Marc Raibert PhD '77, former Postdoc Rana El Kaliouby and Ray Kurzweil '70 – have been named key figures “at the forefront of Boston’s AI revolution,” reports Wyndham Lewis for Boston Magazine. These researchers are “driving progress and reshaping the way we live,” writes Lewis.

Bloomberg

Prof. David Autor speaks with Bloomberg’s Odd Lots podcast hosts Joe Weisenthal and Tracy Alloway about how AI could be leveraged to address inequality, emphasizing the policy choices governments will need to make to ensure the technology is beneficial to humans. “Automation is not the primary source of how innovation improves our lives,” says Autor. “Many of the things we do with new tools is create new capabilities that we didn’t previously have.”

The New York Times

Prof. David Autor and Prof. Daron Acemoglu speak with New York Times columnist Peter Coy about the impact of AI on the workforce. Acemoglu and Autor are “optimistic about a continuing role for people in the labor market,” writes Coy. “An upper bound of the fraction of jobs that would be affected by A.I. and computer vision technologies within the next 10 years is less than 10 percent,” says Acemoglu.

Politico

MIT researchers have found that “when an AI tool for radiologists produced a wrong answer, doctors were more likely to come to the wrong conclusion in their diagnoses,” report Daniel Payne, Carmen Paun, Ruth Reader and Erin Schumaker for Politico. “The study explored the findings of 140 radiologists using AI to make diagnoses based on chest X-rays,” they write. “How AI affected care wasn’t dependent on the doctors’ levels of experience, specialty or performance. And lower-performing radiologists didn’t benefit more from AI assistance than their peers.”

The Economist

Research Scientists Karthik Srinivasan and Robert Ajemian speak with The Economist’s Babbage podcast about the role of big data and specialized computer chips in the development of artificial intelligence. “I think right now, actually, the goal should be just to harness big data as much as we can,” says Ajemian. “It’s kind of this new tool, a new toy, that humanity has to play with and obviously we have to play with it responsibly. The architectures that they built today are not that different than the ones that were built in the 60s and the 70s and the 80s. The difference is back then they did not have big data and tremendous compute.”

Boston Magazine

Boston Magazine spotlights MIT’s leading role in the AI revolution in the Greater Boston area. “With a $2 million grant from the Department of Defense, MIT’s Artificial Intelligence Lab combines with a new research group, Project MAC, to create what’s now known as the Computer Science and Artificial Intelligence Laboratory (CSAIL). Over the next three years, researchers lead groundbreaking machine-learning projects such as the creation of Eliza, a psychotherapy-based computer program that could process languages and establish emotional connections with users (a primordial chatbot, essentially).”

TechCrunch

Harry Rein '15, MEng '16 and Chris Tinsley MBA '20 co-founded ShopMy, a marketing platform designed to connect content creators with brands and monetize their content, reports Lauren Forristal for TechCrunch. “ShopMy’s marketing platform equips creators with the tools they need to earn from their product recommendations, like building digital storefronts, accessing a catalog of millions of products, making commissionable links and chatting directly with companies via mobile app,” explains Forristal.

The Economist

Prof. Pulkit Agrawal and graduate student Gabriel Margolis speak with The Economist’s Babbage podcast about the simulation research and technology used in developing intelligent machines. “Simulation is a digital twin of reality,” says Agrawal. “But simulation still doesn’t have data, it is a digital twin of the environment. So, what we do is something called reinforcement learning which is learning by trial and error which means that we can try out many different combinations.”