MIT Sloan School of Management


Forbes

Sloan Visiting Senior Lecturer Paul McDonagh-Smith speaks with Joe McKendrick of Forbes about the ongoing discussions about AI safety guidelines. “While ensuring safety is crucial, especially for frontier AI models, there is also a need to strike a balance where AI is a catalyst for innovation without putting our organizations and broader society at risk,” explains McDonagh-Smith. 

Financial Times

Prof. Anna Stansbury speaks with Soumaya Keynes of the Financial Times podcast “The Economics Show” about her recent research on the class ceiling, which finds that an individual’s family circumstances can hold them back, even if they have earned a PhD. “We should care if people have opportunities to fulfill their talents for reasons of equity and justice. But the other is a very kind of banal economic reason, which is efficiency,” says Stansbury. “If you assume that talent for something is equally distributed, then we should care if people that are talented aren’t getting to fulfill that talent because it’s worse for overall productivity and overall outcomes.”

The Hill

Researchers from MIT and Oxford University have found "social media platforms’ suspensions of accounts may not be rooted in political biases, but rather certain political groups’ tendency to share misinformation," reports Miranda Nazzaro for The Hill. “Thus, even under politically neutral anti-misinformation policies, political asymmetries in enforcement should be expected,” researchers wrote. “Political imbalance in enforcement need not imply bias on the part of social media companies implementing anti-misinformation policies.”

Financial Times

A new working paper by MIT Prof. Antoinette Schoar and Brandeis Prof. Yang Sun explores how different people react to financial advice, reports Robin Wigglesworth for the Financial Times. “The results indicate that most people do update their beliefs in the direction of the advice they receive, irrespective of their previous views,” writes Wigglesworth.

Scientific American

Writing for Scientific American, MIT Prof. David Rand and University of Pennsylvania postdoctoral fellow Jennifer Allen highlight new challenges in the fight against misinformation. “Combating misbelief is much more complicated—and politically and ethically fraught—than reducing the spread of explicitly false content,” they write. “But this challenge must be bested if we want to solve the ‘misinformation’ problem.”

Forbes

Writing for Forbes, Senior Lecturer Guadalupe Hayes-Mota '08 MS '16, MBA '16 explores the challenges, opportunities and future of AI-driven drug development. “I see the opportunities for AI in drug development as vast and transformative,” writes Hayes-Mota. “AI can help potentially uncover new drug candidates that would have been impossible to find through traditional methods.”

Boston.com

MIT has been named the number 2 university in the nation on U.S. News & World Report’s annual list of the country’s top universities and colleges, reports Ross Cristantiello for Boston.com.

Boston 25 News

MIT has been named to the second spot in U.S. News & World Report’s “Best National University Rankings,” reports Frank O’Laughlin for Boston 25 News.

The Boston Globe

MIT was named the number 2 university in the nation in U.S. News & World Report’s annual ranking of the best colleges and universities in the country, reports Travis Andersen for The Boston Globe.

TechCrunch

TechCrunch reporters Kyle Wiggers and Devin Coldewey spotlight a new generative AI model developed by MIT researchers that can help counteract conspiracy theories. The researchers “had people who believed in conspiracy-related statements talk with a chatbot that gently, patiently, and endlessly offered counterevidence to their arguments,” explain Wiggers and Coldewey. “These conversations led the humans involved to stating a 20% reduction in the associated belief two months later.”

Newsweek

New research by Prof. David Rand and his colleagues has utilized generative AI to address conspiracy theory beliefs, reports Marie Boran for Newsweek. “The researchers had more than 2,000 Americans interact with ChatGPT about a conspiracy theory they believe in,” explains Boran. “Within three rounds of conversation with the chatbot, participants’ belief in their chosen conspiracy theory was reduced by 20 percent on average.”

Bloomberg

Researchers from MIT and Stanford University have found “staff at one Fortune 500 software firm became 14% more productive on average when using generative AI tools,” report Olivia Solon and Seth Fiegerman for Bloomberg.

Popular Science

A new study by researchers from MIT and elsewhere tested a generative AI chatbot’s ability to debunk conspiracy theories, reports Mack Degeurin for Popular Science. “In the end, conversations with the chatbot reduced the participants’ overall confidence in their professed conspiracy theory by an average of 20%,” writes Degeurin.

Los Angeles Times

A new study by researchers from MIT and elsewhere has found that an AI chatbot is capable of combating conspiracy theories, reports Karen Kaplan for the Los Angeles Times. The researchers found that conversations with the chatbot made people “less generally conspiratorial,” says Prof. David Rand. “It also increased their intentions to do things like ignore or block social media accounts sharing conspiracies, or, you know, argue with people who are espousing those conspiracy theories.”

The New York Times

A new chatbot developed by MIT researchers aimed at persuading individuals to stop believing unfounded conspiracy theories has made “significant and long-lasting progress at changing people’s convictions,” reports Teddy Rosenbluth for The New York Times. The chatbot, dubbed DebunkBot, challenges the “widely held belief that facts and logic cannot combat conspiracy theories.” Professor David Rand explains: “It is the facts and evidence themselves that are really doing the work here.”