Researchers from MIT and Stanford University have found that “staff at one Fortune 500 software firm became 14% more productive on average when using generative AI tools,” report Olivia Solon and Seth Fiegerman for Bloomberg.
A new study by researchers from MIT and elsewhere tested a generative AI chatbot’s ability to debunk conspiracy theories, reports Mack Degeurin for Popular Science. “In the end, conversations with the chatbot reduced the participant’s overall confidence in their professed conspiracy theory by an average of 20%,” writes Degeurin.
A new study by researchers from MIT and elsewhere has found that an AI chatbot is capable of combating conspiracy theories, reports Karen Kaplan for The Los Angeles Times. The researchers found that conversations with the chatbot made people “less generally conspiratorial,” says Prof. David Rand. “It also increased their intentions to do things like ignore or block social media accounts sharing conspiracies, or, you know, argue with people who are espousing those conspiracy theories.”
A new chatbot developed by MIT researchers aimed at persuading individuals to stop believing unfounded conspiracy theories has made “significant and long-lasting progress at changing people’s convictions,” reports Teddy Rosenbluth for The New York Times. The chatbot, dubbed DebunkBot, challenges the “widely held belief that facts and logic cannot combat conspiracy theories.” Professor David Rand explains: “It is the facts and evidence themselves that are really doing the work here.”
A new study by Prof. David Rand and his colleagues has found that chatbots, powered by generative AI, can help people abandon conspiracy theories, reports Rebecca Ruiz for Mashable. “Rand and his co-authors imagine a future in which a chatbot might be connected to social media accounts as a way to counter conspiracy theories circulating on a platform,” explains Ruiz. “Or people might find a chatbot when they search online for information about viral rumors or hoaxes thanks to keyword ads tied to certain conspiracy search terms.”
Researchers from MIT and elsewhere have created an AI Risk Repository, a free retrospective analysis detailing over 750 risks associated with AI, reports Tor Constantino for Forbes. “If current understanding is fragmented, policymakers, researchers, and industry leaders may believe they have a relatively complete shared understanding of AI risks when they actually don’t,” says Peter Slattery, a research affiliate at the MIT FutureTech project. “This sort of misconception could lead to critical oversights, inefficient use of resources, and incomplete risk mitigation strategies, which leave us more vulnerable.”
In an article for Forbes, Robert Clark spotlights how MIT researchers developed a new model to predict irrational behaviors in humans and AI agents in suboptimal conditions. “The goal of the study was to better understand human behavior to improve collaboration with AI,” Clark writes.
Forbes contributor Peter High spotlights research by Senior Research Scientist Peter Weill, covering real-time decision-making, the importance of digitally savvy leadership and the potential of generative AI. High notes Weill’s advice that companies must keep pace: “The gap between digitally advanced companies and those lagging is widening, and the consequences of not keeping pace are becoming more severe. ‘You can’t get left behind on being real time,’ he warned.”
New Yorker reporter Dhruv Khullar spotlights how researchers from across MIT are using AI to advance drug development. Khullar highlights the MIT Jameel Clinic, the Broad Institute and various faculty members for their efforts in bridging the gap between AI and drug research. “With AI, we’re getting that much more efficient at finding molecules—and in some cases creating them,” says Prof. James Collins. “The cost of the search is going down. Now we really don’t have an excuse.”
Prof. Daron Acemoglu joins the Financial Times podcast “The Economics Show with Soumaya Keynes” to discuss his research on the economics of AI and its implications for workers. He says AI could help the current workforce communicate better and control its own data, while opening up possibilities for the geographically or economically disadvantaged, if the right policies are put in place. “I think having this conversation, and really making it a central part of the public debate that there is a technically feasible and socially beneficial different direction of technology, would have a transformative effect on the tech sector,” he explains.
Researchers from MIT and Northwestern University have developed some guidelines for how to spot deepfakes, noting “there is no fool-proof method that always works,” reports Jeremy Hsu for New Scientist.
Writing for Forbes, Andrew Binns highlights research from Prof. Daron Acemoglu suggesting total productivity gains of AI could be as little as 0.53% over 10 years, much lower than common estimates.
Senior lecturer Paul McDonagh-Smith speaks with Forbes reporter Joe Mckendrick about the history behind the AI hype cycle. “While AI technologies and techniques are at the forefront of today’s technological innovation, it remains a field defined — as it has from the 1950s — by both significant achievements and considerable hype,” says McDonagh-Smith.
Researchers at MIT are working toward training AI models “as subject-matter experts that ethically tailor financial advice to an individual’s circumstances,” reports Tanza Loudenback for Business Insider. “We think we’re about two or three years away before we can demonstrate a piece of software that by SEC regulatory guidelines will satisfy fiduciary duty,” says Prof. Andrew Lo.
TechCrunch reporter Kyle Wiggers spotlights Codeium, a generative AI coding company founded by MIT alums Varun Mohan SM '17 and Douglas Chen '17. Codeium’s platform is run by generative AI models trained on public code, providing suggestions in the context of an app’s entire codebase. “Many of the AI-driven solutions provide generic code snippets that require significant manual work to integrate and secure within existing codebases,” Mohan explains. “That’s where our AI coding assistance comes in.”