Topic: Machine learning

Forbes

Researchers at MIT have developed “a publicly available database, culled from reports, journals, and other documents to shed light on the risks AI experts are disclosing through papers, reports, and other documents,” reports Jon McKendrick for Forbes. “These benchmarked risks will help develop a greater understanding of the risks versus rewards of this new force entering the business landscape,” writes McKendrick.

Wired

A new database of AI risks has been developed by MIT researchers in an effort to help guide organizations as they begin using AI technologies, reports Will Knight for Wired. “Many organizations are still pretty early in that process of adopting AI,” says Research Scientist Neil Thompson, director of the FutureTech project, meaning they need guidance on the possible perils.

TechCrunch

TechCrunch reporter Kyle Wiggers writes that MIT researchers have developed a new tool, called SigLLM, that uses large language models to flag problems in complex systems. In the future, SigLLM could be used to “help technicians flag potential problems in equipment like heavy machinery before they occur.” 

TechCrunch

MIT researchers have developed an AI risk repository that includes over 700 AI risks, reports Kyle Wiggers for TechCrunch. “This is an attempt to rigorously curate and analyze AI risks into a publicly accessible, comprehensive, extensible and categorized risk database that anyone can copy and use, and that will be kept up to date over time,” explains Peter Slattery, a research affiliate at the MIT FutureTech project.

BBC News

Prof. Regina Barzilay joins BBC host Caroline Steel and other AI experts to discuss her inspiration for applying AI technologies to help improve medicine and fight cancer. “I think that in cancer and in many other diseases, the big question is always, how do you deal with uncertainty? It's all a matter of predictions," says Barzilay. "Unfortunately, today, we rely on humans who don't have this capacity to make predictions. As a result, many times people get wrong treatments or they are diagnosed much later.”

Fast Company

In an excerpt from her new book, “The Mind’s Mirror: Risk and Reward in the Age of AI,” Prof. Daniela Rus, director of CSAIL, addresses the fear surrounding new AI technologies, while also exploring AI’s vast potential. “New technologies undoubtedly disrupt existing jobs, but they also create entirely new industries, and the new roles needed to support them,” writes Rus.

NPR

Prof. Daron Acemoglu speaks with NPR Planet Money hosts Greg Rosalsky and Darian Woods about the anticipated economic impacts of generative AI. Acemoglu notes he believes AI is overrated because humans are underrated. "A lot of people in the industry don't recognize how versatile, talented, multifaceted human skills and capabilities are," Acemoglu says. "And once you do that, you tend to overrate machines ahead of humans and underrate the humans."

Forbes

MIT researchers have found that “when nudged to review LLM-generated outputs, humans are more likely to discover and fix errors,” reports Carter Busse for Forbes. The findings suggest that, “when given the chance to evaluate results from AI systems, users can greatly improve the quality of the outputs,” explains Busse. “The more information provided about the origins and accuracy of the results, the better the users are at detecting problems.” 

Tech Briefs

Research Scientist Mathieu Huot speaks with Tech Briefs reporter Andrew Corselli about his work with GenSQL, a generative AI system for databases that “could help users make predictions, detect anomalies, guess missing values, fix errors, or generate synthetic data with just a few keystrokes.” 

TechCrunch

Intelmatix, an AI startup founded by Almaha Almalki MS '18, Anas Alfaris MS '09, PhD '09, and Ahmad Alabdulkareem PhD '18, aims to provide businesses in the Middle East and North Africa with access to AI for decision-making, reports Annie Njanja for TechCrunch. “The idea of democratizing access to AI has always been something that we’ve been very passionate about,” says Alfaris.

Scientific American

Prof. Sherry Turkle shares the benefits of being polite when interacting with AI technologies, reports Webb Wright for Scientific American, underscoring the risks of becoming habituated to using crass, disrespectful and dictatorial language. “We have to protect ourselves,” says Turkle. “Because we’re the only ones that have to form relationships with real people.”

TechCrunch

Researchers at MIT have developed a new method for “training home robots in simulation,” reports Brian Heater for TechCrunch. “Simulation has become a bedrock element of robot training in recent decades,” explains Heater. “It allows robots to try and fail at tasks thousands — or even millions — of times in the same amount of time it would take to do it once in the real world.”

Forbes

Forbes reporter Rodger Dean Duncan spotlights “The Skill Code: How to Save Human Ability in an Age of Intelligent Machines,” a new book by Research Affiliate Matt Beane SM '14, PhD '17. Duncan explores Beane’s take on AI tools, collaboration and remote work, noting Beane’s view that traditional mentoring is at risk in the workplace. Beane says today’s successful people have “discovered new tactics that others can use to get skills without throwing out the benefits of hybrid working arrangements.”

The New York Times

Researchers from the Data Provenance Initiative, a research group led by MIT engineers, have found that “important web sources used for training AI models have restricted the use of their data,” reports Kevin Roose for The New York Times. “We’re seeing a rapid decline in consent to use data across the web that will have ramifications not just for A.I. companies, but for researchers, academics and noncommercial entities,” explains graduate student Shayne Longpre.

The Wall Street Journal

Prof. Armando Solar-Lezama speaks with The Wall Street Journal reporter Isabelle Bousquette about large language models (LLMs) in academia. Instead of building LLMs from scratch, Solar-Lezama suggests “students and researchers are focused on developing applications and even creating synthetic data that could be used to train LLMs,” writes Bousquette.