
Topic: Machine learning


Displaying 181–195 of 700 news clips related to this topic.

Fortune

Researchers from MIT and elsewhere have identified some of the benefits and disadvantages of generative AI when used for specific tasks, report Paige McGlauflin and Joseph Abrams for Fortune. “The findings show a 40% performance boost for consultants using the chatbot for the creative product project, compared to the control group that did not use ChatGPT, but a 23% decline in performance when used for business problem-solving,” explain McGlauflin and Abrams.

The Wall Street Journal

A study by researchers from MIT and Harvard examined the potential impact of AI technologies on the field of radiology, reports Laura Landro for The Wall Street Journal. “Both AI models and radiologists have their own unique strengths and areas for improvement,” says Prof. Nikhil Agarwal.

GBH

Prof. Eric Klopfer, co-director of the RAISE initiative (Responsible AI for Social Empowerment in Education), speaks with GBH reporter Diane Adame about the importance of providing students guidance on navigating artificial intelligence systems. “I think it's really important for kids to be aware that these things exist now, because whether it's in school or out of school, they are part of systems where AI is present,” says Klopfer. “Many humans are biased. And so the [AI] systems express those same biases that they've seen online and the data that they've collected from humans.”

Scientific American

A new study by MIT researchers demonstrates how “machine-learning systems designed to spot someone breaking a policy rule—a dress code, for example—will be harsher or more lenient depending on minuscule-seeming differences in how humans annotated data that were used to train the system,” reports Ananya for Scientific American. “This is an important warning for a field where datasets are often used without close examination of labeling practices, and [it] underscores the need for caution in automated decision systems—particularly in contexts where compliance with societal rules is essential,” says Prof. Marzyeh Ghassemi.

The Ojo-Yoshida Report

Research scientist Bryan Reimer speaks with The Ojo-Yoshida Report host Junko Yoshida about the future of the autonomous vehicle industry. “We cannot let the finances drive here,” explains Reimer. “We need to manage the finances to let society win over the long haul.”

Forbes

Forbes reporter Rob Toews spotlights Prof. Daniela Rus, director of CSAIL, and research affiliate Ramin Hasani and their work with liquid neural networks. “The ‘liquid’ in the name refers to the fact that the model’s weights are probabilistic rather than constant, allowing them to vary fluidly depending on the inputs the model is exposed to,” writes Toews.
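
To make Toews’s description concrete, here is a heavily simplified Python sketch of input-dependent dynamics in the spirit of the liquid time-constant cells developed by Hasani, Rus, and colleagues. The equation shape, constants, and function names below are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np

# Hypothetical sketch of the "liquid" idea: rather than a fixed weight
# acting on each input, the cell's effective dynamics change with the
# input itself, so the state responds "fluidly" to what it sees.

def ltc_step(x, inp, tau, w, a, dt=0.01):
    """One Euler step of dx/dt = -(1/tau + f) * x + f * a,
    where f depends on the current input."""
    f = np.tanh(w * inp)               # input-dependent gating term
    dxdt = -(1.0 / tau + f) * x + f * a
    return x + dt * dxdt

x = 0.0  # initial cell state
for t, inp in enumerate([0.1, 0.9, -0.5, 0.3]):
    x = ltc_step(x, inp, tau=1.0, w=2.0, a=1.0)
    print(f"step {t}: state = {x:.4f}")
```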

Financial Times

Researchers at MIT and elsewhere have used artificial intelligence to develop a new antibiotic to combat Acinetobacter baumannii, a hard-to-treat bacterium known for developing resistance to antibiotics, reports Hannah Kuchler for the Financial Times. “It took just an hour and a half — a long lunch — for the AI to serve up a potential new antibiotic, an offering to a world contending with the rise of so-called superbugs: bacteria, viruses, fungi and parasites that have mutated and no longer respond to the drugs we have available,” writes Kuchler.

Popular Science

Prof. Yoon Kim speaks with Popular Science reporter Charlotte Hu about how large language models like ChatGPT operate. “You can think of [chatbots] as algorithms with little knobs on them,” says Kim. “These knobs basically learn on data that you see out in the wild,” allowing the software to create “probabilities over the entire English vocab.”
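
As a rough illustration of the “knobs” and vocabulary probabilities Kim describes, here is a minimal Python sketch: learned parameters map a context to a score for every word, and a softmax turns those scores into a probability distribution over the whole vocabulary. The tiny vocabulary, random weights, and function name are hypothetical, for illustration only.

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]
rng = np.random.default_rng(0)

# The "knobs": a weight matrix whose values would be learned from data.
context_dim = 8
weights = rng.normal(size=(context_dim, len(vocab)))

def next_token_probabilities(context_vector):
    """Score every vocabulary item, then normalize with a softmax."""
    scores = context_vector @ weights
    exp_scores = np.exp(scores - scores.max())  # subtract max for stability
    return exp_scores / exp_scores.sum()

context = rng.normal(size=context_dim)  # stand-in for an encoded prompt
for word, p in zip(vocab, next_token_probabilities(context)):
    print(f"{word}: {p:.3f}")
```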

Fast Company

Principal Research Scientist Kalyan Veeramachaneni speaks with Fast Company reporter Sam Becker about his work developing the Synthetic Data Vault, a set of tools for creating synthetic data sets. “Fake data is randomly generated,” says Veeramachaneni, “while synthetic data is trying to create data from a machine learning model that looks very realistic.”
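
As a sketch of the distinction Veeramachaneni draws, here is how the open-source SDV library is typically used to fit a model to a real table and sample realistic-looking rows. This assumes SDV’s single-table API; the example table is hypothetical.

```python
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import GaussianCopulaSynthesizer

# A tiny "real" table to model (hypothetical example data).
real_data = pd.DataFrame({
    "age": [34, 45, 29, 52, 41],
    "salary": [58000, 72000, 49000, 91000, 66000],
})

# Learn the statistical shape of the real data...
metadata = SingleTableMetadata()
metadata.detect_from_dataframe(real_data)
synthesizer = GaussianCopulaSynthesizer(metadata)
synthesizer.fit(real_data)

# ...then sample new rows that look realistic but match no real record.
synthetic_data = synthesizer.sample(num_rows=5)
print(synthetic_data)
```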

TechCrunch

Researchers from MIT and Harvard have explored astrocytes, a group of brain cells, from a computational perspective and developed a mathematical model that shows how they can be used to build a biological transformer, reports Kyle Wiggers for TechCrunch. “The brain is far superior to even the best artificial neural networks that we have developed, but we don’t really know exactly how the brain works,” says research staff member Dmitry Krotov. “There is scientific value in thinking about connections between biological hardware and large-scale artificial intelligence networks. This is neuroscience for AI and AI for neuroscience.”

The Wall Street Journal

Prof. Max Tegmark speaks with The Wall Street Journal reporter Emily Bobrow about the importance of companies and governments working together to mitigate the risks of new AI technologies. Tegmark “recommends the creation of something like a Food and Drug Administration for AI, which would force companies to prove their products are safe before releasing them to the public,” writes Bobrow.

The Guardian

Prof. D. Fox Harrell writes for The Guardian about the importance of ensuring AI systems are designed to “reflect the ethically positive culture we truly want.” Harrell emphasizes: “We need to be aware of, and thoughtfully design, the cultural values that AI is based on. With care, we can build systems based on multiple worldviews – and address key ethical issues in design such as transparency and intelligibility.”

Wired

Undergraduate student Isabella Struckman and Sofie Kupiec ’23 reached out to the first hundred signatories of the Future of Life Institute’s open letter calling for a pause on AI development to learn more about their motivations and concerns, reports Will Knight for Wired. “The duo’s write-up of their findings reveals a broad array of perspectives among those who put their name to the document,” writes Knight. “Despite the letter’s public reception, relatively few were actually worried about AI posing a looming threat to humanity.”

TechCrunch

Prof. Daniela Rus, director of CSAIL, speaks with TechCrunch reporter Brian Heater about liquid neural networks and how this emerging technology could impact robotics. “The reason we started thinking about liquid networks has to do with some of the limitations of today’s AI systems,” says Rus, “which prevent them from being very effective for safety-critical systems and robotics. Most of the robotics applications are safety critical.”