Topic: Machine learning

Fast Company

Fast Company reporter Michael Grothaus writes that CSAIL researchers have developed a new system that allows robots to determine what objects look like by touching them. “The breakthrough could ultimately help robots become better at manipulating objects,” Grothaus explains.

TechCrunch

TechCrunch reporter Darrell Etherington writes that MIT researchers have developed a system that can predict a person’s trajectory. The tool could allow “robots that typically freeze in the face of anything even vaguely resembling a person walking in their path to continue to operate and move around the flow of human foot traffic.”

Motherboard

Motherboard reporter Rob Dozier writes about Glitch, an MIT startup that uses machine learning to design clothing. “These tools are meant to empower human designers,” explains graduate student Emily Salvador. “What I think is really cool about these creative-focused AI tools is that there’s still this really compelling need for a human to intervene with the algorithm.”

Forbes

Forbes reporter Joe McKendrick highlights a Nature review article by MIT researchers that calls for expanding the study of AI. “We’re seeing the rise of machines with agency, machines that are actors making decisions and taking actions autonomously,” they write. “This calls for a new field of scientific study that looks at them not solely as products of engineering and computer science.”

Economist

A new sensory glove developed by MIT researchers provides insight into how humans grasp and manipulate objects, reports The Economist. The glove will not only “be useful in programming robots to mimic people more closely when they pick objects up,” but also could “provide insights into how the different parts of the hand work together when grasping things.”

HealthDay News

A new glove embedded with sensors can enable AI systems to identify the shape and weight of different objects, writes HealthDay reporter Dennis Thompson. Using the glove, “researchers have been able to clearly unravel or quantify how the different regions of the hand come together to perform a grasping task,” explains MIT alumnus Subramanian Sundaram.

New Scientist

New Scientist reporter Chelsea Whyte writes that MIT researchers have developed a smart glove that enables neural networks to identify objects by touch alone. “There’s been a lot of hope that we’ll be able to understand the human grasp someday and this will unlock our potential to create this dexterity in robots,” explains MIT alumnus Subramanian Sundaram.

PBS NOVA

MIT researchers have developed a low-cost electronic glove equipped with sensors that can use tactile information to identify objects, reports Katherine Wu for NOVA Next. Wu writes that the glove is “easy and economical to manufacture, carrying a wallet-friendly price tag of only $10 per glove, and could someday inform the design of prosthetics, surgical tools, and more.”

VentureBeat

Researchers from MIT and several other institutions have found that grammar-enriched deep learning models had a better understanding of key linguistic rules, reports Kyle Wiggers for VentureBeat. The researchers found that an AI system provided with knowledge of basic grammar “consistently performed better than systems trained on little-to-no grammar using a fraction of the data, and that it could comprehend ‘fairly sophisticated’ rules.”

Gizmodo

In an article for Gizmodo, Dell Cameron writes that graduate student Joy Buolamwini testified before Congress about the inherent biases of facial recognition systems. Buolamwini’s research on face recognition tools “identified a 35-percent error rate for photos of darker skinned women, as opposed to database searches using photos of white men, which proved accurate 99 percent of the time.”

Wired

Wired reporter Lily Hay Newman highlights graduate student Joy Buolamwini’s Congressional testimony about the bias of facial recognition systems. “New research is showing bias in the use of facial analysis technology for health care purposes, and facial recognition is being sold to schools,” said Buolamwini. “Our faces may well be the final frontier of privacy.” 

MIT Technology Review

Will Knight writes for MIT Technology Review about the MIT-Air Force AI Accelerator, which “will focus on uses of AI for the public good, meaning applications relevant to the humanitarian work done by the Air Force.” “These are extraordinarily important problems,” says Prof. Daniela Rus. “All of these applications have a great deal of uncertainty and complexity.”

Boston Globe

The new MIT-Air Force AI Accelerator “will look at improving Air Force operations and addressing larger societal needs, such as responses to disasters and medical readiness,” reports Breanne Kovatch for The Boston Globe. “The AI Accelerator provides us with an opportunity to develop technologies that will be vectors for positive change in the world,” says Prof. Daniela Rus.

WCVB

WCVB-TV’s Mike Wankum visits the Media Lab to learn more about a new wearable device that allows users to communicate with a computer without speaking by measuring tiny electrical impulses sent by the brain to the jaw and face. Graduate student Arnav Kapur explains that the device is aimed at exploring “how do we marry AI and human intelligence in a way that’s symbiotic.”

Fast Company

Fast Company reporter Eillie Anzilotti highlights how MIT researchers have developed an AI-enabled headset device that can translate silent thoughts into speech. Anzilotti explains that one of the factors that is motivating graduate student Arnav Kapur to develop the device is “to return control and ease of verbal communication to people who struggle with it.”