
Topic: Computer Science and Artificial Intelligence Laboratory (CSAIL)


Displaying 271 - 285 of 706 news clips related to this topic.

Mashable

MIT researchers developed a new control system for the mini robotic cheetah that allows the robot to jump and traverse uneven terrain, reports Jules Suzdaltsev for Mashable. “There’s a camera for processing real-time input from a video camera that then translates that information into body movements for the robot,” Suzdaltsev explains.

Inc.

Inc. columnist Justin Bariso spotlights the late Prof. Patrick Winston’s IAP course “How to Speak,” which was aimed at helping people improve their communication skills while also underscoring the important role engagement plays in becoming a better listener. “Some people ask why [no laptops, no cellphones] is a rule of engagement,” said Winston. “The answer is, we humans only have one language processor. And if your language processor is engaged ... you're distracted. And, worse yet, you distract all of the people around you. Studies have shown that.”

Naked Scientists

The Naked Scientists podcaster Verner Viisainen spotlights how MIT researchers studied vector-based navigation in humans. “What we discovered is actually that we don’t follow the shortest path but actually follow a different kind of optimization criteria which is based on angular deviation,” says Prof. Carlo Ratti.

Popular Science

Popular Science reporter Charlotte Hu writes that MIT researchers have simulated an environment in which socially-aware robots are able to choose whether they want to help or hinder one another, as part of an effort to help improve human-robot interactions. “If you look at the vast majority of what someone says during their day, it has to do with what other [people] want, what they think, getting what that person wants out of another [person],” explains research scientist Andrei Barbu. “And if you want to get to the point where you have a robot inside someone’s home, understanding social interactions is incredibly important.”

The Washington Post

Prof. Julie Shah speaks with Washington Post reporter Tatum Hunter about whether AI technologies will ever surpass human intelligence. “Any positive or negative use or outcome of this technology isn't predetermined. We have a lot of choices that we make,” Shah says. “And these should not be decisions that are left solely to technologists. Everybody needs to be involved because this technology has such a broad impact on all of us.”

TechCrunch

MIT researchers have developed a new machine learning system that can help robots learn to perform certain social interactions, reports Brian Heater for TechCrunch. “Researchers conducted tests in a simulated environment, to develop what they deemed ‘realistic and predictable’ interactions between robots,” writes Heater. “In the simulation, one robot watches another perform a task, attempts to determine the goal and then either attempts to help or hamper it in that task.”

Mashable

Mashable video producer Jules Suzdaltsev shares that MIT researchers have successfully created full-scale, self-navigating robotic boats ready to wade through the Amsterdam canals. “The boats use GPS, lidar, cameras and control algorithms to reach their full self-navigating capabilities,” writes Suzdaltsev.

TechCrunch

TechCrunch writer Devin Coldewey reports on the ReSkin project, an AI project focused on developing a new electronic skin and fingertip meant to expand the sense of touch in robots. The ReSkin project is rooted in GelSight, a technology developed by MIT researchers that allows robots to gauge an object’s hardness.

Fast Company

Fast Company reporter Kristin Toussaint spotlights how researchers from CSAIL and the Senseable City Lab have worked with the Amsterdam Institute for Advanced Metropolitan Solutions on developing a robotic boat now ready to be used in the canals of Amsterdam. “It’s a kind of dynamic infrastructure that can adapt to the needs of a city as they change, and help Amsterdam decongest its streets and better use its waterways,” writes Toussaint.

Axios

Axios reporter Alison Snyder writes that a new study by MIT researchers demonstrates how AI algorithms could provide insight into the human brain’s processing abilities. “Predicting the next word someone might say — like AI algorithms now do when you search the internet or text a friend — may be a key part of the human brain's ability to process language,” writes Snyder.

Reuters

Reuters reporter Toby Sterling spotlights how MIT researchers have been working with Amsterdam’s Institute for Advanced Metropolitan Solutions to develop a self-driving watercraft for transporting passengers, goods and trash through the canals. “We have a lot of open water available in the canals,” says Stephan van Dijk, innovation director at Amsterdam’s Institute for Advanced Metropolitan Solutions. “So, we developed a self-driving, autonomous ship to help with logistics in the city and also bringing people around.”

Scientific American

Using an integrative modeling technique, MIT researchers compared dozens of machine learning algorithms to brain scans as part of an effort to better understand how the brain processes language. The researchers found that “neural networks and computational science might, in fact, be critical tools in providing insight into the great mystery of how the brain processes information of all kinds,” writes Anna Blaustein for Scientific American.

TechCrunch

TechCrunch reporter Mary Ann Azevedo spotlights Adam Marcus ’12 and Nitesh Banta, the co-founders of B12, a digital platform designed to establish, run and grow professional services firms online. B12 is focused on helping “smaller professional service organizations such as law and accounting firms or mortgage brokerages more easily accept online payments and build a digital presence in general,” writes Azevedo.

Gizmodo

Gizmodo reporter Andrew Liszewski writes that MIT researchers “used a high-resolution video camera with excellent low-light performance (the amount of sensor noise has to be as minimal as possible) to capture enough footage of a blank wall that special processing techniques were able to not only see the shadow’s movements, but extrapolate who was creating them.”

Scientific American

Scientific American reporter Sophie Bushwick writes that MIT researchers have developed a new system that can interpret shadows that are invisible to the human eye. “The system can automatically analyze footage of a blank wall in any room in real time, determining the number of people and their actions,” writes Bushwick.