
Topic

Autonomous vehicles



Wired

The results of the Media Lab’s “Moral Machine” survey provide a glimpse into how people will respond to the ethical dilemmas surrounding autonomous vehicle accidents. “The point here, the researchers say, is to initiate a conversation about ethics in technology, and to guide those who will eventually make the big decisions about AV morality,” writes Wired’s Aarian Marshall.

Motherboard

Using an online platform known as the “Moral Machine,” researchers at the Media Lab have surveyed more than two million people from 233 countries about how an autonomous vehicle should respond in a crash. “The Moral Machine game is similar to the infamous trolley problem,” writes Tracey Lindeman for Motherboard, “but calibrated for the autonomous car.”

The Guardian

A new study from Media Lab researchers highlights the results of an online survey that asked volunteers how a self-driving vehicle should respond to a variety of potential accidents. “Moral responses to unavoidable damage vary greatly around the world in a way that poses a big challenge for companies planning to build driverless cars,” writes Alex Hern in The Guardian.

The Verge

A new paper by MIT researchers details the results of a survey on an online platform they developed, which asked respondents to make ethical decisions about fictional self-driving car crashes. “Millions of users from 233 countries and territories took the quiz, making 40 million ethical decisions in total,” writes James Vincent of The Verge.

The Washington Post

Carolyn Johnson writes for The Washington Post about a new MIT study “that asked people how a self-driving car should respond when faced with a variety of extreme trade-offs.” According to Prof. Iyad Rahwan, Johnson explains, “regulating AI will be different from traditional products, because the machines will have autonomy and the ability to adapt.”

PBS NewsHour

MIT researchers used an online platform known as the “Moral Machine” to gauge how humans respond to ethical decisions made by artificial intelligence, reports Jamie Leventhal for PBS NewsHour. According to postdoc Edmond Awad, two goals of the platform were to foster discussion and “quantitatively [measure] people’s cultural preferences.”

CNBC

CNBC reporter Lorie Konish speaks with Joseph Coughlin, director of the AgeLab, about some key questions around autonomous vehicles for retirees. “The older people who can't drive — whether it is a cognition issue, health issue, physical disability issue — who gets them in the car?” says Coughlin. “And if your mom’s not cognitively well enough to drive, does she ride in the robot car by herself?”

New Scientist

Prof. Iyad Rahwan speaks with New Scientist reporter Sean O’Neill about his work investigating the ethics of artificial intelligence. “I’m pushing for a negotiated social-contract approach,” explains Rahwan. “As a society we want to get along well, but to do it we need property rights, free speech, protection from violence and so on. We need to think about machine ethics in the same way.”  

Bloomberg News

Prof. John Leonard speaks with Bloomberg News about his work with the Toyota Research Institute on developing a system that combines machine learning technologies and sensors to make vehicles safer. “Imagine if you had the most vigilant and capably trained driver in the world that could take over in a situation where a teenager took a curve too fast,” says Leonard of the inspiration for the system.

The Verge

MIT startup Skydio has launched a platform that allows users to create custom software that can be applied to the company’s autonomous drone, reports Nick Statt for The Verge. The platform will let “app makers and drone enthusiasts develop custom software that takes advantage of the device’s bevy of cameras and sensors, as well as its sophisticated computer vision software and machine learning algorithms.”

WCVB

WCVB’s Mike Wankum visits the Beaver Works Summer Institute to see how high school students are gaining hands-on engineering experience. Robert Shin, director of Beaver Works, explains that the program is aimed at “inspiring the next generation.”

Wired

Wired reporter Jack Stewart explores the technology behind Boston-based startup WaveSense, which applies ground-penetrating radar developed at MIT’s Lincoln Laboratory to give self-driving cars a way to map where they are without relying on visual clues or GPS. The technology, writes Stewart, was “first deployed in 2013 to help troops navigate in Afghanistan, where staying on path and avoiding landmines is a matter of life and death.”

Boston Globe

Boston Globe reporter Bette Keva spotlights SeaTrac Systems, Inc., which was founded by MIT alumni Buddy Duncan and James Herman, and has developed an autonomous, solar-powered boat. Herman explains that SeaTrac’s goal is to develop a boat that could be used on “dirty, dull, dangerous, or expensive” missions.

Wired

In an article for Wired about distracted driving, Aarian Marshall highlights how MIT researchers are studying how drivers use new automated driving systems. “This is about human-centered development: leveraging the human element and integrating it with advances in automation,” explains Research Engineer Bryan Reimer.

Bloomberg

Research Engineer Bryan Reimer speaks with Bloomberg Radio about autonomous vehicle safety following the announcement that nuTonomy will soon test its vehicles on Boston streets. Citing the successful partnership between city officials and autonomous vehicle startups, Reimer stresses the importance of companies demonstrating “that they can walk before they can run.”