
Topic

Computer Science and Artificial Intelligence Laboratory (CSAIL)


Displaying 676 - 690 of 707 news clips related to this topic.

The Wall Street Journal

Robert Lee Hotz of The Wall Street Journal writes that researchers from MIT and Harvard have developed a prototype of a "flexible, self-assembling machine." Potential applications for the technology range from self-assembling satellites to shape-shifting search-and-rescue robots.

USA Today

In a piece for USA Today, Hoai-Tran Bui writes about how a team of researchers from MIT and Harvard has developed a robot that can self-assemble from a flat sheet of paper in minutes. "The big dream is to make robots fast and inexpensive," says Prof. Daniela Rus.

Slate

Slate reporter Boer Deng writes about the self-assembling robot developed by scientists from MIT and Harvard. The robot "forms itself, Transformer-like, from a flat sheet into a four-legged creature that crawls," Deng writes.

Newsweek

Joe Kloc of Newsweek writes about how MIT and Harvard scientists have developed a self-assembling machine that folds itself into a 3-D robot capable of movement. "We have achieved a long-standing personal goal to design a machine that can assemble itself," says Prof. Daniela Rus of the project.

The Guardian

Ian Sample of The Guardian reports on how a team of researchers from MIT and Harvard has developed a "Transformer" robot that can self-assemble. "This will rapidly extend the manufacturing capabilities that we have today where configuring an assembly line is done manually and requires a lot of time," Prof. Daniela Rus explains.

CNN

Heather Kelly of CNN reports on how MIT researchers have developed a new technique to recreate audio from silent video. "We showed that we can determine pretty reliably the gender of a speaker from low-quality sound we managed to recover from a tissue box," says Dr. Michael Rubinstein. 

PBS NewsHour

Colleen Shalby reports for the PBS NewsHour on the "visual microphone" developed by MIT researchers that can detect and reconstruct audio by analyzing the sound waves traveling through objects.

Bloomberg Businessweek

Bloomberg Businessweek reporter Drake Bennett writes about how MIT researchers have developed a technique for extracting audio by analyzing the sound vibrations traveling through objects. Bennett reports that the researchers found that sound waves could be detected even when using cell phone camera sensors. 

ABC News

Alyssa Newcomb of ABC News reports on how MIT researchers have developed a new method that can recover intelligible audio by videotaping everyday objects and translating the resulting vibrations back into sound.

NPR

NPR’s Melissa Block examines the new MIT algorithm that can translate visual information into sound. Abe Davis explains that by analyzing sound waves traveling through an object, “you can start to filter out some of that noise and you can actually recover the sound that produced that motion.” 

Time

Time reporter Nolan Feeney writes about how researchers from MIT have developed a new technique to extract intelligible audio of speech by “videotaping and analyzing the tiny vibrations of objects.”

Wired

“Researchers have developed an algorithm that can use visual signals from videos to reconstruct sound and have used it to recover intelligible speech from a video,” writes Katie Collins for Wired about an algorithm developed by a team of MIT researchers that can derive speech from material vibrations.

The Washington Post

Rachel Feltman of The Washington Post examines the new MIT algorithm that can reconstruct sound by examining the tiny visual vibrations that sound waves produce in objects. "This is a new dimension to how you can image objects," explains graduate student Abe Davis.

Popular Science

In a piece for Popular Science, Douglas Main writes on the new technique developed by MIT researchers that can reconstruct speech from visual information. The researchers showed that, “an impressive amount of information about the audio (although not its content) could also be recorded with a regular DSLR that films at 60 frames per second.”

Slate

Writing for Slate, Elliot Hannon reports on the new technology developed by MIT researchers that allows audio to be extracted from visual information by processing the vibrations of sound waves as they move through objects.