
Topic

Computer vision


Displaying 16 - 30 of 173 news clips related to this topic.

WHDH 7

MIT researchers have created a new headset, called X-AR, that can help users find hidden or lost items by sending a wireless signal to any item that has a designated tag on it, reports WHDH. The augmented reality headset “allows them to see things that are otherwise not visible to the human eye,” explains Prof. Fadel Adib. “It visualizes items for people and then it guides them towards items.” 

The Daily Beast

MIT engineers have developed an augmented reality headset that uses RFID technology to allow wearers to find objects, reports Tony Ho Tran for The Daily Beast. “The device is intended to assist workers in places like e-commerce warehouses and retail stores to quickly find and identify objects,” writes Tran. “It can also help technicians find tools and items they need to assemble products.” 

Popular Science

An augmented reality headset developed by MIT engineers, called X-AR, uses RFID technology to help users find hidden objects, reports Andrew Paul for Popular Science. “X-AR’s creators were able to guide users with nearly 99 percent accuracy to items scattered throughout a warehouse testing environment,” writes Paul. “When those products were hidden within boxes, the X-AR still even boasted an almost 92 percent accuracy rate.” 

The Wall Street Journal

Writing for The Wall Street Journal, Dean Daniel Huttenlocher, former Secretary of State Henry Kissinger and former Google CEO Eric Schmidt explore how generative artificial intelligence “presents a philosophical and practical challenge on a scale not experienced since the beginning of the Enlightenment.” Huttenlocher, Kissinger and Schmidt make the case that “parameters for AI’s responsible use need to be established, with variation based on the type of technology and the context of deployment.”

Mashable

Researchers at MIT have developed an autonomous vehicle with “mini sensors to allow it to see the world and also with an artificially intelligent computer brain that can allow it to drive,” explains postdoctoral associate Alexander Amini in an interview with Mashable. “Our autonomous vehicles are able to learn directly from humans how to drive a car so they can be deployed and interact in brand new environments that they’ve never seen before,” Amini notes.


The New York Times

Prof. Steven Barrett speaks with New York Times reporter Paige McClanahan about the pressing need to make air travel more sustainable and his research exploring the impact of contrails on the planet’s temperature. “Eliminating contrails is quite a big lever on mitigating the climate impact of aviation,” said Barrett.

TechCrunch

MIT spinout Gaia AI is building a forest management tool aimed at providing foresters with the resources to make data-driven decisions, report Haje Jan Kamps and Brian Heater for TechCrunch. “The company is currently using lidar and computer vision tech to gather data but is ultimately building a data platform to tackle some of the big questions in forestry,” write Kamps and Heater.

Popular Science

Popular Science reporter Charlotte Hu writes that MIT researchers have developed a new machine learning model that can depict how the sound around a listener changes as they move through a certain space. “We’re mostly modeling the spatial acoustics, so the [focus is on] reverberations,” explains graduate student Yilun Du. “Maybe if you’re in a concert hall, there are a lot of reverberations, maybe if you’re in a cathedral, there are many echoes versus if you’re in a small room, there isn’t really any echo.”

TechCrunch

Scientists at MIT have developed “a machine learning model that can capture how sounds in a room will propagate through space,” report Kyle Wiggers and Devin Coldewey for TechCrunch. “By modeling the acoustics, the system can learn a room’s geometry from sound recordings, which can then be used to build a visual rendering of a room,” write Wiggers and Coldewey.

Fast Company

Fast Company reporter Elissaveta Brandon writes that a team of scientists from MIT and elsewhere have developed an amphibious artificial vision system inspired by the fiddler crab’s compound eye, which has an almost 360-degree field of view and can see on both land and water. “When translated into a machine,” writes Brandon, “this could mean more versatile cameras for self-driving cars and drones, both of which can become untrustworthy in the rain.”

TechCrunch

MIT researchers have developed FuseBot, a new system that combines RFID tagging with a robotic arm to retrieve hidden objects from a pile, reports Brian Heater for TechCrunch. “As long as some objects within the pile are tagged, the system can determine where its subject is most likely located and the most efficient way to retrieve it,” writes Heater.

Popular Science

Popular Science reporter Charlotte Hu writes that MIT researchers have developed an “electronics chip design that allows for sensors and processors to be easily swapped out or added on, like bricks of LEGO.” Hu writes that “a reconfigurable, modular chip like this could be useful for upgrading smartphones, computers, or other devices without producing as much waste.”

The Daily Beast

MIT engineers have developed a wireless, reconfigurable chip that could easily be snapped onto existing devices like a LEGO brick, reports Miriam Fauzia for The Daily Beast. “Having the flexibility to customize and upgrade an old device is a modder’s dream,” writes Fauzia, “but the chip may also help reduce electronic waste, which is estimated at 50 million tons a year worldwide.”

The Wall Street Journal

CSAIL researchers have developed a robotic arm equipped with a sensorized soft brush that can untangle hair, reports Douglas Belkin for The Wall Street Journal. “The laboratory brush is outfitted with sensors that detect tension,” writes Belkin. “That tension reads as pain and is used to determine whether to use long strokes or shorter ones.”

TechCrunch

TechCrunch reporter Kyle Wiggers spotlights how MIT researchers have developed a new computer vision algorithm that can identify images down to the individual pixel. The new algorithm is a “vast improvement over the conventional method of ‘teaching’ an algorithm to spot and classify objects in pictures and videos,” writes Wiggers.