CBS Boston features how researchers from MIT and Brigham and Women’s Hospital equipped a robot from Boston Dynamics with technology to enable remote vital sign monitoring.
In this video, Bloomberg News spotlights how researchers from MIT and Brigham and Women’s Hospital have developed a new system that facilitates remote monitoring of a patient’s vital signs, as part of an effort to help reduce healthcare workers’ Covid-19 risk. Researchers have successfully measured temperature, breathing rate, pulse rate and blood oxygen saturation in healthy patients.
Researchers from MIT and Brigham and Women’s Hospital have outfitted a robotic dog from Boston Dynamics with technology that enables doctors to remotely measure a patient’s vital signs, reports Rick Sobey for The Boston Herald. “Using four cameras mounted on the dog-like robot, the researchers have shown that they can measure skin temperature, breathing rate, pulse rate and blood oxygen saturation in healthy patients,” writes Sobey.
A new tool developed by MIT researchers sheds light on the operations of generative adversarial network models and allows users to edit these machine learning models to generate new images, reports Daphne Leprince-Ringuet for ZDNet. "The real challenge I'm trying to breach here," says graduate student David Bau, "is how to create models of the world based on people's imagination."
Verge reporter James Vincent writes that researchers at the MIT-IBM Watson AI Lab have developed an algorithm that can transform selfies into artistic portraits. The algorithm is “trained on 45,000 classical portraits to render your face in faux oil, watercolor, or ink,” Vincent explains.
Paul Carter of BBC’s Click highlights CSAIL research to teach a robot how to feel an object just by looking at it. This will ultimately help the robot “grip better when lifting things like the handle of a mug,” says Carter.
Gizmodo reporter Victoria Song writes that MIT researchers have developed a new system that can teach a machine how to make pizza by examining a photograph. “The researchers set out to teach machines how to recognize different steps in cooking by dissecting images of pizza for individual ingredients,” Song explains.
Using a tactile sensor and web camera, MIT researchers developed an AI system that allows robots to predict what something feels like just by looking at it, reports David Williams for CNN. “This technology could be used to help robots figure out the best way to hold an object just by looking at it,” explains Williams.
Forbes contributor Charles Towers-Clark explores how CSAIL researchers have developed a database of tactile and visual information that could be used to allow robots to infer how different objects look and feel. “This breakthrough could lead to far more sensitive and practical robotic arms that could improve any number of delicate or mission-critical operations,” Towers-Clark writes.
MIT researchers have created a new system that enables robots to identify objects using tactile information, reports Darrell Etherington for TechCrunch. “This type of AI also could be used to help robots operate more efficiently and effectively in low-light environments without requiring advanced sensors,” Etherington explains.
Fast Company reporter Michael Grothaus writes that CSAIL researchers have developed a new system that allows robots to determine what objects look like by touching them. “The breakthrough could ultimately help robots become better at manipulating objects,” Grothaus explains.
Gizmodo reporter Andrew Liszewski writes that MIT researchers have created an algorithm that can automatically fix warped faces in wide-angle shots without impacting the rest of the photo. Liszewski writes that the tool could “be integrated into a camera app and applied to wide angle photos on the fly as the algorithm is fast enough on modern smartphones to provide almost immediate results.”
Wired reporter Lily Hay Newman highlights graduate student Joy Buolamwini’s Congressional testimony about the bias of facial recognition systems. “New research is showing bias in the use of facial analysis technology for health care purposes, and facial recognition is being sold to schools,” said Buolamwini. “Our faces may well be the final frontier of privacy.”
MIT researchers have identified a method to help AI systems avoid adversarial attacks, reports Matthew Hutson for Science. When the researchers “trained an algorithm on images without the subtle features, their image recognition software was fooled by adversarial attacks only 50% of the time,” Hutson explains. “That compares with a 95% rate of vulnerability when the AI was trained on images with both obvious and subtle patterns.”
Researchers at MIT have found that adversarial examples, a kind of optical illusion for AI that makes the system incorrectly identify an image, may not actually impact AI in the ways computer scientists have previously thought. “When algorithms fall for an adversarial example, they’re not hallucinating—they’re seeing something that people don’t,” Louise Matsakis writes for Wired.