
Topic: Computer science and technology



Fortune

Fortune reporter David Morris writes that MIT researchers have tricked an artificial intelligence system into thinking that a photo of a machine gun was a helicopter. Morris explains that, “the research points towards potential vulnerabilities in the systems behind technology like self-driving cars, automated security screening systems, or facial-recognition tools.”

The Wall Street Journal

In an article for The Wall Street Journal, Visiting Lecturer Irving Wladawsky-Berger spotlights MIT’s AI and the Future of Work Conference. Wladawsky-Berger writes that participants, “generally agreed that AI will have a major impact on jobs and the very nature of work. But, for the most part, they viewed AI as mostly augmenting rather than replacing human capabilities.”

BBC News

Graduate student Anish Athalye speaks with the BBC about his work examining how image recognition systems can be fooled. "More and more real-world systems are starting to incorporate neural networks, and it's a big concern that these systems may be possible to subvert or attack using adversarial examples,” Athalye explains.

New Scientist

New Scientist reporter Abigail Beale writes that MIT researchers have been able to trick an AI system into thinking an image of a turtle is a rifle. Beale writes that the results, “raise concerns about the accuracy of face recognition systems and the safety of driverless cars, for example.”

Guardian

Guardian reporter Alex Hern writes that in a new paper MIT researchers demonstrated the concept of adversarial images, describing how they tricked an AI system into thinking an image of a turtle was an image of a gun. The researchers explained that their work “demonstrates that adversarial examples are a significantly larger problem in real world systems than previously thought.”

WGBH

WGBH’s Craig LeMoult reports on the future of work conference held at MIT this week, which examined how automation may impact the labor market. Prof. Erik Brynjolfsson explained that, “we're using technologies to augment not just our muscles but our brains, allowing us to control the world and make them figure things out more effectively.”

Fortune

Valentina Zarya writes for Fortune that MIT researchers have developed an AI system that can generate horror stories. The system, named Shelley, learned its craft by reading a Reddit forum of stories by amateur horror writers. Shelley also tweets a line for a new story every hour, encouraging Twitter users to continue the story.

CBS Boston

MIT Media Lab researchers have created an AI program that can write horror stories in collaboration with humans via Twitter, reports David Wade for CBS Boston. “Over time, we are expecting her to learn more from the crowd, and to create even more scarier stories,” says postdoctoral associate Pinar Yanardag.

WBUR

In a WBUR segment about how technology is increasingly being used to assist seniors and caregivers, Rachel Zimmerman highlights Rendever, an MIT spinout, and speaks with Prof. Paul Osterman, Prof. Dina Katabi and Dr. Joseph Coughlin about their work. Zimmerman explains that Coughlin believes “a mix of smart devices and other personal services,” will help people age well.

HuffPost

MIT researchers have developed an artificial neural network that can generate horror stories by collaborating with people on Twitter, HuffPost reports. Pinar Yanardag, a postdoc at the Media Lab, explains that the system is, “creating really interesting and weird stories that have never really existed in the horror genre.”

Associated Press

Associated Press reporter Matt O’Brien details how Media Lab researchers have developed a new system, dubbed Shelley, that can generate scary stories. O’Brien explains that, “Shelley's artificial neural network is generating its own stories, posting opening lines on Twitter, then taking turns with humans in collaborative storytelling.”

Newsweek

Newsweek reporter Joseph Frankel writes that MIT Media Lab researchers have developed an AI system named Shelley that uses human input to write short horror stories. Frankel explains that Shelley, “tweets out one or two sentences as the start of a new horror story, then calls for users to respond with their own lines.”

Inside Higher Ed

Inside Higher Ed reporter Lindsay McKenzie spotlights how MIT has begun a new pilot program that offers students the option to receive tamper-proof digital diplomas, in addition to traditional ones. McKenzie explains that, “students can quickly access a digital diploma that can be shared on social media and verified by employers to ensure its authenticity.”

Boston Globe

By using video to process shadows, MIT researchers have developed an algorithm that can see around corners, writes Alyssa Meyers for The Boston Globe. “When you first think about this, you might think it’s crazy or impossible, but we’ve shown that it’s not if you can understand the physics of how light propagates,” says lead author and MIT graduate Katie Bouman.

WGBH

During an appearance on WGBH’s Greater Boston, Prof. Regina Barzilay speaks with Jim Braude about her research and the experience of winning a MacArthur grant. Barzilay explains that the techniques she and her colleagues are developing to apply machine learning to medicine, “can be applied to many other areas. In fact, we have started collaborating and expanding.”