
Topic

Computer science and technology


Wired

Undergraduates Isabella Struckman and Sofie Kupiec ’23 reached out to the first hundred signatories of the Future of Life Institute’s open letter calling for a pause on AI development to learn more about their motivations and concerns, reports Will Knight for Wired. “The duo’s write-up of their findings reveals a broad array of perspectives among those who put their name to the document,” writes Knight. “Despite the letter’s public reception, relatively few were actually worried about AI posing a looming threat to humanity.”

TechCrunch

Prof. Daniela Rus, director of CSAIL, speaks with TechCrunch reporter Brian Heater about liquid neural networks and how this emerging technology could impact robotics. “The reason we started thinking about liquid networks has to do with some of the limitations of today’s AI systems,” says Rus, “which prevent them from being very effective for safety-critical systems and robotics. Most of the robotics applications are safety critical.”

TechCrunch

Vaikkunth Mugunthan MS ’19, PhD ’22 and Christian Lau MS ’20, PhD ’22 co-founded DynamoFL – a software company that “offers software to bring large language models (LLMs) to enterprise and fine-tune those models on sensitive data,” reports Kyle Wiggers for TechCrunch. “Generative AI has brought to the fore new risks, including the ability for LLMs to ‘memorize’ sensitive training data and leak this data to malicious actors,” says Mugunthan. “Enterprises have been ill-equipped to address these risks, as properly addressing these LLM vulnerabilities would require recruiting teams of highly specialized privacy machine learning researchers to create a streamlined infrastructure for continuously testing their LLMs against emerging data security vulnerabilities.”

Boston.com

MIT researchers have developed a new tool called “PhotoGuard” that can help protect images from AI manipulation, reports Ross Cristantiello for Boston.com. The tool “is designed to make real images resistant to advanced models that can generate new images, such as DALL-E and Midjourney,” writes Cristantiello.

The Boston Globe

Ivan Sutherland PhD ’63, whose work “laid some of the foundations of the digital world that surrounds us today,” speaks with Boston Globe columnist Scott Kirsner about the importance of fun and play in advancing technological research. “You’re no good at things you think aren’t fun,” Sutherland said. If you want to expand the scope of what’s possible today, he noted, “you need to play around with stuff to understand what it will do, and what it won’t do.”

USA Today

A working paper co-authored by Prof. John Horton and graduate students Emma van Inwegen and Zanele Munyikwa has found that “AI has the potential to level the playing field for non-native English speakers applying for jobs by helping them better present themselves to English-speaking employers,” reports Medora Lee for USA Today. “Between June 8 and July 14, 2021, [van Inwegen] studied 480,948 job seekers, who applied for jobs that require English to be spoken but who mostly lived in nations where English is not the native language,” explains Lee. “Those who used AI were 7.8% more likely to be hired.”

CNN

Researchers at MIT have developed “PhotoGuard,” a tool that can be used to protect images from AI manipulation, reports Catherine Thorbecke for CNN. The tool “puts an invisible ‘immunization’ over images that stops AI models from being able to manipulate the picture,” writes Thorbecke.

The Boston Globe

Boston Globe reporter Aaron Pressman speaks with alumnus Jeremy Wertheimer, co-founder of ITA Software, about the state of AI innovation in the Greater Boston area. “Back in the day, we called it good old-fashioned AI,” says Wertheimer. “But the future is to forget all that clever coding. You want to have an incredibly simple program with enough data and enough computing power.”

Financial Times

Prof. David Autor speaks with Delphine Strauss of the Financial Times about the risks AI poses to jobs and job quality, but also the technology’s potential to help rebuild middle-class jobs. “The good case for AI is where it enables people with foundational expertise or judgment to do more expert work with less expertise,” says Autor. He adds, “My hope is that we can use AI to reinstate the value of skills held by people without as high a degree of formal education.”

The Boston Globe

Prof. Daron Acemoglu speaks with Boston Globe reporters Alex Kantrowitz and Douglas Gorman about how to address the advance of AI in the workplace. “We know from many areas that have rapidly automated that they don’t deliver the types of returns that they promised,” says Acemoglu. “Humans are underrated.”

Reuters

Prof. Simon Johnson speaks with Reuters reporter Mark John about the impact of AI on the economy. “AI has got a lot of potential – but potential to go either way,” says Johnson. “We are at a fork in the road.”

Forbes

At CSAIL’s Imagination in Action event, CSAIL research affiliate and MIT Corporation life member emeritus Bob Metcalfe '69 showcased how the many individual bits of innovation that emerged from the Telnet Protocol later became the foundation for email, writes Prof. Daniela Rus, director of CSAIL, for Forbes. “Looking ahead to the future of connectivity, Metcalfe spoke of the challenges of limited network bandwidth, and the importance of keeping connectivity firmly in mind when developing any new computing technologies,” writes Rus.

Associated Press

AP reporter Ronald Blum spotlights the premiere of Prof. Jay Scheib’s augmented reality-infused production of Wagner’s “Parsifal” at the Bayreuth Festival in Germany. “We sort of focus on a future society in which myth has become possible again,” says Scheib. “But at the same time, we’re not that far in the future and the third act is set around a broken lithium-ion field. We’re set in a world that is somehow post-planet and post-collapse of energy production.”

Forbes

At CSAIL’s Imagination in Action event, Prof. Stefanie Jegelka’s presentation provided insight into “the failures and successes of neural networks and explored some crucial context that can help engineers and other human observers to focus in on how learning is happening,” reports research affiliate John Werner for Forbes.

ABC News

Researchers from MIT and Massachusetts General Hospital have developed “Sybil,” an AI tool that can detect the risk of a patient developing lung cancer within six years, reports Mary Kekatos for ABC News. “Sybil was trained on low-dose chest computed tomography scans, which are recommended for those between ages 50 and 80 who either have a significant history of smoking or currently smoke,” explains Kekatos.