
Topic: Computer science and technology


Displaying 256–270 of 1168 news clips related to this topic.

Forbes

Forbes reporter Rob Toews spotlights Prof. Daniela Rus, director of CSAIL, and research affiliate Ramin Hasani and their work with liquid neural networks. “The ‘liquid’ in the name refers to the fact that the model’s weights are probabilistic rather than constant, allowing them to vary fluidly depending on the inputs the model is exposed to,” writes Toews.

The Boston Globe

Prof. Tod Machover speaks with Boston Globe reporter A.Z. Madonna about the restaging of his opera ‘VALIS’ at MIT, which features an AI-assisted musical instrument developed by Nina Masuelli ’23. “In all my career, I’ve never seen anything change as fast as AI is changing right now, period,” said Machover. “So to figure out how to steer it towards something productive and useful is a really important question right now.”

Freakonomics Radio

Prof. Simon Johnson speaks with Freakonomics guest host Adam Davidson about his new book, economic history, and why new technologies impact people differently. “What do people creating technology, deploying technology — what exactly are they seeking to achieve? If they’re seeking to replace people, then that’s what they’re going to be doing,” says Johnson. “But if they’re seeking to make people individually more productive, more creative, enable them to design and carry out new tasks — let’s push the vision more in that direction. And that’s a naturally more inclusive version of the market economy. And I think we will get better outcomes for more people.”

Nature

Nature contributor David Chandler writes about the late Prof. Edward Fredkin and his impact on computer science and physics. “Fredkin took things even further, concluding that the whole Universe could actually be seen as a kind of computer,” explains Chandler. “In his view, it was a ‘cellular automaton’: a collection of computational bits, or cells, that can flip states according to a defined set of rules determined by the states of the cells around them. Over time, these simple rules can give rise to all the complexities of the cosmos — even life.”
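
The cellular-automaton idea Chandler describes is simple enough to sketch in a few lines of Python. The example below uses Wolfram’s Rule 110, a standard one-dimensional rule chosen here purely for illustration (it is not one of Fredkin’s own models): each cell’s next state is determined entirely by a fixed rule applied to its current neighborhood, yet complex patterns emerge from a single live cell.

```python
# Minimal 1-D cellular automaton (Rule 110, illustrative only).
# Each cell flips state according to a fixed table indexed by the
# states of its left neighbor, itself, and its right neighbor.

RULE = 110  # the rule number's binary digits encode the update table

def step(cells):
    """Advance every cell one tick based on its 3-cell neighborhood."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch structure emerge over time.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join("█" if c else "·" for c in cells))
    cells = step(cells)
```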

Popular Science

Prof. Yoon Kim speaks with Popular Science reporter Charlotte Hu about how large language models like ChatGPT operate. “You can think of [chatbots] as algorithms with little knobs on them,” says Kim. “These knobs basically learn on data that you see out in the wild,” allowing the software to create “probabilities over the entire English vocab.”
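
Kim’s “knobs” are the model’s learned parameters, and the “probabilities over the entire English vocab” come from normalizing the model’s raw scores. A toy sketch of that last step (the vocabulary and scores below are invented for illustration; a real LLM computes scores over tens of thousands of tokens using billions of learned parameters):

```python
import math

# A made-up four-word vocabulary and made-up raw scores ("logits"),
# standing in for what a trained model's weights would produce
# given some context.
vocab = ["the", "cat", "sat", "mat"]
logits = [0.5, 0.1, 0.2, 2.0]

# Softmax: exponentiate and normalize so the scores become a
# probability distribution over the vocabulary.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

for word, p in zip(vocab, probs):
    print(f"{word}: {p:.2f}")  # "mat" ends up with the highest probability
```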

MSNBC

Graduate students Martin Nisser and Marisa Gaetz co-founded Brave Behind Bars, a program designed to provide incarcerated individuals with coding and digital literacy skills to better prepare them for life after prison, reports Morgan Radford for MSNBC. Computers and coding skills “are really kind of paramount for fostering success in the modern workplace,” says Nisser.

TechCrunch

Researchers from MIT and Harvard have explored astrocytes, a group of brain cells, from a computational perspective and developed a mathematical model that shows how they can be used to build a biological transformer, reports Kyle Wiggers for TechCrunch. “The brain is far superior to even the best artificial neural networks that we have developed, but we don’t really know exactly how the brain works,” says research staff member Dmitry Krotov. “There is scientific value in thinking about connections between biological hardware and large-scale artificial intelligence networks. This is neuroscience for AI and AI for neuroscience.”

The Wall Street Journal

Prof. Max Tegmark speaks with The Wall Street Journal reporter Emily Bobrow about the importance of companies and governments working together to mitigate the risks of new AI technologies. Tegmark “recommends the creation of something like a Food and Drug Administration for AI, which would force companies to prove their products are safe before releasing them to the public,” writes Bobrow.

The Guardian

Prof. D. Fox Harrell writes for The Guardian about the importance of ensuring AI systems are designed to “reflect the ethically positive culture we truly want.” Harrell emphasizes: “We need to be aware of, and thoughtfully design, the cultural values that AI is based on. With care, we can build systems based on multiple worldviews – and address key ethical issues in design such as transparency and intelligibility.”

Wired

Undergraduate student Isabella Struckman and Sofie Kupiec ’23 reached out to the first hundred signatories of the Future of Life Institute’s open letter calling for a pause on AI development to learn more about their motivations and concerns, reports Will Knight for Wired. “The duo’s write-up of their findings reveals a broad array of perspectives among those who put their name to the document,” writes Knight. “Despite the letter’s public reception, relatively few were actually worried about AI posing a looming threat to humanity.”

TechCrunch

Prof. Daniela Rus, director of CSAIL, speaks with TechCrunch reporter Brian Heater about liquid neural networks and how this emerging technology could impact robotics. “The reason we started thinking about liquid networks has to do with some of the limitations of today’s AI systems,” says Rus, “which prevent them from being very effective for safety-critical systems and robotics. Most of the robotics applications are safety critical.”

TechCrunch

Vaikkunth Mugunthan MS ’19, PhD ’22 and Christian Lau MS ’20, PhD ’22 co-founded DynamoFL – a software company that “offers software to bring large language models (LLMs) to enterprise and fine-tune those models on sensitive data,” reports Kyle Wiggers for TechCrunch. “Generative AI has brought to the fore new risks, including the ability for LLMs to ‘memorize’ sensitive training data and leak this data to malicious actors,” says Mugunthan. “Enterprises have been ill-equipped to address these risks, as properly addressing these LLM vulnerabilities would require recruiting teams of highly specialized privacy machine learning researchers to create a streamlined infrastructure for continuously testing their LLMs against emerging data security vulnerabilities.”

Boston.com

MIT researchers have developed a new tool called “PhotoGuard” that can help protect images from AI manipulation, reports Ross Cristantiello for Boston.com. The tool “is designed to make real images resistant to advanced models that can generate new images, such as DALL-E and Midjourney,” writes Cristantiello.

The Boston Globe

Ivan Sutherland PhD ’63, whose work “laid some of the foundations of the digital world that surrounds us today,” speaks with Boston Globe columnist Scott Kirsner about the importance of fun and play in advancing technological research. “You’re no good at things you think aren’t fun,” Sutherland said. If you want to expand the scope of what’s possible today, he noted, “you need to play around with stuff to understand what it will do, and what it won’t do.”

USA Today

A working paper co-authored by Prof. John Horton and graduate students Emma van Inwegen and Zanele Munyikwa has found that “AI has the potential to level the playing field for non-native English speakers applying for jobs by helping them better present themselves to English-speaking employers,” reports Medora Lee for USA Today. “Between June 8 and July 14, 2021, [van Inwegen] studied 480,948 job seekers, who applied for jobs that require English to be spoken but who mostly lived in nations where English is not the native language,” explains Lee. “Those who used AI were 7.8% more likely to be hired.”