Nature
Prof. Jacopo Buongiorno speaks with Nature reporter Davide Castelvecchi about how AI has increased energy demand and the future of nuclear energy.
Researchers at MIT have developed “Clio,” a new technique that “enables robots to make intuitive, task-relevant decisions,” reports Jennifer Kite-Powell for Forbes. The team’s new approach allows “a robot to quickly map a scene and identify the items they need to complete a given set of tasks,” writes Kite-Powell.
MIT researchers have developed a security protocol that utilizes quantum properties to ensure the security of data in cloud servers, reports Andrew Corselli for Tech Briefs. “Our protocol uses the quantum properties of light to secure the communication between a client (who owns confidential data) and a server (that holds a confidential deep learning model),” explains postdoc Sri Krishna Vadlamani.
Liquid AI, an MIT startup, is unveiling a new AI model based on a liquid neural network that “has the potential to be more efficient, less power-hungry, and more transparent than the ones that underpin everything from chatbots to image generators to facial recognition systems,” reports Will Knight for Wired.
Prof. Daron Acemoglu, a recipient of the 2024 Nobel Prize in economic sciences, speaks with CNBC about the challenges facing the American economy. In his view, the coming economic storm is “both a challenge and an opportunity,” explains Acemoglu. “I talk about AI, I talk about aging, I talk about the remaking of globalization. All of these things are threats because they are big changes, but they’re also opportunities that we could use in order to make ourselves more productive, workers more productive, workers earn more. In fact, even reduce inequality, but the problem is that we’re not prepared for it.”
Graduate student Nouran Soliman speaks with NBC Boston about the use of “personhood credentials,” a new technique that can be used to verify online users as human beings to help combat issues such as fraud and misinformation. “We are trying to also think about ways of implementing a system that incorporates personal credentials in a decentralized way,” explains Soliman. “It's also important not to have the power in one place because that compromises democracy.”
Prof. Evelina Fedorenko speaks with Scientific American reporter Gary Stix about her research demonstrating that “language and thought are, in fact, distinct entities that the brain processes separately.” Speaking about how large language models could be used to help scientists better understand the neuroscience of how language works, Fedorenko explains that “there are many, many questions that we can now ask that had been totally out of reach: for example, questions about [language] development.”
Researchers at MIT and elsewhere have developed “Future You” – a platform that uses generative AI to let users converse with an AI-generated simulation of their potential future self, reports Sammi Caramela for Vice. The research team hopes “talking to a relatable, virtual version of your future self about your current stressors, future goals, and your beliefs can improve anxiety, quell any obsessive thoughts, and help you make better decisions,” writes Caramela.
Writing for Fast Company, Senior Lecturer Guadalupe Hayes-Mota SB '08, MS '16, MBA '16, explores new approaches to improve the drug development process and more effectively connect scientific discoveries and treatment. “Transforming scientific discoveries into better treatments is a complex challenge, but it is also an opportunity to rethink our approach to healthcare innovation,” writes Hayes-Mota. “Through cross-disciplinary collaboration, leveraging AI, focusing on patient-centered innovation, and rethinking R&D, we can create a future where scientific breakthroughs translate into meaningful, accessible treatments for all.”
Researchers at MIT have found that commercially available AI models “were more likely to recommend calling police when shown Ring videos captured in minority communities,” reports Kyle Wiggers for TechCrunch. “The study also found that, when analyzing footage from majority-white neighborhoods, the models were less likely to describe scenes using terms like ‘casing the property’ or ‘burglary tools,’” writes Wiggers.
Researchers at MIT have developed GenSQL, a new generative AI system that can be used “to ease answering data science questions,” reports Allison Proffitt for Bio-IT World. “Look how much better data science could be if it was easier to use,” says Research Scientist Mathieu Huot. “It’s not perfect yet, but we believe it’s quite an improvement over other options.”
Prof. Daron Acemoglu speaks with Greg Rosalsky of NPR’s Planet Money about a recent survey that claims "almost 40% of Americans, ages 18 to 64, have used generative AI." "My concern with their numbers is that it does not distinguish fundamentally productive uses of generative AI from occasional/frivolous uses," says Acemoglu.
Sloan Visiting Senior Lecturer Paul McDonagh-Smith speaks with Joe McKendrick of Forbes about the ongoing discussions about AI safety guidelines. “While ensuring safety is crucial, especially for frontier AI models, there is also a need to strike a balance where AI is a catalyst for innovation without putting our organizations and broader society at risk,” explains McDonagh-Smith.
Prof. Yossi Sheffi speaks with Boston Globe reporter Hiawatha Bray about the challenges and risks posed by implementing automation, amid the dockworkers strike. Sheffi emphasizes the importance of gradually introducing new technologies and offering workers training to work with AI. “There will be new jobs,” says Sheffi. “And we want the current workers to be able to get these new jobs.”
Prof. Yossi Sheffi speaks with Associated Press reporter Cathy Bussewitz about how automation could impact the workforce, specifically dockworkers. “You cannot bet against the march of technology,” says Sheffi. “You cannot ban automation, because it will creep up in other places... The trick is to make it over time, not do it haphazardly.”