
Topic

Quest for Intelligence


Displaying 1 - 15 of 26 news clips related to this topic.

Forbes

Researchers at MIT have found large language models “often struggle to handle more complex problems that require true understanding,” reports Kirimgeray Kirimli for Forbes. “This underscores the need for future versions of LLMs to go beyond just these basic, shared capabilities,” writes Kirimli. 

Popular Mechanics

Researchers at CSAIL have created three “libraries of abstraction” – a collection of abstractions within natural language that highlight the importance of everyday words in providing context and better reasoning for large language models, reports Darren Orf for Popular Mechanics. “The researchers focused on household tasks and command-based video games, and developed a language model that proposes abstractions from a dataset,” explains Orf. “When implemented with existing LLM platforms, such as GPT-4, AI actions like ‘placing chilled wine in a cabinet’ or ‘craft a bed’ (in the Minecraft sense) saw a big increase in task accuracy at 59 to 89 percent, respectively.”

Scientific American

Researchers from MIT and elsewhere have developed a new AI technique for teaching robots to pack items into a limited space while adhering to a range of constraints, reports Nick Hilden for Scientific American. “We want to have a learning-based method to solve constraints quickly because learning-based [AI] will solve faster, compared to traditional methods,” says graduate student Zhutian “Skye” Yang.

TechCrunch

MIT researchers have developed new hardware that offers faster computation for artificial intelligence with less energy, reports Kyle Wiggers for TechCrunch. “The researchers’ processor uses ‘protonic programmable resistors’ arranged in an array to ‘learn’ skills,” explains Wiggers.

New Scientist

Postdoctoral researcher Murat Onen and his colleagues have created “a nanoscale resistor that transmits protons from one terminal to another,” reports Alex Wilkins for New Scientist. “The resistor uses powerful electric fields to transport protons at very high speeds without damaging or breaking the resistor itself, a problem previous solid-state proton resistors had suffered from,” explains Wilkins.

The Daily Beast

MIT researchers have developed a new computational model that could help explain differences in how neurotypical adults and adults with autism recognize emotions via facial expressions, reports Tony Ho Tran for The Daily Beast. “For visual behaviors, the study suggests that [the IT cortex] plays a strong role,” says research scientist Kohitij Kar. “But it might not be the only region. Other regions like the amygdala have been implicated strongly as well. But these studies illustrate how having good [AI models] of the brain will be key to identifying those regions as well.”

Economist

Graduate student Shashank Srikant speaks with The Economist about his work developing a new model that can detect computer bugs and vulnerabilities that have been maliciously inserted into computer code.

Wired

Wired reporter Will Knight spotlights how MIT researchers have shown that “an AI program trained to verify that code will run safely can be deceived by making a few careful changes, like substituting certain variables, to create a harmful program.”

ZDNet

A new tool developed by MIT researchers sheds light on the operations of generative adversarial network models and allows users to edit these machine learning models to generate new images, reports Daphne Leprince-Ringuet for ZDNet. "The real challenge I'm trying to breach here," says graduate student David Bau, "is how to create models of the world based on people's imagination."

VentureBeat

Researchers from MIT and a number of other institutions have found that grammar-enriched deep learning models had a better understanding of key linguistic rules, reports Kyle Wiggers for VentureBeat. The researchers found that an AI system provided with knowledge of basic grammar “consistently performed better than systems trained on little-to-no grammar using a fraction of the data, and that it could comprehend ‘fairly sophisticated’ rules.”

New York Times

New York Times reporter Steve Lohr writes about the MIT AI Policy Conference, which examined how society, industry and governments should manage the policy questions surrounding the evolution of AI technologies. “If you want people to trust this stuff, government has to play a role,” says CSAIL principal research scientist Daniel Weitzner.

Boston Herald

Taylor Pettaway of the Boston Herald writes that MIT’s new college of computing will be one of the university’s largest structural changes since 1950. With the college offering classes across different fields, “students will be able to experience on campus new computational tools and these new abilities transform academics on campus with every study,” says Provost Martin Schmidt.

Bloomberg

President L. Rafael Reif joins Bloomberg Bay State Business to speak with hosts Peter Barnes, Janet Wu and Pat Carroll about MIT’s $1 billion commitment to furthering the study of computer science and AI through a new college for computing.

Chronicle of Higher Education

Chronicle of Higher Education reporter Lee Gardner notes that MIT is making a $1 billion investment in furthering the study of computation and AI. “The institute’s project will support the search for solutions to two other daunting challenges,” Gardner explains, “how to handle the ethical and philosophical implications of AI for the societies it will transform, and how to break down institutional silos in academe.”

WGBH

WGBH reporter Maggie Penn examines how the MIT Stephen A. Schwarzman College of Computing will integrate the study of computer science and AI into every academic discipline. "Much of higher education is silo-ed, a lot of universities are dealing with that," explains Melissa Nobles, dean of SHASS. "This is a really creative way of getting around that and creating something new that is truly collaborative."