
Topic

Electrical engineering and computer science (EECS)


Displaying 1 - 15 of 1011 news clips related to this topic.

New York Times

Prof. Armando Solar-Lezama speaks with New York Times reporter Sarah Kessler about the future of coding jobs, noting that AI systems still lack many essential skills. “When you’re talking about more foundational skills, knowing how to reason about a piece of code, knowing how to track down a bug across a large system, those are things that the current models really don’t know how to do,” says Solar-Lezama.

Forbes

Forbes contributor Michael T. Nietzel spotlights the newest cohort of Rhodes Scholars, which includes Yiming Chen '24, Wilhem Hector, Anushka Nair, and David Oluigbo from MIT. Nietzel notes that Oluigbo has “published numerous peer-reviewed articles and conducts research on applying artificial intelligence to complex medical problems and systemic healthcare challenges.” 

Associated Press

Yiming Chen '24, Wilhem Hector, Anushka Nair, and David Oluigbo have been named 2025 Rhodes Scholars, report Brian P. D. Hannon and John Hanna for the Associated Press. Undergraduate student David Oluigbo, one of the four honorees, has “volunteered at a brain research institute and the National Institutes of Health, researching artificial intelligence in health care while also serving as an emergency medical technician,” write Hannon and Hanna.

Forbes

Researchers at MIT have developed a new AI model capable of assessing a patient’s risk of pancreatic cancer, reports Erez Meltzer for Forbes. “The model could potentially expand the group of patients who can benefit from early pancreatic cancer screening from 10% to 35%,” explains Meltzer. “These kinds of predictive capabilities open new avenues for preventive care.” 

Craft in America

Craft in America visits Prof. Erik Demaine and Martin Demaine of CSAIL to learn more about their work with computational origami. “Computational origami is quite useful for the mathematical problems we are trying to solve,” Prof. Erik Demaine explains. “We try to integrate the math and the art together.”

TechCrunch

Neural Magic, an AI optimization startup co-founded by Prof. Nir Shavit and former Research Scientist Alex Matveev, aims to “process AI workloads on processors and GPUs at speeds equivalent to specialized AI chips,” reports Kyle Wiggers for TechCrunch. “By running models on off-the-shelf processors, which usually have more available memory, the company’s software can realize these performance gains,” explains Wiggers. 

New Scientist

Researchers at MIT have developed a robot capable of assembling “building blocks called voxels to build an object with almost any shape,” reports Alex Wilkins for New Scientist. “You can get furniture-scale objects really fast in a very sustainable way, because you can reuse these modular components and ask a robot to reassemble them into different large-scale objects,” says graduate student Alexander Htet Kyaw.

TechCrunch

Michael Truell '21, Sualeh Asif '22, Arvid Lunnemar '22, and Aman Sanger '22 co-founded Anysphere, an AI startup developing Cursor, an AI-powered coding assistant, reports Marina Temkin for TechCrunch.

Forbes

Researchers at MIT have developed a “new type of transistor using semiconductor nanowires made up of gallium antimonide and indium arsenide,” reports Alex Knapp for Forbes. “The transistors were designed to take advantage of a property called quantum tunneling to move electricity through transistors,” explains Knapp.

New Scientist

Researchers at MIT have developed a new virtual training program for four-legged robots by taking “popular computer simulation software that follows the principles of real-world physics and inserting a generative AI model to produce artificial environments,” reports Jeremy Hsu for New Scientist. “Despite never being able to ‘see’ the real world during training, the robot successfully chased real-world balls and climbed over objects 88 per cent of the time after the AI-enhanced training,” writes Hsu. “When the robot relied solely on training by a human teacher, it only succeeded 15 per cent of the time.”

Mashable

Graduate student Aruna Sankaranarayanan speaks with Mashable reporter Cecily Mauran about the impact of political deepfakes and the importance of AI literacy, noting that deepfakes of important figures who aren’t as well known are among her biggest concerns. “Fabrication coming from them, distorting certain facts, when you don’t know what they look like or sound like most of the time, that’s really hard to disprove,” says Sankaranarayanan.

TechCrunch

Researchers at MIT have developed a new model for training robots dubbed Heterogeneous Pretrained Transformers (HPT), reports Brian Heater for TechCrunch. The new model “pulls together information from different sensors and different environments,” explains Heater. “A transformer was then used to pull together the data into training models. The larger the transformer, the better the output. Users then input the robot design, configuration, and the job they want done.”

TechAcute

MIT researchers have developed a new training technique called Heterogeneous Pretrained Transformers (HPT) that could help make general-purpose robots more efficient and adaptable, reports Christopher Isak for TechAcute. “The main advantage of this technique is its ability to integrate data from different sources into a unified system,” explains Isak. “This approach is similar to how large language models are trained, showing proficiency across many tasks due to their extensive and varied training data. HPT enables robots to learn from a wide range of experiences and environments.” 

Tech Briefs

MIT researchers have developed a security protocol that utilizes quantum properties to ensure the security of data in cloud servers, reports Andrew Corselli for Tech Briefs. “Our protocol uses the quantum properties of light to secure the communication between a client (who owns confidential data) and a server (that holds a confidential deep learning model),” explains postdoc Sri Krishna Vadlamani. 

Wired

Liquid AI, an MIT startup, is unveiling a new AI model based on a liquid neural network that “has the potential to be more efficient, less power-hungry, and more transparent than the ones that underpin everything from chatbots to image generators to facial recognition systems,” reports Will Knight for Wired.