3 Questions: Enhancing last-mile logistics with machine learning
MIT Center for Transportation and Logistics Director Matthias Winkenbach uses AI to make vehicle routing more efficient and adaptable for unexpected events.
A CSAIL study highlights why it is so challenging to program a quantum computer to run a quantum algorithm, and offers a conceptual model for a more user-friendly quantum computer.
The MIT Schwarzman College of Computing building will form a new cluster of connectivity across a spectrum of disciplines in computing and artificial intelligence.
Researchers create a curious machine-learning model that finds a wider variety of prompts for training a chatbot to avoid hateful or harmful output.
Researchers developed a simple yet effective solution for a puzzling problem that can worsen the performance of large language models such as ChatGPT.
The ambient light sensors that adjust smart devices’ screen brightness can be exploited by hackers to capture images of touch interactions such as swiping and tapping.
PhD students interning with the MIT-IBM Watson AI Lab aim to improve how AI systems use natural language.
A multimodal system uses models trained on language, vision, and action data to help robots develop and execute plans for household, construction, and manufacturing tasks.
MIT researchers propose “PEDS” method for developing models of complex physical systems in mechanics, optics, thermal transport, fluid dynamics, physical chemistry, climate, and more.
MIT researchers introduce a method that uses artificial intelligence to automate the explanation of complex neural networks.
A new study finds that language regions in the left hemisphere light up when a person reads uncommon sentences, while straightforward sentences elicit little response.
Master’s students Irene Terpstra ’23 and Rujul Gandhi ’22 use natural language to design new integrated circuits and to make human speech understandable to robots.
This new method draws on 200-year-old geometric foundations to give artists control over the appearance of animated characters.
“Minimum viewing time” benchmark gauges image recognition complexity for AI systems by measuring the time needed for accurate human identification.
Human Guided Exploration (HuGE) enables AI agents to learn quickly with some help from humans, even if the humans make mistakes.