AI-enabled translations initiative empowers Ukrainian learners with new skills
Ukrainian students and collaborators provide high-quality translations of MIT OpenCourseWare educational resources.
MAD Fellow Alexander Htet Kyaw connects humans, machines, and the physical world using AI and augmented reality.
TactStyle, a system developed by CSAIL researchers, uses image prompts to replicate both the visual appearance and tactile properties of 3D models.
The MIT Festival of Learning sparked discussions on better integrating a sense of purpose and social responsibility into hands-on education.
“InteRecon” enables users to capture items in a mobile app and reconstruct their interactive features in mixed reality. The tool could assist in education, medical environments, museums, and more.
Professor of media technology honored for research in human-computer interaction that is considered both fundamental and influential.
The Tactile Vega-Lite system, developed at MIT CSAIL, streamlines the tactile chart design process; could help educators efficiently create these graphics and aid designers in making precise changes.
“Xstrings” method enables users to produce cable-driven objects, automatically assembling bionic robots, sculptures, and dynamic fashion designs.
The system uses reconfigurable electromechanical building blocks to create structural electronics.
New research could allow a person to correct a robot’s actions in real-time, using the kind of feedback they’d give another human.
The consortium will bring researchers and industry together to focus on impact.
Projects from MIT course 4.043/4.044 (Interaction Intelligence) were presented at NeurIPS, showing how AI transforms creativity, education, and interaction in unexpected ways.
The rapid development and deployment of powerful generative AI models comes with environmental consequences, including increased electricity demand and water consumption.
Inspired by the human vocal tract, a new AI model can produce and understand vocal imitations of everyday sounds. The method could help build new sonic interfaces for entertainment and education.
The neuroscientist turned entrepreneur will focus on advancing the intersection of behavioral science and AI across MIT.