Large language models are biased. Can logic help save them?
MIT researchers trained logic-aware language models to reduce harmful stereotypes like gender and racial biases.
The prize is the top honor within the field of communications technology.
The Advanced Computing Users Survey, sampling sentiments from 120 top-tier universities, national labs, federal agencies, and private firms, finds the decline in America’s advanced computing lead spans many areas.
The program leverages MIT’s research expertise and Takeda’s industrial know-how for research in artificial intelligence and medicine.
The device could help workers locate objects for fulfilling e-commerce orders or identify parts for assembling products.
The 19th Microsystems Annual Research Conference reveals the next era of microsystems technologies, along with skiing and a dance party.
The chip, which can decipher any encoded signal, could enable lower-cost devices that perform better while requiring less hardware.
The receiver chip efficiently blocks signal interference that slows device performance and drains batteries.
A wireless technique enables a super-cold quantum computer to send and receive data without generating too much error-causing heat.
Annual award honors early-career researchers for creativity, innovation, and research accomplishments.
Seven researchers, along with 14 additional MIT alumni, are honored for significant contributions to engineering research, practice, and education.
The method enables a model to determine its confidence in a prediction, while using no additional data and far fewer computing resources than other methods.
MIT spinout Verta offers tools to help companies introduce, monitor, and manage machine-learning models safely and at scale.
“Squeezing” noise over a broad frequency bandwidth in a quantum system could lead to faster and more accurate quantum measurements.
A new study shows how large language models like GPT-3 can learn a new task from just a few examples, without the need for any new training data.