Study: AI could lead to inconsistent outcomes in home surveillance
Researchers find large language models make inconsistent decisions about whether to call the police when analyzing surveillance videos.
“Co-LLM” algorithm helps a general-purpose AI model collaborate with an expert large language model by combining the best parts of both answers, leading to more factual responses.
“ScribblePrompt” is an interactive AI framework that can efficiently highlight anatomical structures across different medical scans, helping medical workers delineate regions of interest and abnormalities.
Researchers developed an easy-to-use tool that enables AI practitioners to find data suited to the purpose of their models, which could improve accuracy and reduce bias.
The three-day, hands-on conference hosted by the MIT RAISE Initiative welcomed youths and adults from nearly 30 countries.
AI agents could soon become indistinguishable from humans online. Could “personhood credentials” protect people against digital imposters?
The software tool NeuroTrALE is designed to quickly and efficiently process large amounts of brain imaging data semi-automatically.
In controlled experiments, MIT CSAIL researchers discover simulations of reality developing deep within LLMs, indicating an understanding of language beyond simple mimicry.
The approach can detect anomalies in data recorded over time, without the need for any training.
A new algorithm helps robots practice skills like sweeping and placing objects, potentially helping them improve at important tasks in homes, hospitals, and factories.
CSAIL researchers introduce a novel approach that allows robots to be trained in simulations of scanned home environments, paving the way for customized household automation accessible to anyone.
More efficient than other approaches, the “Thermometer” technique could help someone know when they should trust a large language model.
Introducing structured randomization into decisions based on machine-learning model predictions can address inherent uncertainties while maintaining efficiency.
MAIA is a multimodal agent that can iteratively design experiments to better understand various components of AI systems.
Analysis and materials identified by MIT engineers could lead to more energy-efficient fuel cells, electrolyzers, batteries, or computing devices.