
Topic

Machine learning



Fortune

Writing for Fortune, Sloan research fellow Michael Schrage and his colleagues explain how AI-enabled key performance indicators (KPIs) can help companies better understand and measure success. “Driving strategic alignment within their organization is an increasingly important priority for senior executives,” they write. “AI-enabled KPIs are powerful tools for achieving this. By getting their data right, using appropriate organizational constructs, and driving a cultural shift towards data-driven decision making, organizations can effectively govern the creation and deployment of AI-enabled KPIs.”

Los Angeles Times

Los Angeles Times reporter Brian Merchant spotlights Joy Buolamwini PhD '22 and her new book, “Unmasking AI: My Mission to Protect What is Human in a World of Machines.” “Buolamwini’s book recounts her journey to become one of the nation’s preeminent scholars and critics of artificial intelligence — she recently advised President Biden before the release of his executive order on AI — and offers readers a compelling, digestible guide to some of the most pressing issues in the field,” writes Merchant.

The Boston Globe

Joy Buolamwini PhD '22 speaks with Brian Bergstein of The Boston Globe’s “Say More” podcast about her academic and professional career studying bias in AI. “As I learned more and also became familiar with the negative impacts of things like facial recognition technologies, it wasn’t just the call to say let’s make systems more accurate but a call to say let’s reexamine the ways in which we create AI in the first place and let’s reexamine our measures of progress because so far they have been misleading,” says Buolamwini.

The Boston Globe

Joy Buolamwini PhD '22 writes for The Boston Globe about her experience uncovering bias in artificial intelligence through her academic and professional career. “I critique AI from a place of having been enamored with its promise, as an engineer more eager to work with machines than with people at times, as an aspiring academic turned into an accidental advocate, and also as an artist awakened to the power of the personal when addressing the seemingly technical,” writes Buolamwini. “The option to say no, the option to halt a project, the option to admit to the creation of dangerous and harmful though well-intentioned tools must always be on the table.”

The Washington Post

Graduate student Shayne Longpre speaks with Washington Post reporter Nitasha Tiku about the ethical and legal implications surrounding language model datasets. Longpre says “the lack of proper documentation is a community-wide problem that stems from modern machine-learning practices.”

Axios

Researchers from MIT and elsewhere have developed a transparency index used to assess 10 key AI foundation models, reports Ryan Heath for Axios. Heath writes that the researchers emphasized that “unless AI companies are more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never be able to fully understand the risks associated with AI, and experts will never be able to mitigate them.”

TechCrunch

TechCrunch reporter Kyle Wiggers spotlights the SmartEM project by researchers from MIT and Harvard, which is aimed at enhancing lab work using “a computer vision system and ML control system inside a scanning electron microscope” to examine a specimen intelligently. “It can avoid areas of low importance, focus on interesting or clear ones, and do smart labeling of the resulting image as well,” writes Wiggers.

The World

Research scientist Nataliya Kosmyna speaks with The World host Chris Harland-Dunaway about the science behind using non-invasive brain scans and artificial intelligence to understand people’s different thoughts and mental images.

Fortune

Graduate student Sarah Gurev and her colleagues have developed a new AI system named EVEscape that can “predict alterations likely to occur to viruses as they evolve,” reports Erin Prater for Fortune. Gurev says that with the amount of data the system has amassed, it “can make surprisingly accurate predictions.”

Tech Times

MIT CSAIL researchers have developed a new air safety system, called Air-Guardian, that is designed to serve as a “proactive co-pilot, enhancing safety during critical moments of flight,” reports Jace Dela Cruz for Tech Times.

TechCrunch

Arvid Lunnemark '22, Michael Truell '22, Sualeh Asif '22, and Aman Sanger '22 co-founded Anysphere, a startup building an “AI-native” software development environment called Cursor, reports Kyle Wiggers for TechCrunch. “In the next several years, our mission is to make programming an order of magnitude faster, more fun and creative,” says Truell. “Our platform enables all developers to build software faster.”

Forbes

Curtis Northcutt SM '17, PhD '21, Jonas Mueller PhD '18, and Anish Athalye SB '17, SM '17, PhD '23 have co-founded Cleanlab, a startup aimed at fixing data problems in AI models, reports Alex Konrad for Forbes. “The reality is that every single solution that’s data-driven — and the world has never been more data-driven — is going to be affected by the quality of the data,” says Northcutt.

Axios

Axios reporter Alison Snyder writes about how a new study by MIT researchers finds that preconceived notions about AI chatbots can impact people’s experiences with them. Prof. Pattie Maes explains that the technology's developers “always think that the problem is optimizing AI to be better, faster, less hallucinations, fewer biases, better aligned, but we have to see this whole problem as a human-plus-AI problem. The ultimate outcomes don't just depend on the AI and the quality of the AI. It depends on how the human responds to the AI.”

Scientific American

MIT researchers have found that user bias can drive interactions with AI chatbots, reports Nick Hilden for Scientific American. “When people think that the AI is caring, they become more positive toward it,” graduate student Pat Pataranutaporn explains. “This creates a positive reinforcement feedback loop where, at the end, the AI becomes much more positive, compared to the control condition. And when people believe that the AI was manipulative, they become more negative toward the AI—and it makes the AI become more negative toward the person as well.”

The Boston Globe

Prof. Thomas Kochan and Prof. Thomas Malone speak with Boston Globe reporter Hiawatha Bray about the recent deal between the Writers Guild of America and the Alliance of Motion Picture and Television Producers, which will “protect movie screenwriters from losing their jobs to computers that could use artificial intelligence to generate screenplays.” Kochan notes that when it comes to AI, “where workers don’t have a voice through a union, most companies are not engaging their workers on these issues, and the workers have no rights, no redress.”