
Topic

Equity and inclusion



CBS Boston

Boston Mayor Martin Walsh has named Lecturer Karilyn Crockett, whom he called "a brilliant innovator and change maker," head of Boston's new Equity and Inclusion Cabinet, reports CBS Boston. "I need everyone standing here with me, and within the hearing of my voice, to be bold and move beyond what we may individually think is possible," said Crockett.

Boston Globe

Karilyn Crockett, a lecturer in DUSP, spoke with The Boston Globe’s Kelly Horan about her role as Boston’s chief of equity. “As I prioritize racial, gender, and health equity for a city of 700,000 that is majority people of color, it means that we have to recognize that the history that brought us here has to be looked at in a clear way.”

The Washington Post

Prof. T.L. Taylor speaks with The Washington Post’s Liz Clarke about the ways in which female gamers are often harassed and excluded. “What we have not fully grappled with is that the right to play extends to the digital space and gaming,” says Taylor. “For me, it is tied to democracy and civic engagement. It’s about participating in culture and having a voice and visibility.”

Gizmodo

In an article for Gizmodo, Dell Cameron writes that graduate student Joy Buolamwini testified before Congress about the inherent biases of facial recognition systems. Buolamwini’s research on face recognition tools “identified a 35-percent error rate for photos of darker skinned women, as opposed to database searches using photos of white men, which proved accurate 99 percent of the time.”

Wired

Wired reporter Lily Hay Newman highlights graduate student Joy Buolamwini’s Congressional testimony about the bias of facial recognition systems. “New research is showing bias in the use of facial analysis technology for health care purposes, and facial recognition is being sold to schools,” said Buolamwini. “Our faces may well be the final frontier of privacy.” 

WBUR

WBUR reporter Pamela Reynolds highlights graduate student Joy Buolamwini’s piece, “The Coded Gaze,” which is currently on display as part of the “Avatars//Futures” exhibit at the Nave Gallery. Reynolds writes that Buolamwini’s piece “questions the inherent bias of coding in artificial intelligence, which has resulted in facial recognition technology unable to recognize black faces.”

Forbes

The Sloan School of Management and the Ruderman Family Foundation’s LINK20 have started a new week-long program aimed at equipping social justice and inclusion advocates “with theories and strategies in the areas of digital leadership, networking and entrepreneurship to become high-impact social influencers,” reports Sarah Kim for Forbes.

Time

Graduate student Joy Buolamwini writes for TIME about the need to tackle gender and racial bias in AI systems. “By working to reduce the exclusion overhead and enabling marginalized communities to engage in the development and governance of AI, we can work toward creating systems that embrace full spectrum inclusion,” writes Buolamwini.

Fast Company

In an article for Fast Company about hackathons, Dan Formosa highlights how the Make the Breast Pump Not Suck Hackathon, held at MIT, was an inclusive event focused on addressing issues of bias, inequality, and accessibility, noting that the organizers "went to extremes to assure diversity."

Wired

Prof. Joi Ito, director of the Media Lab, writes for Wired about how AI systems can help perpetuate longstanding discriminatory practices. “By merely relying on historical data and current definitions of fairness, we will lock in the accumulated unfairnesses of the past,” argues Ito, “and our algorithms and the products they support will always trail the norms.”

Associated Press

Associated Press reporter Tali Arbel writes that MIT researchers have found that Amazon's facial detection technology often misidentifies women, particularly women with darker skin. Arbel writes that the study "warns of the potential of abuse and threats to privacy and civil liberties from facial-detection technology."

The Washington Post

A new study by Media Lab researchers finds that Amazon's Rekognition facial recognition system performed more accurately when identifying lighter-skinned faces, reports Drew Harwell for The Washington Post. The system "performed flawlessly in predicting the gender of lighter-skinned men," writes Harwell, "but misidentified the gender of darker-skinned women in roughly 30 percent of their tests."

The Verge

Verge reporter James Vincent writes that Media Lab researchers have found that the facial recognition system Rekognition performed worse at identifying an individual’s gender if they were female or dark-skinned. In experiments, the researchers found that the system “mistook women for men 19 percent of the time and mistook darker-skinned women for men 31 percent of the time,” Vincent explains.

New York Times

MIT researchers have found that the Rekognition facial recognition system has more difficulty than similar services in identifying the gender of female and darker-skinned faces, reports Natasha Singer for The New York Times. Graduate student Joy Buolamwini said "the results of her studies raised fundamental questions for society about whether facial technology should not be used in certain situations," writes Singer.

WGBH

Graduate student Irene Chen speaks with WGBH’s Living Lab Radio about her work trying to reduce bias in health care algorithms. “The results that we’ve shown from healthcare algorithms are so powerful that we really do need to see how we could implement those carefully, safely, robustly and fairly,” she explains.