
Topic: Equity and inclusion


Displaying 181-195 of 251 news clips related to this topic.

Newsweek

To prove that the data used to train machine learning algorithms can greatly influence their behavior, MIT researchers input gruesome and violent content into an AI algorithm, writes Benjamin Fearnow for Newsweek. The result is “Norman,” an AI system in which “empathy logic simply failed to turn on,” explains Fearnow.

HuffPost

HuffPost reporter Thomas Tamblyn writes that MIT researchers developed a new AI system that sees the worst in humanity to illustrate what happens when bias enters the machine learning process. “An AI learns only what it is fed, and if the humans that are feeding it are biased (consciously or not) then the results can be extremely problematic,” writes Tamblyn.

Forbes

Forbes contributor Frederick Daso describes how two female MBA students at the MIT Sloan School of Management, Preeti Sampat and Jaida Yang, started their own venture capital firm in an effort to “bridge the geographical and diversity gaps in the current early-stage investing ecosystem.”

The Atlantic

Writing for The Atlantic, MIT lecturer Amy Carleton describes the focus on public policy, as well as engineering and product design, at this year’s “Make the Breast Pump Not Suck” hackathon. “What emerged [at the inaugural hackathon] was an awareness that the challenges surrounding breastfeeding were not just technical and equipment-based,” explains Carleton.

WGBH

A recent study from Media Lab graduate student Joy Buolamwini addresses errors in facial recognition software that create concern for civil liberties. “If programmers are training artificial intelligence on a set of images primarily made up of white male faces, their systems will reflect that bias,” writes Cristina Quinn for WGBH.

Boston Magazine

Spencer Buell of Boston Magazine speaks with graduate student Joy Buolamwini, whose research shows that many AI programs are unable to recognize non-white faces. “‘We have blind faith in these systems,’ she says. ‘We risk perpetuating inequality in the guise of machine neutrality if we’re not paying attention.’”

NBC Boston

NBC Boston reporter Frank Holland visits MIT to discuss the Institute’s ties to slavery, which is the subject of a new undergraduate research course. “MIT and Slavery class is pushing us into a national conversation. A conversation that’s well underway in the rest of the country regarding the role of slavery and institutions of higher learning,” said Dean Melissa Nobles.

Boston 25 News

Mel King, who founded the Community Fellows Program in 1996, spoke to Crystal Haynes at Boston 25 News for a feature about his lifelong efforts to promote inclusion and equal access to technology. Haynes notes that King, a senior lecturer emeritus at MIT, “is credited with forming Boston into the city it is today; bringing groups separated by race, gender and sexuality together in a time when it was not only unexpected, but dangerous.”

The Economist

An article in The Economist states that new research by MIT grad student Joy Buolamwini supports the suspicion that facial recognition software is better at processing white faces than those of other people. The bias probably arises “from the sets of data the firms concerned used to train their software,” the article suggests.

Quartz

Dave Gershgorn writes for Quartz, highlighting Congress’s concerns about the dangers of inaccurate facial recognition programs. He cites Joy Buolamwini’s Media Lab research on facial recognition, which he says “maintains that facial recognition is still significantly worse for people of color.”

New Scientist

Graduate student Joy Buolamwini tested three different face-recognition systems and found that accuracy is highest when the subject is a lighter-skinned man, reports Timothy Revell for New Scientist. With facial recognition software being used by police to identify suspects, “this means inaccuracies could have consequences, such as systematically ingraining biases in police stop and searches,” writes Revell.

Marketplace

Molly Wood at Marketplace speaks with Media Lab graduate student Joy Buolamwini about the findings of her recent research, which examined widespread bias in AI-supported facial recognition programs. “At the end of the day, data reflects our history, and our history has been very biased to date,” Buolamwini said.

Co.Design

Recent research from graduate student Joy Buolamwini shows that facial recognition programs, which are increasingly being used by law enforcement, are failing to identify non-white faces. “When these systems can’t recognize darker faces with as much accuracy as lighter faces, there’s a higher likelihood that innocent people will be targeted by law enforcement,” writes Katharine Schwab for Co.Design.

Gizmodo

Writing for Gizmodo, Sidney Fussell explains that a new Media Lab study finds facial-recognition software is most accurate when identifying men with lighter skin and least accurate for women with darker skin. The software analyzed by graduate student Joy Buolamwini “misidentified the gender of dark-skinned females 35 percent of the time,” explains Fussell.

Quartz

A study co-authored by MIT graduate student Joy Buolamwini finds that facial-recognition software is less accurate when identifying darker skin tones, especially those of women, writes Josh Horwitz of Quartz. According to the study, these errors could cause AI services to “treat individuals differently based on factors such as skin color or gender,” explains Horwitz.