Google's and Flickr's automatic photo-tagging programs took flak for being racist (early to mid 2015). Labeling black people "gorilla" or "ape" is the worst un-PC gaffe imaginable; other labeling mistakes are simply funny or dumb. These labeling programs are self-learning, so no person can be blamed for the mistake. Racists say that such mistakes are natural, owing to apparent similarities, and that babies make exactly the same embarrassing mistakes: a racist baby raised in a non-diverse white family will sometimes loudly blurt out "monkey, monkey" on seeing a black person.
New Google app blunders, calls black people 'gorillas' (Times of Israel)
Company officials apologize for mortifying blunder, explain that technology still needs more development
SAN FRANCISCO (AP) — Google’s new image-recognition program misfired badly this week by identifying two black people as gorillas, delivering a reminder that even the most intelligent machines still have a lot to learn about human sensitivity.
The blunder surfaced in a smartphone screen shot posted online Sunday by a New York man on his Twitter account, @jackyalcine. The images showed the recently released Google Photos app had sorted a picture of two black people into a category labeled as “gorillas.”
The account holder used a profanity while expressing his dismay about the app likening his friend to an ape, a comparison widely regarded as a racial slur when applied to a black person.
“We’re appalled and genuinely sorry that this happened,” Google spokeswoman Katie Watson said. “We are taking immediate action to prevent this type of result from appearing.”
Google is fully aware that blatantly offending a disadvantaged minority with a slur is the worst sin possible; so much so that Google sought out the offended party for an interview.
A tweet to @jackyalcine requesting an interview hadn’t received a response several hours after it was sent Thursday.

Despite Google’s apology, the gaffe threatens to cast the Internet company in an unflattering light at a time when it and its Silicon Valley peers have already been fending off accusations of discriminatory hiring practices. Those perceptions have been fed by the composition of most technology companies’ workforces, which mostly consist of whites and Asians with a paltry few blacks and Hispanics sprinkled in.
The mix-up also surfaced amid rising US racial tensions that have been fueled by recent police killings of blacks and last month’s murder of nine black churchgoers in Charleston, South Carolina.
Google’s error underscores the pitfalls of relying on machines to handle tedious tasks that people have typically handled in the past. In this case, the Google Photo app released in late May uses recognition software to analyze images in pictures to sort them into a variety of categories, including places, names, activities and animals.
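The mechanics of that sorting can be sketched in miniature. The snippet below is purely illustrative (the scores, threshold, and function names are assumptions, not Google's actual system): a classifier emits confidence scores per label, the app picks the most confident one, and a blocklist of sensitive labels, like the fix Flickr later applied by removing "ape" from its tagger, keeps certain labels from ever being assigned automatically.

```python
# Illustrative sketch only: an auto-tagger picking a category label
# from classifier confidence scores, with a blocklist safeguard.
# The scores and label names below are made up for demonstration;
# real systems derive them from trained neural networks.

SENSITIVE_LABELS = {"ape", "gorilla"}  # labels withheld from auto-tagging

def auto_tag(scores, threshold=0.5):
    """Return the highest-scoring label above the threshold,
    skipping any label on the sensitive-label blocklist."""
    for label, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        if score >= threshold and label not in SENSITIVE_LABELS:
            return label
    return None  # no confident allowed label: leave the photo untagged

print(auto_tag({"gorilla": 0.92, "person": 0.61, "tree": 0.30}))  # person
```

Note that the blocklist is a blunt instrument: it prevents the offensive output without fixing the underlying misclassification, which is roughly what Google ended up doing when it removed the "gorilla" tag.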
When the app came out, Google executives warned it probably wouldn’t get everything right — a point that has now been hammered home. Besides mistaking humans for gorillas, the app also has been mocked for labeling some people as seals and some dogs as horses.
“There is still clearly a lot of work to do with automatic image labeling,” Watson conceded.
Some commentators on social media, though, wondered whether the flaws in Google’s automatic-recognition software may have stemmed from its reliance on white and Asian engineers who might not be sensitive to labels that would offend black people. About 94 percent of Google’s technology workers are white or Asian and just 1 percent are black, according to the company’s latest diversity disclosures.
Google isn’t the only company still trying to work out the bugs in its image-recognition technology.
Shortly after Yahoo’s Flickr introduced an automated service for tagging photos in May, it fielded complaints about identifying black people as “apes” and “animals.” Flickr also mistakenly identified a Nazi concentration camp as a “jungle gym.”
Google reacted swiftly to the mess created by its machines, long before the media began writing about it.
Less than two hours after @jackyalcine posted his outrage over the gorilla label, one of Google’s top engineers had posted a response seeking access to his account to determine what went wrong. Yonatan Zunger, chief architect of Google’s social products, later tweeted: “Sheesh. High on my list of bugs you never want to see happen. Shudder.”
Copyright 2015 The Associated Press.
Google ‘appalled’ as Photos app labels black people ‘gorillas’ (Irish Times)
Google removes ‘gorilla’ tag from app after it misidentified images of black people
Google apologized and went to work fixing the problem earlier in the week after the offensive blunder was pointed out in a Twitter message from @JackyAlcine.
"Google Photos, y'all (messed) up," Jacky Alcine said in a series of emphatic messages.
"My friend's not a gorilla."
OVERHAULED PHOTO APP
Google released its overhauled photo app for smartphones in May, touting it as a major advancement in sorting, organizing, and handling pictures.
Google engineer Yonatan Zunger put the blame for the labelling on the artificial intelligence software designed to let machines learn how to recognise places, people and objects in pictures.
"Sheesh," Zunger said in the exchange of Twitter messages. "High on my list of bugs you never want to see happen. Shudder."
Zunger told of problems such as not seeing faces in pictures at all, or even identifying some people as dogs.
Picture recognition has proven challenging for computers and numerous companies are working on programs to improve identification.
Google and Facebook are among Silicon Valley technology giants investing heavily in artificial intelligence to get machines to think more like the way people do.
"There is still clearly a lot of work to do with automatic image labelling, and we're looking at how we can prevent these types of mistakes from happening in the future," the Google representative said of the photo gaffe.
"I understand how this happens," Alcine said in the online exchange. "The problem is more so on the why."
The incident points to the problem tech companies face as computers get smarter and are expected to take on more tasks that humans normally would do. Those areas of computer science -- such as artificial intelligence or machine learning -- are some of the biggest engineering focuses in Silicon Valley. But with that focus comes a task that computers have not traditionally tackled: grappling with the challenge of sensitivity.
Search for “ape” on Flickr and you’ll witness an endlessly scrolling cavalcade of primate photography, from monkeys glimpsed on safari to those held in captivity at the zoo. Until recently, you’d also see <a href="https://www.flickr.com/photos/thirteenthfloormedia/14570569401">a portrait of a middle-aged black man</a> named William. Flickr thought William was an ape, too.
The accidental racism <a href="http://www.theguardian.com/technology/2015/may/20/flickr-complaints-offensive-auto-tagging-photos">came via Flickr’s new auto-tagging system</a>, which aimed to be helpful in appending broad labels to users’ photographs without asking them first. Previously, if you took a picture of your new Harley but didn’t tag it with “motorcycle,” other users might not find it when performing a search for pictures of two-wheelers. Auto-tags are meant to rectify a situation that didn’t need rectifying in the first place. (Maybe you didn’t tag a picture of your newborn because you didn’t want him turning up in someone else’s search for generic baby pics.)
In a comment to the Guardian about the snafu, which was pointed out by a user, Flickr touted the “advanced image recognition technology” behind the auto-tagging feature. That technology, it turns out, possesses the discerning eye of a shar-pei with cataracts. In addition to labeling Corey Deshon’s portrait of William with “ape” and “animal,” Flickr did the same for <a href="https://www.flickr.com/photos/132452869@N04/17801937502/">this photo of a white woman</a> with multicolored paint on her face—the software’s intentions apparently aren’t racist, even if the results sometimes are—and tagged photos of the Dachau and Auschwitz concentration camps with “sport.”
All of the offending examples listed here have since been corrected, though the two portraits are still labeled with “animal,” which is, I suppose, technically accurate. And users can manually remove bad auto-tags from their pictures. As the Guardian notes, Flickr appears to have wisely removed “ape” entirely from its auto-tagger’s list of choices. Maybe leave this stuff to humans with eyes next time.
The photo service had also labeled a white woman wearing face paint as "ape" and "animal," so Flickr's algorithm does not appear to be taking a person's skin color into consideration when auto-tagging them.
Flickr has since corrected both of those mistakes, but the concentration camp errors remain.
"We are aware of issues with inaccurate auto-tags on Flickr and are working on a fix," a spokesman for Flickr said in a statement. "While we are very proud of this advanced image-recognition technology, we're the first to admit there will be mistakes and we are constantly working to improve the experience."
Flickr noted that deleting incorrect tags teaches the new algorithm to learn from its mistake and improve its results in the future. The company also noted that Flickr staff does not personally tag photos -- it's all automated.
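That feedback loop can be sketched in a few lines. This is a hypothetical illustration of the idea Flickr describes, not its actual implementation: each user deletion of a tag is logged as negative evidence, and the tag's future confidence for similar images is shrunk accordingly. The class, scoring rule, and category names are assumptions.

```python
# Hypothetical sketch of a tag-deletion feedback loop: user deletions
# are recorded as negative examples that lower a tag's future score.
from collections import defaultdict

class TagFeedback:
    def __init__(self):
        self.removed = defaultdict(int)  # (image_kind, tag) -> deletions
        self.kept = defaultdict(int)     # (image_kind, tag) -> confirmations

    def record_deletion(self, image_kind, tag):
        self.removed[(image_kind, tag)] += 1

    def record_keep(self, image_kind, tag):
        self.kept[(image_kind, tag)] += 1

    def adjusted_score(self, image_kind, tag, base_score):
        """Shrink the classifier's raw score as deletions accumulate."""
        key = (image_kind, tag)
        total = self.removed[key] + self.kept[key]
        if total == 0:
            return base_score  # no feedback yet: trust the classifier
        keep_rate = self.kept[key] / total
        return base_score * keep_rate

fb = TagFeedback()
fb.record_deletion("portrait", "ape")
fb.record_deletion("portrait", "ape")
print(fb.adjusted_score("portrait", "ape", 0.9))  # 0.0
```

Real systems would fold this signal back into retraining rather than just rescaling scores, but the principle is the same: user corrections become training data.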
Not surprisingly, conventional wisdom increasingly holds that Artificial Intelligence needs a dose of Artificial Stupidity to keep it from being as racist and sexist as Natural Intelligence. Otherwise, the Robot Permit Patties will run amok, says Nature:
… As much as possible, data curators should provide the precise definition of descriptors tied to the data. For instance, in the case of criminal-justice data, appreciating the type of ‘crime’ that a model has been trained on will clarify how that model should be applied and interpreted. …
Lastly, computer scientists should strive to develop algorithms that are more robust to human biases in the data.
Various approaches are being pursued. One involves incorporating constraints and essentially nudging the machine-learning model to ensure that it achieves equitable performance across different subpopulations and between similar individuals.
A related approach involves changing the learning algorithm to reduce its dependence on sensitive attributes, such as ethnicity, gender, income — and any information that is correlated with those characteristics.
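The second approach above can be illustrated with a toy preprocessing step. This is a minimal sketch under stated assumptions (made-up data, a made-up correlation threshold, and hypothetical feature names): any feature column strongly correlated with the sensitive attribute is dropped before training, so the model cannot lean on proxies for it.

```python
# Toy sketch of proxy-feature removal: drop any feature whose Pearson
# correlation with a sensitive attribute exceeds a threshold. The data,
# feature names, and threshold are illustrative assumptions.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def drop_proxy_features(features, sensitive, threshold=0.8):
    """Keep only feature columns weakly correlated with `sensitive`."""
    return {name: col for name, col in features.items()
            if abs(pearson(col, sensitive)) < threshold}

sensitive = [0, 0, 1, 1]  # e.g. a protected-group indicator
features = {
    "zip_code_score": [0.1, 0.2, 0.9, 0.8],  # strong proxy: dropped
    "years_experience": [3, 7, 5, 4],        # weakly correlated: kept
}
print(sorted(drop_proxy_features(features, sensitive)))  # ['years_experience']
```

Production debiasing methods are far more sophisticated (constraint-based training, adversarial removal of sensitive information), but this shows the core trade-off: removing correlated features sacrifices some predictive signal to reduce dependence on the sensitive attribute.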