Google's and Flickr's automatic photo-tagging programs took flak for being racist (early to mid 2015). Labeling black people "gorilla" or "ape" is the worst un-PC gaffe imaginable; other labeling mistakes are simply funny or dumb. These labeling programs are self-learning, so no person could be blamed for the mistake. [[Racists say]] that such mistakes are normal due to apparent similarities, and that infants make exactly the same embarrassing mistakes: a [[Racist baby|baby]] raised in a non-diverse white family sometimes blurts out loudly "monkey, monkey" when it sees a black person.
[http://www.timesofisrael.com/new-google-app-blunders-calls-black-people-gorillas/ New Google app blunders, calls black people ‘gorillas’ Times of Israel] Company officials apologize for mortifying blunder, explain that technology still needs more development
[[File:Racist-computer-calls-black-gorilla.jpg|frameless|left|Racist Google algorithm labels black people "gorillas"]]
<blockquote>SAN FRANCISCO (AP) — Google’s new image-recognition program misfired badly this week by identifying two black people as gorillas, delivering a reminder that even the most intelligent machines still have a lot to learn about human sensitivity. </blockquote>
[[racists say|R:]] Computers, and babies, need special instructions in [[Politically correct|PC]] [[sensitivities]]. Other mistakes, by computers or babies, are funny or dumb. But offending PC sensitivities is worse than many violent crimes.
<blockquote>The blunder surfaced in a smartphone screen shot posted online Sunday by a New York man on his Twitter account, @jackyalcine. The images showed the recently released Google Photos app had sorted a picture of two black people into a category labeled as “gorillas.”</blockquote>
<blockquote>The account holder used a profanity while expressing his dismay about the app likening his friend to an ape, a comparison widely regarded as a racial slur when applied to a black person.
“We’re appalled and genuinely sorry that this happened,” Google spokeswoman Katie Watson said. “We are taking immediate action to prevent this type of result from appearing.”</blockquote> Google is fully aware that blatantly offending a [[disadvantaged minority]] with a slur is the worst sin possible; so much so that reporters sought out the offended party for an interview.
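Google's "immediate action to prevent this type of result from appearing" reportedly amounted to blocking the offending labels from the classifier's output rather than retraining the model. A minimal sketch of such a post-processing blocklist filter (the label names, scores, and thresholds here are invented for illustration; this is not Google's actual code):

```python
# Post-processing filter: drop blocked labels from a classifier's raw
# predictions before they are shown to users. (Illustrative sketch only;
# BLOCKED and MIN_CONFIDENCE are assumed values, not any vendor's real config.)

BLOCKED = {"gorilla", "ape", "monkey"}  # labels never to surface
MIN_CONFIDENCE = 0.6                    # also suppress low-confidence tags

def filter_tags(predictions):
    """predictions: list of (label, confidence) pairs from the model."""
    return [
        (label, score)
        for label, score in predictions
        if label.lower() not in BLOCKED and score >= MIN_CONFIDENCE
    ]

raw = [("gorilla", 0.91), ("person", 0.88), ("tree", 0.42)]
print(filter_tags(raw))  # [('person', 0.88)]
```

The blunt trade-off of this approach is that the blocked labels can no longer be produced for genuine photos of gorillas either, which matches later reporting that the category was simply removed.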
<blockquote>
A tweet to @jackyalcine requesting an interview hadn’t received a response several hours after it was sent Thursday.
Despite Google’s apology, the gaffe threatens to cast the Internet company in an unflattering light at a time when it and its Silicon Valley peers have already been fending off accusations of discriminatory hiring practices. Those perceptions have been fed by the composition of most technology companies’ workforces, which mostly consist of whites and Asians with a paltry few blacks and Hispanics sprinkled in.</blockquote>
<blockquote>The mix-up also surfaced amid rising US racial tensions that have been fueled by recent police killings of blacks and last month’s murder of nine black churchgoers in Charleston, South Carolina.</blockquote>
[http://www.cnet.com/news/google-apologizes-for-algorithm-mistakenly-calling-black-people-gorillas/ Google apologizes for algorithm mistakenly calling black people 'gorillas' | CNET]
<blockquote>The incident points to the problem tech companies face as computers get smarter and are expected to take on more tasks a human normally would do. Those areas of computer science -- such as artificial intelligence or machine learning -- are some of the biggest engineering focuses in Silicon Valley. But ''with that focus comes another task that computers have not traditionally tackled: grappling with the challenge of sensitivity''.</blockquote>

=== [http://internet.gawker.com/flickrs-auto-tagging-feature-accidentally-labeled-a-bla-1706045425 Flickr's Auto-Tagging Feature Accidentally Labeled a Black Man "Ape"] ===
<blockquote>Search for “ape” on Flickr and you’ll witness an endlessly scrolling cavalcade of primate photography, from monkeys glimpsed on safari to those held in captivity at the zoo. Until recently, you’d also see [https://www.flickr.com/photos/thirteenthfloormedia/14570569401 a portrait of a middle-aged black man] named William. Flickr thought William was an ape, too.

The accidental racism [http://www.theguardian.com/technology/2015/may/20/flickr-complaints-offensive-auto-tagging-photos came via Flickr’s new auto-tagging system], which aimed to be helpful in appending broad labels to users’ photographs without asking them first. Previously, if you took a picture of your new Harley but didn’t tag it with “motorcycle,” other users might not find it when performing a search for pictures of two-wheelers. Auto-tags are meant to rectify a situation that didn’t need rectifying in the first place. (Maybe you didn’t tag a picture of your newborn because you didn’t want him turning up in someone else’s search for generic baby pics.)

In a comment to the ''Guardian'' about the snafu, which was pointed out by a user, Flickr touted the “advanced image recognition technology” behind the auto-tagging feature. That technology, it turns out, possesses the discerning eye of a shar-pei with cataracts. In addition to labeling Corey Deshon’s portrait of William with “ape” and “animal,” Flickr did the same for [https://www.flickr.com/photos/132452869@N04/17801937502/ this photo of a white woman] with multicolored paint on her face—the software’s intentions apparently aren’t racist, even if the results sometimes are—and tagged photos of the Dachau and Auschwitz concentration camps with “sport.”

All of the offending examples listed here have since been corrected, though the two portraits are still labeled with “animal,” which is I suppose technically accurate. And users can manually remove bad auto-tags from their pictures. As the ''Guardian'' notes, Flickr appears to have wisely removed “ape” entirely from its auto-tagger’s list of choices. Maybe leave this stuff to humans with eyes next time.</blockquote>
* [http://www.theguardian.com/technology/2015/jul/01/google-sorry-racist-auto-tag-photo-app Google says sorry for racist auto-tag in photo app]

=== [http://money.cnn.com/2015/05/21/technology/flickr-racist-tags/ Flickr's new auto-tags are racist and offensive | CNN Money] ===
Some [https://www.flickr.com/photos/30926561@N02/17730858426/in/photolist-t1PhtC-t9UdPS-s6sGU9-smmvJp-sPvwCJ-tgaLPN-teWtUw-sZttTb-sKNEiW-eHmxpn-7Vk6Ej-dt8Tcc-d5A1RW-smkQik-tfXmoY-t2WmHi-s3KiFZ-sZkEgy-t767zr-sKnhzW-gqKgNE-85X9ph-8ekV5V-5RaoV2-bURk9k-aC1Qdt-55ns6z-4bm4LE-4JKVz1-55rDAd-4zsDnW-dBCSe-cJXXbs-55ns5r-a2pVNe-6s2ZM2-6s38R6-6sdaGL-apBu25-ccdzgb-dQmvU5-6g232V-c9pxY3-eTpvDQ-fCgvGy-4zsDHw-2QyZUa-4zsCK9-4JFDHi-6s79UA concentration camp photos] received inappropriate tags, including "[https://www.flickr.com/photos/30926561@N02/17730858426/in/photolist-t1PhtC-t9UdPS-s6sGU9-smmvJp-sPvwCJ-tgaLPN-teWtUw-sZttTb-sKNEiW-eHmxpn-7Vk6Ej-dt8Tcc-d5A1RW-smkQik-tfXmoY-t2WmHi-s3KiFZ-sZkEgy-t767zr-sKnhzW-gqKgNE-85X9ph-8ekV5V-5RaoV2-bURk9k-aC1Qdt-55ns6z-4bm4LE-4JKVz1-55rDAd-4zsDnW-dBCSe-cJXXbs-55ns5r-a2pVNe-6s2ZM2-6s38R6-6sdaGL-apBu25-ccdzgb-dQmvU5-6g232V-c9pxY3-eTpvDQ-fCgvGy-4zsDHw-2QyZUa-4zsCK9-4JFDHi-6s79UA sport]" and "[https://www.flickr.com/photos/laughingsquid/8201658514/in/photolist-duKDkq-9PKSj-cXnmV3-fTKqzx-a7YLG-bkwWS9-W7SqH-4H8i7U-maaRAe-4H847H-8mzCSf-dBzph-2ZdhDw-4he7Rw-kgxaY7-5hALSZ-5yyGsr-b4HESt-4S1hBe-nypSAa-6iVXQd-54wFUE-9YqupJ-5xS4xE-wqoBQ-8CMD9P-7erYPR-b9xF1-9HXGYE-JzjuB-bUHrXa-92saU-75dzRW-e5NfKv-dodUBM-c8aVUW-4VAMGs-bs1fes-cSWgM-6p1LkQ-6T9fQ3-ahjXEK-5MUzNh-9NERVS-gF9q4s-9iPPE2-c3HUcG-4S17ci-6riRgJ-cRxAKE jungle gym]".

Flickr had also been tagging some images of people as "ape" and "animal," including a [https://www.flickr.com/photos/thirteenthfloormedia/14570569401 photo of a black man] named William taken by photographer Corey Deshon, according to [http://www.theguardian.com/technology/2015/may/20/flickr-complaints-offensive-auto-tagging-photos the Guardian].

The photo service had also labeled a [https://www.flickr.com/photos/132452869@N04/17801937502/ white woman wearing face paint] as "ape" and "animal," so Flickr's algorithm does not appear to be taking a person's skin color into consideration when auto-tagging them.

Flickr has since corrected both of those mistakes, but the concentration camp errors remain.

"We are aware of issues with inaccurate auto-tags on Flickr and are working on a fix," a spokesman for Flickr said in a statement. "While we are very proud of this advanced image-recognition technology, we're the first to admit there will be mistakes and we are constantly working to improve the experience."

Flickr noted that deleting incorrect tags teaches the new algorithm to learn from its mistake and improve its results in the future. The company also noted that Flickr staff does not personally tag photos -- it's all automated.

* [http://petapixel.com/2015/05/20/flickr-fixing-racist-auto-tagging-feature-after-black-man-mislabeled-ape/ Flickr Fixing ‘Racist’ Auto-Tagging Feature After Black Man Mislabeled ‘Ape’]
* [http://www.cnet.com/news/flickr-misfires-with-automated-photo-tags/ Flickr misfires with automated photo tags]

----
[https://vdare.com/posts/artificial-intelligence-the-robot-becky-menace Artificial Intelligence--The Robot Becky Menace] Not surprisingly, the conventional wisdom increasingly believes Artificial Intelligence needs a dose of Artificial Stupidity to keep it from being as racist and sexist as Natural Intelligence. Otherwise, the Robot Permit Patties will run amok, says Nature: [https://www.nature.com/articles/d41586-018-05707-8?utm_source=briefing-wk&utm_medium=email&utm_campaign=briefing&utm_content=20180720 AI can be sexist and racist — it’s time to make it fair]
<blockquote>… As much as possible, data curators should provide the precise definition of descriptors tied to the data.
For instance, in the case of criminal-justice data, appreciating the type of ‘crime’ that a model has been trained on will clarify how that model should be applied and interpreted. … Lastly, computer scientists should strive to develop algorithms that are more robust to human biases in the data. Various approaches are being pursued. One involves incorporating constraints and essentially nudging the machine-learning model to ensure that it achieves equitable performance across different subpopulations and between similar individuals. A related approach involves changing the learning algorithm to reduce its dependence on sensitive attributes, such as ethnicity, gender, income — and any information that is correlated with those characteristics.</blockquote>
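The two mitigation ideas quoted from Nature can be made concrete. Below is a toy sketch, in hypothetical Python, of the second approach (removing the model's dependence on sensitive attributes by stripping them from the training features) together with a simple check of equitable performance across subpopulations, here measured as the gap in positive-prediction rates between two groups. All feature names and data are invented for illustration; real systems would also need to handle attributes merely correlated with the sensitive ones, as the quote notes.

```python
# Toy illustration of two bias-mitigation ideas described in the Nature piece:
# (1) drop sensitive attributes before training,
# (2) check that a model's positive-prediction rate is similar across groups
#     (a "demographic parity" style check).
# All data and feature names here are invented for illustration.

SENSITIVE = {"ethnicity", "gender", "income"}

def strip_sensitive(record):
    """Return a copy of a feature dict without the sensitive attributes."""
    return {k: v for k, v in record.items() if k not in SENSITIVE}

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rate between two groups.
    predictions: list of 0/1 model outputs; groups: parallel group labels."""
    def rate(g):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(preds) / len(preds)
    names = sorted(set(groups))
    return abs(rate(names[0]) - rate(names[1]))

record = {"age": 34, "ethnicity": "X", "zipcode": "94103"}
print(strip_sensitive(record))  # {'age': 34, 'zipcode': '94103'}

preds  = [1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # |2/3 - 1/3|, about 0.333
```

A large gap would be the signal to apply the first approach from the quote: add a constraint during training that nudges the model toward equal rates across subpopulations.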
[[Category:Under construction]]