Racist computers


Google's and Flickr's automatic photo-tagging programs took flak for being racist (early to mid 2015). Labeling black people "gorilla" or "ape" is the worst unPC gaffe imaginable; other labeling mistakes are simply funny or dumb. These labeling programs are self-learning, so no person could be blamed for the mistake. [[Racists say]] that such mistakes are normal due to apparent similarities, and that babies make exactly the same embarrassing mistakes: [[Racist babies]] raised in a non-diverse white family sometimes blurt out loudly "monkey, monkey" when they see a black person.
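To make the mechanism concrete, the sketch below shows, in Python with invented numbers, how such an auto-tagger typically assigns a tag: the model produces a confidence score for every label it knows, and the tag is simply whichever label scores highest. Nothing in that loop knows that some labels are slurs when applied to people, so a visually confusable category can surface unchecked. The label list, scores, and function here are hypothetical, not Google's or Flickr's code.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical label vocabulary of an image auto-tagger.
LABELS = ["person", "dog", "cat", "gorilla", "bicycle", "beach"]

def auto_tag(scores, top_k=1):
    """Return the top_k highest-scoring labels for one image.

    `scores` stands in for the raw outputs of a trained classifier;
    the tagger just sorts them and has no notion of which labels
    would be offensive for a given photo.
    """
    order = np.argsort(scores)[::-1]          # highest score first
    return [LABELS[i] for i in order[:top_k]]

# Invented scores for a photo of a person that the (hypothetical) model
# finds visually ambiguous: "gorilla" narrowly outscores "person".
scores = np.array([0.41, 0.02, 0.01, 0.44, 0.05, 0.07])
print(auto_tag(scores))  # -> ['gorilla']
</syntaxhighlight>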


=== New Google app blunders, calls black people ‘gorillas’ | Times of Israel ===
Company officials apologize for mortifying blunder, explain that technology still needs more development.


[[File:Racist-computer-calls-black-gorilla.jpg|frameless|left|Racist google algorithm calls Black "gorilla"]]

SAN FRANCISCO (AP) — Google’s new image-recognition program misfired badly this week by identifying two black people as gorillas, delivering a reminder that even the most intelligent machines still have a lot to learn about human sensitivity.

[[racists say|R:]] Computers, and babies, need special instructions in [[Politically correct|PC]] [[sensitivities]]. Other mistakes, by computers or babies, are funny or dumb. But offending PC sensitivities is worse than many violent crimes.

The blunder surfaced in a smartphone screen shot posted online Sunday by a New York man on his Twitter account, @jackyalcine. The images showed the recently released Google Photos app had sorted a picture of two black people into a category labeled as “gorillas.”

The account holder used a profanity while expressing his dismay about the app likening his friend to an ape, a comparison widely regarded as a racial slur when applied to a black person.

“We’re appalled and genuinely sorry that this happened,” Google spokeswoman Katie Watson said. “We are taking immediate action to prevent this type of result from appearing.”

Google is fully aware that blatantly offending a [[disadvantaged minority]] with a slur is the worst sin possible. So much so that Google invited the offended party to an interview.

A tweet to @jackyalcine requesting an interview hadn’t received a response several hours after it was sent Thursday.

Despite Google’s apology, the gaffe threatens to cast the Internet company in an unflattering light at a time when it and its Silicon Valley peers have already been fending off accusations of discriminatory hiring practices. Those perceptions have been fed by the composition of most technology companies’ workforces, which mostly consist of whites and Asians with a paltry few blacks and Hispanics sprinkled in.

The mix-up also surfaced amid rising US racial tensions that have been fueled by recent police killings of blacks and last month’s murder of nine black churchgoers in Charleston, South Carolina.

Google’s error underscores the pitfalls of relying on machines to handle tedious tasks that people have typically handled in the past. In this case, the Google Photo app released in late May uses recognition software to analyze images in pictures to sort them into a variety of categories, including places, names, activities and animals.

When the app came out, Google executives warned it probably wouldn’t get everything right — a point that has now been hammered home. Besides mistaking humans for gorillas, the app also has been mocked for labeling some people as seals and some dogs as horses.

“There is still clearly a lot of work to do with automatic image labeling,” Watson conceded.

Some commentators in social media, though, wondered if the flaws in Google’s automatic-recognition software may have stemmed from its reliance on white and Asian engineers who might not be sensitive to labels that would offend black people. About 94 percent of Google’s technology workers are white or Asian and just 1 percent is black, according to the company’s latest diversity disclosures.

Google isn’t the only company still trying to work out the bugs in its image-recognition technology.

Shortly after Yahoo’s Flickr introduced an automated service for tagging photos in May, it fielded complaints about identifying black people as “apes” and “animals.” Flickr also mistakenly identified a Nazi concentration camp as a “jungle gym.”

Google reacted swiftly to the mess created by its machines, long before the media began writing about it.

Less than two hours after @jackyalcine posted his outrage over the gorilla label, one of Google’s top engineers had posted a response seeking access to his account to determine what went wrong. Yonatan Zunger, chief architect of Google’s social products, later tweeted: “Sheesh. High on my list of bugs you never want to see happen. Shudder.”

Copyright 2015 The Associated Press.


=== Google ‘appalled’ as Photos app labels black people ‘gorillas’ | Irish Times ===
Google removes ‘gorilla’ tag from app, after it misidentified images of black people.

Google apologises photo app tagged black couple 'gorillas'

Google apologized and went to work fixing the problem earlier in the week after the offensive blunder was pointed out in a Twitter message from @JackyAlcine.

"Google Photos, y'all (messed) up," Jacky Alcine said in a series of emphatic messages.

"My friend's not a gorilla."

OVERHAULED PHOTO APP

Google released its overhauled photo app for smartphones in May, touting it as a major advancement in sorting, organizing, and handling pictures.

Google engineer Yonatan Zunger put the blame for the labelling on the artificial intelligence software designed to let machines learn how to recognise places, people and objects in pictures.

"Sheesh," Zunger said in the exchange of Twitter messages. "High on my list of bugs you never want to see happen. Shudder."

Zunger told of problems such as not seeing faces in pictures at all, or even identifying some people as dogs.

Picture recognition has proven challenging for computers and numerous companies are working on programs to improve identification.

Google and Facebook are among Silicon Valley technology giants investing heavily in artificial intelligence to get machines to think more like the way people do.

"There is still clearly a lot of work to do with automatic image labelling, and we're looking at how we can prevent these types of mistakes from happening in the future," the Google representative said of the photo gaffe.

"I understand how this happens," Alcine said in the online exchange. "The problem is more so on the why."


=== Google apologizes for algorithm mistakenly calling black people 'gorillas' | CNET ===

The incident points to the problem tech companies face as computers get smarter and are expected to take on more tasks a human normally would do. Those areas of computer science -- such as artificial intelligence or machine learning -- are some of the biggest engineering focuses in Silicon Valley. But ''with that focus comes another task that computers have not traditionally tackled: grappling with the challenge of sensitivity''.

=== [http://internet.gawker.com/flickrs-auto-tagging-feature-accidentally-labeled-a-bla-1706045425 Flickr's Auto-Tagging Feature Accidentally Labeled a Black Man "Ape"] ===

Search for “ape” on Flickr and you’ll witness an endlessly scrolling cavalcade of primate photography, from monkeys glimpsed on safari to those held in captivity at the zoo. Until recently, you’d also see [https://www.flickr.com/photos/thirteenthfloormedia/14570569401 a portrait of a middle-aged black man] named William. Flickr thought William was an ape, too.

The accidental racism [http://www.theguardian.com/technology/2015/may/20/flickr-complaints-offensive-auto-tagging-photos came via Flickr’s new auto-tagging system], which aimed to be helpful in appending broad labels to users’ photographs without asking them first. Previously, if you took a picture of your new Harley but didn’t tag it with “motorcycle,” other users might not find it when performing a search for pictures of two-wheelers. Auto-tags are meant to rectify a situation that didn’t need rectifying in the first place. (Maybe you didn’t tag a picture of your newborn because you didn’t want him turning up in someone else’s search for generic baby pics.)

In a comment to the Guardian about the snafu, which was pointed out by a user, Flickr touted the “advanced image recognition technology” behind the auto-tagging feature. That technology, it turns out, possesses the discerning eye of a shar-pei with cataracts. In addition to labeling Corey Deshon’s portrait of William with “ape” and “animal,” Flickr did the same for [https://www.flickr.com/photos/132452869@N04/17801937502/ this photo of a white woman] with multicolored paint on her face—the software’s intentions apparently aren’t racist, even if the results sometimes are—and tagged photos of the Dachau and Auschwitz concentration camps with “sport.”

All of the offending examples listed here have since been corrected, though the two portraits are still labeled with “animal,” which is I suppose technically accurate. And users can manually remove bad auto-tags from their pictures. As the Guardian notes, Flickr appears to have wisely removed “ape” entirely from its auto-tagger’s list of choices. Maybe leave this stuff to humans with eyes next time.
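The stop-gap described above (removing "ape", and in Google's case "gorilla", from what the tagger is allowed to emit) amounts to a label blocklist applied after classification. A minimal sketch of such a filter is below; the blocklist contents and helper name are illustrative assumptions, not either company's actual code.

<syntaxhighlight lang="python">
# Illustrative post-hoc filter: drop blocked labels from an auto-tagger's
# predictions before they reach users (roughly the stop-gap described above).
BLOCKED_TAGS = {"ape", "gorilla", "monkey"}

def filter_tags(predicted_tags):
    """Remove blocked labels from the auto-tagger's output."""
    return [t for t in predicted_tags if t.lower() not in BLOCKED_TAGS]

print(filter_tags(["gorilla", "outdoor", "portrait"]))  # -> ['outdoor', 'portrait']
</syntaxhighlight>

The obvious cost is that genuine photos of apes can no longer be tagged as such, which is why this reads as a stop-gap rather than a fix.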

=== [http://www.theguardian.com/technology/2015/jul/01/google-sorry-racist-auto-tag-photo-app Google says sorry for racist auto-tag in photo app] ===

=== [http://money.cnn.com/2015/05/21/technology/flickr-racist-tags/ Flickr's new auto-tags are racist and offensive | CNN Money] ===

Some concentration camp photos received inappropriate tags, including "sport" and "jungle gym."

Flickr had also been tagging some images of people as "ape" and "animal," including a photo of a black man named William taken by photographer Corey Deshon, according to the Guardian.

The photo service had also labeled a white woman wearing face paint as "ape" and "animal," so Flickr's algorithm does not appear to be taking a person's skin color into consideration when auto-tagging them.

Flickr has since corrected both of those mistakes, but the concentration camp errors remain.

"We are aware of issues with inaccurate auto-tags on Flickr and are working on a fix," a spokesman for Flickr said in a statement. "While we are very proud of this advanced image-recognition technology, we're the first to admit there will be mistakes and we are constantly working to improve the experience."

Flickr noted that deleting incorrect tags teaches the new algorithm to learn from its mistake and improve its results in the future. The company also noted that Flickr staff does not personally tag photos -- it's all automated.
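Flickr's claim above that deleting an incorrect tag "teaches the new algorithm to learn from its mistake" implies a feedback loop in which user corrections are logged and replayed as negative examples at the next training round. The sketch below is a guess at the shape of such a loop; the class, fields, and training format are hypothetical, not Flickr's implementation.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field

@dataclass
class TagFeedback:
    """Collects user corrections so they can be replayed at retraining time.

    An illustrative guess at the kind of feedback loop Flickr describes,
    not its actual system.
    """
    negative_examples: list[tuple[str, str]] = field(default_factory=list)

    def remove_tag(self, photo_id: str, bad_tag: str) -> None:
        # A user deleting an auto-tag becomes a (photo, label) negative pair.
        self.negative_examples.append((photo_id, bad_tag))

    def training_batch(self) -> list[tuple[str, str, int]]:
        # Emit (photo_id, label, target) rows; target 0 means "not this label".
        return [(pid, tag, 0) for pid, tag in self.negative_examples]

feedback = TagFeedback()
feedback.remove_tag("photo_123", "ape")
print(feedback.training_batch())  # -> [('photo_123', 'ape', 0)]
</syntaxhighlight>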


* [http://www.cnet.com/news/flickr-misfires-with-automated-photo-tags/ Flickr misfires with automated photo tags]

----

[https://vdare.com/posts/artificial-intelligence-the-robot-becky-menace Artificial Intelligence--The Robot Becky Menace]

Not surprisingly, the conventional wisdom increasingly believes Artificial Intelligence needs a dose of Artificial Stupidity to keep it from being as racist and sexist as Natural Intelligence. Otherwise, the Robot Permit Patties will run amok, says Nature:

[https://www.nature.com/articles/d41586-018-05707-8?utm_source=briefing-wk&utm_medium=email&utm_campaign=briefing&utm_content=20180720 AI can be sexist and racist — it’s time to make it fair]

… As much as possible, data curators should provide the precise definition of descriptors tied to the data. For instance, in the case of criminal-justice data, appreciating the type of ‘crime’ that a model has been trained on will clarify how that model should be applied and interpreted. …

Lastly, computer scientists should strive to develop algorithms that are more robust to human biases in the data.

Various approaches are being pursued. One involves incorporating constraints and essentially nudging the machine-learning model to ensure that it achieves equitable performance across different subpopulations and between similar individuals.

A related approach involves changing the learning algorithm to reduce its dependence on sensitive attributes, such as ethnicity, gender, income — and any information that is correlated with those characteristics.
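
Both mitigation ideas in the quoted passage (nudging the model toward equitable performance across subpopulations, and reducing its dependence on sensitive attributes) can be phrased as extra terms in the training objective. The snippet below is a minimal, self-contained illustration in Python/NumPy: a logistic-style classifier trained with an added penalty on the gap in average predicted score between two groups. The data, penalty weight, and loss form are invented for illustration; real fairness-aware training uses more careful criteria and tooling than this.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 features, a binary label, and a binary "sensitive" group id.
X = rng.normal(size=(200, 4))
group = rng.integers(0, 2, size=200)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, lam):
    """Logistic loss plus a penalty on the gap in mean predicted score
    between the two groups (a crude demographic-parity-style constraint)."""
    p = sigmoid(X @ w)
    logistic = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    gap = p[group == 0].mean() - p[group == 1].mean()
    # Gradients of each term with respect to the weights.
    grad_logistic = X.T @ (p - y) / len(y)
    dp = p * (1 - p)
    grad_gap = (X[group == 0] * dp[group == 0, None]).mean(axis=0) \
             - (X[group == 1] * dp[group == 1, None]).mean(axis=0)
    return logistic + lam * gap**2, grad_logistic + lam * 2 * gap * grad_gap

w = np.zeros(4)
for _ in range(500):                      # plain gradient descent
    _, g = loss_and_grad(w, lam=5.0)
    w -= 0.1 * g

p = sigmoid(X @ w)
print("score gap between groups:", p[group == 0].mean() - p[group == 1].mean())
</syntaxhighlight>

Raising the penalty weight `lam` shrinks the between-group score gap at some cost in raw accuracy, which is the trade-off the article gestures at.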