Arnav Das

Discriminating Algorithms


Pic credit: The New York Times.


Researchers at Princeton University conducted a study in 2017.

They took large text corpora gathered from the Internet.

For example, texts from social media, newspapers, Wikipedia, encyclopedias, and so forth were fed into a word2vec model, which maps each word to a point in a vector space. The researchers then looked at where each word lands in that space and which other words sit in its vicinity.
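To make the "vicinity" idea concrete, here is a minimal sketch of how such a vector space can be inspected, assuming the gensim library and one of its publicly available pretrained embedding sets; the model name and query words are illustrative, not the exact setup of the Princeton study.

```python
# Minimal sketch: load pretrained word embeddings and list nearest neighbours.
# Assumes the gensim library; "glove-wiki-gigaword-100" is one of its bundled
# pretrained models and is downloaded on first use.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

# Words used in similar contexts end up close together in the vector space,
# so a word's nearest neighbours reveal what it is associated with.
for word in ["doctor", "prison"]:
    print(word, vectors.most_similar(word, topn=5))
```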

Here is an example: they took names predominantly used by people of white ethnic origin, such as Harry, Katie, Jonathan, Nancy, and Emily, and found that these names sit in the vicinity of words like freedom, health, love, peace, heaven, gently, lucky, loyal, diploma, laughter, and vacation: pleasant words.

And when they took names predominantly used by African-Americans, such as Jerome, Ebony, Jasmine, Latisha, and Tia, they found these names in the vicinity of words like abuse, filth, sickness, accident, poison, assault, poverty, evil, agony, and prison.
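The study measured these associations statistically (with the Word Embedding Association Test). A much simplified sketch of the underlying idea, assuming the same gensim embeddings as above and reusing the word lists mentioned here, is a simple comparison of average cosine similarities:

```python
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # lowercase vocabulary

pleasant = ["freedom", "health", "love", "peace", "heaven",
            "loyal", "diploma", "laughter", "vacation"]
unpleasant = ["abuse", "filth", "sickness", "accident", "poison",
              "assault", "poverty", "evil", "agony", "prison"]

def association(word, attributes):
    # Mean cosine similarity between one word and a set of attribute words.
    return np.mean([vectors.similarity(word, a) for a in attributes])

def bias_score(word):
    # Positive: closer to the pleasant words; negative: closer to the unpleasant ones.
    return association(word, pleasant) - association(word, unpleasant)

for name in ["harry", "emily", "jerome", "latisha"]:
    if name in vectors:  # skip names missing from this vocabulary
        print(f"{name:10s} {bias_score(name):+.3f}")
```

This is only a rough illustration of the direction of an association, not the full statistical test used in the paper.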


Where does this artificial intelligence get such racist associations? From the very texts it was trained on: the model simply mirrors the biases already present in what we write.

Now, if this artificial intelligence somehow ends up being used to screen candidates, and we ask it who should be invited to a job interview, it may well answer: don't invite anybody with these names, because they are more likely to go to prison…


The UK is dropping an immigration algorithm that critics say is racist.

The algorithm’s use of nationality to decide which applications get fast-tracked has led to a system in which “people from rich white countries get ‘Speedy Boarding’; poorer people of color get pushed to the back of the queue.”



