Mirror mirror on the screen - Are we really this obscene?

By psyandtech17 https://psyandtech17.tumb...


One of the most frequently mentioned fears and hopes about rapidly developing computer programs and robots is that, with their logical and cold “way of thinking”, they will completely change our world: they will erase those very human decision errors, racism and sexism, or even erase the human race entirely, considering its presence on this Earth useless and harmful. These opinions can be regarded as paranoid or over-idealizing, and it is not my job to decide whether they are true, but I do want to talk to you about the way machines learn, because in the near future that is what will characterize their behavior.

Machine learning, by definition, is “a means to derive artificial intelligence by discovering patterns in existing data”, whether that existing data is the first one million digits of Pi or the movements of planets in a distant solar system. (This video, for example, was made using a visualization technique applied to a neural network trained to recognize a broad range of images; each frame is recursively fed back into the network, starting from a frame of random noise.) But what happens when we start to teach machines the way we teach our children: by talking to them?
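As a toy sketch of “discovering patterns in existing data”: the hidden rule, the noise level and the numbers below are all invented for illustration, but the shape of the process — data in, pattern out — is the same.

```python
import numpy as np

# "Existing data": ten noisy measurements that secretly follow y = 3x + 2
# (the rule and the noise level are invented for this sketch)
rng = np.random.default_rng(0)
x = np.arange(10, dtype=float)
y = 3 * x + 2 + rng.normal(0.0, 0.1, size=x.size)

# "Discovering the pattern": a least-squares line fit recovers the hidden rule
slope, intercept = np.polyfit(x, y, 1)
```

The fitted slope and intercept land close to the hidden 3 and 2 — the machine never saw the rule, only its traces in the data.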

A few weeks ago, Science published an article titled “Semantics derived automatically from language corpora contain human-like biases”. Researchers at Princeton University and Britain’s University of Bath found that machine learning “absorbs stereotyped biases” when trained on words from the internet. They used an algorithm that analyzed 840 billion words from the internet, then developed a word-embedding association test (WEAT), similar to the Implicit Association Test for humans. “Word embeddings” establish a computer’s definition of a word, based on the contexts in which it usually appears. The WEAT found that male names were associated with work, math, and science, and female names with family and the arts, meaning the machine had absorbed these stereotypes as well. These results effectively highlight how easily prejudices and biases can be transferred from humans to machines.

The program

in this research could learn in a controlled and relatively safe environment, but we can easily find examples of what happens when an algorithm is set free to learn whatever humans want to teach it. For instance, last year Microsoft launched a little chatbot called Tay, who was supposed to learn from direct conversations, mostly on Twitter. Initially, Tay got her knowledge from anonymized public data and spoke the language of millennials, but within a few hours she became openly racist and sexist, and even publicly called for genocide. She had to be taken down within 24 hours, and Microsoft had to apologize publicly for her tweets.
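To make the Science study’s method more concrete, here is a minimal sketch of the WEAT idea: measure whether target words sit closer (by cosine similarity) to one attribute set than to another. The 2-D vectors below are made up for illustration; real word embeddings have hundreds of dimensions and come from models such as GloVe or word2vec.

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """How much more strongly w associates with attribute set A than with B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: difference of the two target sets' mean associations,
    scaled by the pooled standard deviation (Cohen's-d style)."""
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)

# Made-up 2-D "embeddings" -- purely illustrative stand-ins for real vectors
career = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]      # attribute set A
family = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]      # attribute set B
male   = [np.array([0.95, 0.15]), np.array([0.85, 0.25])]  # target set X
female = [np.array([0.15, 0.95]), np.array([0.25, 0.85])]  # target set Y

d = weat_effect_size(male, female, career, family)
# d > 0 here: the toy "male" vectors sit closer to "career" than to "family"
```

A positive effect size means the first target set leans toward the first attribute set — exactly the kind of lean the researchers measured in embeddings trained on internet text.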

Even our beloved chatbot Cleverbot can be awfully rude sometimes. Cleverbot’s learning is based on his conversations with users (that is why most of his conversations end up debating whether the user is a robot or not; he has already learned a bunch of really good arguments). Fortunately, Cleverbot’s users are more considerate and kind than those who taught Tay, so it is quite rare that he spits out something racist.

The fact that we are our machines’ omnipotent teachers is machine learning’s biggest strength and biggest flaw at the same time.

We can use it to see and observe humanity as it is, but in my opinion, we need to realize that machines, just like children, can (and maybe will) outgrow us. Á.F.
