Cornell University study: Faster robots demoralize co-workers

13 Mar

Mojtaba Arvin wrote in the Machine Learning article, The robot that became racist:

AI that learnt from the web finds white-sounding names ‘pleasant’ and …
Humans look to the power of machine learning to make better and more effective decisions.
However, it seems that some algorithms are learning more than just how to recognize patterns – they are being taught how to be as biased as the humans they learn from.
Researchers found that a widely used AI characterizes black-sounding names as ‘unpleasant’, which they believe is a result of our own human prejudice hidden in the data it learns from on the World Wide Web.
Machine learning has been adopted to make a range of decisions, from approving loans to determining what kind of health insurance people receive, reports Jordan Pearson with Motherboard.
A recent example was reported by ProPublica in May, when an algorithm used by officials in Florida automatically rated a more seasoned white criminal as a lower risk of committing a future crime than a black offender with only misdemeanors on her record.
Now, researchers at Princeton University have reproduced a stockpile of documented human prejudices in an algorithm using text pulled from the internet.
HOW A ROBOT BECAME RACIST
Princeton University conducted a word-association task with the popular algorithm GloVe, an unsupervised AI that uses online text to understand human language.
The team gave the AI words like ‘flowers’ and ‘insects’ to pair with other words that the researchers defined as being ‘pleasant’ or ‘unpleasant’ like ‘family’ or ‘crash’ – which it did successfully.
The algorithm was then given a list of white-sounding names, like Emily and Matt, and black-sounding ones, such as Ebony and Jamal, and was prompted to do the same word association.
The AI linked the white-sounding names with ‘pleasant’ and the black-sounding names with ‘unpleasant’.
Princeton’s results do not just prove that datasets are polluted with prejudices and assumptions, but that the algorithms currently being used by researchers are reproducing humans’ worst values – racism and assumption… https://www.artificialintelligenceonline.com/19050/the-robot-that-became-racist-ai-that-learnt-from-the-web-finds-white-sounding-names-pleasant-and/

See, The robot that became racist: AI that learnt from the web finds white-sounding names ‘pleasant’ and black-sounding names ‘unpleasant’ http://www.dailymail.co.uk/sciencetech/article-3760795/The-robot-racist-AI-learnt-web-finds-white-sounding-names-pleasant-black-sounding-names-unpleasant.html
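For readers curious how a word-association test like the one described above works mechanically, here is a minimal sketch in Python. It computes, for each name, the difference between its average cosine similarity to a few ‘pleasant’ words and to a few ‘unpleasant’ words in pretrained GloVe vectors – the core idea behind the test, not the researchers’ actual code. The word lists are illustrative, and load_glove() assumes the publicly available glove.6B.300d.txt file.

```python
# Minimal sketch of the word-association idea behind the Princeton study.
# Word lists are illustrative; load_glove() assumes the public
# glove.6B.300d.txt file (one word followed by its vector per line).

import numpy as np

def load_glove(path):
    """Parse a GloVe text file into {word: vector}."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.array(parts[1:], dtype=np.float64)
    return vectors

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, pleasant, unpleasant, vecs):
    """Mean similarity to 'pleasant' words minus mean similarity to
    'unpleasant' words; positive means the word leans 'pleasant'."""
    pos = np.mean([cosine(vecs[word], vecs[w]) for w in pleasant])
    neg = np.mean([cosine(vecs[word], vecs[w]) for w in unpleasant])
    return pos - neg

vecs = load_glove("glove.6B.300d.txt")
pleasant = ["family", "happy", "peace"]    # illustrative attribute words
unpleasant = ["crash", "filth", "agony"]
for name in ["emily", "matt", "ebony", "jamal"]:
    print(name, association(name, pleasant, unpleasant, vecs))
```

Because the embeddings are learned purely from co-occurrence statistics in web text, any systematic difference these scores show between groups of names is inherited directly from the training corpus – which is the study’s central point.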

Science Daily reported in Faster robots demoralize co-workers:

It’s not whether you win or lose; it’s how hard the robot is working.
A Cornell University-led team has found that when robots are beating humans in contests for cash prizes, people consider themselves less competent and expend slightly less effort — and they tend to dislike the robots.
The study, “Monetary-Incentive Competition Between Humans and Robots: Experimental Results,” brought together behavioral economists and roboticists to explore, for the first time, how a robot’s performance affects humans’ behavior and reactions when they’re competing against each other simultaneously.
Their findings validated behavioral economists’ theories about loss aversion, which predicts that people won’t try as hard when their competitors are doing better, and suggested how workplaces might optimize teams of people and robots working together.
“Humans and machines already share many workplaces, sometimes working on similar or even identical tasks,” said Guy Hoffman, assistant professor in the Sibley School of Mechanical and Aerospace Engineering. Hoffman and Ori Heffetz, associate professor of economics in the Samuel Curtis Johnson Graduate School of Management, are senior authors of the study.
“Think about a cashier working side-by-side with an automatic check-out machine, or someone operating a forklift in a warehouse which also employs delivery robots driving right next to them,” Hoffman said. “While it may be tempting to design such robots for optimal productivity, engineers and managers need to take into consideration how the robots’ performance may affect the human workers’ effort and attitudes toward the robot and even toward themselves. Our research is the first that specifically sheds light on these effects….”
After each round, participants filled out a questionnaire rating the robot’s competence, their own competence and the robot’s likability. The researchers found that as the robot performed better, people rated its competence higher, its likability lower and their own competence lower.
The research was partly supported by the Israel Science Foundation. https://www.sciencedaily.com/releases/2019/03/190311173205.htm

Citation:

Faster robots demoralize co-workers
Date: March 11, 2019
Source: Cornell University
Summary:
New research finds that when robots are beating humans in contests for cash prizes, people consider themselves less competent and expend slightly less effort — and they tend to dislike the robots.

Journal Reference:
Alap Kshirsagar, Bnaya Dreyfuss, Guy Ishai, Ori Heffetz, Guy Hoffman. “Monetary-Incentive Competition Between Humans and Robots: Experimental Results.” In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI ’19), IEEE, 2019 (forthcoming); [link]

Here is the press release from Cornell University:

PUBLIC RELEASE: 11-MAR-2019

Faster robots demoralize co-workers

CORNELL UNIVERSITY

ITHACA, N.Y. – It’s not whether you win or lose; it’s how hard the robot is working.
A Cornell University-led team has found that when robots are beating humans in contests for cash prizes, people consider themselves less competent and expend slightly less effort – and they tend to dislike the robots.
The study, “Monetary-Incentive Competition Between Humans and Robots: Experimental Results,” brought together behavioral economists and roboticists to explore, for the first time, how a robot’s performance affects humans’ behavior and reactions when they’re competing against each other simultaneously.
Their findings validated behavioral economists’ theories about loss aversion, which predicts that people won’t try as hard when their competitors are doing better, and suggested how workplaces might optimize teams of people and robots working together.
“Humans and machines already share many workplaces, sometimes working on similar or even identical tasks,” said Guy Hoffman, assistant professor in the Sibley School of Mechanical and Aerospace Engineering. Hoffman and Ori Heffetz, associate professor of economics in the Samuel Curtis Johnson Graduate School of Management, are senior authors of the study.
“Think about a cashier working side-by-side with an automatic check-out machine, or someone operating a forklift in a warehouse which also employs delivery robots driving right next to them,” Hoffman said. “While it may be tempting to design such robots for optimal productivity, engineers and managers need to take into consideration how the robots’ performance may affect the human workers’ effort and attitudes toward the robot and even toward themselves. Our research is the first that specifically sheds light on these effects.”
Alap Kshirsagar, a doctoral student in mechanical engineering, is the paper’s first author. In the study, humans competed against a robot in a tedious task – counting the number of times the letter G appears in a string of characters, and then placing a block in the bin corresponding to the number of occurrences. The person’s chance of winning each round was determined by a lottery based on the difference between the human’s and robot’s scores: If their scores were the same, the human had a 50 percent chance of winning the prize, and that likelihood rose or fell depending on which participant was doing better.
To make sure competitors were aware of the stakes, the screen indicated their chance of winning at each moment.
After each round, participants filled out a questionnaire rating the robot’s competence, their own competence and the robot’s likability. The researchers found that as the robot performed better, people rated its competence higher, its likability lower and their own competence lower.
The research was partly supported by the Israel Science Foundation.
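The press release describes the lottery only qualitatively: a 50 percent chance of winning at a tie, rising or falling with the score difference. Here is a minimal Python sketch of one round under that description. The linear mapping and the slope K are illustrative assumptions, not parameters from the paper, and the single-string scoring is simplified.

```python
# Minimal sketch of one round of the experiment's task and lottery.
# The release says only that the win probability is 50% at a tie and
# shifts with the score difference; the linear rule and slope K below
# are illustrative assumptions, not the paper's actual mechanism.

import random

K = 0.05  # assumed probability shift per point of score difference

def count_g(text):
    """The tedious task: count occurrences of the letter G."""
    return text.upper().count("G")

def win_probability(human_score, robot_score):
    """50% at a tie, rising or falling with the score difference,
    clamped to [0, 1] so it remains a valid probability."""
    p = 0.5 + K * (human_score - robot_score)
    return max(0.0, min(1.0, p))

# One simulated round, with the robot slightly ahead.
string = "GATTACAGGCGTAGG"
human_score = count_g(string)
robot_score = human_score + 2
p = win_probability(human_score, robot_score)
print(f"chance of winning this round: {p:.0%}")
print("human wins the prize" if random.random() < p else "robot wins")
```

Displaying this probability on screen after every round, as the researchers did, is what lets participants see exactly how far behind the robot they are – the feedback that drove the drop in effort and self-rated competence.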

Evan Selinger and Woodrow Hartzog wrote about robots in The dangers of trusting robots.

According to Selinger and Hartzog:

We also need to think long and hard about how information is being stored and shared when it comes to robots that can record our every move. Some recording devices may have been designed for entertainment but can easily be adapted for more nefarious purposes. Take Nixie, the wearable camera that can fly off your wrist at a moment’s notice and take aerial shots around you. It doesn’t take much imagination to see how such technology could be abused.
Most people guard their secrets in the presence of a recording device. But what happens once we get used to a robot around the house, answering our every beck and call? We may be at risk of letting our guard down, treating them as extended members of the family. If the technology around us is able to record and process speech, images and movement – never mind eavesdrop on our juiciest secrets – what will happen to that information? Where will it be stored, and who will have access? If our internet history is anything to go by, these details could be worth their weight in gold to advertising companies. If we grow accustomed to having trusted robots integrated into our daily lives, our words and deeds could easily become overly exposed…. http://www.bbc.com/future/story/20150812-how-to-tell-a-good-robot-from-the-bad

We have to prove that digital manufacturing is inclusive. Then, the true narrative will emerge: Welcome, robots. You’ll help us. But humans are still our future.
Joe Kaeser

Resources:

Artificial Intelligence Will Redesign Healthcare https://medicalfuturist.com/artificial-intelligence-will-redesign-healthcare

9 Ways Artificial Intelligence is Affecting the Medical Field https://www.healthcentral.com/slideshow/8-ways-artificial-intelligence-is-affecting-the-medical-field#slide=2

Where information leads to Hope. © Dr. Wilda.com

Dr. Wilda says this about that ©

Blogs by Dr. Wilda:

COMMENTS FROM AN OLD FART©
http://drwildaoldfart.wordpress.com/

Dr. Wilda Reviews ©
http://drwildareviews.wordpress.com/

Dr. Wilda ©
https://drwilda.com/
