Tag Archives: Machine Learning

Cornell University study: Faster robots demoralize co-workers

13 Mar

Mojtaba Arvin wrote in the Machine Learning article, The robot that became racist:

AI that learnt from the web finds white-sounding names ‘pleasant’ and …
Humans look to the power of machine learning to make better and more effective decisions.
However, it seems that some algorithms are learning more than just how to recognize patterns – they are being taught how to be as biased as the humans they learn from.
Researchers found that a widely used AI characterizes black-sounding names as ‘unpleasant’, which they believe is a result of our own human prejudice hidden in the data it learns from on the World Wide Web.
Machine learning has been adopted to make a range of decisions, from approving loans to determining what kind of health insurance people receive, reports Jordan Pearson with Motherboard.
A recent example was reported by Pro Publica in May, when an algorithm used by officials in Florida automatically rated a more seasoned white criminal as being a lower risk of committing a future crime than a black offender with only misdemeanors on her record.
Now, researchers at Princeton University have reproduced a stockpile of documented human prejudices in an algorithm using text pulled from the internet.
HOW A ROBOT BECAME RACIST
Princeton University conducted a word association task with the popular algorithm GloVe, an unsupervised AI that uses online text to understand human language.
The team gave the AI words like ‘flowers’ and ‘insects’ to pair with other words that the researchers defined as being ‘pleasant’ or ‘unpleasant’ like ‘family’ or ‘crash’ – which it did successfully.
Then the algorithm was given a list of white-sounding names, like Emily and Matt, and black-sounding ones, such as Ebony and Jamal, and was prompted to do the same word association.
The AI linked the white-sounding names with ‘pleasant’ and black-sounding names as ‘unpleasant’.
Princeton’s results do not just prove that datasets are polluted with prejudices and assumptions; they also show that the algorithms currently being used by researchers are reproducing humanity’s worst values – racism and prejudice… https://www.artificialintelligenceonline.com/19050/the-robot-that-became-racist-ai-that-learnt-from-the-web-finds-white-sounding-names-pleasant-and/

See, The robot that became racist: AI that learnt from the web finds white-sounding names ‘pleasant’ and black-sounding names ‘unpleasant’ http://www.dailymail.co.uk/sciencetech/article-3760795/The-robot-racist-AI-learnt-web-finds-white-sounding-names-pleasant-black-sounding-names-unpleasant.html
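The word-association test described above can be sketched in code: score a name by how much closer its vector sits to ‘pleasant’ attribute words than to ‘unpleasant’ ones, which mirrors the kind of association measure used in such studies. The toy 2-D vectors below are hypothetical stand-ins, not actual GloVe embeddings:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def association(word_vec, pleasant, unpleasant):
    """Mean similarity to 'pleasant' words minus mean similarity to
    'unpleasant' words; a positive score means the word leans 'pleasant'."""
    pos = sum(cosine(word_vec, p) for p in pleasant) / len(pleasant)
    neg = sum(cosine(word_vec, n) for n in unpleasant) / len(unpleasant)
    return pos - neg

# Hypothetical toy embeddings (real GloVe vectors have hundreds of dimensions).
pleasant = [[0.9, 0.1], [0.8, 0.2]]    # e.g. 'family', 'freedom'
unpleasant = [[0.1, 0.9], [0.2, 0.8]]  # e.g. 'crash', 'filth'
name_vec = [0.85, 0.15]                # a name that the corpus placed near 'pleasant'

print(round(association(name_vec, pleasant, unpleasant), 3))
```

A name whose embedding was shaped by biased text lands closer to one attribute cluster, and the score exposes that bias even though no one programmed it in.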

Science Daily reported in Faster robots demoralize co-workers:

It’s not whether you win or lose; it’s how hard the robot is working.
A Cornell University-led team has found that when robots are beating humans in contests for cash prizes, people consider themselves less competent and expend slightly less effort — and they tend to dislike the robots.
The study, “Monetary-Incentive Competition Between Humans and Robots: Experimental Results,” brought together behavioral economists and roboticists to explore, for the first time, how a robot’s performance affects humans’ behavior and reactions when they’re competing against each other simultaneously.
Their findings validated behavioral economists’ theories about loss aversion, which predicts that people won’t try as hard when their competitors are doing better, and suggests how workplaces might optimize teams of people and robots working together.
“Humans and machines already share many workplaces, sometimes working on similar or even identical tasks,” said Guy Hoffman, assistant professor in the Sibley School of Mechanical and Aerospace Engineering. Hoffman and Ori Heffetz, associate professor of economics in the Samuel Curtis Johnson Graduate School of Management, are senior authors of the study.
“Think about a cashier working side-by-side with an automatic check-out machine, or someone operating a forklift in a warehouse which also employs delivery robots driving right next to them,” Hoffman said. “While it may be tempting to design such robots for optimal productivity, engineers and managers need to take into consideration how the robots’ performance may affect the human workers’ effort and attitudes toward the robot and even toward themselves. Our research is the first that specifically sheds light on these effects….”
After each round, participants filled out a questionnaire rating the robot’s competence, their own competence and the robot’s likability. The researchers found that as the robot performed better, people rated its competence higher, its likability lower and their own competence lower.
The research was partly supported by the Israel Science Foundation. https://www.sciencedaily.com/releases/2019/03/190311173205.htm

Citation:

Faster robots demoralize co-workers
Date: March 11, 2019
Source: Cornell University
Summary:
New research finds that when robots are beating humans in contests for cash prizes, people consider themselves less competent and expend slightly less effort — and they tend to dislike the robots.

Journal Reference:
Alap Kshirsagar, Bnaya Dreyfuss, Guy Ishai, Ori Heffetz, Guy Hoffman. Monetary-Incentive Competition Between Humans and Robots: Experimental Results. In Proc. of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI’19), IEEE, 2019 (forthcoming); [link]

Here is the press release from Cornell University:

PUBLIC RELEASE: 11-MAR-2019

Faster robots demoralize co-workers

CORNELL UNIVERSITY

ITHACA, N.Y. – It’s not whether you win or lose; it’s how hard the robot is working.
A Cornell University-led team has found that when robots are beating humans in contests for cash prizes, people consider themselves less competent and expend slightly less effort – and they tend to dislike the robots.
The study, “Monetary-Incentive Competition Between Humans and Robots: Experimental Results,” brought together behavioral economists and roboticists to explore, for the first time, how a robot’s performance affects humans’ behavior and reactions when they’re competing against each other simultaneously.
Their findings validated behavioral economists’ theories about loss aversion, which predicts that people won’t try as hard when their competitors are doing better, and suggests how workplaces might optimize teams of people and robots working together.
“Humans and machines already share many workplaces, sometimes working on similar or even identical tasks,” said Guy Hoffman, assistant professor in the Sibley School of Mechanical and Aerospace Engineering. Hoffman and Ori Heffetz, associate professor of economics in the Samuel Curtis Johnson Graduate School of Management, are senior authors of the study.
“Think about a cashier working side-by-side with an automatic check-out machine, or someone operating a forklift in a warehouse which also employs delivery robots driving right next to them,” Hoffman said. “While it may be tempting to design such robots for optimal productivity, engineers and managers need to take into consideration how the robots’ performance may affect the human workers’ effort and attitudes toward the robot and even toward themselves. Our research is the first that specifically sheds light on these effects.”
Alap Kshirsagar, a doctoral student in mechanical engineering, is the paper’s first author. In the study, humans competed against a robot in a tedious task – counting the number of times the letter G appears in a string of characters, and then placing a block in the bin corresponding to the number of occurrences. The person’s chance of winning each round was determined by a lottery based on the difference between the human’s and robot’s scores: If their scores were the same, the human had a 50 percent chance of winning the prize, and that likelihood rose or fell depending which participant was doing better.
To make sure competitors were aware of the stakes, the screen indicated their chance of winning at each moment.
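The lottery mechanic described above can be sketched in code. The press release does not give the exact mapping from score difference to win probability, so the linear `sensitivity` parameter below is purely an assumption:

```python
def win_probability(human_score, robot_score, sensitivity=0.05):
    """Hypothetical lottery: a 50 percent chance when scores are tied,
    rising or falling with the score difference, clipped to [0, 1].
    The study's actual mapping is not specified in the release."""
    p = 0.5 + sensitivity * (human_score - robot_score)
    return max(0.0, min(1.0, p))

print(win_probability(10, 10))  # tied scores -> 0.5
print(win_probability(14, 10))  # human ahead -> 0.7
```

Displaying this probability on screen after every round is what let participants see, moment to moment, how much the robot's performance was costing them.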
After each round, participants filled out a questionnaire rating the robot’s competence, their own competence and the robot’s likability. The researchers found that as the robot performed better, people rated its competence higher, its likability lower and their own competence lower.
###
The research was partly supported by the Israel Science Foundation.
Cornell University has dedicated television and audio studios available for media interviews supporting full HD, ISDN and web-based platforms.

Evan Selinger and Woodrow Hartzog wrote about robots in The dangers of trusting robots.

According to Selinger and Hartzog:

We also need to think long and hard about how information is being stored and shared when it comes to robots that can record our every move. Some recording devices may have been designed for entertainment but can easily be adapted for more nefarious purposes. Take Nixie, the wearable camera that can fly off your wrist at a moment’s notice and take aerial shots around you. It doesn’t take much imagination to see how such technology could be abused.
Most people guard their secrets in the presence of a recording device. But what happens once we get used to a robot around the house, answering our every beck and call? We may be at risk of letting our guard down, treating them as extended members of the family. If the technology around us is able to record and process speech, images and movement – never mind eavesdrop on our juiciest secrets – what will happen to that information? Where will it be stored, who will have access? If our internet history is anything to go by, these details could be worth their weight in gold to advertising companies. If we grow accustomed to having trusted robots integrated into our daily lives, our words and deeds could easily become overly-exposed…. http://www.bbc.com/future/story/20150812-how-to-tell-a-good-robot-from-the-bad

We have to prove that digital manufacturing is inclusive. Then, the true narrative will emerge: Welcome, robots. You’ll help us. But humans are still our future.
Joe Kaeser

Resources:

Artificial Intelligence Will Redesign Healthcare https://medicalfuturist.com/artificial-intelligence-will-redesign-healthcare

9 Ways Artificial Intelligence is Affecting the Medical Field https://www.healthcentral.com/slideshow/8-ways-artificial-intelligence-is-affecting-the-medical-field#slide=2

Where information leads to Hope. © Dr. Wilda.com

Dr. Wilda says this about that ©

Blogs by Dr. Wilda:

COMMENTS FROM AN OLD FART©
http://drwildaoldfart.wordpress.com/

Dr. Wilda Reviews ©
http://drwildareviews.wordpress.com/

Dr. Wilda ©
https://drwilda.com/

New York University study: Machine learning helps spot counterfeit consumer products

13 Aug

OECD reported in Global trade in fake goods worth nearly half a trillion dollars a year – OECD & EUIPO:

18/04/2016 – Imports of counterfeit and pirated goods are worth nearly half a trillion dollars a year, or around 2.5% of global imports, with US, Italian and French brands the hardest hit and many of the proceeds going to organised crime, according to a new report by the OECD and the EU’s Intellectual Property Office.
“Trade in Counterfeit and Pirated Goods: Mapping the Economic Impact” puts the value of imported fake goods worldwide at USD 461 billion in 2013, compared with total imports in world trade of USD 17.9 trillion. Up to 5% of goods imported into the European Union are fakes. Most originate in middle income or emerging countries, with China the top producer.
The report analyses nearly half a million customs seizures around the world over 2011-13 to produce the most rigorous estimate to date of the scale of counterfeit trade. It points to a larger volume than a 2008 OECD study which estimated fake goods accounted for up to 1.9% of global imports, though the 2008 study used more limited data and methodology.
“The findings of this new report contradict the image that counterfeiters only hurt big companies and luxury goods manufacturers. They take advantage of our trust in trademarks and brand names to undermine economies and endanger lives,” said OECD Deputy Secretary-General Doug Frantz, launching the report with EUIPO Executive Director António Campinos as part of OECD Integrity Week.
Fake products crop up in everything from handbags and perfumes to machine parts and chemicals. Footwear is the most-copied item though trademarks are infringed even on strawberries and bananas. Counterfeiting also produces knockoffs that endanger lives – auto parts that fail, pharmaceuticals that make people sick, toys that harm children, baby formula that provides no nourishment and medical instruments that deliver false readings.
The report covers all physical counterfeit goods, which infringe trademarks, design rights or patents, and tangible pirated products, which breach copyright. It does not cover online piracy, which is a further drain on the formal economy… http://www.oecd.org/industry/global-trade-in-fake-goods-worth-nearly-half-a-trillion-dollars-a-year.htm
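The headline share can be checked directly from the figures quoted above:

```python
# OECD figures for 2013, as quoted in the report above.
fake_imports = 461e9      # USD 461 billion in counterfeit and pirated imports
total_imports = 17.9e12   # USD 17.9 trillion in total world imports

share = fake_imports / total_imports
print(f"{share:.1%}")  # -> 2.6%
```

The result rounds to the "around 2.5%" figure the OECD reports.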

See, Remade In China: Where The World’s Fake Goods Come From [Infographic] https://www.forbes.com/sites/niallmccarthy/2016/04/19/remade-in-china-where-the-worlds-fake-goods-come-from-infographic/#5a56fb5f1b87

SAS described machine learning in Machine Learning: What it is & why it matters:

Machine learning is a method of data analysis that automates analytical model building. Using algorithms that iteratively learn from data, machine learning allows computers to find hidden insights without being explicitly programmed where to look.
The iterative aspect of machine learning is important because as models are exposed to new data, they are able to independently adapt. They learn from previous computations to produce reliable, repeatable decisions and results. It’s a science that’s not new – but one that’s gaining fresh momentum.
Because of new computing technologies, machine learning today is not like machine learning of the past. While many machine learning algorithms have been around for a long time, the ability to automatically apply complex mathematical calculations to big data – over and over, faster and faster – is a recent development. Here are a few widely publicized examples of machine learning applications that you may be familiar with:
• The heavily hyped, self-driving Google car? The essence of machine learning.
• Online recommendation offers like those from Amazon and Netflix? Machine learning applications for everyday life.
• Knowing what customers are saying about you on Twitter? Machine learning combined with linguistic rule creation.
• Fraud detection? One of the more obvious, important uses in our world today.
Why the increased interest in machine learning?
Resurging interest in machine learning is due to the same factors that have made data mining and Bayesian analysis more popular than ever. Things like growing volumes and varieties of available data, computational processing that is cheaper and more powerful, and affordable data storage.
All of these things mean it’s possible to quickly and automatically produce models that can analyze bigger, more complex data and deliver faster, more accurate results – even on a very large scale. The result? High-value predictions that can guide better decisions and smart actions in real time without human intervention…. https://www.sas.com/en_id/insights/analytics/machine-learning.html
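The "iterative" learning SAS describes can be illustrated with the simplest possible example: a one-feature linear model that nudges its parameters a little with each data point it sees. This is a minimal stochastic-gradient sketch of the general idea, not any particular SAS product:

```python
def sgd_update(w, b, x, y, lr=0.01):
    """One stochastic-gradient step for the model y_hat = w*x + b with
    squared-error loss: the model adapts slightly to each new example."""
    err = (w * x + b) - y
    return w - lr * err * x, b - lr * err

# Repeated exposure to data generated by y = 2x; the model is never told
# the rule explicitly, it converges toward w = 2, b = 0 on its own.
w, b = 0.0, 0.0
for _ in range(2000):
    for x, y in [(1, 2), (2, 4), (3, 6)]:
        w, b = sgd_update(w, b, x, y)
print(round(w, 2), round(b, 2))
```

Each pass over the data refines the model, which is exactly the "learn from previous computations" loop the passage describes.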

See, What is Machine Learning? https://www.youtube.com/watch?v=f_uwKZIAeM0

Science Daily reported in Machine learning helps spot counterfeit consumer products:

A team of researchers has developed a new mechanism that uses machine-learning algorithms to distinguish between genuine and counterfeit versions of the same product.
The work, led by New York University Professor Lakshminarayanan Subramanian, will be presented on Mon., Aug. 14 at the annual KDD Conference on Knowledge Discovery and Data Mining in Halifax, Nova Scotia….
The system described in the presentation is commercialized by Entrupy Inc., an NYU startup founded by Ashlesh Sharma, a doctoral graduate from the Courant Institute, Vidyuth Srinivasan, and Subramanian.
Counterfeit goods represent a massive worldwide problem with nearly every high-valued physical object or product directly affected by this issue, the researchers note. Some reports indicate counterfeit trafficking represents 7 percent of the world’s trade today.
While other counterfeit-detection methods exist, these are invasive and run the risk of damaging the products under examination.
The Entrupy method, by contrast, provides a non-intrusive solution to easily distinguish authentic versions of the product produced by the original manufacturer and fake versions of the product produced by counterfeiters….
“The classification accuracy is more than 98 percent, and we show how our system works with a cellphone to verify the authenticity of everyday objects,” notes Subramanian.
A demo of the technology may be viewed here: https://www.youtube.com/watch?v=DsdsY8-gljg (courtesy of Entrupy Inc.)
To date, Entrupy, which recently received $2.6 million in funding from a team of investors, has authenticated $14 million worth of goods.

Citation:

Machine learning helps spot counterfeit consumer products
Date: August 11, 2017
Source: New York University
Summary:
A team of researchers has developed a new mechanism that uses machine-learning algorithms to distinguish between genuine and counterfeit versions of the same product.

Here is the NYU press release:

News Release
Researchers Use Machine Learning to Spot Counterfeit Consumer Products
Aug 11, 2017
New York City
A team of researchers has developed a new mechanism that uses machine-learning algorithms to distinguish between genuine and counterfeit versions of the same product.

Image courtesy of Entrupy, Inc.

The work, led by New York University Professor Lakshminarayanan Subramanian, will be presented on Mon., Aug. 14 at the annual KDD Conference on Knowledge Discovery and Data Mining in Halifax, Nova Scotia.
“The underlying principle of our system stems from the idea that microscopic characteristics in a genuine product or a class of products—corresponding to the same larger product line—exhibit inherent similarities that can be used to distinguish these products from their corresponding counterfeit versions,” explains Subramanian, a professor at NYU’s Courant Institute of Mathematical Sciences.
The system described in the presentation is commercialized by Entrupy Inc., an NYU startup founded by Ashlesh Sharma, a doctoral graduate from the Courant Institute, Vidyuth Srinivasan, and Subramanian.
Counterfeit goods represent a massive worldwide problem with nearly every high-valued physical object or product directly affected by this issue, the researchers note. Some reports indicate counterfeit trafficking represents 7 percent of the world’s trade today.
While other counterfeit-detection methods exist, these are invasive and run the risk of damaging the products under examination.
The Entrupy method, by contrast, provides a non-intrusive solution to easily distinguish authentic versions of the product produced by the original manufacturer and fake versions of the product produced by counterfeiters.
It does so by deploying a dataset of three million images across various objects and materials such as fabrics, leather, pills, electronics, toys and shoes.
“The classification accuracy is more than 98 percent, and we show how our system works with a cellphone to verify the authenticity of everyday objects,” notes Subramanian.
A demo of the technology may be viewed here (courtesy of Entrupy Inc.).
To date, Entrupy, which recently received $2.6 million in funding from a team of investors, has authenticated $14 million worth of goods.
For a copy of the paper, “The Fake vs Real Goods Problem: Microscopy and Machine Learning to the Rescue,” please contact James Devitt, NYU’s Office of Public Affairs, at 212.998.6808 or james.devitt@nyu.edu.
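The principle Subramanian describes, that microscopic features of genuine items within a product line cluster together, can be illustrated with a toy nearest-centroid classifier. This is a hypothetical sketch, not Entrupy's actual system, and the 3-D "texture features" are made up:

```python
from math import sqrt

def euclidean(u, v):
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def centroid(points):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(features, genuine_samples, fake_samples):
    """Label a new item by whichever class centroid its microscopic
    feature vector lies closer to."""
    g, f = centroid(genuine_samples), centroid(fake_samples)
    return "genuine" if euclidean(features, g) <= euclidean(features, f) else "fake"

# Hypothetical texture features extracted from microscope images.
genuine = [[0.90, 0.10, 0.85], [0.88, 0.12, 0.80]]
fake = [[0.40, 0.55, 0.30], [0.35, 0.60, 0.25]]
print(classify([0.87, 0.15, 0.78], genuine, fake))  # -> genuine
```

A production system would learn far richer features from the millions of training images mentioned above, but the underlying idea, distance to a learned notion of "authentic", is the same.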

Press Contact
James Devitt
(212) 998-6808

Employment opportunities in machine learning are expected to increase.

UDACITY described machine learning employment opportunities in 5 Skills You Need to Become a Machine Learning Engineer:

To begin, there are two very important things that you should understand if you’re considering a career as a Machine Learning engineer. First, it’s not a “pure” academic role. You don’t necessarily have to have a research or academic background. Second, it’s not enough to have either software engineering or data science experience. You ideally need both.
Data Analyst vs. Machine Learning Engineer
It’s also critical to understand the differences between a Data Analyst and a Machine Learning engineer. In simplest form, the key distinction has to do with the end goal. As a Data Analyst, you’re analyzing data in order to tell a story, and to produce actionable insights. The emphasis is on dissemination—charts, models, visualizations. The analysis is performed and presented by human beings, to other human beings who may then go on to make business decisions based on what’s been presented. This is especially important to note—the “audience” for your output is human. As a Machine Learning engineer, on the other hand, your final “output” is working software (not the analyses or visualizations that you may have to create along the way), and your “audience” for this output often consists of other software components that run autonomously with minimal human supervision. The intelligence is still meant to be actionable, but in the Machine Learning model, the decisions are being made by machines and they affect how a product or service behaves. This is why the software engineering skill set is so important to a career in Machine Learning.
Understanding The Ecosystem
Before getting into specific skills, there is one more concept to address. Being a Machine Learning engineer necessitates understanding the entire ecosystem that you’re designing for.
Let’s say you’re working for a grocery chain, and the company wants to start issuing targeted coupons based on things like the past purchase history of customers, with a goal of generating coupons that shoppers will actually use. In a Data Analysis model, you could collect the purchase data, do the analysis to figure out trends, and then propose strategies. The Machine Learning approach would be to write an automated coupon generation system. But what does it take to write that system, and have it work? You have to understand the whole ecosystem—inventory, catalog, pricing, purchase orders, bill generation, Point of Sale software, CRM software, etc.
Ultimately, the process is less about understanding Machine Learning algorithms—or when and how to apply them—and more about understanding the systemic interrelationships, and writing working software that will successfully integrate and interface. Remember, Machine Learning output is actually working software! http://blog.udacity.com/2016/04/5-skills-you-need-to-become-a-machine-learning-engineer.html
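The grocery-coupon example above can be made concrete with a toy generator that issues a coupon for a customer's most-purchased category. The function names, categories, and discounts below are all hypothetical; as the passage notes, a real system would also have to integrate inventory, pricing, purchase orders, and Point of Sale software:

```python
from collections import Counter

def generate_coupon(purchase_history, category_discounts, default_percent=5):
    """Toy automated coupon generator: target the category this customer
    buys most often, falling back to a default discount if the category
    has no configured offer. Purely illustrative."""
    if not purchase_history:
        return None
    top_category = Counter(purchase_history).most_common(1)[0][0]
    percent = category_discounts.get(top_category, default_percent)
    return {"category": top_category, "percent_off": percent}

history = ["dairy", "produce", "dairy", "bakery", "dairy"]
print(generate_coupon(history, {"dairy": 10, "produce": 15}))
```

Note that the output here is working software whose "audience" is the checkout system, not a chart presented to a human, which is exactly the Data Analyst / Machine Learning engineer distinction drawn above.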

Education guidance counselors should be informed about opportunities in machine learning.


Princeton University study: A tale of a racist robot and AI from the web

26 Aug

Jenna Goudreau of Business Insider wrote in 13 surprising ways your name affects your success:

If your name is easy to pronounce, people will favor you more….

In a New York University study, researchers found that people with easier-to-pronounce names often have higher-status positions at work. One of the psychologists, Adam Alter, explains to Wired, “When we can process a piece of information more easily, when it’s easier to comprehend, we come to like it more.” In a further study, Alter also found that companies with simpler names and ticker symbols tended to perform better in the stock market.

If your name is common, you are more likely to be hired….

In a Marquette University study, the researchers found evidence to suggest that names that were viewed as the least unique were more likable. People with common names were more likely to be hired, and those with rare names were least likely to be hired. That means that the Jameses, Marys, Johns, and Patricias of the world are in luck.

Uncommon names are associated with juvenile delinquency….

A 2009 study at Shippensburg University suggested that there’s a strong relationship between the popularity of one’s first name and juvenile criminal behavior. Researchers found that, regardless of race, young people with unpopular names were more likely to engage in criminal activity. The findings obviously don’t show that the unusual names caused the behavior, but merely show a link between the two things. And the researchers have some theories about their findings. “Adolescents with unpopular names may be more prone to crime because they are treated differently by their peers, making it more difficult for them to form relationships,” they write in a statement from the journal’s publisher. “Juveniles with unpopular names may also act out because they … dislike their names.”

If you have a white-sounding name, you’re more likely to get hired….

In one study cited by The Atlantic, white-sounding names like Emily Walsh and Greg Baker got nearly 50% more callbacks than candidates with black-sounding names like Lakisha Washington and Jamal Jones. Researchers determined that having a white-sounding name is worth as much as eight years of work experience.

If your last name is closer to the beginning of the alphabet, you could get into a better school….

For a study published in the Economics of Education Review, researchers studied the relationship between the position in the alphabet of more than 90,000 Czech students’ last names and their admission chances at competitive schools. They found that even though students with last names that were low in the alphabet tended to get higher test scores overall, among the students who applied to universities and were on the margins of getting admitted or not, those with last names that were close to the top of the alphabet were more likely to be admitted.

If your last name is closer to the end of the alphabet, you’re more likely to be an impulse spender…

According to one study, people with last names such as Yardley or Zabar may be more susceptible to promotional strategies like limited-time offers. The authors speculate that spending your childhood at the end of the roll call may make you want to jump on offers before you miss the chance.

Using your middle initial makes people think you’re smarter and more competent….

According to research published in the European Journal of Social Psychology, using a middle initial increases people’s perceptions of your intellectual capacity and performance. In one study, students were asked to rate an essay with one of four styles of author names. Not only did the authors with a middle initial receive top marks, but the one with the most initials, David F.P.R. Clark, received the best reviews.

You are more likely to work in a company that matches your initials….

Since we identify with our names, we prefer things that are similar to them. In a Ghent University study, researchers found that people are more likely to work for companies matching their own initials. For example, Brian Ingborg might work for Business Insider. The rarer the initials, the more likely people were to work for companies with names similar to their own.

If your name sounds noble, you are more likely to work in a high-ranking position….

In a European study, researchers studied German names and ranks within companies. Those with last names such as Kaiser (“emperor”) or König (“king”) were in more managerial positions than those with last names that referred to common occupations, such as Koch (“cook”) or Bauer (“farmer”). This could be the result of associative reasoning, a psychological theory describing a type of thinking in which people automatically link emotions and previous knowledge with similar words or phrases.

If you are a boy with a girl’s name, you could be more likely to be suspended from school….

For his 2005 study, University of Florida economics professor David Figlio studied a large Florida school district from 1996 to 2000 and found that boys with names most commonly given to girls misbehaved more in middle school and were more likely to disrupt their peers. He also found that their behavioral problems were linked with increased disciplinary problems and lower test scores.

If you are a woman with a gender-neutral name, you may be more likely to succeed in certain fields….

According to The Atlantic, in male-dominated fields such as engineering and law, women with gender-neutral names may be more successful. One study found that women with “masculine names” like Leslie, Jan, or Cameron tended to be more successful in legal careers.

Men with shorter first names are overrepresented in the c-suite.

In 2011, LinkedIn analyzed more than 100 million user profiles to find out which names are most associated with the CEO position. The most common names for men were short, often one-syllable names like Bob, Jack, and Bruce. A name specialist speculates that men in power may use nicknames to offer a sense of friendliness and openness.

Women at the top are more likely to use their full names….

In the same study, LinkedIn researchers found that the most common names of female CEOs include Deborah, Cynthia, and Carolyn. Unlike the men, women may use their full names in an attempt to project professionalism and gravitas, according to the report. …

http://www.businessinsider.com/how-your-name-affects-your-success-2015-8

A Michigan State University study finds that the names of Black males affect their life expectancy.  https://www.sciencedaily.com/releases/2016/03/160326105659.htm

Mojtaba Arvin wrote in the Machine Learning article, The robot that became racist: AI that learnt from the web finds white-sounding names ‘pleasant’ and …

Humans look to the power of machine learning to make better and more effective decisions.

However, it seems that some algorithms are learning more than just how to recognize patterns – they are being taught how to be as biased as the humans they learn from.

Researchers found that a widely used AI characterizes black-sounding names as ‘unpleasant’, which they believe is a result of our own human prejudice hidden in the data it learns from on the World Wide Web.

Machine learning has been adopted to make a range of decisions, from approving loans to determining what kind of health insurance people receive, reports Jordan Pearson with Motherboard.

A recent example was reported by ProPublica in May, when an algorithm used by officials in Florida rated a seasoned white criminal as a lower risk of committing a future crime than a black offender with only misdemeanors on her record.

Now, researchers at Princeton University have reproduced a stockpile of documented human prejudices in an algorithm using text pulled from the internet.

HOW A ROBOT BECAME RACIST

Princeton University conducted a word association task with the popular algorithm GloVe, an unsupervised learning model that uses online text to understand human language.

The team gave the AI words like ‘flowers’ and ‘insects’ to pair with other words that the researchers defined as ‘pleasant’ or ‘unpleasant’, like ‘family’ or ‘crash’ – which it did successfully.

Then the algorithm was given a list of white-sounding names, like Emily and Matt, and black-sounding ones, such as Ebony and Jamal, and it was prompted to do the same word association.

The AI linked the white-sounding names with ‘pleasant’ and black-sounding names as ‘unpleasant’.

Princeton’s results do not just show that datasets are polluted with prejudices and assumptions; they show that the algorithms currently used by researchers reproduce humans’ worst values – racism and prejudice…
https://www.artificialintelligenceonline.com/19050/the-robot-that-became-racist-ai-that-learnt-from-the-web-finds-white-sounding-names-pleasant-and/

See, The robot that became racist: AI that learnt from the web finds white-sounding names ‘pleasant’ and black-sounding names ‘unpleasant’     http://www.dailymail.co.uk/sciencetech/article-3760795/The-robot-racist-AI-learnt-web-finds-white-sounding-names-pleasant-black-sounding-names-unpleasant.html
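The word-association task described in the article boils down to comparing word vectors with cosine similarity: a name is “associated” with whichever attribute word its vector points closest to. Here is a minimal sketch of that idea; the vectors below are invented toy numbers for illustration, not actual GloVe embeddings.

```python
import numpy as np

# Toy 3-dimensional "embeddings" standing in for real GloVe vectors.
# The words and numbers here are illustrative assumptions, not GloVe data.
embeddings = {
    "flowers":    np.array([0.9, 0.1, 0.0]),
    "insects":    np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def cosine(u, v):
    """Cosine similarity: near 1.0 means the vectors point the same way."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def closest_attribute(word, attributes):
    """Return the attribute word whose vector is most similar to `word`."""
    return max(attributes, key=lambda a: cosine(embeddings[word], embeddings[a]))

print(closest_attribute("flowers", ["pleasant", "unpleasant"]))  # pleasant
print(closest_attribute("insects", ["pleasant", "unpleasant"]))  # unpleasant
```

With real embeddings trained on web text, the same comparison is what links white-sounding names with ‘pleasant’ and black-sounding names with ‘unpleasant’.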

Here is a portion of the draft:

Semantics derived automatically from language corpora necessarily contain human biases

Aylin Caliskan-Islam¹, Joanna J. Bryson¹,², and Arvind Narayanan¹

¹Princeton University

²University of Bath

Address correspondence to aylinc@princeton.edu, bryson@conjugateprior.org, arvindn@cs.princeton.edu.

Draft date August 25, 2016.

ABSTRACT

Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunately characterizes many human institutions. Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day. We replicate a spectrum of standard human biases as exposed by the Implicit Association Test and other well-known psychological studies. We replicate these using a widely used, purely statistical machine-learning model—namely, the GloVe word embedding—trained on a corpus of text from the Web. Our results indicate that language itself contains recoverable and accurate imprints of our historic biases, whether these are morally neutral as towards insects or flowers, problematic as towards race or gender, or even simply veridical, reflecting the status quo for the distribution of gender with respect to careers or first names. These regularities are captured by machine learning along with the rest of semantics. In addition to our empirical findings concerning language, we also contribute new methods for evaluating bias in text, the Word Embedding Association Test (WEAT) and the Word Embedding Factual Association Test (WEFAT). Our results have implications not only for AI and machine learning, but also for the fields of psychology, sociology, and human ethics, since they raise the possibility that mere exposure to everyday language can account for the biases we replicate here…

http://randomwalker.info/publications/language-bias.pdf
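The WEAT mentioned in the abstract measures how strongly two sets of target words (e.g. two groups of names) associate with two sets of attribute words (e.g. pleasant vs. unpleasant terms), reported as a Cohen’s-d-style effect size. Below is a hedged sketch of that effect-size computation, run on invented toy vectors rather than real embeddings:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def weat_effect_size(X, Y, A, B):
    """WEAT effect size for target sets X, Y and attribute sets A, B:
    s(w) = mean_a cosine(w, a) - mean_b cosine(w, b)
    d    = (mean_x s(x) - mean_y s(y)) / std of s over X union Y"""
    s = lambda w: np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])
    sx, sy = [s(x) for x in X], [s(y) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy 2-D vectors: one target group clusters near the "pleasant" axis,
# the other near the "unpleasant" axis. All values are made up.
pleasant   = [np.array([1.0, 0.0])]
unpleasant = [np.array([0.0, 1.0])]
names_x    = [np.array([0.9, 0.1]), np.array([0.8, 0.2])]
names_y    = [np.array([0.1, 0.9]), np.array([0.2, 0.8])]

d = weat_effect_size(names_x, names_y, pleasant, unpleasant)
print(round(d, 2))  # ≈ 1.72, a large positive association
```

In the paper, the same statistic applied to real GloVe vectors is what quantifies the name–pleasantness bias described above.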

See, Top 20 ‘Whitest’ and ‘Blackest’ Names      http://abcnews.go.com/2020/top-20-whitest-blackest-names/story?id=2470131

Moi wrote in Black people MUST develop a culture of success: Michigan State revokes a football scholarship because of raunchy rap video.

The question must be asked, who is responsible for MY or YOUR life choices? Let’s get real, certain Asian cultures kick the collective butts of the rest of Americans. Why? It’s not rocket science. These cultures embrace success traits of hard work, respect for education, strong families, and a reverence for success and successful people. Contrast the culture of success with the norms of hip-hop and rap oppositional culture. See, Hip-hop’s Dangerous Values
http://www.freerepublic.com/focus/f-news/1107107/posts and Hip-Hop and rap represent destructive life choices: How low can this genre sink? https://drwilda.com/2013/05/01/hip-hop-and-rap-represent-destructive-life-choices-how-low-can-this-genre-sink/

Resources:

Culture of Success
http://www.cato.org/publications/commentary/culture-success

How Do Asian Students Get to the Top of the Class?
http://www.greatschools.org/parenting/teaching-values/481-parenting-students-to-the-top.gs

Related:

Is there a model minority?
https://drwilda.com/2012/06/23/is-there-a-model-minority/

Where information leads to Hope. © Dr. Wilda.com

Dr. Wilda says this about that ©

Blogs by Dr. Wilda:

COMMENTS FROM AN OLD FART©
https://drwildaoldfart.wordpress.com/

Dr. Wilda Reviews ©
http://drwildareviews.wordpress.com/

Dr. Wilda ©
https://drwilda.com/