
Nature article: Does the Graduate Record Exam pose a barrier to grad school admission for women and people of color?

16 Jun

The Council of Graduate Schools report Graduate Enrollment and Degrees: 2002 to 2012, by Leila M. Gonzales, Jeffrey R. Allum, and Robert S. Sowell, describes enrollment in U.S. graduate schools. http://cgsnet.org/ckfinder/userfiles/files/GEDReport_2012.pdf California State University, Long Beach has an excellent description of the application process and a good description of the tests required:

Admissions Examinations
• Graduate Record Exam (GRE)
http://www.gre.org
• Miller Analogies Test (MAT)
http://www.milleranalogies.com
• Law School Admissions Test (LSAT)
http://www.lsac.org
• Graduate Management Admissions Test (GMAT)
http://www.mba.com/MBA
• Medical College Admissions Test (MCAT)
http://www.aamc.org
• Dental Aptitude Test (DAT)
http://www.ada.org
• Veterinary Aptitude Test (VAT)
aavmc.org
• Optometry Admissions Test (OAT)
http://www.opted.org
• Pharmacy College Admissions Test (PCAT)
http://www.pcatweb.info
• Teacher Testing (PRAXIS)
http://www.ets.org/praxis
Plan to take the appropriate entrance examination during your junior year or at the latest during the fall of your senior year if you plan to go on to graduate school immediately after college…. http://careers.csulb.edu/majors_and_careers/applying_to_graduate_school.htm

Many women and students of color seem to be eliminated from admission to top graduate science programs by the Graduate Record Exam.

Manhattan Prep describes the Graduate Record Exam, or GRE:

Basics: What is the GRE®?
The Graduate Record Examination (GRE®) is a standardized test used by graduate programs to help determine who gets in and who receives grants and fellowships. The exam comes in two types: the general exam, which covers a range of non-specific skills developed over a long period of time and years of schooling, and the subject tests, which test depth of knowledge in eight different fields. Worldwide, about half a million people take the general test each year, while a much smaller number takes the subject exams.
The general test is computer-based and consists of three sections: verbal, quantitative, and analytical writing. Verbal and quant are each scored on a scale of 130-170, in 1-point increments, plus a percentile rank. The writing section is scored on a scale of 0-6, in half-point increments. The test does not cover specifics in any field of study, but rather a set of skills thought to be important for prospective grad students.
The subject tests, on the other hand, are paper-based and administered 3 times a year. Unlike the general test, the subject test assumes extensive knowledge. Tests cover the following areas: Biochemistry, Cell and Molecular Biology; Biology; Chemistry; Computer Science; Literature in English; Mathematics; Physics; and Psychology. To determine whether you should take the general test or one of these subject-specific exams, you’ll need to check with the programs where you’re applying. For any field without a subject test, you’ll take the general exam…. https://www.manhattanprep.com/gre/gre-info.cfm
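To make the scoring scales described in the passage above concrete, here is a minimal sketch in Python; the function name and structure are my own illustration, not an ETS or Manhattan Prep tool. It simply checks whether a reported general-test score fits those scales: verbal and quantitative from 130 to 170 in 1-point steps, and analytical writing from 0 to 6 in half-point steps.

```python
# Illustrative helper (assumed, not an official ETS tool): check that a
# GRE general-test score report fits the scales described above.

def is_valid_general_score(verbal: int, quantitative: int, writing: float) -> bool:
    """Return True if all three section scores fall on their scales."""
    # Verbal and quantitative: integers from 130 to 170.
    sections_ok = all(isinstance(s, int) and 130 <= s <= 170
                      for s in (verbal, quantitative))
    # Analytical writing: 0 to 6 in half-point increments.
    writing_ok = 0 <= writing <= 6 and writing * 2 == int(writing * 2)
    return sections_ok and writing_ok

# A plausible score report passes; a quant score above 170 does not.
print(is_valid_general_score(158, 162, 4.5))  # True
print(is_valid_general_score(158, 175, 4.5))  # False
```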

An article questions the influence of the GRE in the graduate admissions process.

Charlie Tyson reported in the Inside Higher Ed article, Is the GRE Too Influential?

The low numbers of female and minority students in science, technology, engineering and mathematics (STEM) fields have been fodder for much debate. A new analysis argues that the GRE, a standardized test that most U.S. graduate schools require, is in part to blame.
An article published in the June 12 issue of Nature contends that U.S. universities place too much stress on the GRE when making decisions about graduate admissions. Casey Miller, an associate professor of physics at the University of South Florida, and Keivan Stassun, a professor of physics and astronomy at Vanderbilt University and Fisk University, write that admissions committees, by focusing too squarely on the GRE, are shortchanging women and under-represented minorities and also failing to admit the best students into their Ph.D. programs.
The GRE is a poor predictor of success in the sciences, Miller and Stassun argue. Studies find “only a weak correlation” between high GRE scores and ultimate success in STEM fields.
The test does, however, reflect traits that are unrelated to scholarly potential – such as socioeconomic status, the authors say. (The SAT, a standardized test used in college admissions, perennially receives similar criticisms that high performance on the test is an artifact of family wealth.) The physicists put it bluntly: “the GRE is a better indicator of sex and skin colour than of ability and ultimate success.”
On the quantitative portion of the test, women in the physical sciences score 80 points lower, on average, than men do, according to data from the Educational Testing Service, the company that administers the GRE. African-American test-takers score 200 points lower than whites on the quantitative section.
Some admissions committees, Miller and Stassun report, filter applications using GRE scores. For example, a committee might reject any applicant who has scored below 700 on the GRE’s 800-point quantitative section. This use of GRE scores threatens to delete otherwise qualified female, black and Latino candidates from the applicant pool, Miller and Stassun argue.
The ETS’s guidelines explicitly advise against using cut-off scores for admissions.
The authors argue that admissions committees should attempt to identify applicants who demonstrate “grit and diligence” by (for example) conducting interviews instead of relying so heavily on GRE scores….
http://www.insidehighered.com/news/2014/06/16/stem-graduate-programs-place-too-much-emphasis-gre-scores-physicists-say

Here is the article from Nature:

A test that fails
Casey Miller & Keivan Stassun
Nature 510, 303-304 (2014)
doi:10.1038/nj7504-303a
Published online 11 June 2014
This article was originally published in the journal Nature.
A standard test for admission to graduate school misses potential winners, say Casey Miller and Keivan Stassun.
Universities in the United States rely too heavily on the graduate record examinations (GRE) — a standardized test introduced in 1949 that is an admissions requirement for most US graduate schools. This practice is poor at selecting the most capable students and severely restricts the flow of women and minorities into the sciences.
We are not the only ones to reach this conclusion. William Sedlacek, professor emeritus of education at the University of Maryland, College Park, who has written extensively on the issue, notes that studies find only a weak correlation between the test and ultimate success in science, technology, engineering and maths (STEM) fields. De-emphasizing the GRE and augmenting admissions procedures with measures of other attributes — such as drive, diligence and the willingness to take scientific risks — would not only make graduate admissions more predictive of the ability to do well but would also increase diversity in STEM.
Test disparities
The GRE, like most standardized tests, reflects certain demographic characteristics of test-takers — such as family socioeconomic status — that are unrelated to their intellectual capacity or academic preparation. The exam’s ‘quantitative score’ — the portion measuring maths acumen, which is most commonly scrutinized in admissions to STEM PhD programmes — correlates closely with gender and ethnicity (see ‘The great divide’). The effect is powerful. According to data from Educational Testing Service (ETS), based in Princeton, New Jersey, the company that administers the GRE, women score 80 points lower on average in the physical sciences than do men, and African Americans score 200 points below white people. In simple terms, the GRE is a better indicator of sex and skin colour than of ability and ultimate success.
These correlations and their magnitude are not well known to graduate-admissions committees, which have a changing rota of faculty members. Compounding the problem, some admissions committees use minimum GRE scores to rapidly filter applications; for example, any candidate scoring below 700 on the 800-point quantitative test section may be discarded. Using GRE scores to filter applicants in this way is a violation of ETS’s own guidelines.
This problem is rampant. If the correlation between GRE scores and gender and ethnicity is not accounted for, imposing such cut-offs adversely affects women and minority applicants. For example, in the physical sciences, only 26% of women, compared with 73% of men, score above 700 on the GRE Quantitative measure. For minorities, this falls to 5.2%, compared with 82% for white and Asian people.
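To see what such a cut-off does in practice, here is a minimal sketch in Python that applies the pass rates quoted above to a hypothetical pool of 100 applicants per group; the pool size is an assumption made purely for illustration and is not a figure from the article.

```python
# Illustrative sketch: apply the quoted shares of physical-science test-takers
# scoring above 700 on the old 800-point quantitative section to a
# hypothetical pool of 100 applicants per group (pool size is assumed).

pass_rates_above_700 = {
    "women": 0.26,
    "men": 0.73,
    "under-represented minorities": 0.052,
    "white and Asian test-takers": 0.82,
}

POOL_PER_GROUP = 100  # hypothetical applicants in each group

print("Applicants surviving a quantitative cut-off of 700:")
for group, rate in pass_rates_above_700.items():
    survivors = round(POOL_PER_GROUP * rate)
    print(f"  {group:30s} {survivors:3d} of {POOL_PER_GROUP}")
```

Even on these toy numbers, the filter discards roughly three of every four women and about nineteen of every twenty under-represented minority applicants before anything else in their files is read.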
The misuse of GRE scores to select applicants may be a strong driver of the continuing under-representation of women and minorities in graduate school. Indeed, women earn barely 20% of US physical-sciences PhDs, and under-represented minorities — who account for 33% of the US university-age population — earn just 6%. These percentages are striking in their similarity to the percentage of students who score above 700 on the GRE quantitative measure.
Why is the GRE misused? Admissions committees are busy, and numerical rankings are easy to sort. We believe that faculty members also often presume that higher scores imply that the test-taker has a greater ability to become a PhD-level scientist. Yet research by ETS indicates that the predictive validity of the GRE tests is limited to first-year graduate-course grades, and even that correlation is meagre in maths-intensive STEM fields.
Why should graduate-admissions committees care about fixing the problem? First, diversity, in the form of individuals with different perspectives, backgrounds and experiences, is a key component of innovation and problem solving, a concept that business and industry have come to recognize. Less diversity in STEM graduate programmes means slower progress in tackling today’s scientific and technical challenges. Second, the overall PhD completion rate in US STEM graduate programmes is a disappointing 50%. Although graduate programmes certainly produce successful students who continue on to productive science careers, we think that many faculty members would agree that such a low PhD completion rate is a poor return on the investment in recruiting and training students. Indeed, STEM graduate programmes are failing not only from the diversity standpoint, but also from a success standpoint.
Alternative selection
So what should universities do? Instead of filtering by GRE scores, graduate programmes can select applicants on the basis of skills and character attributes that are more predictive of doing well in scientific research and of ultimate employability in the STEM workforce. Appraisers should look not only at indicators of previous achievements, but also at evidence of ability to overcome the tribulations of becoming a PhD-level scientist.
A few innovative PhD programmes, including the bridge programmes at the University of South Florida in Tampa and Fisk–Vanderbilt in Nashville, Tennessee (in which we are involved), are doing this. They have achieved completion rates above 80%, well above the national average, and are greatly boosting participation by women and minorities (see Nature 504, 471–473; 2013). The admissions process includes an interview that examines college and research experiences, key relationships, leadership experience, service to community and life goals. The result is a good indication of the individual’s commitment to scientific research and a good assessment of traits such as maturity, perseverance, adaptability and conscientiousness atop a solid academic foundation. The combination of academic aptitude and these other competencies points to the likelihood of high achievement in graduate school and in a STEM career.
How have the students admitted to these courses performed? In the Fisk–Vanderbilt programme, 81% of the 67 students who have entered the programme — including 56 under-represented minorities and 35 women — have earned, or are making good progress towards, their PhDs. And all students who have completed PhDs are employed in the STEM workforce as postdocs, university faculty members or staff scientists in national labs or industry. From the standpoint of optimal outcomes — earning a PhD and obtaining employment in the STEM workforce — the GRE has proved irrelevant. Indeed, 85% of these young scientists would have been eliminated from consideration for PhD programmes by a GRE quantitative cut-off score of 700.
The only downside is that interviews take about 30 minutes each. But the number of interviews need not be large, and the tremendous insight garnered justifies the time. ETS is even marketing a tool for referees to evaluate applicants’ personal attributes. The company developed it in part as a response to calls from applicants and graduate programmes for alternative measures of student potential for long-term achievement that is not captured by GRE.
We often hear admissions committee members say, ‘We would admit women and minorities if they were qualified’. This mindset reflects long-standing admissions practices that systematically, if inadvertently, filter out women and minorities. At the same time, these practices are no better than a coin flip at identifying candidates with the potential — and the mettle — to earn a PhD.
Let us be frank: we believe that many STEM faculty members on admissions committees and upper-level administrators hold a deep-seated and unfounded belief that these test scores are good measures of ability, of potential for doing well in graduate school and of long-term potential as a scientist, and that students who score poorly on standardized exams are not likely to become PhD-level scientists. These assumptions are false.
This is not a call to admit unqualified students in the name of social good. This is a call to acknowledge that the typical weight given to GRE scores in admissions is disproportionate. If we diminish reliance on GRE and instead augment current admissions practices with proven markers of achievement, such as grit and diligence, we will make our PhD programmes more inclusive and will more efficiently identify applicants with potential for long-term success as researchers. Isn’t that what graduate school is about?

Dave Jameson wrote at the American Psychological Association site in the article, The GRE: What it tells us, and what it doesn’t:

Fortunately, the question of the GRE’s validity has spawned its own subgenre of academic literature. Culled from the empirical data published over the last decade, here are a few things we know — and don’t know — about how well this examination predicts the future.
• There’s no way to know whether a low GRE score translates into failure. Students with the lowest GRE scores aren’t admitted into graduate psychology programs, so they never become psychologists. As a result, there’s no way for researchers to know whether the very lowest-scoring students would have gone on to prove their predictors wrong. “It’s certainly true that there’s a restriction of range,” says Robert Sternberg, PhD, a psychologist and provost at Oklahoma State University who’s examined the GRE in his research. “If you had [greater] range, the predictive value of these studies would increase.” This catch-22 makes some researchers wonder why the GRE looms so large in admissions decisions to begin with.
• GRE scores do help reveal which students will do well in the classroom and which won’t. Many studies have found that students with lower GRE scores are more likely to fail their preliminary examinations. Students with total scores higher than 1,167 usually end up with better grade-point averages than their classmates, more published papers and better ratings from faculty, according to a 2004 study by Dale Phillips, PhD, and Kristen McAuliffe in the School Psychologist Newsletter (Vol. 52, No. 2). “Based on the data that’s out there, the GRE is consistently the strongest [predictor] we have of student success,” says Nathan Kuncel, PhD, author of a 2001 GRE meta-analysis published in Psychological Bulletin (Vol. 127, No. 1).
• The GRE’s predictive powers diminish over time. In his 1997 study published in American Psychologist, (Vol. 52, No. 6) Sternberg found that GRE scores tell us most about how students will perform in the first year of grad school. That’s because “you need the same kinds of skills in introductory courses as you do for the GRE,” he says — namely, the basics, such as general reading and quantitative skills — but not necessarily imagination. As grad school grinds on, more abstract skills become increasingly important — for instance, intuiting which journal would be most likely to accept a particular kind of paper. “The GRE doesn’t measure that,” says Sternberg.
• GRE scores are less reliable when it comes to predicting whether a student will eventually complete a psychology program. The exams may predict classroom performance fairly well, but grades aren’t everything. Several researchers have found that the GRE tells us less about whether someone will finish school. Phillips and McAuliffe, for instance, found that GRE scores didn’t differ much between students who eventually graduated and students who didn’t. “Nothing predicts finishing very well,” says Kuncel. In many cases, students drop out because of life circumstances — leaving to take care of an ailing parent, for example. Phillips and McAuliffe’s study supports that claim: Only 9 percent of students who dropped out said it was because they couldn’t hack the coursework.
• The GRE’s subject test in psychology tells us the most about a student’s potential. Kuncel’s meta-analysis found that the subject test outperformed the verbal, quantitative and analytical tests when it came to predicting students’ grades and whether they’ll eventually earn a degree. “That only makes sense,” says Stephen J. Dollinger, PhD, a psychology professor at Southern Illinois University who’s studied the validity of the GRE. “The student who enters graduate school knowing more psychology should have an easier time starting a thesis [and] passing prelims.”

But the subject test — usually 205 multiple-choice questions — measures more than just psychology knowledge, says Kuncel. A student who is especially passionate about psychology may outperform a fellow student who has been deemed brighter by the GRE’s verbal and quantitative tests. Still, most master’s programs and about half of doctoral programs in psychology don’t insist that you take it. According to Kuncel, many admissions programs probably worry that they’d alienate prospective students by giving them another hoop to jump through.

“That’s the irony,” he says. “The best single predictor is also not required at many programs.” Still, Kuncel “highly recommend[s]” that prospective students take the test anyway, if only to convey their enthusiasm for the field.
In the end, your GRE score will certainly affect which program you get into, but it won’t necessarily predict how well you do once you get there…. http://www.apa.org/gradpsych/2011/01/gre.aspx

See also: Decide Between GMAT, GRE http://www.usnews.com/education/blogs/mba-admissions-strictly-business/2011/07/29/decide-between-gmat-gre

The question is how to teach critical thinking skills. David Carnes wrote the excellent Livestrong article, How to Build Critical Thinking Skills in Children. http://www.livestrong.com/article/167563-how-to-build-critical-thinking-skills-in-children/

Related:

What, if anything, do education tests mean? https://drwilda.wordpress.com/2011/11/27/what-if-anything-do-education-tests-mean/

Complete College America report: The failure of remediation https://drwilda.wordpress.com/2012/06/21/complete-college-america-report-the-failure-of-remediation/

What the ACT college readiness assessment means https://drwilda.com/2012/08/25/what-the-act-college-readiness-assessment-means/

The importance of the National Assessment of Educational Progress https://drwilda.com/2012/09/12/the-importance-of-the-national-assessment-of-educational-progress/

Where information leads to Hope. © Dr. Wilda.com

Dr. Wilda says this about that ©

Blogs by Dr. Wilda:

COMMENTS FROM AN OLD FART© http://drwildaoldfart.wordpress.com/

Dr. Wilda Reviews © http://drwildareviews.wordpress.com/

Dr. Wilda © https://drwilda.com/