Tag Archives: High Stakes Testing

Dishonesty on the part of adults in schools

1 Apr

Rhonda Cook reports in the Atlanta Journal-Constitution article, APS officials to begin surrendering, about a recent example of adults cheating to produce higher test scores:

Thirty-five former Atlanta public school employees were named in a 65-count indictment returned Friday alleging racketeering, false statements and writings and other charges related to alleged cheating on standardized test scores and the covering up of those actions.

Retired Atlanta school Superintendent Beverly Hall, some of her top deputies, principals, teachers and a secretary have until Tuesday to turn themselves in. Once processed in the jail, they will have to go before a magistrate, where bond is discussed. The grand jury said Hall’s bond should be set at $7.5 million, but the judge can set a lesser amount. http://www.ajc.com/news/news/aps-officials-to-begin-surrendering/nW72c/

See Standardized Test Cheating: http://www.huffingtonpost.com/news/standardized-test-cheating

Moi wrote about cheating teachers in ACT to assess college readiness for 3rd-10th Grades:

There have been a number of cheating scandals over the past couple of years. Benjamin Herold has a riveting blog post at The Notebook, which describes itself as “An independent voice for parents, educators, students, and friends of Philadelphia Public Schools.” In the post, Confession of A Cheating Teacher, Herold reports:

She said she knows she’s a good teacher.

But she still helped her students cheat.

“What I did was wrong, but I don’t feel guilty about it,” said a veteran Philadelphia English teacher who shared her story with the Notebook/NewsWorks.

During a series of recent interviews, the teacher said she regularly provided prohibited assistance on the Pennsylvania System of School Assessment (PSSA) exams to 11th graders at a city neighborhood high school. At various times, she said, she gave the students definitions for unfamiliar words, discussed with students reading passages they didn’t understand, and commented on their writing samples.

On a few occasions, she said, she even pointed them to the correct answers on difficult questions.

“They’d have a hard time, and I’d break it down for them,” said the teacher matter-of-factly.

Such actions are possible grounds for termination. As a result, the Notebook/NewsWorks agreed to protect her identity.

The teacher came forward following the recent publication of a 2009 report that identified dozens of schools across Pennsylvania and Philadelphia that had statistically suspicious test results. Though her school was not among those flagged, she claims that adult cheating there was “rampant.”

The Notebook/NewsWorks is also withholding the name of her former school because the details of her account have been only partially corroborated.

But her story seems worth telling.

During multiple conversations with the Notebook/NewsWorks, both on the phone and in person, the teacher provided a detailed, consistent account of her own actions to abet cheating. Her compelling personal testimonial highlighted frequently shared concerns about the conditions that high-stakes testing has created in urban public schools. The Notebook and NewsWorks believe that her confession sheds important light on the recent spate of cheating scandals across the country….

One might ask what the confessions of a cheating teacher have to do with the announcement by ACT that it will begin offering a series of assessments to measure skills needed in high school and college. Although the assessment is in the early stages of development, one could question whether it will turn into a high-stakes test that puts pressure on students, teachers, and schools. Admittedly, it is early. https://drwilda.com/2012/07/04/act-to-assess-college-readiness-for-3rd-10th-grades/

Valerie Strauss reports in the Washington Post article, 50 ways adults in schools ‘cheat’ on standardized tests:

Pre-Testing
Fail to store test materials securely
Encourage teachers to view test forms before they are administered
Teach to the test by ignoring subjects not on exam
Drill students on actual test items
Share test items on Internet before administration
Practice on copies of previously administered “secure” tests
Exclude likely low-scorers from enrolling in school
Hold back low scorers from tested grade
“Leap-frog” promote some students over tested grade
Transfer likely low-scoring students to charter schools with no required tests
Push likely low scorers out of school or enroll them in GED programs
Falsify student identification numbers so low scorers are not assigned to correct demographic group
Urge low-scoring students to be absent on test day
Leave test materials out so students can see them before exam

During Testing
Let high-scorers take tests for others
Overlook “cheat sheets” students bring into classroom
Post hints (e.g., formulas, lists) on walls or whiteboard
Write answers on black/white board, then erase before supervisor arrives
Allow students to look up information on web with electronic devices
Allow calculator use where prohibited
Ignore test-takers copying or sharing answers with each other
Permit students to go to restroom in groups
Shout out correct answers
Use thumbs up/thumbs down signals to indicate right and wrong responses
Tell students to “double check” erroneous responses
Give students notes with correct answers
Read “silent reading” passages out loud
Encourage students who have completed sections to work on others
Allow extra time to complete test
Leave classroom unattended during test
Warn staff if test security monitors are in school
Refuse to allow test security personnel access to testing rooms
Cover doors and windows of testing rooms to prevent monitoring
Give accommodations to students who didn’t officially request them

Post-Testing
Allow students to “make up” portions of the exam they failed to complete
Invite staff to “clean up” answer sheets before transmittal to scoring company
Permit teachers to score own students’ tests
Fill in answers on items left blank
Re-score borderline exams to “find points” on constructed response items
Erase erroneous responses and insert correct ones
Provide false demographic information for test takers to assign them to wrong categories
Fail to store completed answer sheets securely
Destroy answer sheets from low-scoring students
Report low scorers as having been absent on testing day
Share content with educators/students who have not yet taken the test
Fail to perform data forensics on unusual score gains
Ignore “flagged” results from erasure analysis
Refuse to interview personnel with potential knowledge of improper practices
Threaten discipline against testing impropriety whistle blowers
Fire staff who persist in raising questions
Fabricate test security documentation for state education department investigators
Lie to law enforcement personnel
http://www.washingtonpost.com/blogs/answer-sheet/wp/2013/03/31/50-ways-adults-in-schools-cheat-on-standardized-tests/

Here is the press release from Fair Test:

FairTest Press Release: Standardized Exam Cheating In 37 States And D.C.; New Report Shows Widespread Test Score Corruption

Submitted by fairtest on March 27, 2013 – 11:32pm

for further information:
Bob Schaeffer (239) 395-6773
cell  (239) 699-0468

for immediate release, Thursday, March 28, 2013

STANDARDIZED EXAM CHEATING CONFIRMED IN 37 STATES AND D.C.;
NEW REPORT SHOWS WIDESPREAD TEST SCORE CORRUPTION

As an Atlanta grand jury considers indictments against former top school officials in a test cheating scandal and the annual wave of high-stakes standardized exams begins across the nation, a new survey reports confirmed cases of test score manipulation in at least 37 states and Washington, D.C. in the past four academic years. The analysis by the National Center for Fair & Open Testing (FairTest) documents more than 50 ways schools improperly inflated their scores during that period.

“Across the U.S., strategies that boost scores without improving learning — including outright cheating, narrow teaching to the test and pushing out low-scoring students — are widespread,” said FairTest Public Education Director Bob Schaeffer. “These corrupt practices are inevitable consequences of the politically mandated overuse and misuse of high-stakes exams.”

Among the ways FairTest found test scores have been manipulated in communities such as Atlanta, Baltimore, Cincinnati, Detroit, El Paso, Houston, Los Angeles, Newark, New York City, Philadelphia and the District of Columbia:

  • Encourage teachers to view upcoming test forms before they are administered.
  • Exclude likely low-scorers from enrolling in school.
  • Drill students on actual upcoming test items.
  • Use thumbs-up/thumbs-down signals to indicate right and wrong responses.
  • Erase erroneous responses and insert correct ones.
  • Report low-scorers as having been absent on testing day.

Schaeffer continued, “The solution to the school test cheating problem is not simply stepped up enforcement. Instead, testing misuses must end because they cheat the public out of accurate data about public school quality at the same time they cheat many students out of a high-quality education.”

“The cheating explosion is one of the many reasons resistance to high-stakes testing is sweeping the nation,” Schaeffer concluded.

– – 3 0 – –

Attached:

CheatingReportsList.pdf (113.99 KB)
Cheating-50WaysSchoolsManipulateTestScores.pdf (171.74 KB)

Moi wrote in The military mirrors society:

Here’s today’s COMMENT FROM AN OLD FART: Despite the fact that those in high places are routinely outed for lapses in judgment and behavior unbecoming the office or position they have been entrusted with, many continue to feign surprise at the lapse. Really, many are feigning surprise at the stupidity of the seemingly bright and often brilliant folk who now have to explain to those close to them, and to the public, the stupidity which brought their lives to ruin. Somehow “the devil made me do it” does not quite fully explain the hubris. The hubris comes from a society and culture where ME is all that counts and there are no eternals. There is only what exists in this moment. http://drwildaoldfart.wordpress.com/2012/11/13/the-military-mirrors-society/

Related:

Cheating in schools goes high-tech https://drwilda.com/2011/12/21/cheating-in-schools-goes-high-tech/

What, if anything, do education tests mean? https://drwilda.wordpress.com/2011/11/27/what-if-anything-do-education-tests-mean/

Suing to get a better high school transcript after cheating incident

https://drwilda.com/tag/parents-who-sued-school-over-sons-punishment-for-cheating-receive-hate-messages/

Where information leads to Hope. © Dr. Wilda.com

Dr. Wilda says this about that ©

Blogs by Dr. Wilda:

COMMENTS FROM AN OLD FART © http://drwildaoldfart.wordpress.com/

Dr. Wilda Reviews © http://drwildareviews.wordpress.com/

Dr. Wilda © https://drwilda.com/

Studies: Current testing may not adequately assess student abilities

22 Nov

Moi wrote about testing in More are questioning the value of one-size-fits-all testing:

Joy Resmovits has an excellent post at Huffington Post. In Standardized Tests’ Measures of Student Performance Vary Widely: Study Resmovits reports:

The report, written by the Education Department’s National Center for Education Statistics, found that the definition of proficiency on standardized tests varies widely among states, making it difficult to assess and compare student performance. The report looked at states’ standards on exams and found that some states set much higher bars for student proficiency in particular subjects.

The term “proficiency” is key because the federal No Child Left Behind law mandates that 100 percent of students must be “proficient” under state standards by 2014 — a goal that has been universally described as impossible to reach….

They found many states deemed students “proficient” by their own standards, but those same students would have been ranked as only “basic” — defined as “partial mastery of knowledge and skills fundamental for proficient work at each grade” — under NAEP.

“The implication is that students of similar academic skills but residing in different states are being evaluated against different standards for proficiency in reading and mathematics,” the report concludes….

Here is the report citation:

Mapping State Proficiency Standards Onto NAEP Scales: Variation and Change in State Standards for Reading and Mathematics, 2005-2009

August 10, 2011

Author: Victor Bandeira de Mello

Download the complete report in a PDF file for viewing and printing (1,959K PDF).

W.M. Chambers cautioned about testing in a 1964 Journal of General Education article, Testing And Its Relationship To Educational Objectives. He questioned whether testing supported the objectives of education rather than directing the objectives.

Here is the complete citation:

Testing And Its Relationship To Educational Objectives

W. M. Chambers

The Journal of General Education
Vol. 16, No. 3 (October 1964), pp. 246-249
(article consists of 4 pages)

Published by: Penn State University Press

Stable URL: http://www.jstor.org/stable/27795936

Sarah D. Sparks writes in the Education Week article, Today’s Tests Seen as Bar to Better Assessment:

The use of testing in school accountability systems may hamstring the development of tests that can actually transform teaching and learning, experts from a national assessment commission warn.

Members of the Gordon Commission on the Future of Assessment in Education, speaking at the annual meeting of the National Academy of Education here Nov. 1-3, said that technological innovations may soon allow much more in-depth data collection on students, but that current testing policy calls for the same test to fill too many different and often contradictory roles.

The nation’s drive to develop standards-based accountability for schools has led to tests that, “with only few exceptions, systematically overrepresent basic skills and knowledge and omit the complex knowledge and reasoning we are seeking for college and career readiness,” the commission writes in one of several interim reports discussed at the Academy of Education meeting.

“We strongly believe that assessment is a primary component of education, … [part of] the trifecta of teaching, learning, and testing,” said Edmund W. Gordon, the chairman of the commission and a professor emeritus of psychology at Yale University and Teachers College, Columbia University.

The two-year study group launched in 2011 with initial funding from the Princeton, N. J.-based Educational Testing Service and a membership that reads like a who’s who of education research and policy. Its 32 members include: author and education historian Diane Ravitch of New York University, former West Virginia Gov. Bob Wise of the Washington-based Alliance for Excellent Education, and cognitive psychologist Lauren Resnick of the University of Pittsburgh, among others.

The panel is developing recommendations for both research on new assessments—for the Common Core State Standards and others—and policy for educators on how to use tests appropriately. The final recommendations, expected at the end of the year, will be based on two dozen studies and analyses from experts in testing on issues of methods, student privacy, and other topics….

http://www.edweek.org/ew/articles/2012/11/14/12tests.h32.html

There are education scholars on all sides of the testing issue.

Moi wrote in What, if anything, do education tests mean?

Every population of kids is different, and they arrive at school at various points on the ready-to-learn continuum. Schools and teachers must be accountable, but there should be various measures for judging teacher effectiveness with a particular population of children. Perhaps more time and effort should be spent developing a strong principal corps and giving principals the training and assistance in evaluation and mentoring techniques. There should be evaluation measures which look at where children are on the learning continuum and design a program to address each child’s needs. https://drwilda.com/2011/11/27/what-if-anything-do-education-tests-mean/

Dr. Wilda says this about that ©

Blogs by Dr. Wilda:

COMMENTS FROM AN OLD FART © http://drwildaoldfart.wordpress.com/

Dr. Wilda Reviews © http://drwildareviews.wordpress.com/

Dr. Wilda © https://drwilda.com/

ACT to assess college readiness for 3rd-10th Grades

4 Jul

There have been a number of cheating scandals over the past couple of years. Benjamin Herold has a riveting blog post at The Notebook, which describes itself as “An independent voice for parents, educators, students, and friends of Philadelphia Public Schools.” In the post, Confession of A Cheating Teacher, Herold reports:

She said she knows she’s a good teacher.

But she still helped her students cheat.

“What I did was wrong, but I don’t feel guilty about it,” said a veteran Philadelphia English teacher who shared her story with the Notebook/NewsWorks.

During a series of recent interviews, the teacher said she regularly provided prohibited assistance on the Pennsylvania System of School Assessment (PSSA) exams to 11th graders at a city neighborhood high school. At various times, she said, she gave the students definitions for unfamiliar words, discussed with students reading passages they didn’t understand, and commented on their writing samples.

On a few occasions, she said, she even pointed them to the correct answers on difficult questions.

“They’d have a hard time, and I’d break it down for them,” said the teacher matter-of-factly.

Such actions are possible grounds for termination. As a result, the Notebook/NewsWorks agreed to protect her identity.

The teacher came forward following the recent publication of a 2009 report that identified dozens of schools across Pennsylvania and Philadelphia that had statistically suspicious test results. Though her school was not among those flagged, she claims that adult cheating there was “rampant.”

The Notebook/NewsWorks is also withholding the name of her former school because the details of her account have been only partially corroborated.

But her story seems worth telling.

During multiple conversations with the Notebook/NewsWorks, both on the phone and in person, the teacher provided a detailed, consistent account of her own actions to abet cheating. Her compelling personal testimonial highlighted frequently shared concerns about the conditions that high-stakes testing has created in urban public schools. The Notebook and NewsWorks believe that her confession sheds important light on the recent spate of cheating scandals across the country….

One might ask what the confessions of a cheating teacher have to do with the announcement by ACT that it will begin offering a series of assessments to measure skills needed in high school and college. Although the assessment is in the early stages of development, one could question whether it will turn into a high-stakes test that puts pressure on students, teachers, and schools. Admittedly, it is early.

Caralee Adams writes in the Education Week article, ACT to Roll Out Career and College Readiness Tests for 3rd-10th Grades:

ACT Inc. announced today that it is developing a new series of assessments for every grade level, from 3rd through 10th, to measure skills needed in college and careers.

The tests, which would be administered digitally and provide instant feedback to teachers, will be piloted in states this fall and are scheduled to be launched in 2014, says Jon Erickson, the president of education for ACT, the Iowa City, Iowa-based nonprofit testing company.

The “next generation” assessment will be pegged to the Common Core State Standards and cover the four areas now on the ACT: English, reading, math, and science.

“It connects all the grades—elementary school through high school—to measure growth and development,” says Erickson. “It informs teaching, as students progress, to intervene at early ages.”

The assessment would look beyond academics to get a complete picture of the whole student, he says. There would be interest inventories for students, as well as assessment of behavioral skills for students and teachers to evaluate.

It will fill a niche as the first digital, longitudinal assessment to connect student performance across grades, both in and out of the classroom, according to the ACT. The hope is to get information on students’ weaknesses and strengths earlier so teachers can make adjustments to improve their chances of success.

ACT has not arrived at a cost for the assessment system, but it intends to offer it in modules for states, districts, or schools to buy to administer to all students. As a nonprofit organization, Erickson says ACT wants to keep pricing affordable and at the lowest price acceptable to states. Teachers could choose to use all or part of the assessment, likely in the classroom during the typical school day. ACT is still field-testing the system so the length of the assessment is not set.

With digital delivery of the test, students would have automatic scoring and real-time assessments, says Erickson. (There would be pencil-and-paper testing to accommodate schools that would not be equipped with computers.) The assessment would include a combination of multiple-choice, open-response, and interactive items that would incorporate some creativity into testing, he adds. It would be both formative and summative for accountability purposes….

Just how states might use the new assessment is uncertain. It could replace the current state test, be given as a lead-up to the test, or used as a supplement for it, he says.

ACT developed the test in response to needs expressed by states to improve college and career readiness, says Erickson. Providing integrated testing from elementary to high school, with the ACT as the capstone in 11th grade, “will be a game changer,” he adds. http://blogs.edweek.org/edweek/college_bound/2012/07/act_plans_to_roll_out_career_and_college_readiness_tests_for_3rd-10th_grades.html?intc=es

There is no magic bullet or “Holy Grail” in education. There is only what works to produce academic achievement in each population of children. That is why school choice is so important.

There must be a way to introduce variation into the education system. The testing straitjacket is strangling innovation and corrupting the system. Yes, there should be a way to measure results, and people must be held accountable, but relying solely on tests, especially without taking into consideration where different populations of children are when they arrive at school, is lunacy.

Related:

Early learning standards and the K-12 continuum https://drwilda.wordpress.com/2012/01/03/early-learning-standards-and-the-k-12-contiuum/

Jonathan Cohn’s ‘The Two Year Window’ https://drwilda.wordpress.com/2011/12/18/jonathan-cohns-the-two-year-window/

What, if anything, do education tests mean? https://drwilda.wordpress.com/2011/11/27/what-if-anything-do-education-tests-mean/

Dr. Wilda says this about that ©



More are questioning the value of one-size-fits-all testing

20 Feb

Joy Resmovits has an excellent post at Huffington Post. In Standardized Tests’ Measures of Student Performance Vary Widely: Study Resmovits reports:

The United States has 50 distinct states, which means there are 50 distinct definitions of “proficient” on standardized tests for students.

For example, an Arkansas fourth-grader could be told he is proficient in reading based on his performance on a state exam. But if he moved across the border to Missouri, he might find that’s no longer true, according to a new report.

“This is a really fundamental, interesting question about accountability reform in education,” Jack Buckley, commissioner of the government organization that produced the report, told reporters on a Tuesday conference call.

The report, written by the Education Department’s National Center for Education Statistics, found that the definition of proficiency on standardized tests varies widely among states, making it difficult to assess and compare student performance. The report looked at states’ standards on exams and found that some states set much higher bars for student proficiency in particular subjects.

The term “proficiency” is key because the federal No Child Left Behind law mandates that 100 percent of students must be “proficient” under state standards by 2014 — a goal that has been universally described as impossible to reach.

The report, released Wednesday, relies on standards used by the National Assessment of Education Progress, the only national-level standardized test, considered the gold standard for measuring actual student achievement. Researchers scaled state standards to match NAEP’s and then analyzed differences among state scores in 2005, 2007 and 2009.

They found many states deemed students “proficient” by their own standards, but those same students would have been ranked as only “basic” — defined as “partial mastery of knowledge and skills fundamental for proficient work at each grade” — under NAEP.

“The implication is that students of similar academic skills but residing in different states are being evaluated against different standards for proficiency in reading and mathematics,” the report concludes….
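The mapping procedure the report describes can be pictured with a toy calculation. The sample and the state percentages below are invented for illustration; the actual NCES method estimates, from student-level data, the NAEP score at which the share of a state's students scoring at or above it equals the state's reported percent proficient:

```python
def naep_equivalent(pct_proficient, naep_scores):
    """Toy version of placing a state's proficiency cut on the NAEP
    scale: find the NAEP score such that the top `pct_proficient`
    percent of the sample scores at or above it."""
    scores = sorted(naep_scores)
    n = len(scores)
    k = round(n * (1 - pct_proficient / 100))  # index of the cut score
    return scores[min(max(k, 0), n - 1)]

# Invented example: two states report very different proficiency rates
# against the same hypothetical sample of NAEP scores (200-299).
sample = list(range(200, 300))
print(naep_equivalent(30, sample))  # state A, 30% proficient -> cut of 270
print(naep_equivalent(70, sample))  # state B, 70% proficient -> cut of 230
```

The point of the exercise: a state reporting 70 percent proficient implies a much lower NAEP-equivalent cut score (230 here) than a state reporting 30 percent (270 here), which is exactly the cross-state inconsistency the report documents.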

Here is the report citation:

Mapping State Proficiency Standards Onto NAEP Scales: Variation and Change in State Standards for Reading and Mathematics, 2005-2009

August 10, 2011

Author: Victor Bandeira de Mello

Download the complete report in a PDF file for viewing and printing (1,959K PDF).

W.M. Chambers cautioned about testing in a 1964 Journal of General Education article, Testing And Its Relationship To Educational Objectives. He questioned whether testing supported the objectives of education rather than directing the objectives.

Here is the complete citation:

Testing And Its Relationship To Educational Objectives

W. M. Chambers

The Journal of General Education
Vol. 16, No. 3 (October 1964), pp. 246-249
(article consists of 4 pages)

Published by: Penn State University Press

Stable URL: http://www.jstor.org/stable/27795936

The goal of education is, of course, to educate students. Purdue University has a concise synopsis of Bloom’s Taxonomy, which is one attempt at describing educational objectives:

Bloom’s (1956) Taxonomy of Educational Objectives is the most renowned description of the levels of cognitive performance. The levels of the Taxonomy and examples of activities at each level are given in Table 3.3. The levels of this taxonomy are considered to be hierarchical. That is, learners must master lower level objectives first before they can build on them to reach higher level objectives.

 Table 3.3

Bloom’s Taxonomy of Educational Objectives

Cognitive Domain

1. Knowledge (Remembering previously learned material) 

Educational Psychology: Give the definition of punishment.

Mathematics: State the formula for the area of a circle.

English / Language Arts: Recite a poem.2. Comprehension (Grasping the meaning of material) 

Educational Psychology: Paraphrase in your own words the definition of punishment; answer questions about the meaning of punishment.

Mathematics: Given the mathematical formula for the area of a circle, paraphrase it using your own words.

English / Language Arts: Explain what a poem means.  3. Application (Using information in concrete situations)

Educational Psychology: Given an anecdote describing a teaching situation, identify examples of punishment.

Mathematics: Compute the area of actual circles.

English / Language Arts: Identify examples of metaphors in a poem.

4. Analysis (Breaking down material into parts)

Educational Psychology: Given an anecdote describing a teaching situation, identify the psychological strategies intentionally or accidentally employed.

Mathematics: Given a math word problem, determine the strategies that would be necessary to solve it.

English / Language Arts: Given a poem, identify the specific poetic strategies employed in it.

5. Synthesis (Putting parts together into a whole)

Educational Psychology: Apply the strategies learned in educational psychology in an organized manner to solve an educational problem.

Mathematics: Apply and integrate several different strategies to solve a mathematical problem.

English / Language Arts: Write an essay or a poem.

6. Evaluation (Judging the value of a product for a given purpose, using definite criteria)

Educational Psychology: Observe another teacher (or yourself) and determine the quality of the teaching performance in terms of the teacher’s appropriate application of principles of educational psychology.

Mathematics: When you have finished solving a problem (or when a peer has done so) determine the degree to which that problem was solved as efficiently as possible.

English / Language Arts: Analyze your own or a peer’s essay in terms of the principles of composition discussed during the semester.

Knowledge (recalling information) represents the lowest level in Bloom’s taxonomy. It is “low” only in the sense that it comes first – it provides the basis for all “higher” cognitive activity. Only after a learner is able to recall information is it possible to move on to comprehension (giving meaning to information). The third level is application, which refers to using knowledge or principles in new or real-life situations. The learner at this level solves practical problems by applying information comprehended at the previous level. The fourth level is analysis – breaking down complex information into simpler parts. The simpler parts, of course, were learned at earlier levels of the taxonomy. The fifth level, synthesis, consists of creating something that did not exist before by integrating information that had been learned at lower levels of the hierarchy. Evaluation is the highest level of Bloom’s hierarchy. It consists of making judgments based on previous levels of learning to compare a product of some kind against a designated standard.

Teachers often use the term application inaccurately. They assume anytime students use the information in any way whatsoever that this represents the application level of Bloom’s taxonomy. This is not correct.

A child who “uses” his memorization of the multiplication tables to write down “15” next to “5 times 3 equals” is working at the knowledge level, not the application level.

A child who studies Spanish and then converses with a native Mexican is almost certainly at the synthesis level, not at the application level. If the child made a deliberate attempt to get his past tense right, this would be an example of application. However, in conversing he would almost certainly be creating something new that did not exist before by integrating information that had been learned at lower levels of the hierarchy.

Bloom’s use of the term application differs from our normal conversational use of the term. When working at any of the four highest levels of the taxonomy, we “apply” what we have learned. At the application level, we “just apply.” At the higher levels, we “apply and do something else.”

The main value of the Taxonomy is twofold: (1) it can stimulate teachers to help students acquire skills at all of these various levels, laying the proper foundation for higher levels by first assuring mastery of lower-level objectives; and (2) it provides a basis for developing measurement strategies to assess student performance at all these levels of learning.

http://education.calumet.purdue.edu/vockell/edPsybook/Edpsy3/edpsy3_bloom.htm

See, Bloom’s Taxonomy http://en.wikipedia.org/wiki/Bloom%27s_Taxonomy

More and more people are asking whether testing really advances the goals of education or instead sets its own objectives, which may or may not be the same as the goals of education.

Arthur Goldstein writes in the New York Times article, Students Learn Differently. So Why Test Them All the Same?

We teachers have been hearing for years about “differentiated instruction.” It makes sense to treat individuals differently, and to adapt communication toward what works for them. Some kids you can joke with, and some you cannot. Some need more explanation, while others need little or none. If you consider students as individuals (and especially if you have a reasonable class size), you can better meet their needs.

Considering that, it’s remarkable that the impending Core Curriculum fails to differentiate between native-born American students and English language learners. The fact is, it takes time to learn a language, and while my kids are doing that, they may indeed miss reading Ethan Frome.

Is that really the end of the world?

Before Common Core, our standard was the ever-evolving New York State English Regents exam. Anyone who doesn’t pass the test doesn’t graduate, period. So when my supervisor asks me to train kids to pass it, I do.

The last time I taught it, the Regents exam entailed various multiple choice questions and four essays. I trained kids to write tightly structured, highly formulaic four-paragraph essays (in a style I would never use).

Nonetheless, many of them passed. Kids told one another, “You should take that class. It’s awful, but you’ll pass the exam.”

Regrettably, though the kids worked very hard, writing almost until their hands fell off, the only skill they acquired was passing the English Regents.

Because the exam placed more emphasis on communication than structure, I did not stress structure. I had classes of up to 34, and had to read and comment on everything every kid wrote, so time was limited.

Still, I knew that when my kids went to college, they would have to take writing tests — tests which would almost inevitably label them as E.S.L. students, and place them in remedial classes.

I’ve taught those very classes at Nassau Community College. Students pay for six credit hours and receive zero credits. It seems like a very costly way to learn (particularly since I would happily offer high school kids identical preparation for free). But when your student came from Korea five days ago and needs to graduate in less than a year, you make that kid pass the test.

Still, passing does not constitute mastery. It takes years to learn a language, and that time frame varies wildly by individual.

A kid who’s happy here will embrace the language and master it rapidly, while one who has been dragged kicking and screaming may fold his arms and refuse to learn a thing.

Some kids have been trained all their lives to be quiet in the classroom, and will not speak above a whisper — not the best trait in a language learner.

I’m prepared to deal with all these kids, and ready and willing to do whatever necessary to help them. But if I’m compelled to teach them Shakespeare before they’re ready for SpongeBob, I’m not meeting their needs.

There’s no doubt my students will be more college-ready with a strong background in English structure and usage, something relatively automatic for native speakers. In fact, the language skills my kids have in their first languages will almost inevitably transfer into English.

But depriving them of the time and instruction they need is not, by any means, putting “Children First.”

http://www.nytimes.com/schoolbook/2012/02/17/students-learn-differently-so-why-test-them-all-the-same/?ref=education

Testing is just another battle in the one-size-fits-all approach to education.

Dr. Wilda says this about that ©

What, if anything, do education tests mean?

27 Nov

Moi received a review copy from Princeton University Press of Howard Wainer’s Uneducated Guesses. The publication date was September 14, 2011. In the preface Wainer states the goal of the book: “It deals with education in general and the use of tests and test scores in support of educational goals in particular.” Wainer tries to avoid not only the policy analysis, but also the ethical analysis, of the improper use of tests and test results by tightly defining the objective of the book at page four. The policy implications of using tests and test results not only to decide the direction of education, but also to decide what happens to the participants in education, are huge. Moi wonders whether Wainer was really trying to avoid the unavoidable.

For moi, the real meat of the book comes in chapter 4. Wainer says:

In chapter 3 we learned that the PSAT, the shorter and easier version of the SAT, can be used effectively as one part of the selection decision for scholarships. In this chapter we expand on this discussion to illustrate that the PSAT also provides evidence that can help us allocate scarce educational resources…. [Emphasis Added]

Wainer examines the connection by analyzing and comparing test results from three settings: Garfield High School in L.A., the site of the movie “Stand and Deliver”; La Canada High School in an upscale L.A. suburb; and Detroit, a very poor inner-city school district. The really scary policy implication of Wainer’s very thorough analysis is found at page 44: “Limited resources mean that choices must be made.” Table 4-4 illustrates that real-life choices are being made by districts like Detroit. What is really scary is that these choices affect the lives of real human beings. Of course, Wainer is simply the messenger and can’t be faulted for his analysis. According to Wainer, it is very tricky to use test results in predicting school performance, and his discussion at page 53 summarizes his conclusions.

Perhaps the most chilling part of Wainer’s book is chapter 8 which deals with how testing and test results can adversely impact the career of a teacher when so-called “experts” incorrectly analyze test data. It should be required reading for those who want to evaluate teacher performance based upon test results.

Overall, Uneducated Guesses is a good, solid, and surprisingly readable book about test design, test results, and the use of test results. The truly scary part of the book describes how the uninformed, unknowing, and possibly venal can use what they perceive to be the correct interpretation to make policy judgments which result in horrific societal consequences.

Wainer makes statistics as readable as possible, because really folks, it is still statistics.

Here is the full citation for the book:

Uneducated Guesses: Using Evidence to Uncover Misguided Education Policies

Howard Wainer

Cloth: $24.95 ISBN: 9780691149288

200pp.

Wainer’s book will come in handy when reading Eric A. Hanushek’s analysis of a National Research Council report.

Joy Resmovits covers Hanushek’s critique in the Huffington Post article, Stanford Economist Rebuts Much-Cited Report That Debunks Test-Based Education:

When the National Research Council published the results of a decade-long study on the effects of standardized testing on student learning this summer, critics who have long opposed the use of exams as a teaching incentive rejoiced.

But Eric Hanushek, a Stanford University economist who is influential in education research, now says the “told you so” knee-jerk reaction was unwarranted: In an article released Monday by Harvard University’s journal Education Next, Hanushek argues that the report misrepresents its own findings, unjustifiably amplifying the perspective of those who don’t believe in testing. His article has even caused some authors of the NRC report to express concerns with its conclusions….

According to Hanushek’s analysis, the panel’s thorough examination of multiple studies is not evident in its conclusions.

“Instead of weighing the full evidence before it in the neutral manner expected of an NRC committee, the panel selectively uses available evidence and then twists it into bizarre, one might say biased, conclusions,” Hanushek wrote.

The anti-testing bias, he says, comes from the fact that “nobody in the schools wants people looking over their shoulders.”

Hanushek, an economist, claims that the .08 standard deviation increase in student learning is not as insignificant as the report makes it sound. According to his calculations, the benefits of such gains outweigh the costs: that amount of learning, he claims, translates to a value of $14 trillion. He notes that if testing is expanded at the expense of $100 per student, the rate of return on that investment is 9,189 percent. Hanushek criticized the report for not giving enough attention to the benefits NCLB provided disadvantaged students.

The report, Hanushek said, hid that evidence.

“They had that in their report, but it’s buried behind a line of discussion that’s led everybody who’s ever read it to conclude that test-based accountability is a bad idea,” he said. Hanushek reacted strongly, he said, because of the “complacency of many policymakers” who say education should be improved but that there are no effective options.

http://www.huffingtonpost.com/2011/11/23/eric-hanushek-rebuts-much_n_1108690.html?ref=email_share
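For readers who want to see the arithmetic behind a rate-of-return percentage like the one Hanushek cites, here is a minimal sketch. The formula is the standard return-on-investment calculation; the sample dollar figures below are hypothetical placeholders, not Hanushek’s actual inputs, which rest on projections of lifetime earnings.

```python
# Generic rate-of-return calculation: the percentage gain of a
# benefit over its cost. All sample figures are hypothetical.

def rate_of_return(benefit, cost):
    """Return on investment as a percentage: (benefit - cost) / cost * 100."""
    return (benefit - cost) / cost * 100

# Hypothetical example: a $100-per-student program whose estimated
# per-student benefit is $500 yields a 400% rate of return.
print(rate_of_return(500, 100))  # 400.0
```

The striking size of figures like 9,189 percent comes from the denominator: when the per-student cost is small, even a modest estimated benefit produces an enormous percentage.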

Citation:

Grinding the Antitesting Ax: More bias than evidence behind NRC panel’s conclusions
Eric A. Hanushek, Education Next, Winter 2012, Vol. 12, No. 2

Incentives and Test-Based Accountability in Education: A report from the National Research Council Checked by Eric A. Hanushek

http://educationnext.org/grinding-the-antitesting-ax/

One of the reasons why Hanushek’s critique is so important, aside from the implications that testing has under No Child Left Behind, is the push to use student test results in teacher evaluation. Valerie Strauss has an article in the Washington Post about a study that questions the use of student testing in the teacher evaluation process; the article includes links to the full report. In Study Blasts Popular Teacher Evaluation Method, Strauss reports:

Student standardized test scores are not reliable indicators of how effective any teacher is in the classroom, not even with the addition of new “value-added” methods, according to a study released today. It calls on policymakers and educators to stop using test scores as a central factor in holding teachers accountable.

“Value-added modeling” is indeed all the rage in teacher evaluation: The Obama administration supports it, and the Los Angeles Times used it to grade more than 6,000 California teachers in a controversial project. States are changing laws in order to make standardized tests an important part of teacher evaluation.

Unfortunately, this rush is being done without evidence that it works well. The study, by the Economic Policy Institute, a nonpartisan nonprofit think tank based in Washington, concludes that heavy reliance on VAM methods should not dominate high-stakes decisions about teacher evaluation and pay.

Here is the report link
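To make the debate concrete, here is a toy sketch of the core idea behind value-added modeling: predict each student’s current score from a prior score, then average the residuals (actual minus predicted) by teacher. Every name and number below is hypothetical, and real VAM systems use far richer models (multiple years of data, student covariates, shrinkage estimators).

```python
# Toy illustration of a "value-added" teacher rating.
# All teacher names and scores are hypothetical.

def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def value_added(records):
    """records: list of (teacher, prior_score, current_score).
    Returns each teacher's average residual (actual - predicted)."""
    a, b = fit_line([r[1] for r in records], [r[2] for r in records])
    totals, counts = {}, {}
    for teacher, prior, current in records:
        residual = current - (a + b * prior)
        totals[teacher] = totals.get(teacher, 0.0) + residual
        counts[teacher] = counts.get(teacher, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

data = [
    ("Smith", 60, 70), ("Smith", 70, 78), ("Smith", 80, 85),
    ("Jones", 60, 64), ("Jones", 70, 72), ("Jones", 80, 80),
]
print(value_added(data))
```

Even in this toy version, the contested assumption is visible: the residuals attribute to the teacher everything the prior-score prediction misses, including out-of-school factors such as poverty.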

Sarah Garland of the Hechinger Report has written the article, Should value-added teacher ratings be adjusted for poverty?

In Washington, D.C., one of the first places in the country to use value-added teacher ratings to fire teachers, teacher-union president Nathan Saunders likes to point to the following statistic as proof that the ratings are flawed: Ward 8, one of the poorest areas of the city, has only 5 percent of the teachers defined as effective under the new evaluation system known as IMPACT, but more than a quarter of the ineffective ones. Ward 3, encompassing some of the city’s more affluent neighborhoods, has nearly a quarter of the best teachers, but only 8 percent of the worst.

The discrepancy highlights an ongoing debate about the value-added test scores that an increasing number of states—soon to include Florida—are using to evaluate teachers. Are the best, most experienced D.C. teachers concentrated in the wealthiest schools, while the worst are concentrated in the poorest schools? Or does the statistical model ignore the possibility that it’s more difficult to teach a room full of impoverished children?

Saunders thinks it’s harder for teachers in high-poverty schools. “The fact that kids show up to school hungry and distracted and they have no eyeglasses and can’t see the board, it doesn’t even acknowledge that,” he said.

http://hechingerreport.org/content/should-value-added-teacher-ratings-be-adjusted-for-poverty_6899/

The question is what do test results mean and more importantly, how are test scores to be used? Wainer’s book attempts to analyze these questions.

Citation:

Should value-added teacher ratings be adjusted for poverty?

Sarah Garland, The Hechinger Report, November 22, 2011

http://hechingerreport.org/content/should-value-added-teacher-ratings-be-adjusted-for-poverty_6899/

Every population of kids is different, and they arrive at school at various points on the ready-to-learn continuum. Schools and teachers must be accountable, but there should be various measures for judging teacher effectiveness with a particular population of children. Perhaps more time and effort should be spent developing a strong principal corps and giving principals training and assistance in evaluation and mentoring techniques. Evaluation measures should look at where each child is on the learning continuum and inform a program designed to address that child’s needs.

Dr. Wilda says this about that ©