Tag Archives: Bloom’s Taxonomy

The importance of the National Assessment of Educational Progress

12 Sep

Moi wrote in What, if anything, do education tests mean?

Moi received a review copy from Princeton University Press of Howard Wainer’s Uneducated Guesses. The publication date was September 14, 2011. In the preface Wainer states the goal of the book: “It deals with education in general and the use of tests and test scores in support of educational goals in particular.” By tightly defining the objective of the book at page four, Wainer tries to avoid not only the policy analysis but also the ethical analysis of the improper use of tests and test results. The policy implications of using tests and test results not only to decide the direction of education but to decide what happens to the participants in education are huge. Moi wonders if Wainer was really trying to avoid the unavoidable.

For moi, the real meat of the book comes in chapter 4. Wainer says:

In chapter 3 we learned that the PSAT, the shorter and easier version of the SAT, can be used effectively as one part of the selection decision for scholarships. In this chapter we expand on this discussion to illustrate that the PSAT also provides evidence that can help us allocate scarce educational resources…. [Emphasis Added]

Wainer examines the connection by analyzing and comparing test results from three settings: Garfield High School in Los Angeles, the site of the movie “Stand and Deliver”; La Canada High School, in an upscale L.A. suburb; and Detroit, a very poor inner-city school district. The really scary policy implication of Wainer’s very thorough analysis is found at page 44: “Limited resources mean that choices must be made.” Table 4-4 illustrates that real-life choices are being made by districts like Detroit. What is really scary is that these choices affect the lives of real human beings. Of course, Wainer is simply the messenger and can’t be faulted for his analysis. According to Wainer, it is very tricky to use test results to predict school performance, and his discussion at page 53 summarizes his conclusions.

Perhaps the most chilling part of Wainer’s book is chapter 8 which deals with how testing and test results can adversely impact the career of a teacher when so-called “experts” incorrectly analyze test data. It should be required reading for those who want to evaluate teacher performance based upon test results.

Overall, Uneducated Guesses is a good, solid, and surprisingly readable book about test design, test results, and the use of test results. The truly scary part of the book describes how the uninformed, unknowing, and possibly venal can use what they perceive to be the correct interpretation to make policy judgments which result in horrific societal consequences.

Wainer makes statistics as readable as possible, because really folks, it is still statistics.

Here is the full citation for the book:

Uneducated Guesses: Using Evidence to Uncover Misguided Education Policies

Howard Wainer

Cloth: $24.95 ISBN: 9780691149288

200pp.

https://drwilda.com/2011/11/27/what-if-anything-do-education-tests-mean/

Many do not know about the National Assessment of Educational Progress (NAEP). Here is a description of the test:

NAEP Overview

http://nces.ed.gov/nationsreportcard/about/

Here are some FAQs:

Frequently Asked Questions

The National Assessment of Educational Progress (NAEP) is a program with many components—from developing subject-area questions, to selecting schools to participate, to reporting the results. Given its complexity, NAEP receives a variety of questions from visitors to the website; these special pages have been developed to provide answers to some of the most common questions.

If you can’t find the answer to your question on any of our FAQ pages, please click Contact NAEP on the left.

General Questions 

What is NAEP?

NAEP, or the National Assessment of Educational Progress, produces the Nation’s Report Card to inform the public about the academic achievement of elementary and secondary students in the United States. Sponsored by the U.S. Department of Education, NAEP assessments have been conducted periodically in reading, mathematics, science, writing, U.S. history, civics, geography, and other subjects, beginning in 1969. NAEP collects and reports academic achievement at the national level, and for certain assessments, at the state and district levels. The results are widely reported by the national and local media, and are an integral part of our nation’s evaluation of the condition and progress of education.

For more general information about NAEP, read the NAEP Overview.
For technical information about NAEP, consult the NAEP Technical Documentation.

What is the difference between state NAEP and national NAEP?

The NAEP sample in each state is designed to be representative of the students in that state. At the state level, results are currently reported for public school students only and are broken down by several demographic groupings of students. When NAEP is conducted at the state level (i.e., in mathematics, reading, science, and writing), results are also reported for the nation. The national NAEP sample is then composed of all the state samples of public school students, as well as a national sample of nonpublic school students. If there are states that do not participate, a certain number of schools and students are selected to complete the national-level sample.

For assessments conducted at the national level only, samples are designed to be representative of the nation as a whole. Data are reported for public and nonpublic school students as well as for several major demographic groups of students.

Read technical information about the differences in the sample selection for state and national assessments in NAEP Assessment Sample Design

What are the goals of the NAEP program?

NAEP has two major goals: to compare student achievement in states and other jurisdictions and to track changes in achievement of fourth-, eighth-, and twelfth-graders over time in mathematics, reading, writing, science, and other content domains. To meet these dual goals, NAEP selects nationally representative samples of students who participate in either the main NAEP assessments or the long-term trend NAEP assessments.

For technical aspects of reporting student achievement, see Analysis and Scaling for NAEP.

Is participation in NAEP voluntary?

Federal law specifies that NAEP is voluntary for every student, school, school district, and state. However, federal law also requires all states that receive Title I funds to participate in NAEP reading and mathematics assessments at fourth and eighth grades. Similarly, school districts that receive Title I funds and are selected for the NAEP sample are also required to participate in NAEP reading and mathematics assessments at fourth and eighth grades. All other NAEP assessments are voluntary. Learn more about NAEP and why participation is important.

Are the data confidential?

Federal law dictates complete privacy for all test takers and their families. Under the National Assessment of Educational Progress Authorization Act (Public Law 107-279 III, section 303), the Commissioner of the National Center for Education Statistics (NCES) is charged with ensuring that NAEP tests do not question test-takers about personal or family beliefs or make information about their personal identity publicly available.

After publishing NAEP reports, NCES makes data available to researchers but withholds students’ names and other identifying information. The names of all participating students are not allowed to leave the schools after NAEP assessments are administered. Because it might be possible to deduce from data the identities of some NAEP schools, researchers must promise, under penalty of fines and jail terms, to keep these identities confidential.

For technical details, read about Questionnaires and Tracking Forms and Non-Cognitive Items in Student Booklets.

Who are the students assessed by NAEP?

The national results are based on a representative sample of students in public schools, private schools, Bureau of Indian Education schools, and Department of Defense schools. Private schools include Catholic, Conservative Christian, Lutheran, and other private schools. The state results are based on public school students only. The main NAEP assessment is usually administered at grades 4 and 8 (at the state level) plus grade 12 at the national level. The long-term trend assessments report national results (in mathematics and reading only) for age samples 9, 13, and 17 in public and nonpublic schools.

For technical details, read about the NAEP Assessment Sample Design.

Who evaluates NAEP?

Because NAEP findings have an impact on the public’s understanding of student academic achievement, precautions are taken to ensure the reliability of these findings. In its current legislation, as in previous legislative mandates, Congress has called for an ongoing evaluation of the assessment as a whole. In response to these legislative mandates, the National Center for Education Statistics (NCES) has established various panels of technical experts to study NAEP, and panels are formed periodically by NCES or external organizations, such as the National Academy of Sciences, to conduct evaluations. The Buros Center for Testing, in collaboration with the University of Massachusetts/Center for Educational Assessment and the University of Georgia, more recently conducted an external evaluation of NAEP.

For technical aspects of reporting student achievement, see Analysis and Scaling for NAEP.

How do I know what publications are available from NAEP and how do I get them?

The NAEP Publications page is accessible via the Publications link at the top of every screen.

Printed copies of NAEP publications can be ordered by contacting:
http://edpubs.ed.gov
Phone: (877) 4-ED-PUBS (433-7827)
TDD/TTY: (877) 576-7734
Mail: Ed Pubs, U.S. Department of Education, P.O. Box 22207, Alexandria, VA 22304
Para español, llame al (877) 433-7827

It is important to understand what the NAEP is because there are attempts to use the test as a predictive tool.

Sarah D. Sparks reports in the Education Week article, Can NAEP Predict College Readiness?

College Indicators

For college, at least, there are signs NAEP performance may be linked to how well a student will do in initial coursework. Researchers from WestEd, a San Francisco-based research group working under contract to the governing board, found that the 12th grade reading and math tests cover content very similar to that of the SAT.

Moreover, a 2009 study of more than 15,000 12th graders who took both the national assessment and the SAT showed that performing at the proficient level on the math NAEP was associated with an 80 percent chance of earning 500 points out of a possible 800 on the math portion of the SAT, and that the proficient level in reading was associated with a 50-50 chance of scoring 500 on the SAT verbal test.

The SAT has internally pegged a score of 500 to earning at least a B-minus in freshman-level college courses.

NAEP 12th grade content less closely mirrored that used in the ACT, the nation’s other major college-entrance exam; in particular, some arithmetic and applied-math items on the ACT would be covered in more depth on the 8th grade NAEP than on the 12th grade NAEP in math. NAEP has not been able to compare its performance levels to those in the ACT, though Ms. Orr said the board plans to do so during the 2013 studies, which will also include more state-specific analyses.

Individual states’ data are likely to be critical, North Carolina’s Mr. Fabrizio said, because course requirements vary widely from state to state and even between college systems within the same state.

Hazy Work Picture

The connection between NAEP and preparation for careers that don’t require a four-year college degree is much more tenuous.

The governing board found less overlap between NAEP 12th grade content and that covered on the career-related WorkKeys test, also by ACT Inc. Last spring, panels of professional trainers in five careers—computer-support specialists, automotive master technicians, licensed practical nurses, pharmacy staff, and heating, ventilation, and air conditioning technicians—could not agree on what proficiency level on NAEP would indicate a student was ready for his or her field.

They did agree, however, that most of the content on the test wouldn’t say much about students’ potential in those fields.

For example, “there are hardly any test items in the pool at 12th grade that are applied, based on some use of mathematics rather than theoretical stuff,” said Jeremy Kilpatrick, a co-author of the study and a mathematics education professor at the University of Georgia in Athens. “Where there were such items, the career and technical people were really happy to see that—but most times they looked at the questions and said, ‘This is not relevant to what we want.'”

The assessment governing board will try to bring more clarity around job skills next year, with an analysis that compares the skills and knowledge covered in job-training programs in the five career areas with the math and reading content in the 12th grade NAEP tests.

Still, Ms. Orr was less hopeful about whether NAEP will be useful for gauging career readiness. http://www.edweek.org/ew/articles/2012/09/12/03nagb.h32.html?tkn=WYYFs6Fvb3qWQ7tlD%2B4kB4B80di2bmJy6Rje&intc=es

Moi wrote about testing in More are questioning the value of one-size-fits-all testing:

The goal of education is, of course, to educate students. Purdue University has a concise synopsis of Bloom’s Taxonomy, which is one attempt at describing educational objectives:

Bloom’s (1956) Taxonomy of Educational Objectives is the most renowned description of the levels of cognitive performance. The levels of the Taxonomy and examples of activities at each level are given in Table 3.3. The levels of this taxonomy are considered to be hierarchical. That is, learners must master lower level objectives first before they can build on them to reach higher level objectives. http://education.calumet.purdue.edu/vockell/edPsybook/Edpsy3/edpsy3_bloom.htm

See Bloom’s Taxonomy http://en.wikipedia.org/wiki/Bloom%27s_Taxonomy More and more people are asking whether testing really advances the goals of education or instead directs education toward testing’s objectives, which may or may not be the same as the goals of education. https://drwilda.com/2012/02/20/more-are-questioning-the-value-of-one-size-fits-all-testing/

Related:

What, if anything, do education tests mean? https://drwilda.wordpress.com/2011/11/27/what-if-anything-do-education-tests-mean/

Complete College America report: The failure of remediation https://drwilda.wordpress.com/2012/06/21/complete-college-america-report-the-failure-of-remediation/

What the ACT college readiness assessment means https://drwilda.com/2012/08/25/what-the-act-college-readiness-assessment-means/

Dr. Wilda says this about that ©

More are questioning the value of one-size-fits-all testing

20 Feb

Joy Resmovits has an excellent post at Huffington Post. In Standardized Tests’ Measures of Student Performance Vary Widely: Study Resmovits reports:

The United States has 50 distinct states, which means there are 50 distinct definitions of “proficient” on standardized tests for students.

For example, an Arkansas fourth-grader could be told he is proficient in reading based on his performance on a state exam. But if he moved across the border to Missouri, he might find that’s no longer true, according to a new report.

“This is a really fundamental, interesting question about accountability reform in education,” Jack Buckley, commissioner of the government organization that produced the report, told reporters on a Tuesday conference call.

The report, written by the Education Department’s National Center for Education Statistics, found that the definition of proficiency on standardized tests varies widely among states, making it difficult to assess and compare student performance. The report looked at states’ standards on exams and found that some states set much higher bars than others for student proficiency in particular subjects.

The term “proficiency” is key because the federal No Child Left Behind law mandates that 100 percent of students must be “proficient” under state standards by 2014 — a goal that has been universally described as impossible to reach.

The report, released Wednesday, relies on standards used by the National Assessment of Educational Progress, the only national-level standardized test, considered the gold standard for measuring actual student achievement. Researchers scaled state standards to match NAEP’s and then analyzed differences among state scores in 2005, 2007 and 2009.

They found many states deemed students “proficient” by their own standards, but those same students would have been ranked as only “basic” — defined as “partial mastery of knowledge and skills fundamental for proficient work at each grade” — under NAEP.

“The implication is that students of similar academic skills but residing in different states are being evaluated against different standards for proficiency in reading and mathematics,” the report concludes….
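The scaling the researchers describe can be illustrated with a small sketch. This is a hypothetical, simplified version of equipercentile-style linking, not the report’s actual procedure; the function name, the sample scores, and the 40 percent proficiency rate below are all invented for illustration. The idea: find the NAEP score at which the share of sampled students scoring at or above it equals the share the state labels proficient.

```python
# Hypothetical sketch of mapping a state proficiency standard onto the
# NAEP scale. All data and names here are illustrative, not from the
# NCES report: we find the NAEP score whose "at or above" rate matches
# the state's percent-proficient rate.

def naep_equivalent_cut(naep_scores, pct_proficient_in_state):
    """Return the NAEP score whose at-or-above rate matches the
    state's percent-proficient rate."""
    scores = sorted(naep_scores, reverse=True)
    # Number of students who would be 'proficient' at the matched rate
    k = round(len(scores) * pct_proficient_in_state / 100)
    k = max(1, min(k, len(scores)))
    # The k-th highest NAEP score is the equivalent cut score
    return scores[k - 1]

# Hypothetical NAEP scores for a state's sampled students
sample = [212, 225, 238, 241, 247, 252, 259, 263, 270, 281]
# If the state calls 40% of these students proficient...
cut = naep_equivalent_cut(sample, 40)
print(cut)  # → 259
```

Two states that each call 40 percent of their students proficient can land at very different NAEP-equivalent cut scores, which is exactly the cross-state comparability problem the report documents.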

Here is the report citation:

Mapping State Proficiency Standards Onto NAEP Scales: Variation and Change in State Standards for Reading and Mathematics, 2005-2009

August 10, 2011

Author: Victor Bandeira de Mello

Download the complete report in a PDF file for viewing and printing (1,959K PDF).

W.M. Chambers cautioned about testing in a 1964 Journal of General Education article, Testing And Its Relationship To Educational Objectives. He questioned whether testing supported the objectives of education rather than directing the objectives.

Here is the complete citation:

Testing And Its Relationship To Educational Objectives

W. M. Chambers

The Journal of General Education
Vol. 16, No. 3 (October 1964), pp. 246-249
(article consists of 4 pages)

Published by: Penn State University Press

Stable URL: http://www.jstor.org/stable/27795936

The goal of education is, of course, to educate students. Purdue University has a concise synopsis of Bloom’s Taxonomy, which is one attempt at describing educational objectives:

Bloom’s (1956) Taxonomy of Educational Objectives is the most renowned description of the levels of cognitive performance. The levels of the Taxonomy and examples of activities at each level are given in Table 3.3. The levels of this taxonomy are considered to be hierarchical. That is, learners must master lower level objectives first before they can build on them to reach higher level objectives.

 Table 3.3

Bloom’s Taxonomy of Educational Objectives

Cognitive Domain

1. Knowledge (Remembering previously learned material) 

Educational Psychology: Give the definition of punishment.

Mathematics: State the formula for the area of a circle.

English / Language Arts: Recite a poem.

2. Comprehension (Grasping the meaning of material)

Educational Psychology: Paraphrase in your own words the definition of punishment; answer questions about the meaning of punishment.

Mathematics: Given the mathematical formula for the area of a circle, paraphrase it using your own words.

English / Language Arts: Explain what a poem means.

3. Application (Using information in concrete situations)

Educational Psychology: Given an anecdote describing a teaching situation, identify examples of punishment.

Mathematics: Compute the area of actual circles.

English / Language Arts: Identify examples of metaphors in a poem.

4. Analysis (Breaking down material into parts)

Educational Psychology: Given an anecdote describing a teaching situation, identify the psychological strategies intentionally or accidentally employed.

Mathematics: Given a math word problem, determine the strategies that would be necessary to solve it.

English / Language Arts: Given a poem, identify the specific poetic strategies employed in it.

5. Synthesis (Putting parts together into a whole)

Educational Psychology: Apply the strategies learned in educational psychology in an organized manner to solve an educational problem.

Mathematics: Apply and integrate several different strategies to solve a mathematical problem.

English / Language Arts: Write an essay or a poem.

6. Evaluation (Judging the value of a product for a given purpose, using definite criteria)

Educational Psychology: Observe another teacher (or yourself) and determine the quality of the teaching performance in terms of the teacher’s appropriate application of principles of educational psychology.

Mathematics: When you have finished solving a problem (or when a peer has done so) determine the degree to which that problem was solved as efficiently as possible.

English / Language Arts: Analyze your own or a peer’s essay in terms of the principles of composition discussed during the semester.

Knowledge (recalling information) represents the lowest level in Bloom’s taxonomy. It is “low” only in the sense that it comes first – it provides the basis for all “higher” cognitive activity. Only after a learner is able to recall information is it possible to move on to comprehension (giving meaning to information). The third level is application, which refers to using knowledge or principles in new or real-life situations. The learner at this level solves practical problems by applying information comprehended at the previous level. The fourth level is analysis – breaking down complex information into simpler parts. The simpler parts, of course, were learned at earlier levels of the taxonomy. The fifth level, synthesis, consists of creating something that did not exist before by integrating information that had been learned at lower levels of the hierarchy. Evaluation is the highest level of Bloom’s hierarchy. It consists of making judgments based on previous levels of learning to compare a product of some kind against a designated standard.

Teachers often use the term application inaccurately. They assume anytime students use the information in any way whatsoever that this represents the application level of Bloom’s taxonomy. This is not correct.

A child who “uses” his memorization of the multiplication tables to write down “15” next to “5 times 3 equals” is working at the knowledge level, not the application level.

A child who studies Spanish and then converses with a native Mexican is almost certainly at the synthesis level, not at the application level. If the child made a deliberate attempt to get his past tense right, this would be an example of application. However, in conversing he would almost certainly be creating something new that did not exist before by integrating information that had been learned at lower levels of the hierarchy.

Bloom’s use of the term application differs from our normal conversational use of the term. When working at any of the four highest levels of the taxonomy, we “apply” what we have learned. At the application level, we “just apply.” At the higher levels, we “apply and do something else.”

The main value of the Taxonomy is twofold: (1) it can stimulate teachers to help students acquire skills at all of these various levels, laying the proper foundation for higher levels by first assuring mastery of lower-level objectives; and (2) it provides a basis for developing measurement strategies to assess student performance at all these levels of learning.

http://education.calumet.purdue.edu/vockell/edPsybook/Edpsy3/edpsy3_bloom.htm

See Bloom’s Taxonomy http://en.wikipedia.org/wiki/Bloom%27s_Taxonomy More and more people are asking whether testing really advances the goals of education or instead directs education toward testing’s objectives, which may or may not be the same as the goals of education.

Arthur Goldstein writes in the New York Times article, Students Learn Differently. So Why Test Them All the Same?

We teachers have been hearing for years about “differentiated instruction.” It makes sense to treat individuals differently, and to adapt communication toward what works for them. Some kids you can joke with, and some you cannot. Some need more explanation, while others need little or none. If you consider students as individuals (and especially if you have a reasonable class size), you can better meet their needs.

Considering that, it’s remarkable that the impending Core Curriculum fails to differentiate between native-born American students and English language learners. The fact is, it takes time to learn a language, and while my kids are doing that, they may indeed miss reading Ethan Frome.

Is that really the end of the world?

Before Common Core, our standard was the ever-evolving New York State English Regents exam. Anyone who doesn’t pass the test doesn’t graduate, period. So when my supervisor asks me to train kids to pass it, I do.

The last time I taught it, the Regents exam entailed various multiple choice questions and four essays. I trained kids to write tightly structured, highly formulaic four-paragraph essays (in a style I would never use).

Nonetheless, many of them passed. Kids told one another, “You should take that class. It’s awful, but you’ll pass the exam.”

Regrettably, though the kids worked very hard, writing almost until their hands fell off, the only skill they acquired was passing the English Regents.

Because the exam placed more emphasis on communication than structure, I did not stress structure. I had classes of up to 34, and had to read and comment on everything every kid wrote, so time was limited.

Still, I knew that when my kids went to college, they would have to take writing tests — tests which would almost inevitably label them as E.S.L. students, and place them in remedial classes.

I’ve taught those very classes at Nassau Community College. Students pay for six credit hours and receive zero credits. It seems like a very costly way to learn (particularly since I would happily offer high school kids identical preparation for free). But when your student came from Korea five days ago and needs to graduate in less than a year, you make that kid pass the test.

Still, passing does not constitute mastery. It takes years to learn a language, and that time frame varies wildly by individual.

A kid who’s happy here will embrace the language and master it rapidly, while one who has been dragged kicking and screaming may fold his arms and refuse to learn a thing.

Some kids have been trained all their lives to be quiet in the classroom, and will not speak above a whisper — not the best trait in a language learner.

I’m prepared to deal with all these kids, and ready and willing to do whatever necessary to help them. But if I’m compelled to teach them Shakespeare before they’re ready for SpongeBob, I’m not meeting their needs.

There’s no doubt my students will be more college-ready with a strong background in English structure and usage, something relatively automatic for native speakers. In fact, the language skills my kids have in their first languages will almost inevitably transfer into English.

But depriving them of the time and instruction they need is not, by any means, putting “Children First.”

http://www.nytimes.com/schoolbook/2012/02/17/students-learn-differently-so-why-test-them-all-the-same/?ref=education

Testing is just another battle in the one-size-fits-all approach to education.

Dr. Wilda says this about that ©