"Assessment" is a buzzword in education that had little usage a generation ago, but is everywhere today. Accrediting bodies and school administrators are under increasing pressure to provide evidence that resources allocated to education, whether human or capital, result in effective and desirable educational outcomes. The goal of assessment is to answer a reasonable set of questions:
Are students learning?
Are teachers effective?
What pedagogies should be used?
Are resources allocated in the most effective way?
Practitioners of assessment seek to provide answers based on data, not on expert opinions, which are often self-serving. By collecting and analyzing data, the process of assessment is intended to mimic the scientific method and provide objective, evidence-based answers to these questions. Unfortunately, many of the administrators and policy makers engaged in assessment misunderstand the scientific method.
The scientific method, as it is articulated at the grade school level, teaches students to first ask a testable question (the hypothesis), design a controlled experiment, collect data from the experiment, analyze the data, and infer whether the data support or refute the hypothesis. This is a gross oversimplification of the conduct of actual science - a topic I will write about at a later date. Science isn't that straightforward. Often major discoveries result from observations made outside of the stated hypothesis. However, this simplified summary of the scientific method does contain some of its essential elements. Experiments must be designed to answer specific, relevant questions, and for data to be useful it must be acquired and analyzed in a way that will lead to new insights.
What I find disturbing about the current "assessment" frenzy is that the scientific method has been corrupted to the point where all that matters is the accumulation and analysis of data. It is highly doubtful whether much of the collected data and convoluted analysis schemes will ever lead to new insights.
For example, teachers in New York City public schools are evaluated using complex statistical models that are impenetrable to even the most mathematically sophisticated. In a March 7, 2011 article in the Education Section of The New York Times, "Evaluating New York Teachers: Perhaps the Numbers Do Lie," the statistical formula for determining the "value added" by teachers in New York City schools was published. The formula involves a complex weighted summation over nine different variables with names such as: "true total school effect," "true total district effect," and "classroom participation indicator."
According to this statistical model, it is not enough that teachers in New York City schools demonstrate that their students learn the material. Instead the statisticians predict the "expected proficiency" of students using an even more complex formula involving 32 variables. The result of the "expected proficiency" calculation is one of the factors used in the "value added" formula. A teacher with students that all become proficient might not have any measurable "added value" if the statistical model "predicted" the students would all become proficient. Teachers without measurable "added value" do not get tenure.
At least, that is my best guess at how this statistical model is supposed to work, after re-reading The New York Times article several times. The journalist who wrote the article admitted that the model was too complex for him to fully understand.
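The actual formula is a weighted sum over many variables that I will not attempt to reproduce here. But the core logic described above - that a teacher is credited only for student performance *beyond* what the model predicted - can be illustrated with a toy sketch. This is purely a hypothetical illustration, not the real New York City formula; the scores, the function, and the simple averaging are my own assumptions for the sake of the example.

```python
def value_added(actual_scores, predicted_scores):
    """Toy illustration (NOT the real NYC model): a teacher's
    "value added" is the average gap between each student's actual
    score and the score the model predicted for that student."""
    gaps = [a - p for a, p in zip(actual_scores, predicted_scores)]
    return sum(gaps) / len(gaps)

# A class where every student reaches proficiency (score >= 80)...
actual = [85, 90, 80]
# ...but the model predicted exactly those scores.
predicted = [85, 90, 80]

print(value_added(actual, predicted))  # 0.0 - no measurable "added value"
```

Under this logic a teacher whose students all become proficient can still register zero "value added," because the model claims it predicted that outcome - which is exactly the scenario that can cost a teacher tenure.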
Assessment practices that require the collection of reams of data are also being forced on higher education institutions. The Middle States Commission on Higher Education is charged with accrediting many of the nation's colleges and universities. Therefore its requirements on assessment must be followed, or schools will lose their accreditation. The Middle States Commission publishes a handbook, "Student Learning Assessment: Options and Resources," to "clarify the principles and methods to assess student learning."
The handbook leads faculty members through a multi-stage process for developing assessment practices for their courses. One step in the process is to use the "Teaching Goals Inventory (TGI)" to "identify the priority of various learning goals in their courses and programs." This "self-scorable" inventory asks a faculty member to rate the relative importance in a particular course of 52 separate goals on a scale of 1 to 5, with 1 being "not-applicable" and 5 being "essential." The instructions are to "assess each goal in terms of what you deliberately aim to have your students accomplish rather than in terms of the goal's general worthiness . . . "
A sample of the 52 goals includes:
Develop an ability to perform skillfully
Develop a commitment to accurate work
Develop a sense of responsibility for one's own behavior
Develop a capacity to make wise decisions
Develop aesthetic appreciations
Learn to appreciate important contributions to this subject
Develop an informed concern about contemporary social issues
The list contains an additional 45 goals, all stated in the same vague, trite, and essentially un-assessable manner as the goals above. However, once the faculty member decides on the learning goals for a course, the next step is to collect evidence to document that the chosen learning goals have been met.
Of course, teachers have always assessed student learning by using grades. But, according to the assessment handbook: "grades are not direct evidence of student learning" because a "grade alone does not express the content of what students have learned; it reflects only the degree to which the student is perceived to have learned in a specific context." Only if grades "are appropriately linked to learning goals" are they an indicator of student learning. If you are confused by what all this verbiage means, so am I.
In practice, what it means is that accreditation teams want to actually see examples of the tests and assignments that grades are based on. It is no longer sufficient for a professor to read an English paper and assign it a grade such as an A. The accreditors want to see the actual paper, see how it was graded, and see whether the paper shows evidence that the student achieved any of the goals the professor chose from the Teaching Goals Inventory, such as developing aesthetic appreciation, commitment to accuracy, informed concern, capacity for wisdom, and so on.
These are all fair questions, but how far can the assessment process be carried? Do accreditors want to immerse themselves in the details of each faculty member's grading process? Will the next step be to ask for evidence to assess the usefulness of assessment?
Notice that the teaching goals in the inventory are byproducts of the education process, not the actual content goals that most professors typically teach in their courses. It can be argued that the byproducts are more important than the content. But, it would be presumptuous for professors to explicitly teach character traits such as wisdom, concern, aesthetic appreciation, self-esteem, and self-confidence. The inculcation of these traits usually comes through immersion, struggle, and mastery of intellectually challenging course work. No one, for example, will obtain authentic self-confidence from a course designed to explicitly teach self-confidence.
A running theme in all these assessment practices is a lack of trust. Simply put: the administrators do not trust the teachers. There is no reason to ask professors to show accreditors actual student work, to be reviewed for evidence that learning goals are being met, unless you believe that current grading practices are flawed and meaningless. If you thought that teachers were doing their jobs, there would be no reason to construct elaborate statistical models to measure how much "value" they add to their classrooms.
Unfortunately, however well intentioned the assessment movement might be, its approach will not fix the trust deficit. The scientific method originated to answer questions about the natural world, an environment where intentional deception never happens. If administrators believe that deception is the problem, a "scientific" approach to evaluating teachers and students is not going to solve it for them. Unless administrators also have the expertise to make sense of the "evidence" for student learning and the resources to act upon it, it won't matter how much data they collect.
Joseph Ganem, Ph.D., is a professor of physics at Loyola University Maryland, and author of the award-winning book on personal finance, The Two Headed Quarter: How to See Through Deceptive Numbers and Save Money on Everything You Buy, which shows how numbers fool consumers when they make financial decisions. His home page and blog are at www.JosephGanem.com.