Sunday, November 6, 2011

Sparks at UFT DA Over New Evaluations -- Danielson in the News -- the Truth Behind the Efficacy of High-Stakes Tests

Sparks flew last month at the United Federation of Teachers Delegate Assembly over the issue of teacher evaluations. Bryant High School (Queens) Chapter Leader Sam Lazarus called for voting against a resolution endorsing the use of Charlotte Danielson's Framework for Teaching, arguing that applying Danielson at his school has meant that nearly two-thirds of the teachers evaluated under the new system were rated sub-standard, setting them up for termination under 3020-a proceedings. And from the pattern of which teachers have been stuffed into the Absent Teacher Reserve ("ATR") -- older, experienced, and expensive teachers -- we can imagine which teachers will tend to be found ineffective. (So, voila! By accommodating Governor Andrew Cuomo's call for tougher teacher evaluation systems, the older-teacher elimination goals behind Mayor Michael Bloomberg's objective of ending Last In First Out -- or LIFO -- would be accomplished.) For more on the DA see Ed Notes and the ICEUFT Blog.

2011 seems to be a good year for Danielson: a great number of news reports dated this year cover the implementation or purchase of her systems (and those of Teachscape, a collaborating company).
An interesting pattern: the school systems recently publicized as adopting Teachscape are urban districts facing budgetary crises and actual or threatened teacher layoffs, yet millions of dollars are available to spend on deploying (the industry's word) the Teachscape video evaluation system.
Danielson's evaluation methods have moved beyond the pilot approach she introduced in 1996 to walkthroughs, now used in a new edition of the Teachscape video analysis system.

A report that appears to be a press release indicates that Teachscape software (the Teachscape Reflect 360-Degree Video Analysis System), built on Charlotte Danielson's Framework for Teaching, is already being used in New York City schools:
"NYC Schools Deploy New Teacher Observation Tools"

Overwhelmingly the NJ locations where the Danielson system is being introduced are urban districts.

*And like many new developments, there is a money trail: links to familiar foundation and institution names. From Teachscape's own press release: Teachscape's partners include Kogeto, the Bill & Melinda Gates Foundation, Stanford University, the Carnegie Foundation for the Advancement of Teaching and Charlotte Danielson, who help shape its vision, its products, and its strategies.

(And for a take from a different angle: a satiric animation video that nonetheless makes serious statements about the interlinking of investors, foundation philanthropy and money for video evaluation software in the face of limited funds for classroom essentials.)

As writer Valerie Strauss reported at The Washington Post in "Report: Test-based incentives don't produce real student achievement":
Incentive programs for schools, teachers and students aimed at raising standardized test scores are largely unproductive in generating increased student achievement, according to a new report researched by an expert panel of the National Research Council.

The report [by Michael Hout and Stuart W. Elliott, "Incentives and Test-Based Accountability in Education"] said that standardized tests commonly used in schools to measure student performance — including high school exit exams and tests in various grades mandated by former president Bush’s No Child Left Behind law — “fall short of providing a complete measure of desired educational outcomes in many ways,” according to a summary of the lengthy document.

The report, together with a number of other studies [summarized by Diane Ravitch] released in the past year, effectively serves as a warning to policymakers in states that are moving, with support from the Obama administration, to implement laws making teacher and principal evaluation largely dependent on increases in students’ standardized test scores.

The practice doesn’t bring about the kind of student achievement policymakers say is necessary for the United States to compete with the highest-performing countries, according to the 17-member Committee on Incentives and Test-Based Accountability convened by the National Research Council, which is the research arm of the National Academies (including the National Academy of Sciences, the National Academy of Engineering and the Institute of Medicine).

The panelists — who include experts in assessment, education law and the sciences — examined 15 incentive programs from the past decade, all designed to link rewards or sanctions for schools, students and teachers to students’ test results. The programs studied included high-school exit exams and programs that give teachers incentives (such as bonus pay) for improved test scores.

The panel studied the effects of incentives, not by tracking changes in scores on high-stakes tests connected to incentive programs, but by looking at the results of “low-stakes” tests, such as the well-regarded National Assessment of Educational Progress, which aren’t linked to the incentives and are taken by the same cohorts of students.

The researchers concluded that the effects of incentive programs tend to be “small and . . . effectively zero for a number” of such programs.

Gains that were detected were concentrated in elementary grade mathematics and “are small in comparison with the improvement the nation hopes to achieve,” according to the summary.

The researchers concluded not only that incentive programs have not raised student achievement in the United States to the level achieved in the highest-performing countries but also that incentives/sanctions can give a false view of exactly how well students are doing. (The U.S. reform movement doesn’t follow the same principles that have been adopted by the other countries policymakers often cite. You can read an analysis of that by educator Linda Darling-Hammond here.)

Strauss closed by questioning the efficacy of "value-added" evaluations because they ignore the effects of social ills outside the school on student learning:
Other studies in the past year have also cast doubt on the effectiveness and reliability of the value-added method of teacher/principal evaluation, which takes student test scores and puts them into a formula that is supposed to factor out other influences and determine the “value” a teacher has brought to a student’s learning.

The method often ignores outside-school factors that can influence how a child does on a test, including lack of sleep, hunger and illness, but even formulas that are said to take these into account are not especially reliable, some experts have said.