Monday, June 10, 2013

Huff Post on Danielson's secretiveness, profiteering, alleged expertise

A powerful piece, questioning the unquestionable: just what gives Charlotte Danielson the authority and expertise to play God with people's teaching reputations and careers? This matters to teachers in cities across the nation, as her Danielson Framework rubrics are being used variously as tools for constructive criticism or as a rapid-fire assault weapon for nit-picking how teachers teach. As a few people have said, one would have to be a Greek god to attain the "Highly Developed" category. Even a few principals have acknowledged that this level is really, really exceptional.

Her Frameworks are of special concern in New York City, where her evaluation framework is being rolled out as official policy in every single school. Teachers are struggling with how to tackle all the points. Especially challenging is the question of how teachers can be fairly evaluated in a 15-minute drive-by snapshot. Additionally, the United Federation of Teachers lobbied for, and secured, in King's Advance evaluation plan, an even higher number of points, 22, whereas the city Department of Education wanted only a handful. Instead of criticizing the anticipated faults of these features, the UFT has been fine with both. These kinds of ill-minded choices are emblematic of how the UFT leadership is years removed from the classroom. The wholesale nit-picking that the Danielson Framework engenders will worsen teacher morale and bring on dispirited alienation and burn-out.

So much of the education reform movement rests on unquestioning acceptance of certain assumptions. This blind acceptance represents a modern-day manifestation of the Emperor's New Clothes. Hitherto, no one except bloggers has publicly questioned her qualifications or her teacher evaluation program. The essence of her program is micro-management, deskilling teachers of their professional craft. The language and motivations of Danielson and her reformer kin have uncanny parallels with Taylorism's deskilling. The iPad and other computer tools for deskilling make this a kind of Digital Taylorism.

The reformers, when touting their poisons, love to make international comparisons. When pushing the Common Core State Standards they allege that their standards are "internationally benchmarked." Incidentally, Danielson has capitalized on the Common Core by rolling out a special 2013 revision of her Danielson Framework, even though she conceded in Education Week that little has changed in her evaluation program since the introduction of the national standards. Where is the evidence that the PISA high-flying nations are following the Charlotte Danielson model? Oh, that's right: they are tackling extremes of income gaps, not imposing the testing regime of Discipline and Punish.

This Huffington Post article opens with reference to the New York Times' adulation for New York State Education Commissioner John King's imposition of a new teacher evaluation system on New York City teachers. It then cites aligned praise for the system from both Mayor Michael Bloomberg and United Federation of Teachers president Michael Mulgrew (Unity Caucus).

Who Is Charlotte Danielson and Why Does She Decide How Teachers Are Evaluated?
By Alan Singer, June 10, 2013

Unfortunately, nobody, not the Times, the New York State Education Department, the New York City Department of Education, nor the teachers' union, has demonstrated any positive correlation between teacher assessments based on the Danielson rubrics, good teaching, and the implementation of new higher academic standards for students under Common Core.

A case demonstrating the relationship could have been made, if it actually exists. A format based on the Danielson rubrics is already being used to evaluate teachers in at least thirty-three struggling schools in New York City and by one of the supervising networks. Kentucky has been using an adapted version of Danielson's Framework for Teaching to evaluate teachers since 2011 and according to the New Jersey Department of Education, sixty percent of nearly 500 school districts in the state are using teacher evaluation models developed by the [Princeton, N.J.-based] Danielson Group. The South Orange/Maplewood and Cherry Hill, New Jersey schools have used the Danielson model for several years.

According to the Times editorial, the "new evaluation system could make it easier to fire markedly poor performers" and help "the great majority of teachers become better at their jobs." But as far as I can tell, the new evaluation system is mostly a weapon to harass teachers and force them to follow dubious scripted lessons.

Ironically, in a pretty comprehensive search on the Internet, I have had difficulty discovering who Charlotte Danielson really is and what her qualifications are for developing a teacher evaluation system. According to the website of the Danielson Group, "the Group consists of consultants of the highest caliber, talent, and experience in educational practice, leadership, and research." It provides "a wide array of professional development and consulting services to clients across the United States and abroad" and is "the only organization approved by Charlotte Danielson to provide training and consultation around the Framework for Teaching." The group's services come at a cost, which is not a surprise, although you have to apply for their services to get an actual price quote. Individuals who participated in a three-day workshop at the King of Prussia campus of Arcadia University in Pennsylvania paid $599 each. A companion four-week online class cost $1,809 per person. According to a comparison chart prepared by the Alaska Department of Education, the "Danielson Group uses 'bundled' pricing that is inclusive of the consultant's daily rate, hotel and airfare. The current fee structure is $4,000 per consultant/per day when three or more consecutive days of training are scheduled. One and two-day rates are $4,500/per consultant/per day. We will also schedule keynote presentations for large groups when feasible. A keynote presentation is for informational/overview purposes and does not constitute training in the Framework for Teaching."

Then, there's the juicy stuff: her stellar teaching expertise credentials. But puncture the myth, and everything gets very cloudy, very quickly. Why has she never spelled out how many years she taught, at which grade levels, at which schools, in what sort of school systems: urban or suburban, socially heterogeneous or homogeneous, public or private? One wonders, is this like the grand myth that Michelle Rhee created for herself, that her test-score-raising impact as a Baltimore teacher was exemplary, until someone destroyed the myth by unearthing her record and finding that the change in her classes was not what she purported? And where are the former teaching colleagues of Charlotte Danielson? These are important questions, as her rubric is used to terminate people and destroy their careers and reputations.
Charlotte Danielson is supposed to be "an internationally-recognized expert in the area of teacher effectiveness, specializing in the design of teacher evaluation systems that, while ensuring teacher quality, also promote professional learning" who "advises State Education Departments and National Ministries and Departments of Education, both in the United States and overseas." Her online biography claims that she has "taught at all levels, from kindergarten through college, and has worked as an administrator, a curriculum director, and a staff developer" and to have degrees from Cornell, Oxford and Rutgers, but I can find no formal academic resume online. Her undergraduate degree seems to have been in history with a specialization in Chinese history and she studied philosophy, politics and economics at Oxford and educational administration and supervision at Rutgers. While working as an economist in Washington, D.C., Danielson obtained her teaching credentials and began work in her neighborhood elementary school, but it is not clear in what capacity or for how long. She developed her ideas for teacher evaluation while working at the Educational Testing Service (ETS) and since 1996 has published a series of books and articles with ASCD (the Association for Supervision and Curriculum Development). I have seen photographs and video broadcasts online, but I am still not convinced she really exists as more than a front for the Danielson Group that is selling its teacher evaluation product.

The United Federation of Teachers and the online news journal Gotham Schools both asked a person purporting to be Charlotte Danielson to evaluate the initial Danielson rubrics being used in New York City schools. In a phone interview reported on in Gotham Schools, Danielson was supposedly in Chile selling her frameworks to the Chilean government. "Danielson was hesitant to insert herself into a union-district battle, but did confirm that she disapproved of the checklist shown to her." The checklist "was inappropriate because of the way it was filled out. It indicated that the observer had already begun evaluating a teacher while in the classroom observation. She said that's a fundamental no-no."
[Postscript: Danielson's national reach is quite expansive. As the Danielson Group's June 28, 2011 press release, celebrating its 15th anniversary and record growth, stated, beyond New Jersey and New York City, the Framework is the default evaluation system for Illinois; Los Angeles, California; and Pittsburgh, Pennsylvania, and has been adopted state-wide in Arkansas, Delaware, Idaho and South Dakota.] Singer then gets to how the actual implementation of the Danielson Frameworks will involve superficial fifteen-minute snapshots of lessons ranging from 45 minutes to pairs of 50-minute block periods. These snapshots are meant to cast judgment on an entire lesson. (Actually, this has already become standard practice under Bloomberg: with the widespread breakup of comprehensive high schools, out have gone the subject department chairs who would be the observing assistant principals. Typically these traditional APs were seasoned veteran teachers. Under the Bloomberg era of smaller schools the departmental APs are gone, so the observers are often people of an entirely different background. A former gym teacher can now observe a chemistry class, having limited understanding of the subject. Worse, principals from the Leadership Academy often have little or no classroom experience.)
Commendably, he points to how inappropriate it is to observe a fraction of someone's performance and then make a global judgment about that person's professional capabilities, particularly when a minor glitch arises. Note how Singer's examples involve technical malfunctions. Teachers in the current era are pressured to incorporate technology in their lessons. Yet there are always avenues by which little technical difficulties can arise. Have a bad connection during your PowerPoint presentation? Oh well, the seven minutes of interruption sabotaged your performance during the snapshot 15-minute observation.
Bottom line is that 40% of a teacher's evaluation will be based on student test scores on standardized and local exams and 60% on in-class observations. In this post I am most concerned with the legitimacy of the proposed system of observations that are based on snapshots: fifteen-minute visits to partial lessons, conducted by supervisors potentially with limited or no classroom experience in the subject being observed, followed by submission of a multiple-choice rubric that will be evaluated online by an algorithm that decides whether the lesson was satisfactory or not.
Imagine an experienced surgeon in the middle of a delicate six-hour procedure where the surgeon responds to a series of unexpected emergencies being evaluated by a computer based on data gathered from a fifteen-minute snapshot visit by a general practitioner who has never performed an operation. Imagine evaluating a baseball player who goes three for four with a couple of home runs and five or six runs batted in based on the one time during the game when he struck out badly.

Imagine a driver with a clean record for thirty years who has his or her license suspended because a car they owned was photographed going through a red light, when perhaps there was an emergency, perhaps he or she was not even driving the car, or perhaps there was a mechanical glitch with the light, camera, or computer.

Now imagine a teacher who adjusts instruction because of important questions introduced by students, and who is told the lesson is unsatisfactory because it did not follow the prescribed scripted lesson plan, and because during the fifteen minutes the observer was in the room they failed to see what they were looking for, though it might actually have happened before they arrived or after they left.
Singer then goes on to give a corrective, a reference to how teachers used to be treated more professionally, to how observations were occasions to help educators grow. (Contrast this with the gotcha games of today.)
When I was a new high school teacher in the 1970s, I was observed six times a year by my department chair, an experienced teacher and supervisor with expertise in my content area. We met before each lesson to strengthen the lesson plan and in a post-observation conference to analyze what had happened and what could have been done better. Based on the conferences and observations we put together a plan to strengthen my teaching, changes the supervisor expected to see implemented in future lessons. The conferences, the lesson, and the plan were then written into a multi-page observation report that we both signed. These meetings and observations were especially important in my development as a teacher and I follow the same format when I observe student teachers today.

As I became more experienced the number of formal observations decreased. . . .
. . . . Teachers in the field report to me that the New York City Department of Education is already trying to undermine the possibility of a fair and effective teacher evaluation system. I cannot use their names or mention their schools because they fear retaliation. I urge teachers to use Huffington Post to document what is going on with teacher evaluations in their schools.

Within hours after an arbitrator mandated use of the Danielson teacher evaluation system, New York City school administrators received a 240-page booklet explaining how to implement the rubrics next fall. Teachers will receive six hours of professional development so they know what to expect, not so they know how to be successful. Teachers are being told that while there is no official lesson plan design, they better follow the recommended one if they expect to pass the evaluations.

Administrators are instructed how to race in and out of rooms and punch codes into an iPad, with evaluations actually completed in cyberspace by an algorithm. Teachers will fail when supervisors do not see things that took place before or after they entered the room, if lesson plans do not touch on all twenty-two dimensions, or when teachers adjust their lessons to take into account student responses.

Teachers expect to be evaluated harshly. In December, 2012 the New York Daily News reported that the Danielson rubric, while still unofficial, was being used to rate teachers unsatisfactory.
Go to Huffington Post for Alan Singer's full article.

[Postscript: In the Danielson Group's gloating 2011 press release, the Framework is tied in with cutting-edge video technology: "The FFT was recently embedded into Teachscape Reflect, the first solution to combine state-of-the-art 360-degree video capture hardware and online software, to further enhance the professional learning process by providing educators with a research-based framework against which classroom videos can be evaluated. Educators can use the FFT, in conjunction with Teachscape Reflect, for formative assessments of teaching practice or for formalized, summative teacher evaluation." Susan Ohanian challenged the Teachscape technology, and pointed to Teachscape's wide-ranging business partners, ranging from the AFT to the NEA, ETS, the Gates Foundation, Stanford University, NewSchools Venture Fund (a booster of education reform entrepreneurs) and over a dozen other institutions.]

This is a brave new world that we are in. Teachers' unions must fight such impersonal, inappropriate technology and rating systems. Sympathetic parents and other citizens should question the commercialized linkage between the Danielson Group and school districts; there has never been transparent public discussion of the budgeting for Danielson. Let teachers teach and principals lead; let's avoid this constant treadmill of distrusting teachers' professional capabilities.

Previous posts at this blog on Danielson:

Walcott Confirms: Danielson is Official; Yet Why Does Unity Play Us for Dumb with Danielson Evaluations Doublethink Line?

In the above post from this April, I quoted a 2011 Gotham Schools report on how Danielson is being used with checklists. In November 2011 the UFT said that this was abuse of the Framework. This spring the city DOE wanted only a handful of checklist points. The UFT rallied for, and got, in Andrew Cuomo and John King's imposed system, 22 checklist points to meet. Here is the 2011 report, from when the UFT found checklists problematic:
The UFT reported that principals are using Danielson Framework elements as checklists to evaluate teachers. Note that Danielson herself disapproves of this practice:
When the UFT obtained a copy of one of the checklists, it shared it with Danielson herself to get her thoughts.
Danielson was troubled by the checklists and disapproved of them, union officials said. With that endorsement, UFT Secretary Michael Mendel wrote a letter to the DOE and demanded an immediate end to the practice. He even threatened to cut off negotiations toward a larger evaluation deal that is required by the end of the school year.
. . .
The checklist she saw, Danielson said, was inappropriate because of the way it was filled out. It indicated that the observer had already begun evaluating a teacher while in the classroom observation. She said that’s a fundamental no-no.
Note also that the drive-by snapshot observation, once controversial as a 10- or 15-minute excerpt of a 45- or 90-minute class, is now unquestioned, institutionalized, normative practice in the new observation system. Where is the outrage? (For more on this issue see this post and this one.)

REVISION UPDATE: OFFICIAL STATUS OF DANIELSON; UFT MUST CONFRONT THE TRUTH ON REALITY OF DANIELSON IMPLEMENTATION:

How will Unity explain itself when U-rating appeals like this become a flood with Danielson?

Sparks at UFT DA Over New Evaluations -- Danielson in the News
and
Contrasting Sept. 20 2011 Mulgrew, Walcott Statements on Teacher Evaluations

Teachers need to ask their union to help with this kind of abuse, to authentically represent the rank and file. In New York City the MORE caucus is working within the UFT to resist abusive evaluations. In other cities the unions are resisting such abuse, and where there are gaps in authentic representation of teachers, teachers are forming their own truly representative caucuses.

Post-script on more hot buzz: Controversy over the abusive use of Danielson's Framework spilled over on June 16, 2013 to Diane Ravitch's blog, in "Who Distorted Charlotte Danielson's Message?" Danielson could speak out, but then she would jeopardize the millions she and her Danielson Group are making as the system is adopted in innumerable states.

REVISION UPDATE: "I got Danielson'ed!" A New York City teacher, in via the Teaching Fellows program, gives first-hand testimonial as to the pitfalls of Danielson in application.

POSTSCRIPT, ONE YEAR ON, August 27, 2014:
One might think that, with the passage of time from June 2013 to the end of August 2014, right before the new school year, we might have learned more about Charlotte Danielson's credentials as an educator.
Alas, this has not come to be.

As we check several sources, there is still no light on where Charlotte Danielson has taught, at what grade level, or for how many years, nor on where she studied education, child psychology, or teaching methods, nor on any administrative/supervisory training or leadership experience. Alas, we have none of that.
We have not learned anything of her actual qualifications from subsequent installments at Diane Ravitch's venerable, ever-resourceful blog.
We have not learned anything of her actual qualifications from the Schools Matter blog, which has noted the millions that Danielson's operation is netting.

Subsequent comments at a June 2013 posting at Ravitch's blog make some important points. Most to the point, one commenter wrote:

The web has been cleaned of her past. Have seen this before. Go see how many pages there are on me and I do not have a consulting firm with 36 consultants under me. There is no question that someone who knows what they are doing has cleaned up the web on her for whatever reasons. Usually to hide something. What other reason could there be? When no information question always at least. I always ignore as I know the information is no good when not all there and planted.
This commenter has a powerful point. Take your pick of people on either the union side or the reformer/management side, and you can easily find something tangible on their teaching background; interestingly, it is easier to find information on the teachers' representatives' histories than on the education directors'. Just Google, from the union side: Randi Weingarten, Karen Lewis, Michael Mulgrew, Dennis Van Roekel, Lily Eskelsen García. From the "expert"/management side, it is sometimes a little harder to get precise information about when and where they taught (or did not) and for how long: Merryl Tisch, Michelle Rhee, John King, Sandra Alberti, Kate Gerson, Arne Duncan, David Coleman, Sue Pimentel. Why is it that the people who most precisely dictate to us how to teach are the ones about whom nothing is available as to which school or school system they taught in? By comparison, Michelle Rhee is a paragon of transparency as to her credentials in (supposed) teaching or teaching staff leadership.

It is rather disingenuous for Danielson to protest that her method is being misused. She has spun her evaluation manuals into a multi-million-dollar enterprise that has gone international. Her "Danielson Group" employees lead workshops in New York City and elsewhere to explain "what good teaching is." Notice that her protests are referred to in the third person. Given her profoundly high stature, if she really cared about the "misuse" of her program, she would say so in a nationally available opinion column or in a "60 Minutes" appearance (now there's a venue to expose that the Emperor's New Clothes are a chimera). Or how about a candid interview in Time Magazine? No, if she says in a high-profile venue that her method is a counseling tool and not for hiring or firing, this will demolish the profound financial weight that her program carries, not to mention give great heft to lawyers protecting teachers from arbitrary administrators. No, if she declaratively cites the misuse of her program, this will create colossal headaches for school administrators. No, if she says so, she will lose millions for her books and her Danielson Group enterprise.

Furthermore --unless the Danielson method is merely for providing cut scores for eliminating teachers-- you would think that a corollary to her apparently nationwide method would be some framework for administrators to provide real leadership, namely constructive tools for teacher improvement.
Is it any wonder that at one large administrators' conference --let us say, in the tri-state vicinity of New York City, to protect my source-- it was frankly stated that the Danielson Framework is simply a tool for getting rid of teachers?

The fact that the Danielson Framework is used, and that critical thinking is not directed at pulling the curtain away from the Oz that is Danielson, shows that we are sadly in a time of Emperor's New Clothes-style suspension of critical thinking, head-nodding along with a program whose designer, and whose design, are dubious.

No, Danielson's one-size-fits-all model, spanning all grades, subjects, and student-body incomes and cultures, is invalid. It all goes to demonstrate what the new National Education Association (NEA) president, Lily Eskelsen García, says about the unqualified corporate school reformers:

"[A]ll things are possible to people who don't know what they're talking about."

Read more about this inspiring figure in teacher leadership in my portrait here.

2 comments:

  1. I am writing my dissertation on how the Framework will be implemented here in Maryland, and can't seem to find *any* critique of Danielson's model. Surely someone has something bad to say about it!

  2. Try looking at the previous posts I've written on Danielson, linked at the end of this post (recently revised).

    Also check out these posts describing serious issues in Danielson Framework implementation:

    What Charlotte Danielson saw when the UFT came calling http://gothamschools.org/2011/11/07/what-charlotte-danielson-saw-when-the-uft-came-calling/

    The District 75 Danielson Pilot: CRASH! Burn! Fizzle http://paulvhogan.wordpress.com/2013/03/25/the-district-75-danielson-pilot-crash-burn-fizzle/

    A NYC teacher's observations on how the Danielson rubrics are being (mis)used http://nycpublicschoolparents.blogspot.com/2012/01/nyc-teachers-observations-on-how.html
