NCTQ along with U.S. News and World Report Seeks to Rate over 1400 Teacher Education Programs in the U.S.

Feb 22nd, 2011

By: Brad Olsen | 09:02 AM | Categories: Assessment & Accountability

In letters to U.S. News and World Report’s editor Brian Kelly, various educational researchers, university deans, and teacher education directors have expressed concern about the recent project initiated by the magazine along with the National Council on Teacher Quality (NCTQ) to review and rate the country’s teacher education programs, presumably much as the magazine has been ranking colleges and universities. (For more, see here and here).

 

The national debate generated by this project has multiple facets relevant to California, a state whose many teacher education programs prepare the majority of the approximately 18,000 new teachers licensed in the state each year. Let’s look at a few of the critical issues.

 

First came the puzzle of how NCTQ would treat institutions that do not (or cannot) provide all the requested materials (mostly evidence of inputs such as program policies, length of student teaching, course syllabi, and student surveys). Initially, NCTQ stipulated that any institution that did not submit all the information requested would be graded on the basis of publicly available information, even though that might mark it as failing. Given subsequent feedback that this rigid approach would serve as a form of coercion, NCTQ backed off its proposal.

 

Next emerged methodological concerns, raised by many educational researchers and teacher educators, about the opaque, narrow, input-heavy evaluation design that NCTQ plans to employ in reviewing and ranking the approximately 1,400 teacher education programs, most of them housed in universities. In one letter of concern, seven of the eight education deans and directors of the University of California system wrote this:

  • In particular, we are concerned about the quality and “universality” of the standards you use, the measures identified to provide evidence of meeting the standards, and the methodology of data collection, ratings, and the formulas for assigning letter grades to program elements. Issues of measurement reliability and validity are paramount in an evaluation process that is, and is perceived to be, fair and impartial. In addition, we believe that it is important to capture the effectiveness of prospective teachers in the classroom.

This statement sounds balanced to me. First, I’m all for quality control of something as important as how we prepare our children’s teachers, and so I accept the need to examine the effectiveness of our teacher education efforts. But second, I also know how important it will be to ensure that we do it right. The stakes and the politics are simply too high to get it wrong, especially in the current economic climate, where finding public entities to financially decimate is de rigueur. I also know that something as complex and socially contested as teacher education requires a balanced, reliable study design to adequately collect and analyze the right kinds of data. These data will be crucial in how we view, evaluate, and improve teacher education programs. Given that NCATE already reviews and rates teacher education programs, what is to be gained by this new initiative?

 

Currently, universities and university teacher education (and teachers' unions too) are under widespread attack by many politically minded folks, including, it must be pointed out, several of the people on NCTQ’s advisory boards. NCTQ’s two boards include well-known critics of university teacher education such as Chester Finn, Fred Hess, Wendy Kopp, and Michelle Rhee. This suggests that the present initiative may be as much about politics (and, for U.S. News and World Report, selling magazines) as it is about conducting even-handed reviews of the places that prepare most of our nation’s teachers.

 

But let’s for the moment accept that all of us want the same thing: to accurately assess the quality of our teacher education programs. And let’s take NCTQ at its word: it wishes to be fair, methodologically sound, and reliable in its review. If so, I offer the following three suggestions to help assure a win/win situation for all entities and for the millions of school children who could gain from this effort.

 

1. RE-EVALUATE SIMPLE INPUTS vs. GREATER COMPLEXITY

 

Do not focus primarily on inputs and simple data (what NCTQ calls “the design quality” of the education programs it will rate). Inputs do not tell the whole story. Neither can we rely on outputs only (akin to what some cities, including LA, and some states such as NY and NJ are doing with their value-added analyses of teachers and schools by way of student test scores). These indicators have value, but not when used alone. Instead, emphasize the actual interactions between beginning teachers and their students in practice. As the UC deans and directors wrote in their letter, “We believe that it is important to capture the effectiveness of prospective teachers in the classroom. We wish to argue strongly that any effort that attempts to evaluate teacher education programs focuses as much on what students learn as what they are taught.”

 

This focus on the situated teaching-learning interactions strikes me as highly useful. It’s influenced by the inputs or ‘design quality’ of the programs, yet highlights the actual work of teacher preparation. It accepts that student teachers are not blank slates who only absorb what their programs offer. Research shows that a teacher education experience is more nuanced, complex, and dynamic than what a sole focus on program inputs might convey. And it accepts that the contexts in play—the teacher education sites, the schools in which novices begin their careers, the various players involved—mediate the kinds of teaching enacted and the teachers produced. This focus on the interaction between teaching and learning accepts that learning to teach is complex work, not easily reducible to inputs and simplistic data points alone.

 

2. DEVELOP A MORE ROBUST STUDY DESIGN

 

How about a study design in which five related data analyses are integrated to answer a two-part, grounded theory research question: How can we capture what is important in evaluating teacher education programs, and how do participating programs fare along the established measurement range?

 

A. Collect and examine the inputs of ‘design quality.’
B. Identify and study the practices of teacher education programs.
C. Solicit and code the open-ended opinions and job contours of teacher education graduates by surveying most four-year-out and seven-year-out graduates and conducting follow-up questionnaires with selected sub-groups.
D. Collect redacted student teacher performance assessments conducted by the teacher education programs.
E. And, yes, invite in some value-added analyses to offer their useful but, taken alone, incomplete perspectives on how well program graduates are teaching during years 4-7.

 

3. LET CALIFORNIA LEAD THE WAY

 

I realize that expanding the rigor of the study creates some challenges in this depressed economic climate. But finding a way to encourage and support a state-of-the-art, mixed-methods, integrated study of the quality and effects of teacher education programs in California, and then carefully reporting the results, sounds far more effective to me in the end. U.S. News and World Report could even publish an issue on it. I’d love to see California take the lead on this project and offer a model for other states to consider.

 

Maybe those of you who are more knowledgeable than I in these kinds of mixed-methods evaluation studies can offer your perspectives here. Hope to hear from you!

 

 

 

Comments

Elizabeth,

That's a useful question you ask. I think it's hard to make direct comparisons between how business evaluates performance and how teachers might be evaluated, because the inputs and outputs in most business contexts are more straightforward. Manufacturers calculate how much money and time it costs to produce x number of widgets. Salespeople are often evaluated on the amount of business they bring in during a financial quarter. Of course, it's not quite this simple, and businesses are always employing systems of supervisor review, peer- and self-evaluation, and other more open-ended models. But education is a different beast altogether. We can't agree on what a successfully educated student looks like or what the primary purposes of education are. Students enter schools each day with so many facets to their learning landscape, and multiple factors always mediate how classroom teaching and learning unfold. Teachers, too, are multidimensional professionals whose work is affected by many variables, from their professional training and teaching styles, to the leadership of their schools, to the resources and ethos of their community, to the curriculum and assessment procedures their district adopts, to the student populations they serve. All of this makes straightforward evaluations of teachers, teaching, or teacher preparation programs complicated!

Elizabeth,

PACE published a policy brief on value-added measures last October. You can find it here: http://pace.berkeley.edu/2010/10/18/value-added-measures-of-education-pe.... One of the reasons we've moved toward more complex ways of evaluating teachers is that the apparently straightforward ways don't give us very much information. Value-added measures have their problems, but using them together with other kinds of measures can give us a fuller picture of how teachers are doing.

I don't really understand all the challenges of value-added, but it seems to me that assessing teachers is just getting more and more complicated. How do other businesses do this? Why can't teachers/teacher education be evaluated in more straightforward ways?