NCTQ staffers cannot write a focused, relevant Findings section

Last week the advocacy group National Council on Teacher Quality (NCTQ) released and publicized ratings of the final-internship practices of 134 college-based teacher education programs, ratings that were covered broadly by national newspapers.

What I find intriguing about NCTQ here is the idiosyncratic nature of the judgment. Inside the report there's a lot of … well, it's close to bragging about NCTQ's internship standards in contrast with the expectations in the guiding documents of the National Council for the Accreditation of Teacher Education (NCATE), which NCTQ generally labeled as either nonexistent or nonmeasurable. The "Findings" section gives percentages of rated institutions for various standards, but those data are never compiled in a single place in the main report (they show up well buried in the appendix), and some of the more obvious patterns are discussed only in passing, where they appear at all:

  • NCTQ judged that all rated programs require a final internship of at least 10 weeks, and almost all require that internship to be full-time.
  • NCTQ judged that most of the rated programs require interns to fulfill the full range of the classroom teachers' responsibilities. 
  • NCTQ judged that the vast majority of the rated programs require a minimum amount of experience for hosting (cooperating) classroom teachers.
  • NCTQ judged that only a minority of the rated programs control the selection of hosting (cooperating) classroom teachers, let alone select teachers who are both effective with students and effective at mentoring interns.

Given that first-cut set of findings based on the five key standards NCTQ asserts, one might expect NCTQ to shout from the rooftops, "Give internship programs the right to select cooperating teachers, and then hold them to selection standards!" That, however, is not what the report highlighted as "findings." Instead, much of the section labeled findings focused on other matters: the number of elementary education students, study-abroad internships, whether internship calendars align with elementary school calendars, the alignment of internship evaluation instruments with the goals stated in internship syllabi and other program documents, and the qualifications and involvement of supervisors (i.e., faculty who visit and evaluate interns in schools). All of those are interesting subjects, but they are mostly beside the point compared with what one could conclude from NCTQ's own attempt to rate 134 programs against its base standards.

Then there's the following buried on p. 12 of the appendix:

Institutions were not provided with the methodology we would use to rate institutions against our standards in advance of our solicitation of materials: The fact that we did not provide institutions with our standards’ ratings methodologies in advance of our initial solicitation of materials caused some consternation. Institutions indicated that they could have been more efficient in providing materials in response to our solicitation of materials if they had known the ratings criteria in advance. Our rationale for not providing ratings methodologies in advance is that for many standards doing so could have biased the nature of the materials provided. [emphasis added]

Let me try to apply this to my own teaching: would I be justified in withholding a written guideline for scoring (i.e., a rubric) with the justification that if I let my students know what I was looking for, they would try to meet my expectations and that's a bad thing? This looks remarkably like a gotcha standard, and it is one I bet Kate Walsh would condemn if I or any of my colleagues practiced it. 

I could go on. Consider the following claim, which goes well beyond any evidence presented in the paper: "Institutions lack clear, rigorous criteria for the selection of cooperating teachers—either on paper or in practice" (p. 25; emphasis added). Since the review of 134 institutional programs relied almost entirely on paper documentation, with a total of five sites visited in person by NCTQ staff, it is hard to see how the report could make claims about selection practices beyond saying that the environment was not conducive to them.* But the broader pattern concerns me more, and this report bodes ill for the larger rating of teacher education that NCTQ is ginning up with U.S. News. Disclosure: My college is not one of those whose teaching-internship administration was rated by NCTQ, but we will be a target of the giant NCTQ rating exercise for teacher education either later this year or sometime in 2012, and the entire Florida state university system has decided not to cooperate beyond fulfilling its obligations under Florida's public-records law. That was a decision taken at the system level.

* I think a written set of guidelines is important, and I suspect that the speculative conclusion regarding practice is true for at least a large minority of undergraduate elementary-education programs, but the report does not provide persuasive evidence to support the claim, and neither Kate Walsh nor I should make such definitive claims without something more than my hunch and her five site visits.

4 responses to “NCTQ staffers cannot write a focused, relevant Findings section”

  1. Glen S. McGhee

    “Our rationale for not providing ratings methodologies in advance is that for many standards doing so could have biased the nature of the materials provided.”

    Maybe the NCTQ folks understand these institutions better than you think they do.

    And it's not just the NCTQ.

    Isn’t this like asking a Law School how many of its graduates are UNemployed?

    Just look at these candid remarks from NALP Exec Director James Leipold about why they never ask Law Schools that question: “If we asked … they wouldn’t do it.”

    Same issue. Maybe the NCTQ is just being realistic about the dangers of asking schools and programs hard questions.

    Besides, you shouldn’t ignore the discontinuity between analysis at the level of the classroom and analysis at the level of a fully-institutionalized program. Better to explain why they have something in common rather than to suppose that they do.

  2. Glen S. McGhee

    “If NCTQ is asking colleges of education to align the stated goals of internships to evaluative mechanisms (one of the issues discussed in the report) and then to state in the appendix that they do *not* want to be held to the same standard of transparency, that gap is inconsistency at best and rank hypocrisy at worst.”

    But this “accountability gap” is in no way worse than what you find generally in higher education self-reports. That was the point of the Law School reference. Nowhere are the high standards of research transparency (as codified by A.I.R. — did I get that right?), which are the gold standard for scholarly research, applied to the self-reports of institutionalized schooling.