Reports on college student learning leave out passion

Thursday’s David Brooks column talks simple-mindedly about wanting colleges to be held accountable for student learning, somewhat misstates a key claim of Academically Adrift, and is written in a generally David Brooksish style, which should only be alarming if you were expecting a Krugman column. From my position deep in the territory controlled by the Southern Association of Colleges and Schools, I have read some responses that seem … well, not exactly on point. Neither was Brooks’ column, but I rarely expect clarity from newspaper columnists.1

Here are some of the fundamental limits on holding higher education institutions “accountable” in the way politicians have imposed test-based accountability on K-12:

  • Because college students take different programs, assessing progress in those individual programs in a rigorously comparative way is impractical for all but the largest majors (psychology, business, maybe a few others).
  • If there are common elements of what we expect students to learn in college, the options for assessing that knowledge or set of skills are in their infancy, at best.
  • Many of the public and private goals for college are either noncognitive or otherwise difficult to assess.2
  • Students progress through college at very different speeds and often enroll in or take courses at two or more institutions in their undergraduate years.3
  • Not only do colleges and universities have different levels of resources, but they frequently have different missions in terms of serving student populations.4
  • The common rhetoric that college is essential for individual economic advancement, along with cost-shifting from public funds to students and families, implies that (because they will be receiving the primary benefits of college) students are responsible for their own costs and effort.
  • Corollary of the last point: as college is seen as a private rather than a public good, students and families make choices based on their perceived desires, which may be far from specific cognitive outcomes.

The way that regional accreditation agencies have responded to the accountability discourse has not helped much. In the South, SACS members must select program goals, identify objective measures for those goals, report on the measures, and “close the loop” by discussing program improvement. It is a classic iterative program cycle, perfectly rational and all too likely to do little beyond occupying people’s time. It’s not because of rubrics. SACS does not require the creation of rubrics to evaluate the fuzzy and complex world of advanced student work (yes, SACS wants assessments for doctoral programs).5

The problem is the poor fit between a very generic process and the organization, work patterns, and passions of individual disciplines. At almost any level, the process can turn into paper-pushing for any of several reasons, and then the whole point is lost. Three degree programs in my department have national accreditation/approvals specific to the programs, and the specificity in those reports is orders of magnitude above what SACS expects. There is something at least mildly disorienting about the generic questions when your mind is at the level of the very specific. No one has yet phrased the closing-the-loop report as the program equivalent of an elevator pitch, but maybe it belongs there (and the forms need to reflect that).

At its root, even if we could get programs to think about SACS assessment as an elevator pitch, the problem is that the SACS-assessment elevator pitch is largely irrelevant to prospective students and the real choices made about colleges: “In the last year, we know that well over 80% of students competently analyzed primary sources in a 3-page paper assignment.” You know that no one chose a college based on information like this. It just doesn’t happen. What got me excited about majoring in history at Haverford was walking into Special Collections, thinking about working with archival documents, and talking with history majors who had worked or were currently working with archival materials. What sells Evergreen State College to many prospective students is listening to current students talking about the interdisciplinary programs they are in and the work they have been doing. What gets some potential students excited about attending Reed College is being walked into the tower of the library with hundreds of student theses and being invited to take any off the shelf and read through it. What gets some potential students excited about attending George Washington University is being walked through the business school and imagining oneself in a small seminar analyzing current business cases. And, yes, what sells many potential students on Ohio State University is the thought of painting one’s face and getting smashed on Saturday afternoons in the fall. What sells individual colleges and programs is the dopamine response that comes from students’ and parents’ internal thoughts of, “I could imagine [my child] studying here and being happy, and I like the idea.” Sometimes it’s a little less ambitious: “I could imagine [my child] completing a degree here without going crazy or $100K into debt, and I need that.” No one has ever chosen to attend a particular college or university because of SACS assessment data.
Even if admissions offices do not see their job as making dopamine receptors light up on campus tours, they know their job is to encourage prospective students and their families to fall in love with a college or university. That is why parents generally do not ask, “How much do students here learn? How do you know?” (Brooks’ suggested questions), because “learn” is less concrete than individual subjects and less of a draw than “fall in love with their major.”

The hidden bet of the Lumina Foundation and the Tuning projects is that one could align general statements of academic goals with the key passions of a discipline. Oh, I know that all of the project documents refer to student learning outcomes and discipline-based expectations, and while I have some concerns about the national history Tuning project, creating a list of objectives tied to disciplinary conventions is doable in subjects such as history and physics. But while a better-grounded set of program goals is a good thing for many reasons, assessing progress towards those goals omits many reasons why students love or hate specific subjects. How could an assessment of college be complete without the passion we hope students will have?6


Notes

  1. If you disliked Brooks, you would positively hate John Stossel. Probably do, in fact. []
  2. Also true for K-12 schools. []
  3. A less intense equivalent is true for K-12 schools. []
  4. This is much more intense than in K-12: if there is anything USNWR rankings have taught us, it is the perverse incentives of rewarding an elite mission with even more prestige. []
  5. Rubrics will often serve the purpose, at least for paper compliance. But that’s not what anyone really wants. []
  6. No, no solutions proposed… yet. []

7 responses to “Reports on college student learning leave out passion”

  1. Mark Pearcy

    Excellent post, and precisely what I was thinking when I read the Brooks column, though so much more erudite. Great points throughout!

  2. CCPhysicist

    On your second bullet point, didn’t your state abandon an attempt to evaluate college-level skills uniformly for all students because (the version I heard) it was a joke for all students except athletes?

    Where are you in the SACS reaffirmation cycle? I ask because your statement that “SACS does not require the creation of rubrics” does not match what we have been told about (by people attending innumerable national meetings) what constitutes acceptable assessments.

    And I agree 100% with your assessment of the assessment process for most students. Speaking of which, was your Quality Enhanced? Does SACS even care if it was, now that it has a new bee in its bonnet? You could blog all year on that, I think.

  3. Glen S. McGhee

    I agree that SACS does not require rubrics as part of their QEP. According to SACS, USF was reaffirmed in 2005 and is up again in 2015.
    http://sacscoc.org/details.asp?instid=64480

    But as an academic, Sherman, you have slipped into thinking that learning only occurs in schools, in classes, etc. This never was the case, and hopefully, never will be.

    1. CCPhysicist

      We are talking about Outcomes (at the course level) and their Assessment. A QEP is old hat, although even a QEP does have to have measurable outcomes (hence measurement, hence rubrics of some sort or that process is as meaningless as it seems to have been in retrospect).

  4. Glen S. McGhee

    Measures and rubrics are not the same thing. The SACS resource manual on accreditation doesn’t even mention “rubrics,” but there is a lot of discussion about “evidence-based” learning. Go figure.

    1. CCPhysicist

      If all you know is what is in their manual, you know essentially nothing about what is actually required to have your accreditation reaffirmed. One single clause in there describes something that has added about 10 hours of work for each class I teach, and that doesn’t count what has to be done each semester to get results in just the right box in some form.