It's late and I'm tired but unsleepable, and I spent the last 2 hours doing something I hate, so I will tell you about #assessment in #highered (in the USA). I'm extrapolating from personal knowledge of 2 universities' practices and hearsay about a few other places.
It's bullshit. Much of it, anyway. This is not an exaggeration.
Accrediting agencies, university systems, and other bodies want to see assessments. Administrators (presidents, provosts, deans, and the increasing cloud of quasis around them) want to see assessments, too. The big problems I see (caveat: I'm wrong sometimes) are embedded in the fact that (contrary to popular belief) almost all American public colleges and universities are not controlled by professors; they are authoritarian institutions controlled by suit-wearing, corporate-cosplaying middle-managers. This leads (because reasons) to a management-vs-labor dynamic.
For administrators, assessment is not about understanding processes or outcomes; it is about control of the university (especially the faculty, who tend to get uppity and think that they should have a say in things just because they know stuff about stuff) and career management. The last point is not remotely independent of the first, BTW; higher ed admins' career options are impacted heavily by how hard they knock faculty heads.
Result: assessment is not about assessment at all, but anyone who says this out loud runs headfirst into authoritarian power games. Institution-level assessment is about power.
One big area of assessment is the ubiquitous General Education program (i.e., the "liberal arts" curriculum in which engineers have to take a philosophy class and aspiring writers have to take a math class). The schools of which I have knowledge all do the following:
- Revamp their gen ed program every 7 years or so
- Assess the new gen ed program to see if it's better than the old one
- Conclude that it is (100% of the time)
General education programs theoretically serve university "mission statements" etc., which have become so vague and stuffed with business-speak in recent years that they are nearly meaningless. However, they still tend to have some language about "success" or "skills" or "critical thinking" or similar. These things can be assessed. Perhaps a good way to demonstrate how assessment works is to show how I tried to influence assessment of these things at one institution.
I'm an Assessment Person. I'm not the most skilled and knowledgeable psychometrician in the world, but I am a psychometrician. I have a PhD that says "Psychology and also statistics with a focus on stuff like psychometrics or whatever." This, I have found, makes me more qualified to do all things educational-assessment-related than 99% of other employees at the average American college. Of course, as I figured this out and my stats-specific imposter syndrome faded, my swelling head bonked directly into an unspoken but firm rule of administrators at universities: Never let a faculty member contribute to anything of operational importance.
After a year or two at a particular school I volunteered to be on the Gen Ed Assessment Committee for our brand new gen ed program, which was hammered out by dozens of faculty over three years or so, with administrators over their shoulders and fingers on the scales at every turn. Well, I know some things about how to assess stuff when human behavior is involved, so it was a cool gig.
What followed was a year of meetings with three other faculty and one vice-provost. The VP was the chair of the committee and the rest were appointed, not elected. I spent many hours parsing our committee's charge, the university's mission and values statements, and the voluminous literature about educational assessment. I prepared briefs, made suggestions, etc.
You see, it's pretty goddamn simple (not the same as easy, but not that hard): If your gen ed program's mission statement says it will increase critical thinking in students, you get a fucking critical thinking assessment (there are a few pretty decent ones) and you fucking give it to the fucking students. You can do a longitudinal study, assessing students at various points in their college experience. You can do a cross-sectional thing where you assess a bunch of 1st, 2nd, 3rd, and 4th year students all at once. You can get fancy and do a cross-lagged design. You can get picky and weird with the methodology, but it's not that fucking hard.
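To show just how not-hard: here's a minimal sketch of the cross-sectional version in Python. Everything here is hypothetical (the file name, the columns, the idea that you already gave some off-the-shelf critical thinking instrument to students in all four years), but this is roughly the entire analysis:

```python
# Minimal sketch of a cross-sectional gen ed assessment, assuming a
# hypothetical CSV of scores from some standardized critical thinking
# instrument. File name and column names ("year", "score") are invented.
import pandas as pd
from scipy import stats

df = pd.read_csv("ct_scores.csv")  # one row per student: year (1-4), score

# Compare mean scores across the four cohorts with a one-way ANOVA.
groups = [g["score"].values for _, g in df.groupby("year")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Simple linear trend: do scores rise with years of gen ed exposure?
slope, intercept, r, p, se = stats.linregress(df["year"], df["score"])
print(f"trend: {slope:.2f} points per year (p = {p:.4f})")
```

That's it. A first-year grad student could run it. The longitudinal and cross-lagged versions take more planning, not more genius.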
At the time, I didn't understand the problems with this approach. They included (but probably were not limited to) the following. This kind of assessment...
- Fails to occupy hundreds of faculty members for dozens of hours each, every semester
- Doesn't provide buzzword-intensive "initiatives" to go on the VP's resume
- Allows a few faculty members to gain social status by demonstrating their expertise
- Allows faculty members to have a meaningful say in university bidness
- **Might show that the new gen ed program is not better than the previous one**
So I spent a year working my ass off, not quite understanding (but beginning to suspect) why all my suggestions (e.g., "We want to know about skills. What if we measure skills?") were ignored or sometimes pointedly shot down. After a year, the committee was disbanded with no final report and no meeting minutes (another suggestion that got me some surprisingly hostile responses). There is no record, as far as I know, of anything we did. A year or two later, that same administrator announced the "faculty-led" assessment system we have now, which bears no resemblance to anything we discussed in that committee, let alone my suggestions.
The system we have now is this:
Every instructor of a gen ed course comes up with their own assessment of 3 to 6 learning outcomes. The learning outcomes were developed by a bunch of faculty committees, sort of. They're high-level and don't have a single obvious assessment process (e.g., "student utilizes relevant knowledge sources to evaluate claims in discipline", etc.). They are all basically OK, but literally every faculty member makes up their own assessment. It could be a test question, a class project, a student interview, a portfolio, whatever.
Then every instructor must evaluate every student in every gen ed class (hence the many hours) on each of those learning outcomes, and score them on a rubric which was suggested by... someone, then voted into existence... or maybe just mandated. It is based on a well and truly debunked theory of learning, and it has 4 categories: did not meet outcome, approached outcome, achieved outcome, and exceeded outcome.
Side note: After this assessment process was dropped on us several years ago, there were a couple of months of intense discussion about how and who, etc. After the contention, the University Senate and administrators (who are goddamn members of our Senate for reasons that continue to elude me) agreed that no students would be identified in the assessment process, nor would any instructors or course sections. Everything would be reported in broad categories of courses, with all identifying information removed. This is because (see everything above) faculty who have been at a college/university more than a couple of years do not trust administrators. If you give them data, they will use it to fire, marginalize, or just be shitty to faculty they don't like, or in pursuit of whatever buzzword career booster is popular among provosts and deans that semester.
Great. It sucked but it was a compromise.
Then an admin announced that we had spent something like $50K/year on a "solution" to collect these data from faculty. Sadly, the "solution" required us to enter all student names attached to all scores, and our own names attached as well. When some of us mentioned the previous agreement, we were told that we were harming the university by wanting to make this big (for us) financial investment worthless. When others of us (OK, me) mentioned that every part of the assessment could be done with an Excel workbook or even pieces of paper slipped under the provost's door at the end of the semester and tallied up by a secretary, we were told that we clearly didn't understand assessment.
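For the record, here's roughly what the "Excel workbook" version looks like as a few lines of Python. The file and column names are invented, but the output is exactly the anonymized, category-level tally the Senate agreement called for: counts per rubric category, per learning outcome, per broad course category, no student or instructor identifiers anywhere.

```python
# Sketch of the anonymized tally, assuming a hypothetical CSV where each
# row is one (course category, learning outcome, rubric rating) triple
# with all identifying information already stripped.
import pandas as pd

RUBRIC = ["did not meet", "approached", "achieved", "exceeded"]

df = pd.read_csv("gened_scores.csv")  # columns: course_category, outcome, rating

# Count ratings per outcome within each broad course category.
report = (
    df.groupby(["course_category", "outcome"])["rating"]
      .value_counts()
      .unstack(fill_value=0)
      .reindex(columns=RUBRIC, fill_value=0)
)
print(report)
```

A secretary with a stack of paper slips and an afternoon could produce the same table.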
So now we have a tedious, laborious, overly complex, data-harvesting online platform to do the job of a single Excel workbook. It costs tens of thousands a year while we are told that we might go bankrupt at any minute and we can't have copier paper for exams. Our "assessment" involves a bunch of outcomes that were never evaluated for validity or effectiveness, assessed by hundreds of people who have (a) no expertise in creating valid assessments, (b) instructions that guarantee very low validity, and (c) fear-based motivation to inflate scores as much as possible. And we all spend a dozen or two hours a semester creating the assessments, scoring them, entering them in the cumbersome system, and dealing with dozens of emails about how to do it.
I hate some parts of my job, and this is one of the hateyest. I hate being forced (literally on threat of unemployment) to participate in this farce every semester. I hate being gaslit (gaslighted?) about the history, validity, and need for this process. I hate watching ritualized authoritarianism on display: Do this thing and pretend it makes sense and shut up about what's actually going on. I figure a junior faculty member who never took a psychometrics course and who didn't understand how universities work would feel OK about this; it has the appearance of assessment.
I just criticized fiction authors for not knowing how to end a story. I don't, either, I guess, because The End.
#highered #professor #assessment #psychometrics #bullshit #power #labor