I've been working in higher education since the late 1990s and in higher education assessment since the early 2000s. So, it might come as a surprise when I say I'm starting to reduce my use of the word assessment. In the early 2000s I had the unique challenge of working in an assessment office when a provost forbade us from using the words "outcomes assessment." Red-faced and pounding on the table, he insisted that outcomes assessment was an educational fad, doomed to die a death of attrition.
No. My reasons for using "assessment" less these days are not as cavalier or bullish. I've simply found it more accurate and accessible to talk about "evidence" rather than assessment. Assessment is a term that has picked up a lot of baggage in the past decade. Many people point to the Spellings Commission on the Future of Higher Education (2005) as a watershed moment in assessment's history. True, the commission did galvanize many assessment practitioners and scholars and motivate their interests. The commission also painted a picture of the necessity of assessment in today's modern university. That picture of necessity--which seems to have tied assessment to accountability with double knots--is the one I find problematic. It is also one I find peculiar, since assessment has yet to yield the earth-shattering changes that scholars and policy makers (the Spellings Commission included) have long claimed for it.
This is, in my opinion, because when policy makers and legislators press the rhetoric of assessment, they simultaneously press the rhetoric of accountability. Now, I am not one to bemoan accountability. John Q. Public has a right to data on educational performance if even a small (and ever decreasing) portion of his tax dollars is devoted to public higher education, and even if he is not particularly interested in the mounds of data already available. But one thing I hope Mr. Public understands is that no measure of accountability, no matter how grandiose or comprehensively administered, will ever make a student smart. Tests don't make students smarter. Students, faculty, teachers, and advisers, engaged in partnership--in truly generative dialogue--do. In particular, if by "tests" one means any number of commercially sold, standardized measures, these instruments have grown so detached from the modern curriculum that they are hardly an effective measure of 20th-century learning, let alone 21st-century learning. There are notable exceptions to this claim, but in general I have found test companies selling their products while telling professors and teachers not to teach to the tests--all the while knowing that some amount of teaching to the test will occur, and is necessary to ensure a sufficient portion of test takers perform well. And, let's face it, some pretty important decisions are being made using test data, so the pressure to teach to the test is always present in any institution or school that has either gotten on the testing train or been forced onto it in the great academic pissing match we play for institutional or district reputation.
Sitting in the middle of all of these complex ideas is the term "assessment." Last year we asked faculty and administrators participating in their respective versions of the Survey of Assessment Culture to describe any metaphors they had heard describing their institution's culture of assessment. For administrators, bureaucratic metaphors--machine- or production-oriented metaphors--were the prevalent themes. When one considers all of the challenges and pressures administrators face in complying with accountability mandates, one need not ponder long why such answers flow freely. But for faculty, the metaphors they hear are quite different. An overwhelming number of faculty used curse words to describe their campus' culture of assessment. Literally more than half of the 800 faculty participating in our study said they had heard curse words used in relation to assessment on their campus.
So I ask you this. How, exactly, are administrators and supportive faculty supposed to engender any sort of support for a process that elicits such virulent responses? Answers rest in two areas of thought (with many more to come through dialogue, for sure). First, we (by which I mean all educators) must stop using assessment as a proxy for accountability. Stop using terms such as "buy in" to assessment for things that very seldom give faculty any direct benefit. Administrators are quick to point out that accountability is a major focus of accreditation, and that without accreditation (lost, presumably, because faculty refused to provide some form or report), there can be no programs. In the worst-case scenario, I've known this to be true. However, this "Chicken Little" approach does more damage than good, and it leads me to my second thought on how we--in particular administrators--can save assessment from what it has become. Very seldom are our institutions in such dire straits that their mere existence rests on the involvement of just one or two faculty (and that is often all that assessment administrators are hoping for: someone to attend a meeting or provide data on time). Assessment leaders can do much to "depressurize" assessment for their colleagues and, as needed, remind accountability agents that teaching and learning, not reporting and presenting, are higher education's first calling. Assessment conducted with its original, uncorrupted intent is still one of the best things American higher education has going for it. But accountability efforts taken to extremes--particularly extremes of rote compliance with codes and things called "standards" that are anything but expert-derived standards--have corrupted assessment's original intent.
Mike Gunzenhauser (2003) once wrote about how, in the absence of a philosophy guiding educational assessment, accountability agents were able to supply a default philosophy for us, most often relying on the logic that more compliance results in better learning. Most faculty do engage in informal reviews and demonstrations of student abilities. Fitting these reviews into the jargon, models, and "education-ese" of accountability agents is as problematic for faculty as it would be for, say, an accreditor asked to describe the scientific, economic, or artistic theories that faculty work with each day. The important lesson here is that neither perspective is more or less valuable to education. Both cultures, faculty and administrators, must span the cultural chasm if generative change is to take place--if assessment is to regain some of its former glory and usefulness.
So for that reason, at least until assessment turns back to its former self, I've begun to reduce my use of the term. It has simply picked up too much baggage and no longer carries the good-hearted, locally meaningful, educationally relevant sense it had when I first started my career. For now, I've begun favoring the notion of a culture of evidence or a culture of inquiry, though I am sure my transition will be a lengthy one.
One last thought: If we as educators demanded the same level of accountability from accountability agents as they demand of us--if we remembered that most accountability agencies are voluntary organizations of peer reviewers--I wonder how ready those agencies would be to have the magnifying glass turned on them.