The Rise and Fall of Accreditation in America

One of the first things that I had to learn about when I became an academic dean was the world of accreditation. What is it? What’s it for? Why do I seem to spend so much time on it? A recent position paper by The Center for College Affordability and Productivity does a very nice job of summarizing the history and purpose of accreditation in America, along with its greatest challenges, most troubling failures, and the likelihood of significant changes in the future. The report, “The Inmates Running the Asylum: An Analysis of Higher Education Accreditation,” is worth at least a quick skim if you want to understand the accreditation world better. And, if you’re involved (or would like to be involved) in higher education, accreditation is something you should know at least a little about.

The authors begin with a pretty pointed summary of their view on modern accreditation.

If the nation were starting afresh on accreditation, we predict it would devise a radically different system than the one it has become over the past century. Would we have multiple regional accrediting agencies? We doubt it. Would the accreditors be private entities largely controlled by individuals themselves affiliated with the institutions that they certify? We doubt it. Would accreditation largely be an “all-or-nothing” proposition, where institutions are simply “accredited” or “non-accredited” with few distinctions in between? We doubt it. Would an accrediting mechanism be permitted where key elements of the assessment are not available for public review? We doubt it. Would accrediting that sometimes emphasizes inputs rather than outcomes be permitted? Again, we doubt it. In short, there are numerous characteristics of today’s system of accreditation that are subject to questioning and criticism. (1)

The authors do a nice job explaining the four eras in the historical development of accreditation (pre-1936, 1936-1952, 1952-1985, and 1985-present). The most helpful part here was their explanation of how accreditation shifted from being a largely voluntary, self-governing process in the early years to one focused on meeting certain standards in order to be eligible for government funding after the passage of the GI Bill (1944, 1952) and the Higher Education Act (1965). The authors then assess how successful accreditation has been in each era with regard to a number of factors:

  • Quality Improvement
  • Quality Assurance (defining appropriate measures of quality, certifying minimum quality, informing the public)
  • Promoting the Health and Efficiency of Higher Education (preserving historical strengths, promoting efficiency)

In each case, they argue that accreditation practices were generally more effective in the earlier years when accreditation was voluntary and not connected to federal funding.

One of the more interesting aspects of the article was their contention that modern accreditation largely fails because it’s trying to serve two mutually exclusive purposes. First, accreditation tries to promote institutional development. Having just been through an accreditation visit not too long ago, I can attest to the fact that one of the primary emphases is on helping the institution improve at what the institution claims it is trying to do. And, given the diversity of educational institutions, these purposes and goals are determined by each school. So, the school determines the target and the accreditors come alongside to help the school get better at hitting its target. Indeed, one of the reasons that the accreditation process is largely confidential is that accreditors want/need the schools to disclose their areas of weakness candidly so they can facilitate improvement.

That’s all well and good until you realize that a second purpose of accreditation is supposed to be quality control. Quality control isn’t about helping a school improve; it’s about measuring whether a school is performing up to some minimal level of expectation. And, quality control isn’t primarily for the benefit of the institution, but for the benefit of the public. Quality control serves to determine which institutions are performing satisfactorily and therefore should continue receiving government funds for the benefit of society. For quality control to work in this sense, though, it would seem that you need clear standards of acceptable performance that transcend institutional differences and are publicly available.

Somewhat surprisingly (to me), the authors end up arguing that the second function, quality control, is the one that needs to remain the primary emphasis for accreditation moving forward. They make this argument partly from a rather pragmatic perspective. Federal funding for higher education isn’t going away, and as long as it’s around, there will be a need for some kind of quality control mechanism. Since modern accreditation is simply incapable of handling that task, they argue that it needs to be jettisoned and a completely new system put in its place. But, they also think that higher education needs to focus much more on measuring student learning and performance as the primary indicator of success. So, they argue for the creation of clearer, discipline-specific standards for learning that could be used to measure quality across institutions.