Most recently I wrote about the Federal College Scorecard. I stated that I did not believe that the Federal Government should get into the business of rating colleges or providing outdated or potentially misleading information. Politicians and their appointees should not be allowed to rate any public or private service, especially if they are eligible to receive campaign funds from professional associations that include the schools as members.
But it is time for an independent college rating service, delivered by a research firm such as Consumers Union or J.D. Power. These firms have the means to collect data on every college as well as the publishing vehicles to present it. There is no doubt that the potential paid subscription base is there among counselors, educators, parents and students, or that a guide from either firm would do well at the point of sale in bookstores and on newsstands. Like U.S. News and the Princeton Review, these publishers know how to sell magazines and build brand awareness.
Educators come down hard on U.S. News. Yet at the same time they cooperate with the magazine by sharing their data. There is no reason why they would not share data or survey results through another independent party such as J.D. Power or Consumers Union.
How would such a college rating service work?
One problem with U.S. News is that its college rankings are based on an outdated concept called the Carnegie Classification. When you go through the magazine, you see colleges grouped into categories such as National Universities and National Liberal Arts Colleges.
The problem with these classifications, and with the rankings in general, is that they compare schools that have become quite dissimilar to each other. For example, Brandeis University and Clark University, both in Massachusetts, are considered to be National Research Universities, although neither has more than 3,500 students and neither grants pre-professional undergraduate degrees. Bucknell, which also has approximately 3,500 students and grants degrees in business and engineering, is considered a National Liberal Arts College. While I do not know how many college-bound students cross-shop Brandeis and Clark vs. Bucknell, I do know that Bucknell’s academic mix is more similar to that of a larger university.
Which leads to a point: It is almost impossible to classify colleges in such a way that similar schools can be “ranked.”
A college rating from an independent body has to consider customer satisfaction above all else, just as consumer guides to airlines, banks or hospitals do.
How do you survey customer satisfaction with colleges?
Like airlines, banks or hospitals, higher education is a service business. Customers are shopping and purchasing services performed by people.
Aside from academic classes, a full-time college student interacts with several services on campus. S/he deals with academic advisors, residence life professionals, the dining hall and more. When you visit college review sites such as StudentsReview and Unigo, you quickly see that students are perfectly capable of delivering an in-depth viewpoint on any of these services. They are certainly capable of filling in a bubble on a multiple-choice survey.
When would you survey customer satisfaction?
The best times to survey students are at the beginning of the sophomore year and six months after they have graduated.
At the beginning of the sophomore year, students would be surveyed about their freshman experience. Not only have the sophomores finished their freshman year, they have committed to a second year, and possibly to a major. The freshman year is also the time when students make the most use of a school’s services. By the sophomore year they know enough to decide which services, especially the residence halls and dining halls, they will continue to use.
The six-month-out survey would cover satisfaction with the college experience after the freshman year, including services such as career development. Most six-month-out surveys also ask whether the graduate continued his or her education or found employment. It’s also fair to ask graduates about their major: their satisfaction with the courses, how well the major prepared them for work, and whether they would choose that major again.
Is it possible to use only one survey?
Educators might argue that there is no “one size fits all” survey of student satisfaction because every student’s reasons for attending a college are different. But those differences can be addressed within a single survey. For instance, some schools have very little residence life because most of their students are commuters; in those circumstances a school could opt to exclude questions that are not relevant.
How would schools use these ratings?
In their publicity, schools would highlight the positives (‘A’ rated residence halls, for example). The survey should also guide a school to the items that mattered most to its students, as well as to its most-chosen academic programs. One would hope that a school would address the more glaring negatives instead of trying to play them down.
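To make concrete how survey responses could roll up into the kind of ‘A’ rating described above, here is a minimal sketch. The 1–5 response scale, the service names, and the grade cutoffs are my own assumptions for illustration, not part of any actual rating methodology.

```python
# Hypothetical sketch: rolling per-service survey responses up into letter grades.
# The 1-5 scale, the service names, and the cutoffs below are assumptions.

def letter_grade(avg_score: float) -> str:
    """Map a 1-5 average satisfaction score to a letter grade."""
    if avg_score >= 4.5:
        return "A"
    elif avg_score >= 3.5:
        return "B"
    elif avg_score >= 2.5:
        return "C"
    elif avg_score >= 1.5:
        return "D"
    return "F"

def rate_services(responses: dict) -> dict:
    """Average each service's survey responses and assign a grade."""
    return {
        service: letter_grade(sum(scores) / len(scores))
        for service, scores in responses.items()
    }

# Example: residence halls score well, career services poorly.
ratings = rate_services({
    "residence_halls": [5, 4, 5, 5],   # avg 4.75
    "dining_hall": [3, 4, 3, 4],       # avg 3.5
    "career_services": [2, 3, 2, 2],   # avg 2.25
})
print(ratings)
```

A real rating service would weight questions, handle non-responses, and normalize across campuses, but the basic aggregation from bubbles on a survey form to a published grade is this simple.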
Would college selectivity matter?
Too many college rankings are based on the demographic profile of the freshman class and the percentage of applicants a college accepted or denied. No other service is rated this way. It implies that a service is “better” because most of the people who wanted to use it never got the chance. Does that make any sense?
However, suppose colleges moved to a customer satisfaction and service-based rating. If parents and prospective students found that some schools do better at actually helping their students and alumni, then seats in those freshman classes would be in higher demand. Being known for providing better services would make a college more selective. This might be seen as either a public relations bonus or an unintended consequence for college admissions. But one would also hope that a good performer would want to get better, and possibly find ways to serve more customers successfully.
Selectivity is as much a problem as it is a reputation builder for a college. Ivy League schools, for example, have not significantly grown their freshman classes over the past ten years, although demand for a seat in the class grows every year. One might argue that these schools are trying to protect the quality of their academic programs by not making room for more freshmen. However, most of the nation’s best public universities have increased their enrollments while their freshman retention and graduation rates have actually improved. It’s fair to say that the public schools have gotten better at graduating undergraduates while meeting their obligations to the citizens of their states. It is also fair to take the leap and say that the state schools improved the quality of the educational experience while the Ivy League schools did not; they already believed they were doing a good job.
How might such a college rating service change college admissions?
It would increase the popularity of many schools that are probably not household names among high school guidance counselors, parents and students. At the same time, it would give college administrators more relevant feedback that they can use to improve the quality of the student experience. Too many colleges compare themselves to an “aspirant” class even though their communities are different. This leads to misplaced priorities in spending that are mocked in the media, even if they are appreciated by the community. A customer satisfaction based rating would help a college better consider the customers it already has as well as those it is most likely to attract.