Tuesday, July 6, 2010

Whiskey Wednesday: Why I Don't Rate Whiskeys

I have often been asked why I don't rate whiskeys. After all, nearly every whiskey blogger, journalist, author or vlogger rates whiskeys. There are 100 point scales, ten point scales, star systems, letter grades, you name it. I understand why people like ratings. They are simple and straightforward, but they've never really appealed to me, and here is why.

1. Ratings Are Linear While Taste is Multi-Faceted

Ratings are linear by nature. Everything is compared to everything else on one single, linear scale. This is, of course, what makes them useful to consumers. I know that a whiskey rated 96 is superior to an 88, end of story. But a single rating fails to take into account the experiential nature of whiskey and the multitude of possible responses. For instance, I recently reviewed Buffalo Trace White Dog. I would probably never reach for this whiskey to sip in my leisure time, but I think it provides an invaluable window into the Bourbon aging process and is a must-try for any Bourbon lover. If I merely rated it on my view of its pure quality as a spirit, it would receive a lesser grade, but that lesser grade would not reflect the academic interest in experiencing it, the intellectual joy that comes from the experience. Some whiskeys taste amazing; others are worth trying for their unique flavor profiles, their experimentation or other qualities. Unfortunately, there is no room for such subtlety in the world of linear ratings.

2. Taste is Subjective

Of course taste is generally subjective, and each person has different likes and dislikes, but I don't see that as a flaw, and that is not the phenomenon I am referring to here. What I'm talking about is that each of our individual tastes is subjective and can change depending on any number of environmental factors. You can taste everything blind, at the same time of day, in the same temperature controlled room, but even then, there are factors that are hard to control. The other day I was at a tasting of seven single malts. Five or six of them were big on sherry and one was peated. The peated one stood out from the crowd, and I likely had an inflated sense of it because of that. However, if I had tasted the peated whiskey alongside some of my favorite highly-peated whiskeys, I probably would have had a lesser sense of it. You could try to control for this by tasting only one whiskey per sitting, but forgoing comparisons has its own issues: you are tasting the whiskey out of context, without the helpful benchmark of other whiskeys. Some tasters taste in a variety of settings, which is probably the best practice (and I always taste a whiskey at least twice before writing it up, just to confirm my notes), but the subjectivity of each person's individual palate makes ratings extremely subjective.

3. Consistency is Hard

It's fairly easy to taste a group of single malts in one sitting and rank them from best to worst. It's harder to do that over several sittings. Was the second best whiskey we had today better than the best we had last week? It gets even murkier if you are tasting hundreds of whiskeys per year. Was the Linkwood you rated a 92 seven years ago really two full points better than the Wild Turkey you rated a 90 last week? I'm fairly skeptical of anyone who makes that type of claim without doing side by side tastings. I pride myself on my own tasting consistency, but taste memories, like all memories, can be difficult to rely on.

4. One Hundred Points is Too Many

The 100 point spread seems scientific, but it's pseudo-science (and yes, a scorer who uses a ten point scale with one place past the decimal is really using a 100 point spread). What's the margin for error when you are comparing ratings over several years? Is there really any difference between a 76 and a 77? I prefer a more general description to a number. Was it bad, good, great or one of the best? Any distinction beyond those terms is unnecessary.

5. The Apples and the Oranges

Unlike whiskey bloggers, most food bloggers don't give out ratings. Indeed, many old-media food critics don't give out ratings either, and those who do are usually rating only a limited selection of high-end, formal dining venues. One of the reasons is that it is very hard to compare things as different as a dinner at a high end French restaurant, a pastrami sandwich, a bowl of ramen and an In-n-Out double double (animal style, of course) on one rating system. I would rate a Langer's pastrami sandwich as one of the best bites of food around, but does that mean I should rate it higher than the carte blanche tasting menu at Melisse? There are very few Canadian Whiskies I like better than my least favorite single malt, but what if I taste something that I feel is the best any Canadian Whisky can be? Does its score reflect the "best of class" showing or does it lose out because I find Canadian Whisky overly light and sweet?

As noted above, most food bloggers don't use ratings systems and I've never had anyone ask why I don't rate food using a point system, but spirits (and wine and beer) seem to fall into a different category. People just seem to want a score. Similarly, movies and music are commonly subjected to ratings scales but books seldom are. Are these differences totally arbitrary or are there more fundamental reasons why we rate some things and not others?

All of this is to say that, for these reasons, I have resisted rating whiskeys in the past, but guess what? I've now started rating whiskeys, though not for the blog. More on that next week.
