Rating systems
Almost wherever you read product reviews on the Internet, you run into some sort of rating system that summarizes how good or bad a product is.
But how do these rating systems work - and can you actually count on them?
Here we will try to explain a bit about how they often work, and how we think they should work.
Rating
Rating a product you've reviewed means ranking it. There are many ways to do this, often by awarding it some amount of:
Stars
Hats
Hammers
Percentages
Awards
And many, many more. The sky is the limit. You see rankings in many places, e.g. from film critics, newspapers and TV shows. Here we will focus on certain web sites, namely hardware testing web sites.
When you've reviewed a product and are going to rank it, there are some things to be cautious about. There are naturally differences between how you rank a monitor and how you rank a processor, but the way to go about it is always the same.
The criteria differ from site to site, but they could look like this:
Technology
Performance
Design
Price
Once again there are many possibilities; some sites use more criteria, others fewer.
The actual expression of a ranking also differs. If you look at performance, you may see the same product scoring 3, 5, 50% and 75% - yet all of those scores actually mean the same thing. Confused? That is not surprising, as rating systems have become a jungle.
In a very short time we found a product which had gotten all of the above ratings - and they all meant the same thing. How can that be? Because they are based on different rating systems:
Site 1: rates from 0-5 - and gives the score 3
Site 2: rates from 0-10 - and gives the score 5
Site 3: rates from 0-100% and gives the score 50%
Site 4: rates from 50-100% and gives the score 75%
What is the best rating, then? None of them - they're all the same, as they all indicate a medium score. The little sketch below shows why.
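To make the comparison concrete, here is a minimal sketch (Python, purely for illustration) of how a score on any linear scale can be normalized to 0-100%:

```python
# Map a score from an arbitrary linear scale onto 0-100%.
def to_percent(score, low, high):
    """Linearly map a score on the range [low, high] onto 0-100."""
    return 100 * (score - low) / (high - low)

print(to_percent(3, 0, 5))      # Site 1: 60.0
print(to_percent(5, 0, 10))     # Site 2: 50.0
print(to_percent(50, 0, 100))   # Site 3: 50.0
print(to_percent(75, 50, 100))  # Site 4: 50.0
```

Normalized like this, Sites 2-4 land exactly in the middle, and Site 1 just above it (read as a 1-5 scale, a 3 is exactly the middle). The point stands: the raw number tells you very little until you know the scale behind it.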
You could ask yourself: why use 0-10 instead of 0-5? Why use 50-100% rather than 0-100%?
There are many opinions, but in many cases it's about satisfying the manufacturers.
No matter how a rating is generated, a score of 75% looks better than 50% - even if they basically mean the same thing.
But aren't the readers being misled? Yes, they are, but some sites take more care to treat the manufacturers well than to treat their readers well - as some manufacturers more or less sponsor these web sites.
This example will undoubtedly cause some web sites to raise a complaint, and you may wonder why we chose to bring this to your attention.
As an independent and objective test site, we have always been 100% honest and rated every product as it deserves, good or bad.
After much consideration we decided to make our own rating system, and after working long and hard on it we think we have finally found one which lives up to our own expectations. Hence it is very frustrating to be confronted with other rating systems where the same products score way higher.
But perhaps our testing methodologies are simply incorrect, or maybe we judge too harshly? We don't think so, as we have more or less never been accused of being "bought", "influenced" or anything of the sort.
But how can we be sure that something is fishy?
When you look at a mainstream product at a good price, it may get a score of 100%. That means something is wrong, because what should a better product score, then?
We have so far never seen a product score 150%.
You can also find reviews where a product scores 97% - to end up with such a specific number, you have to weigh a lot of factors, and yet when you read the summary you will find that the product suffers from several flaws. That doesn't make sense either.
Does that make us infallible? Of course not, but we will let you, our readers, judge our ratings - not the manufacturers.
If you are not satisfied with a rating, or you do not trust it, something must be wrong with it.
If the manufacturers whine because their products don't score close to 100% almost automatically, as they do on other sites, we will write an article such as this one!
We can live with other sites not being completely honest, but we don't think our readers should be kept in the dark like this.
We are not asking other web sites to change their ways, we are just helping you to see what is going on around you.
With all that said, we would like to explain exactly how our rating system works - and let you rate it!
HT/OC Rating System
We launched our brand new rating system during this year's CeBIT.
We have abandoned our awards, as we no longer think they're a viable solution: almost every product would at least score a "Good Buy" or similar, which would make them just another way to pander to the manufacturers.
Our rating system is composed of both points and percentages.
When a writer has finished testing a product and is writing his summary, he will be met with this box:
Here the writer must enter his own opinion of the product in six different categories.
As you can see, the table has points from 0-5. This is to ensure consistency between our writers' ratings. If it ran from 0-10, what would the difference be between 7 and 8? Or between 2 and 3?
The points are awarded according to these descriptions:
"0 is given to:
A product which utterly fails in that category. It does not fulfill any of the requirements you would expect of a product in that category.
1 is given to:
A product which fails pretty severely in that category. A few positive things can be said, but the overall impression is bad.
2 is given to:
A product with just enough positive things in that category to outweigh the negative, making it useful for most people.
3 is given to:
A product which is slightly above average in that category. There are somewhat more good things than bad. There are, however, still certain areas to be aware of: some things aren't fully up to scratch.
4 is given to:
A product which can be said to be superior in that category and which at the same time does not have any flaws, or if it does, they're clearly overshadowed by the positive things.
5 is given to:
The perfect product in the category. It fulfills all the demands which you can rightly expect, and there are no negative things to say about it. This score is given if the product can hardly be improved at all."
As you can see, we have set the scale up so that we think it covers everything.
N/A (not applicable) is used when a category is irrelevant for the product in question. The categories are:
Innovation - is it a new technology?
Bundle - Lots of accessories included? Cables, games, etc., etc.
Design - Does it look great? Is the layout practical?
Software - Is the included software useful, sufficient and stable?
Performance - Does the product perform well?
Price - Is the price acceptable, considering the above criteria?
The article is submitted when the writer has judged the product and entered points in the table. The writer never gets to know the final verdict in percent.
This is not shown until the article is online at HT/OC.
Our CMS then converts the given points into a final score in percent, e.g. 80%.
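How do points become percent? We won't publish the CMS internals here, but one plausible conversion - and this sketch, with made-up scores, is only an illustration, not the production code - is to average the non-N/A categories and scale the 0-5 range up to 0-100%:

```python
# Simplified sketch: average the non-N/A category points and
# scale the 0-5 range up to 0-100%. The scores below are made up.
def overall_percent(points):
    """points maps category name -> 0-5 score, or None for N/A."""
    scored = [p for p in points.values() if p is not None]
    return 100 * sum(scored) / (5 * len(scored))

review = {
    "Innovation": 4, "Bundle": 3, "Design": 5,
    "Software": None,   # N/A - e.g. nothing is bundled at all
    "Performance": 5, "Price": 3,
}
print(overall_percent(review))  # -> 80.0
```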
Hey, 80%, isn't that more or less in the middle? No! 100% has always meant, and will always mean, the very best. That means something is completely wrong if 80% is average or bad. 80% indicates that a product is closer to the top (100%) than to the bottom (0%), and even clearly closer to the top than to average (50%).
To illustrate this somewhat, as it can be confusing when you're surfing around and run into different rating systems, we have chosen to also hand out awards based on the score in percent. Hence a product which scores 80% will also get a Silver Award. To us, 80% means good!
Of course we also have a Gold Award, which is substantially harder to get. Here we're talking about close to 90%, whereas on other sites you will see 90% handed out regularly.
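In other words, the awards are simply thresholds on the final percentage - roughly like this (the exact cut-offs in this sketch are read off the examples above, not official values):

```python
def award(percent):
    # Illustrative thresholds: Gold is assumed from the "close to 90%"
    # wording above, Silver from the 80% Silver Award example.
    if percent >= 90:
        return "Gold Award"
    if percent >= 80:
        return "Silver Award"
    return None  # no award

print(award(80))  # -> Silver Award
print(award(92))  # -> Gold Award
```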
We have given ONE gold award since the system was launched, more than six months ago.
Why is it so hard to get a gold award? If a product is simply splendid, that should be it, right? No, because a splendid product is more often than not just a graphics card with a new chip, a CPU with an additional core or something like that.
And yes, that would be a really splendid product, but the price is usually also outrageous and basically unfair. Hence it would get a bad score in that category, lowering the overall score. If you're good enough to develop a really good product at a good price, well, then you'll get a gold award, because you deserve it!
A case study:
We test a 7800GTX costing 4,000 DKK which performs stunningly well - it's the newest technology and all that. But the price is way too high, and the final score is 75%.
Many people would think this is unfair, as you always have to pay through the nose for leading edge devices - and another test site would (often) give the card 95-100%.
What happens three days later, when another manufacturer releases a similar card at only 3,000 DKK? At our site, that card might score 85%, but what should the other site/ranking system do? Award the card 105%?
It's important for us to see the whole picture. Of course a 4,000 DKK card shouldn't get a score of 0%, as it's a costly high end card worth a lot of money. But it is almost certainly not worth paying 4,000 DKK for. Most people know that, deep down.
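Run the two cards through the same simplified conversion as above (with made-up category points and Software set to N/A - we are only illustrating the mechanics) and you can see how the price category alone separates them, landing close to the 75% and 85% mentioned above, with no need for any scale above 100%:

```python
# Hypothetical points in the order Innovation, Bundle, Design, Performance, Price.
expensive = [5, 4, 4, 5, 1]   # the 4,000 DKK card: the price drags it down
cheaper   = [5, 4, 4, 5, 3]   # the 3,000 DKK card: only the price point differs

for name, points in (("4,000 DKK card", expensive), ("3,000 DKK card", cheaper)):
    print(name, round(100 * sum(points) / (5 * len(points))))  # -> 76 and 84
```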
Another example:
We put a low budget motherboard through our tests. It's got no special features and the performance is mediocre. It's not very pretty either, but it is the very cheapest motherboard you can get. We end up awarding it 60%. How can that be, when the board can hardly do anything?
Because it's aimed at neither the gamer nor the overclocker.
It's meant for an office system, an Internet machine or something like that. And as the board is well suited for that, with its low price, the score will and should reflect that.
The performance score will be medium, bundle and design low, but it will score highly in the price category, as it is here superior to all other boards.
Naturally - you get what you pay for!
Final words
We have chosen to write this article because there's a lot of uncertainty about exactly how rating systems work. And unfortunately, sometimes they do not work, and we didn't want to hide that fact any longer.
At the same time we also wanted to introduce our own system, which has now been tweaked and had some minor bugs fixed, so you readers can get to know how it works - and how on Earth we end up with our scores and rankings.
We will also update our system a bit, so all articles will include a rating description; that way all the loose ends should be covered.
It is possible that we might end up stepping on some people's toes with this article, but if you feel insulted, perhaps you should consult your conscience and change your way of testing!
We hope this has helped, or will help, you readers get a better understanding of how the rating systems out there work.