How exactly did Samsung overtake Apple in "overall satisfaction"?
FORTUNE — You can hardly blame the reporters and editors who wrote all those headlines proclaiming Samsung’s victory over Apple (AAPL) in J.D. Power and Associates’ 2013 U.S. Tablet Satisfaction Survey.
After all, that’s what J.D. Power’s press release said. Sort of.
But reporters who got their hands on the attached chart were left scratching their heads. It details Samsung’s performance in the five categories that resulted in the company’s 835 to 833 win over Apple in “overall satisfaction.”
What’s puzzling is that Apple did better than Samsung in four out of five of those categories, scoring the maximum five stars in performance, ease of use, physical design and tablet features.
The only category in which Samsung beat Apple was (duh) cost. And cost, according to Power’s press release, counts for just 16% of the total score.
Bottom line: Apple took home 22 gold stars. Samsung took home 18. And then, for reasons known only to itself, J.D. Power and Associates put out a press release under a headline proclaiming Samsung the winner.
The company — a division of McGraw Hill — promised to put me in touch with the guy who managed the tablet survey.
UPDATE: “It’s very simple,” says Kirk Parsons, J.D. Power’s senior director of telecommunications services, who got back to me Friday afternoon. “It’s just math.”
He explained that the real results — the ones that count as far as J.D. Power is concerned — are the numbers it reported in its press release and illustrated with the bar graph at right.
They come from a survey of 3,375 tablet owners who were asked to rate their devices on five criteria using a scale of 0 to 1,000.
The results in each category were multiplied by a weighting factor — performance (26%); ease of operation (22%); styling and design (19%); features (17%); and cost (16%) — and the products summed.
J.D. Power won’t release any of the details, except to say that when the numbers were crunched, Samsung edged Apple by two points, 835 to 833.
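The arithmetic Parsons describes is easy to sketch. The weights below are the ones J.D. Power published; the per-category scores are invented for illustration, since the company won’t release its data. The point is that a modest deficit in four heavily weighted categories can be overcome by a big lead in a lightly weighted one:

```python
# Weights from J.D. Power's press release; they sum to 1.0.
WEIGHTS = {
    "performance": 0.26,
    "ease_of_operation": 0.22,
    "styling_and_design": 0.19,
    "features": 0.17,
    "cost": 0.16,
}

# Hypothetical category scores on the 0-1,000 scale -- NOT real survey data.
# "Apple" leads narrowly in four categories; "Samsung" leads big on cost.
apple = {"performance": 850, "ease_of_operation": 850,
         "styling_and_design": 850, "features": 850, "cost": 680}
samsung = {"performance": 845, "ease_of_operation": 845,
           "styling_and_design": 845, "features": 845, "cost": 800}

def overall(scores):
    """Weighted sum of the five category scores."""
    return sum(scores[cat] * w for cat, w in WEIGHTS.items())

print(round(overall(apple), 1))    # 822.8
print(round(overall(samsung), 1))  # 837.8
```

With these made-up numbers, the brand that loses four of five categories still wins the overall score, which is exactly the shape of the Apple-versus-Samsung result.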
The problem comes when J.D. Power uses its “Power Circles” to communicate the results to consumers.
Those gold circles are derived from the same tablet survey, but they don’t reflect the original 0 to 1,000 scale. Rather, they show where each company’s products stand relative to their competitors — “among the best,” “about average,” etc.
This system has worked pretty well over the years.
But from time to time, signals get crossed: the Power Circles say one thing and the overall rating says another, as they do in the case of Apple’s iPads and Samsung’s tablets.
How can that happen? Parsons gave an example, stressing that the numbers he used were hypothetical, not the real ones.
Okay. I can see that, especially since Apple sells the most expensive tablets on the market and Samsung among the cheapest.
But if cost is the criterion by which Samsung edged out Apple — trumping such factors as ease of use and performance — is “satisfied” really the best way to describe how those 3,375 survey participants feel about their tablets?