By Philip Elmer-DeWitt
December 20, 2012

FORTUNE — Siri is a little like the weather. Everybody complains about it, but only Piper Jaffray’s Gene Munster seems to do anything.

In June he put the iOS 5 version of Apple’s (AAPL) voice-activated personal assistant to the test — asking her 1,600 questions, 800 in the streets of Minneapolis and 800 in a quiet room.

The results left much to be desired. In the best of circumstances (i.e., no street noise), Siri correctly interpreted 89% of the questions and correctly answered 68%. (Typical exchange, as Siri heard it: “When is the next Haley’s comment?” “You have no meetings matching Haley’s.”)

On Thursday, Munster reported the results of a quiet-room (no street noise) re-test of the “improved” iOS 6 version. This time Siri understood 91% of the questions and correctly answered 77%.

In school-grade terms, she’s gone from a D to a C.

(For reasons Munster doesn’t explain, he’s raised Siri’s June score from 68% to 76%, making the improvement look more modest than it was. Perhaps he’s now grading on a curve.)

Munster also reported on the performance of Google Now, Google’s (GOOG) answer to Siri. Overall, Siri did slightly better, as the bar chart at right shows. But each program has its strengths and weaknesses.

“Siri’s biggest strengths,” Munster writes, “are in local discovery and OS commands. Siri’s biggest weaknesses are in commerce and information. For Google, unsurprisingly, navigation and information are its strongest points, while commerce was weaker. Google Now also was significantly weaker in OS commands. For example, Siri enables full control of the music application via Siri, but Google Now does not understand all song change/pause commands. We believe Google will eventually strengthen its ability to control the OS in future launches. Commerce was an area in which we viewed both solutions as weak.”

Below: Munster’s side-by-side comparison.
