Helping computers understand human speech is a holy grail for computer scientists. Being able to interact by voice or in writing, rather than with a keyboard and mouse, opens up a range of scenarios, as we're starting to see from products like Apple's Siri and Amazon's Echo.
The latest: Google (goog) has open-sourced its SyntaxNet neural network framework to help developers build better natural language understanding (NLU) capabilities into their software. But it didn't stop there: It's also sharing Parsey McParseface, a tool already trained to diagram English sentences for you, much as your grade school teacher used to do. All of the code is available for free on GitHub.
The long-range goal is software that understands what users say or write, enabling more natural interactions. Toward that end, SyntaxNet breaks sentences down into their grammatical parts (nouns, verbs, and so on), which helps computers learn the relationships between words. The harder problem is context: human wording can be highly ambiguous.
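To make that concrete, here is a minimal, hand-annotated sketch of the kind of output a dependency parser produces. This is illustration only, not SyntaxNet's actual API: the `Token` class and the example sentence are my own, though the relation labels (`nsubj`, `dobj`, `root`) follow common dependency-grammar conventions.

```python
from dataclasses import dataclass

@dataclass
class Token:
    index: int     # 1-based position in the sentence
    word: str
    pos: str       # part-of-speech tag, e.g. NOUN or VERB
    head: int      # index of the governing word; 0 marks the root
    relation: str  # grammatical relation to the head

# "Parsey reads books": "reads" is the root verb, "Parsey" is its
# subject (nsubj), and "books" is its direct object (dobj).
parse = [
    Token(1, "Parsey", "NOUN", 2, "nsubj"),
    Token(2, "reads", "VERB", 0, "root"),
    Token(3, "books", "NOUN", 2, "dobj"),
]

# The root token is the word that everything else hangs off of.
root = next(t for t in parse if t.head == 0)
print(root.word)  # prints "reads"
```

Once a sentence is in this form, software can answer questions like "who is doing what to whom" by walking the head pointers instead of guessing from word order alone.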
According to Google, even a moderately long sentence of 20 or 30 words can have hundreds or even thousands of possible syntactic structures, each with a different meaning.
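Why does the count blow up so fast? One rough way to see it: before a grammar rules most of them out, the number of ways to group a sequence of words into nested phrases grows with the Catalan numbers. The short sketch below computes them; it is a back-of-the-envelope illustration of the combinatorics, not a claim about how SyntaxNet counts parses.

```python
from math import comb

def catalan(n: int) -> int:
    # Catalan number C(n): the number of distinct binary
    # bracketings of a sequence of n + 1 items.
    return comb(2 * n, n) // (n + 1)

# Possible groupings explode as sentences get longer:
for words in (5, 10, 20):
    print(words, catalan(words - 1))
```

A 5-word sentence has only 14 possible groupings, but a 10-word sentence already has 4,862, which is why a parser needs learned statistics, not brute-force enumeration, to pick the right one.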
Per the Google Research blog, Parsey McParseface (to be honest, I just love typing that name) achieved 94% accuracy in figuring out dependencies between words on one benchmark test and over 90% on another.
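Accuracy figures like those are typically computed as an attachment score: the fraction of words whose predicted grammatical head matches a human annotator's. A sketch of that metric, with a made-up five-word example (the function name and data are mine, for illustration):

```python
def unlabeled_attachment_score(gold_heads, predicted_heads):
    # Fraction of words whose predicted head index matches
    # the gold (human-annotated) head index.
    assert len(gold_heads) == len(predicted_heads)
    correct = sum(g == p for g, p in zip(gold_heads, predicted_heads))
    return correct / len(gold_heads)

# Toy 5-word sentence where the parser gets 4 of 5 heads right.
gold = [2, 0, 2, 5, 3]
pred = [2, 0, 2, 5, 5]
print(unlabeled_attachment_score(gold, pred))  # prints 0.8
```

So a score in the mid-90s means the parser mis-attaches only a handful of words per hundred, close to the level of disagreement between human annotators.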
SyntaxNet builds atop TensorFlow, another set of software tools that Google open-sourced in a similar manner last November.
Google is one of many tech titans trying to build artificial intelligence (AI) into computer systems: Microsoft (msft), Amazon (amzn), IBM (ibm), and Facebook (fb) are all pouring resources into natural language and other AI tools in pursuit of self-teaching computer systems.