
The Problem With Google’s Allo: Who’s It Really For?

May 19, 2016, 12:20 PM UTC
Google engineering director Erik Kay talks about the new Allo messaging app and Duo during the keynote address of the Google I/O conference, Wednesday, May 18, 2016, in Mountain View, Calif. Google unveiled its vision for phones, cars, virtual reality and more during its annual conference for software developers. (AP Photo/Eric Risberg)
AP Photo/Eric Risberg

Google (GOOG) is making what feels like its thousandth attempt to crack the messaging-app market, with a new effort called Allo. To my mind, it’s a great example of how what’s good for Google is not necessarily good for its users.

Two of Allo’s standout features are its machine-learning functionality, through which it learns what your most likely responses to a correspondent might be, and the end-to-end encryption that it offers in its private-messaging mode.

These features are interesting not just because of their zeitgeist-y nature—AI! Encryption wars!—but because they seem to be mutually incompatible.


End-to-end encryption makes it impossible for anyone but the correspondents in a conversation to read what’s being said. That should block Google’s systems from being able to watch and learn in order to suggest responses, which is presumably why Allo’s main security feature is relegated to a privacy mode rather than being on by default, as it is in Facebook’s (FB) WhatsApp (which uses the same encryption technology).

That mode may have garnered some positive headlines, but the privacy community is fuming about Google’s choice here, because the first rule of app settings is that people tend to stick to the defaults. If you’re offering strong privacy protection for communications—and there’s no good reason not to do so for every online conversation—then turning it off by default is a bad way to go about things.

Then again, Google has been promising end-to-end encryption for emails since 2014, when the Snowden effect was starting to become serious in the tech world. It still hasn’t delivered anything for the mass market—perhaps because easy-to-use but reliable email encryption is hard to develop, and perhaps because Google mines Gmail for keywords that it can use to better target ads.

In the case of Allo, I struggle to see what direct benefits the monitoring of people’s conversations for machine-learning purposes will bring to those users. Yes, it will help Google’s virtual-assistant technology to become smarter—the company is in a race with Facebook and others to develop the best artificial intelligence—but the incentive here is apparently to relieve people of the burden of communicating personally with others.

Here’s how Google’s Allo blog post puts it:

Allo has Smart Reply built in (similar to Inbox), so you can respond to messages without typing a single word. Smart Reply learns over time and will show suggestions that are in your style. For example, it will learn whether you’re more of a “haha” vs. “lol” kind of person. The more you use Allo the more “you” the suggestions will become. Smart Reply also works with photos, providing intelligent suggestions related to the content of the photo. If your friend sends you a photo of tacos, for example, you may see Smart Reply suggestions like “yummy” or “I love tacos.”

I don’t want to come off like a complete Luddite here, but I think there’s something inherently awful in the idea of chatting to someone, only to find out that their responses are actually coming from a simulacrum of them.

Perhaps it’s a cultural thing—that kind of automated reaction feels like less of an issue in the less personal world of email—but I would think less of someone who did that to me in a real-time chat, because it would show a fundamental lack of respect for me and my time. I’m not sure what the endgame is here: Leaving personalized bots to chat among themselves?

I’m sure Google does not intend to usher in such a bleak future for interpersonal communications. It probably just wants to finally crack the nut of messaging apps in a very Google-ish way, using people’s communications to train its own virtual brain, while offering some (but not too much) security as a sweetener.

But the package—which, it’s important to note, has not yet been released—sounds rather sterile. Given that it’s all about learning from humans, I also suspect it will be amusing but not very effective in its early stages, unless its users already communicate in a robotic fashion.


All of which makes Allo an interesting prospect, but not one that’s guaranteed much success. After all, there are scores of messaging apps already out there, with Facebook’s dual services—the secure WhatsApp and bot-friendly Messenger—already serving well over a billion people.

If any new service is going to make a dent in this market, it will need to be genuinely and innovatively useful for the people using it, rather than for the company behind it.