Cyber assistants are useful, but can you trust them?
Talk about upstaging a competitor. Oracle plans to make a big proclamation today about how artificial intelligence and the scads of consumer and business data it has bought over the past couple of years will make its sales, marketing, human resources, and financial management applications far smarter than anything else out there.
I know because the software giant helpfully decided to send me the news last night after rival Salesforce trumpeted a similar message earlier Sunday, the day Oracle’s massive annual OpenWorld conference kicked off in San Francisco.
Oracle, for example, claims that its so-called Adaptive Intelligent Applications will do things such as help finance teams find the best terms for a contract or guide sales representatives on the best times to approach prospects. Its promise rests on a tremendous amount of data: its data cloud hosts more than 5 billion profiles of companies and individuals, which can be used to tailor insights to what a customer is looking for.
Get Data Sheet, Fortune’s technology newsletter, where this essay originated.
The pitch is similar for Salesforce Einstein, a service that will endow the cloud giant’s marketing, sales, service, and e-commerce applications with more predictive capabilities. One of the more alluring ideas: the ability to “predict” which sales contact is most likely to sign on the dotted line. Wouldn’t you rather call that person? The technology builds on several acquisitions, including RelateIQ and MetaMind.
Oracle and Salesforce are far from alone in their worship of artificial intelligence, of course. Every software company interested in sticking around for the long term is investing in machine learning and other AI technologies that should help automate the mundane tasks we humans usually hire assistants to handle. The real question is not how long it will take for these capabilities to become useful, but how long it will take for us to trust them.