Over the past few months, while Amazon, Microsoft and IBM took turns unveiling new cloud computing data centers in China, India, Germany, the U.K., South Korea, and elsewhere, one public cloud provider remained eerily silent: Google.
This is odd because, when it comes to delivering cloud services to customers—especially business customers—location matters. The farther away the servers and storage are, the longer the lag time, or latency, in operations. And slow is definitely no good in this context.
But two sources close to Google said the company is considering an interesting plan.
Google’s cloud services operate out of four data center regions worldwide, compared with 20 for Microsoft Azure and 11 for Amazon Web Services (AWS), with five more AWS regions due in the next year. But Google also has 70 data caching stations around the world that store copies (or caches) of video, audio, and popular web pages close to their likely audiences to speed their delivery.
Those endpoints are key pieces of what Google calls its “peering and content delivery network.” The idea, which both sources said is under discussion, calls for these Google outposts to be outfitted with additional computing capacity so they would become sort of mini data centers.
Both sources requested anonymity because their companies work with Google. A Google spokeswoman would not comment on what she called rumor and speculation.
One source noted that the Google Compute Engine (GCE) team—basically the cloud group—is working with the company’s broader internal infrastructure groups to see if it can put small pods or clusters of computing power into these regional endpoints.
There could be wrinkles. For one thing, this could end up being a tiered system: big jobs would run in Google’s own massive data centers, which house hundreds of thousands of servers, while the smaller endpoint nodes of the CDN would handle lighter workloads. Jobs running in those smaller nodes would have more limited capacity than those running in GCE’s own data centers, the source said.
“Users might be given a cap and it would be more expensive to run” in these smaller clusters, the source said.
That scenario flies in the face of the expectation of public cloud users—customers who rent out computing, storage, and bandwidth run by Google, Amazon, Microsoft, etc.—who think they can add nearly unlimited computing and storage capacity as needed. So this scenario could set up two sets of infrastructure with two pricing models.
But it is clear Google needs to do something about global coverage soon. Amazon is the leader in public cloud infrastructure, running what Gartner last year estimated to be 10 times more computing capacity than the next 14 cloud competitors combined. Microsoft is making a play, especially with business customers.
To help speed traffic flow into and out of its cloud, Google and Akamai, the CDN market leader, recently announced a partnership to directly link Google’s own CDN interconnect with Akamai’s CDN. In September, Google announced a similar deal with four other CDN providers: Cloudflare, Fastly, Level 3 Communications, and Highwinds. But those deals are just a piece of the overall puzzle.
Google has great technical smarts, and entered the fray with a bang last year with a series of price cuts on key storage and computing services in a move that seemed to flummox Amazon, which is not used to other companies driving the price agenda. But it has struggled to sell its cloud to business users, many of whom are not totally sure that Google, the online search and ad giant, really cares about cloud services. (For the record, it always said it does.)
To counter that perception, Google recently put former VMware CEO Diane Greene in charge of its cloud unit. She brings added credibility to the push with business customers. (Of course, this week’s snafu with Google App Engine, another of Google’s cloud offerings, probably won’t help its case with prospective business users, but then again AWS and Azure have also had hiccups.)
Adrian Cockcroft, technology fellow at Battery Ventures and former cloud guru at Netflix, said he has not heard about this Google plan, but that it would make sense given businesses’ need for broad geographic coverage and fast performance.
If the Google cloud team has figured out how to convert those CDN nodes into far-flung mini-data center pods, it means they have listened to what enterprise customers have requested, Cockcroft told Fortune via email.
“From a technology point of view, it also means they have figured out how to scale down and package their cloud for small regional deployments,” Cockcroft said. That is something Microsoft and Digital Ocean, another public cloud company, have already done, and something he thinks AWS will get better at over time.
For more from Barb, follow her on Twitter at @gigabarb, read her coverage at fortune.com/barb-darrow or subscribe via this RSS feed.
Make sure to subscribe to Data Sheet, Fortune’s daily newsletter on the business of technology.