Why This Popular Developer Site Dumped Cloud to Build Its Own Storage


“How We Knew It Was Time to Leave the Cloud.” That headline is roughly equivalent to “Man Bites Dog” in the tech blogosphere, so it raised eyebrows when it appeared atop a new blog post from developer hub GitLab late last week.

In the tech realm, as evidenced by several talks at last week’s Structure industry conference, the conventional wisdom is that almost all new computing jobs and data will run on shared cloud infrastructure—whether it be Amazon Web Services, Microsoft Azure, or Google Cloud Platform, among others—in the not-too-distant future.

GitLab runs a popular website and set of tools used by tens of thousands of software developers to build applications and to track coding projects. It competes with the similarly beloved GitHub.

So why is GitLab forsaking the cloud—at least in part—to run its own storage? The details are in the GitLab post, but the short version is that GitLab’s brain trust thinks it can do a better job than the unnamed cloud storage provider whose response times frustrated them. GitLab requires fast storage performance because many users are constantly writing data to and reading it from storage. If the storage cannot keep up with those read and write demands, there’s a bottleneck.


While GitLab did not name its cloud provider, it’s clear from several earlier message threads that the company had significant problems running optimally on Microsoft Azure storage. Fortune reached out to Microsoft for comment and will update this story as needed.

GitLab CEO Sid Sijbrandij says the company has not given up on cloud computing generally and has nothing bad to say about any one provider; it continues to use DigitalOcean’s cloud for computing.

“Cloud is great for compute power, but we have a very specific need for a massive file system, and right now the best way to do that is to have our own hardware,” Sijbrandij tells Fortune.

Storage is a tough problem to solve in the cloud. In data storage, “IOPS,” or input/output operations per second, is a critical metric. Customers need high IOPS for applications in which data must be stored (written) and accessed (read) consistently.
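For readers who want to see how that metric is measured, here is a rough, illustrative Python sketch (ours, not GitLab’s benchmark) that estimates write IOPS by timing synchronous 4 KB writes to a scratch file; dedicated benchmarking tools such as fio do this far more rigorously.

```python
import os
import time

# Rough write-IOPS estimate: time a batch of synchronous 4 KB writes.
# Illustrative only; dedicated benchmarks such as fio control for caching,
# queue depth, and access patterns far more carefully.
BLOCK = os.urandom(4096)        # 4 KB payload per operation
N = 2000                        # number of writes to time
PATH = "iops_test.bin"          # hypothetical scratch file

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
start = time.perf_counter()
for _ in range(N):
    os.write(fd, BLOCK)
    os.fsync(fd)                # force each write down to the storage device
elapsed = time.perf_counter() - start
os.close(fd)
os.remove(PATH)

print(f"~{N / elapsed:,.0f} write IOPS, "
      f"~{1000 * elapsed / N:.2f} ms average latency per operation")
```

On a fast local SSD a sketch like this can report tens of thousands of operations per second; on a throttled shared volume the figure can come out dramatically lower.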

Sustaining that performance can be an issue when sharing storage resources with other customers—which is the definition of public cloud computing. In this model, servers, storage arrays, and networking gear are aggregated, maintained, and run at multiple data centers by one vendor, such as Microsoft (MSFT), Amazon (AMZN), or Google (GOOG). All that computing and networking capacity can then be rented out to many users so that they don’t need to build more (or any) of their own data centers.

In the case of GitLab, here’s how the company’s infrastructure lead Pablo Carranza described it in the blog post:

On our server, GitLab can only perform 20,000 IOPS but the low limit is 0. With this performance capacity, we became the “noisy neighbors” on the shared machines, using all of the resources. We became the neighbor who plays their music loud and really late. So, we were punished with latencies. Providers don’t provide a minimum IOPS, so they can just drop you. If we wanted to make the disk reach something, we would have to wait 100 ms latency. That’s basically telling us to wait 8 years. What we found is that the cloud was not meant to provide the level of IOPS performance we needed to run an agressive [sic] system like CephFS.
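To put those numbers in perspective, here is a back-of-the-envelope illustration (our arithmetic, not GitLab’s): for a single synchronous request stream, achievable IOPS is roughly one divided by the per-operation latency, so a 100 ms penalty caps that stream at about 10 operations per second.

```python
# Back-of-the-envelope arithmetic (ours, not GitLab's): for a synchronous
# request stream, achievable IOPS is roughly bounded by queue_depth / latency.
def max_iops(latency_ms: float, queue_depth: int = 1) -> float:
    """Upper bound on IOPS given a per-operation latency in milliseconds."""
    return queue_depth / (latency_ms / 1000.0)

print(max_iops(0.5))   # ~2,000 IOPS at 0.5 ms, roughly local-SSD-class latency
print(max_iops(100))   # ~10 IOPS at the 100 ms penalty cited above
```

Raising the queue depth lifts that ceiling, but only if the provider actually services the extra requests instead of throttling its “noisy neighbors.”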

Thus, GitLab plans to purchase and maintain its own storage infrastructure, using “bare metal” hardware instead of virtualized cloud servers. “Bare metal” is industry jargon for hardware that does not run virtualization, the software layer that lets IT professionals pack more workloads onto less hardware.


Virtualization itself is a key underpinning of all cloud computing because of that characteristic. It has a soft underbelly, however: virtualization can slow down some operations. Databases, for example, are typically tough to run optimally on virtualized hardware and do far better on their own dedicated bare metal. High-IOPS storage is another task that fares better on this unvirtualized, uncloud-like hardware.


While this is an unusual cloud-to-on-premises move, there are precedents. Dropbox, for example, moved 90% of its storage workload off of Amazon Web Services to its own data centers last year. Even some cloud advocates admit that once a given set of tasks gets big enough and is well understood, it may make more sense to move it back in-house, off the cloud. And Backblaze co-founder and CEO Gleb Budman has long maintained that his company can build and offer cheaper backup storage than any cloud provider.

On the other hand, workloads that change frequently day-to-day (or minute-to-minute) are likely better suited for a shared cloud infrastructure in which the customer pays only for the capacity used. Why buy a big, expensive server to handle a spike in holiday retail sales, for example, when that spike only happens once or twice per year?
