Does one company control too much of how software containers are managed?
There’s been a battle brewing in the world of software development over how companies build and update the applications they use internally and, perhaps more importantly, those they use to serve outside customers. It directly involves Docker, which is becoming the standard way of packaging up the pieces of these applications so they are easier to run and update quickly.
That speed and ease is increasingly important now that even old-school companies have to build, test, and tweak new software features faster to serve corporate customers. It’s also the sort of battle that chief information officers and others in the C-suite need to watch.
“10 to 15 years ago, enterprise applications were for internal users only. Now, every bank has millions of users it needs to serve,” says Tobi Knaup, chief technology officer for Mesosphere, a company that offers tools to manage and deploy containers.
The use of containers helps greatly in that regard because they make more efficient use of computing resources, stretching them even further than virtualization, which lets a single server run multiple operating systems and applications. Containers stretch hardware further still because a single server can run many containers atop the same operating system.
Translated, that means a company can pack more applications onto one server than before. Another big perk: containers can be moved from one set of servers to another as needed.
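For readers curious what that packaging actually looks like, the sketch below is a hypothetical Dockerfile, the short build recipe developers write to turn an application into a container image. The base image, file names, and port are illustrative assumptions, not details from this story.

```dockerfile
# Hypothetical Dockerfile: packages a small Python web service as a container image.

# Start from a minimal base image that already includes Python.
FROM python:3-slim

# Run all subsequent steps from /app inside the image.
WORKDIR /app

# Copy and install dependencies first, so this layer is cached across code changes.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application source into the image.
COPY . .

# Document the port the service listens on, and set the startup command.
EXPOSE 8080
CMD ["python", "app.py"]
```

Once built (with `docker build`) the resulting image runs the same way on a developer’s laptop or a production server, which is the portability the article describes.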
Containers also are the vehicle for running “microservices”—basically modular pieces of software that when assembled provide major software services.
“Microservices are all about breaking up massive applications into smaller components that are easy to update and rapid to deploy. When you break big software into small bunches of code, you don’t want to spin up an entire virtual machine, you use a container,” said Joe Fernandes, senior director of product management for Red Hat, a big proponent of Docker use.
Docker is the name of both the underlying container technology and the San Francisco company that popularized its use. When the company started out in 2013, the notion of a container was not new. The Linux operating system has supported containers for a long time, but the company (from now on referred to as Docker Inc.) made the technology much easier for developers to use.
Here’s the issue that has bubbled up among tech industry professionals over the past few weeks. Docker Inc. has been the keeper of the core Docker code, but it has also been adding new management and orchestration features atop that core. Meanwhile, a raft of other companies, including CoreOS, Mesosphere, Joyent, HashiCorp, and Apprenda, are building their own management and orchestration tools. Several of these back Kubernetes, a way of managing Docker containers pushed by Google. All of these tools promise to make it easier to manage and deploy many Docker containers quickly, a task that gets confusing fast.
Earlier this year, Docker Inc. added some of its advanced features and capabilities, a product called Swarm, to the core Docker download. That raised a ruckus among many of the other companies that are building essentially the same thing. Swarm is designed to make it easier to run multiple Docker nodes (a cluster) as a single system.
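To make the clustering idea concrete, the commands below sketch how Swarm mode works in recent Docker releases. The service name and replica count are illustrative, and the commands require a Docker installation, so treat this as orientation rather than a tested recipe.

```shell
# Turn the current Docker host into a one-node Swarm cluster manager.
docker swarm init

# Other machines join the cluster as workers using the token
# that `swarm init` prints, e.g.:
#   docker swarm join --token <token> <manager-ip>:2377

# Run a service as three replicated containers spread across the cluster;
# Swarm schedules them and replaces any replica that dies.
docker service create --name web --replicas 3 -p 80:80 nginx

# Inspect where the replicas landed.
docker service ps web
```

The point of contention in the article is that this scheduling role is exactly what competing orchestration tools such as Kubernetes also provide.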
Put another way, the basic Docker code is the beef patty of a Big Mac, while Swarm and third-party enhancements are different versions of the special sauce. When Docker Inc. added Swarm to the core Docker code, it made it difficult for customers to use an alternate special sauce, which, of course, provoked howls of complaint from Swarm competitors.
Critics of Docker Inc. further argue that frequent updates to Swarm have destabilized that core code. Nobody wants an unstable foundation.
Bob Wise, chief technologist for Samsung SDSA, recently outlined his personal take on the issue in a Medium post positing that now may be the time to “fork” Docker, meaning break off the core components for continued development by a community outside Docker Inc. itself.
Tech publication The New Stack followed up with more on the kerfuffle, reporting that some tech companies are considering a fork to lessen what they see as Docker Inc.’s control of the core code. A few industry sources confirmed that to Fortune, but none would speak on the record because they are not authorized to do so. Docker Inc. officials didn’t comment for this story, but one developer in their camp said talk of a fork was overblown.
“The putative forkers doth protest too much,” said Charles Fitzgerald, an angel investor and former tech executive who follows the market.
“These are competitors with an acute complexity problem. Docker was not the first to do containers, it was just the first to simplify the technology to make it mainstream. Now the same thing is playing out in container orchestration and the vendors are unhappy that Docker is making it so much easier than their own approaches.”
Still, even its fans know that Docker Inc., like other companies founded on open-source technology, has to figure out a way to make money off what is essentially free software. Hence the push to add more value to core Docker and charge for it. Given that Docker Inc. has raised $180 million in venture funding, there has to be some pressure there.
The idea of forking code is controversial. Some see it as a hostile act of seizing freely available code and taking it in another direction. That can result in two separate sets of code developing over time, which in turn can lead to incompatibilities. Typically in the open-source world, code is available for anyone to tweak and change, as long as the changes are contributed back to the overall community.
Red Hat, which ships core Docker with its Linux operating system, is officially opposed to any fragmentation of the basic Docker runtime and packaging format. Fernandes echoed Wise’s concern that instability of that key layer grows as “more stuff gets added in.”
But the company says it wants no part of any fork. Red Hat’s take is that the Open Container Initiative, backed by nearly everyone in this space, including Docker Inc., should drive the container standard, ensuring no one company dominates the process or the technology. Daniel Rieks, Red Hat’s senior director of systems design and engineering, wrote more about that stance on LinkedIn.
While some industry insiders think there might be a place for multiple container specs to suit different application needs—much as there are different file formats for word processing documents or digital images today—others are pushing hard for the Docker spat to be resolved amicably and result in one set of core code.
Whether that can be achieved, as various parties advocate for a fork, remains a big question.
Note: this story was updated with Charles Fitzgerald’s comments.