IBM takes on tough task of deploying containers across clouds

by Barb Darrow, @FortuneMagazine
August 27, 2015, 5:08 PM EDT

The tech world is enamored of containers, new technology exemplified by fast-rising startup Docker that packages up applications in a resource-efficient and portable way. The advantage for businesses is that containers can run applications on less hardware, and those applications can pull data from many sources. But, perhaps most important, those applications can be moved from one set of infrastructure to another with minimal muss and fuss.

Sets or clusters of containers can be scheduled and run on a given cloud with tools like the newly available Google Container Engine, the Google-backed Kubernetes, or the Amazon Container Service. But now developers have their eye on the next frontier: deploying container clusters across different clouds, something IBM says it has accomplished.

A team at IBM Research, working with Moustafa AbdelBaky, a PhD candidate at Rutgers University's Discovery Informatics Institute, used open-source technology called CometCloud as the basis of this work, which AbdelBaky calls C-Ports.

Being able to run containers across clouds and geographies can address several enterprise concerns. For example, it can ensure that a particular application and its associated data stay within a set region, which is important given data sovereignty rules. Or it can parcel out work across regions and clouds so that if there's a data center issue in one area, life can go on. Or, if a company runs out of capacity in one place, it can "burst" that job to another.

According to an IBM blog post about the work, C-Ports has proven itself in at least one situation:
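To make those placement concerns concrete, here is a minimal, hypothetical sketch of constraint-aware placement: honor a region constraint when one applies, and otherwise "burst" to whichever cloud has spare capacity. The `Cloud` record and `place` function are illustrative assumptions, not C-Ports' actual CometCloud-based logic, which IBM has not detailed here.

```python
from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    region: str
    free_slots: int  # remaining container capacity

def place(clouds, required_region=None):
    """Pick a cloud for a container. If a region is required (e.g. for
    data sovereignty), only clouds in that region qualify; otherwise any
    cloud with capacity can absorb the "burst"."""
    candidates = [c for c in clouds if c.free_slots > 0]
    if required_region is not None:
        candidates = [c for c in candidates if c.region == required_region]
    if not candidates:
        raise RuntimeError("no cloud satisfies the constraints")
    # Prefer the cloud with the most spare capacity.
    return max(candidates, key=lambda c: c.free_slots)

clouds = [
    Cloud("Bluemix", "eu", 0),       # full; work must go elsewhere
    Cloud("AWS", "eu", 3),
    Cloud("Google Cloud", "us", 5),
]
print(place(clouds, required_region="eu").name)  # AWS
print(place(clouds).name)                        # Google Cloud
```

The point of the sketch is simply that the same scheduling loop can express sovereignty rules, failover, and bursting as filters over one pool of clouds.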
"C-Ports (pronounced seaports), as we call it, has already been demonstrated to effectively deploy containers across 5 clouds (Bluemix, Amazon AWS, Google Cloud, Chameleon, and FutureSystems) and 2 clusters (one at IBM and another at Rutgers University) in order to create a dynamic federation. Additionally, C-Ports is not tied to a specific container scheduler, i.e., it can work with any local container scheduler, such as Kubernetes or Bluemix, or directly deploy containers on the given resource/cloud, thereby increasing its portability and flexibility. In the rest of this blog, we will present C-Ports while highlighting the challenges associated with running containers in a multi-cloud/multi-datacenter environment."

Bluemix is IBM's cloud development environment; Chameleon is a cloud used for large-scale testing; FutureSystems is an academic cloud; and Amazon Web Services is the market-leading public cloud.

Parceling out containers on multiple environments is indeed an impressive feat if it works as advertised. "The whole premise behind containers is portability," said David Mytton, CEO of London-based Server Density, who follows cloud developments closely. The ability to run workloads across cloud computing environments is an attractive proposition for big customers that want to take advantage of the best infrastructure to suit their needs and not get locked into any one cloud provider.

But Mytton and other cloud watchers want to hear more about how this works in the real world. "Cloudbursting is harder than people think because of network and storage constraints. I'd like to see more about how this project addresses those issues," said Sebastian Stadil, CEO of San Francisco-based Scalr.

IBM, which did not respond to Fortune's request for additional comment, is not alone in this quest for distributed container adoption. The nascent Ubernetes project from Google aims to enable Kubernetes container clusters to share jobs across clouds.
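The blog's claim that C-Ports "is not tied to a specific container scheduler" describes a familiar abstraction pattern: put a common interface in front of each local scheduler so the federation layer never cares what sits underneath. The sketch below illustrates that idea with invented class names; it is not C-Ports' real API.

```python
from abc import ABC, abstractmethod

class ContainerScheduler(ABC):
    """Common interface the federation layer talks to."""
    @abstractmethod
    def deploy(self, image: str) -> str: ...

class KubernetesScheduler(ContainerScheduler):
    def deploy(self, image: str) -> str:
        # A real implementation would call the Kubernetes API here.
        return f"kubernetes:{image}"

class DirectScheduler(ContainerScheduler):
    def deploy(self, image: str) -> str:
        # A real implementation would start the container on the host itself.
        return f"direct:{image}"

def federated_deploy(schedulers, image):
    # The federation logic is identical regardless of the local scheduler.
    return [s.deploy(image) for s in schedulers]

print(federated_deploy([KubernetesScheduler(), DirectScheduler()], "web:1.0"))
```

Swapping in a new cloud then means writing one adapter class, not rewriting the federation.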
As container use proliferates (and it will), look for more talk about the need to deploy and manage containers across clouds. After all, that's what's needed for them to fulfill their promise of portability.