Here’s How Facebook Will Keep Sending Cat Videos Around the World
Facebook is the social network that people use to share cat videos, Funny or Die clips, and baby pictures. But for techies, Facebook is a pioneer in building serious technology that delivers all that content.
Facebook (FB) is now regarded as an expert at building and operating massive data centers and at keeping data flowing to users around the world. This is heavy-duty infrastructure work.
This week, Facebook built on that trend by divulging more about how it gets all those user posts to recipients as quickly as possible. The company has published a technical paper, “Steering Oceans of Content to the World,” that broadly outlines the software systems it’s built to meet “fast-growing demand from both services and people.”
While Facebook has talked quite a bit about the servers and networking switches it designs to run its massive cloud data centers, this is the first time the company has published details of how it connects its core data centers to “global points of presence,” which are little way stations that put the needed content closer to would-be consumers. This system is known as Facebook Edge Fabric.
There are lots of parts to this network, broadly outlined in the paper. For example, the Edge Fabric must “know” all the routes from a given point of presence to the ultimate destination and steer traffic onto the most appropriate one.
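The real Edge Fabric makes these decisions with BGP routing data and live performance measurements, but the core idea can be sketched in a few lines. Everything below is hypothetical (the `Route` fields, the `pick_route` helper, and the 95% cutoff are illustrative assumptions, not details from Facebook's paper): prefer the best-performing route that still has spare capacity, and fall back gracefully when everything is busy.

```python
from dataclasses import dataclass

@dataclass
class Route:
    # Hypothetical fields; the real system weighs BGP attributes,
    # measured performance, and interface capacity.
    peer: str            # who the traffic egresses through
    utilization: float   # current load as a fraction of capacity
    latency_ms: float    # measured latency toward the destination

def pick_route(routes, max_utilization=0.95):
    """Prefer the lowest-latency route that isn't overloaded;
    if every route is congested, fall back to the least-loaded one."""
    usable = [r for r in routes if r.utilization < max_utilization]
    if usable:
        return min(usable, key=lambda r: r.latency_ms)
    return min(routes, key=lambda r: r.utilization)
```

The point of the sketch is the trade-off: the fastest route isn't chosen blindly, because an overloaded link makes for a bad viewing experience no matter how short the path.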
But the high-level takeaway from this is that Facebook, and a handful of other cloud-oriented companies—like Google (GOOG), and Microsoft (MSFT)—are increasingly driving the hardware, software, and networking agenda. They’re doing this in part by designing the hardware they need for their own use, and then making those specifications available to other businesses via the Open Compute Project and other groups to use as they see fit.
On the software side, much the same thing is happening, with Facebook writing its own load-balancing software that divvies up traffic to eliminate bottlenecks at any one connection point. That software, too, is often open-sourced for others to build on and use.
Given that more corporate and consumer data and software now runs on these massive clouds (which, as noted, tend to design their own hardware), smaller businesses will have less need going forward to restock their own data centers with server and switching gear from traditional vendors.