
Facebook Gets a Networking Facelift Ahead of the Holiday Photo Rush

Inside the Facebook data center in Luleå, Sweden, in 2013. Jonathan Nackstrand/AFP/Getty Images

After scaling its custom-designed switches to deployments numbering in the several thousands, Facebook has found that its current crop of networking gear isn't cutting it. The social networking giant is set to replace its 40-gigabit switches with a new line of 100-gigabit switches that can process 3.2 terabits per second. That's 3.2 million megabits per second, which is a lot of capacity for your videos and photos to be whizzing around the servers inside Facebook's data centers.
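As a quick sanity check on that arithmetic, here is a minimal sketch of the unit conversion, assuming decimal SI prefixes (1 terabit = 10^6 megabits):

```python
# Convert the quoted switch throughput from terabits to megabits per second.
terabits_per_second = 3.2
megabits_per_second = terabits_per_second * 1_000_000  # 1 Tb = 10^6 Mb (SI)

print(f"{terabits_per_second} Tbps = {megabits_per_second:,.0f} Mbps")
# 3.2 Tbps = 3,200,000 Mbps
```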

At the Structure 2015 event in San Francisco Thursday, Jay Parikh, Facebook's global head of engineering and infrastructure, outlined the new 100-gigabit switches and said the company's older 40-gigabit customizable switches would be available for sale to the wider community through Accton. Parikh added that the plans for the switches would be available through the Open Compute Foundation for companies that want to build the switches themselves.

Facebook’s (FB) switches were announced in June 2014 as part of its Open Compute effort to build hardware that would be flexible and could evolve at the speed of software. While that’s a tall order, Facebook estimates that it has saved more than $1.2 billion since developing Open Compute in 2011 and implementing its own custom-built hardware.

The industry, too, has benefited, with other firms adopting some of the designs and design principles; Baidu and Microsoft have submitted designs of their own to Open Compute. These switches are powerful and can move large amounts of data very quickly. They may be most appropriate for large data center providers and the financial services community, rather than for the corporate customers that buy more traditional gear from Cisco (CSCO) or Juniper (JNPR).


The boxes themselves aren’t a threat to the big networking companies, but the flexibility they offer might force those vendors to adjust their strategy. Facebook’s server designs and storage designs have influenced the wider industry, so it stands to reason its network thinking will also percolate through the wider industry architecture. Since Facebook’s design focuses on flexibility and configurability, and the current networking zeitgeist is still very much a black box mentality, that collision of ideals will be a fun one to watch.

Parikh also shared some other news from the Open Compute Foundation, including that the Department of Energy announced plans to deploy a series of Open Compute-inspired high-performance computing clusters for research at three of its national labs: Los Alamos, Sandia, and Lawrence Livermore. It marks the first time Open Compute hardware is being used for high-performance computing. (However, supercomputing has been getting less super for a decade or so.)

You can follow Stacey Higginbotham on Twitter at @gigastacey, and read all of her posts here or via her RSS feed. And please subscribe to Data Sheet, Fortune's daily newsletter on the business of technology.
