At HP Labs, an effort to make the biggest change to computing since the 1940s

December 31, 2014, 7:12 PM UTC
HP CTO Martin Fink (undated)
Courtesy of Hewlett-Packard

HP Labs has been Hewlett-Packard’s engine of innovation for decades, but the division won a lot of attention this year when it began talking about the Machine, an ambitious project to create an entirely new computing architecture capable of handling the ever-increasing demands of big data and the Internet of things.

How bold is HP’s vision for the Machine? The last time a broad-based computing architecture was conceived was the Von Neumann architecture in the 1940s. Nearly everything that has happened in computing since—the PC, the Internet, Silicon Valley itself—has largely relied on it.

The Machine builds on projects HP Labs has been working on since the 1990s, including memristors that fuse memory and storage, silicon photonics that can transfer data many times faster than copper wires, and servers that use a tenth as much energy as current ones. Now HP (HPQ) is creating a new operating system to run on the Machine’s next-generation hardware.

In an era where innovation often seems to be the province of startups, the Machine is a potentially game-changing project that only a giant with decades of experience could build. We talked to Martin Fink, HP’s chief technology officer and the director of HP Labs, about how the company went about building the Machine as well as the 75-year-old company’s approach to innovation.

Fortune: How does HP Labs cultivate new ideas?

There is both a top-down and bottom-up element to the innovation process. For the top-down, every year we refresh what we call a mega-trends document. It looks at what is happening all over the planet—economic trends, social trends, GDP rates—that impact the computing landscape. So something like the Internet of things might come up, and we try to translate that into what that means for HP.

The bottom-up process is more what I would consider serendipitous inventions. A lot of times I remind people that invention is not something that you can program, predict or dictate. Disruptive innovations tend to be ideas that come up from specific engineers, in some cases through the strangest of ways.

To make that real, I’ll give you the concrete example of how inkjet printing was born 30 years ago. An engineer came to work, went into the coffee room, poured his coffee, and the coffee machine did the percolation effect of hot water going through a spout. And he thought to himself, “What if I heat up some ink and shove it through a spout?” And inkjet printing was born from that very serendipitous event, which could not have been planned, predicted, or programmed in any particular way.

The step up from there is the cultural process. So for the engineer in the coffee room, the cultural process made it easy for him to test out his idea and translate that into a real technology. There’s a cultural facility that we have that allows people to try out and just test ideas and thoughts, and then kill those that don’t pan out. And then try out the next ones. That’s how it works. I often say process and innovation are two words that don’t belong in the same sentence, and so I try to encourage as much serendipity as I can.

How has the idea of the Machine evolved? It brings several different technologies together under this one umbrella.

There was a mix of serendipity and responding to market conditions that got us to the Machine. Before I showed up, a number of areas were active, namely memristor research, photonics research, and research you might recognize as Moonshot [the company’s software-defined servers], having more specialized processing. I did an inventory of all the stuff that was going on in HP Labs and these three were obviously ones that were being actively worked on and investigated.

I started to understand the physical limitations of DRAM and flash [memory] and the limitations of continuing to try to jam electrons through copper. As a team, we got together and connected all those dots and saw that we could actually end up with an interesting and radically new computing architecture.

A lot of times people like to think that there is some Big Bang event that occurs. In reality, and in this case, it happens over a period of weeks or months of iterating through each of the technologies and going through what I like to call the aha moments, where you say, “Well, that’s different.”

If we spin things forward over the next several years as the Machine becomes something that people use, what kinds of challenges do you expect and what are some of the key things that need to be accomplished?

Let’s assume all this works and you have a new architecture of computing, and it’s the first time we’ve changed the architecture of computing since the 1940s with Von Neumann. And this new architecture applies equally to phones, tablets, laptops, servers, supercomputers, whatever. The single biggest challenge you get to is rethinking how software works on that kind of machine.

We have spent every ounce of software programming since the 1940s building toward this Von Neumann construct where you have a CPU, main memory, and storage. Now for the first time, you have CPU and universal memory—and that’s it. And we have talked about this with a lot of customers along the way, and invariably customers kind of walk away thinking, “Wow, in order to fully leverage this I have to rethink how I do software.”

It’s very rare that the world flash-cuts from the old to the new. Companies need some kind of transition vehicle. So the biggest challenge and responsibility that HP will have is, how do we make your current world work and look like it does today but working in a Machine context? And how do we allow you to write a new class of apps, so that over a period of probably a couple of decades, quite frankly, the software world can transform?

Let’s say everything goes right and this transforms how computing works. How does this affect how people use computing in the future?

One of the reasons that we’re doing this is that, as a result of compute and data being very close together, we can be far more efficient in performance. The second big, big thing is energy consumption. When you are not consuming any electricity to maintain your data, your energy consumption curve goes down dramatically. And then tied to that is the photonics angle, because I can get significantly more bandwidth through a pipe, but as I get to extreme levels of bandwidth I do it at an energy curve that is far, far less than what is being used today.

So back to the original conversation around the mega-trends: They told us that we were building data centers at an alarming rate and the biggest problem that people have in creating data centers is that they can’t get energy to the data center. That was probably the single biggest part of the mega-trend analysis that got us to the Machine: How do we continue to store and manipulate data? If we have no more energy available to us, then we’re going to have to do it differently.

Are there thinkers in the past who are an inspiration either to you personally or to HP Labs, or role models that you look to?

If I had to point to one single individual who is influencing me in this journey, it would be Joel Birnbaum. He ran HP Labs during two stints, from 1984 to 1986 and again from 1991 through 1999. Joel Birnbaum essentially allowed HP to bring forth RISC computing. He went through what I would categorize as a very similar event, where he was trying to take a completely new [microprocessing] architecture, in his day RISC, and work through all of the end-to-end transformations to bring a completely new architecture to market.

