At last year’s HP Discover, the company’s Chief Technology Officer, Martin Fink, announced an ambitious new technology called The Machine. At this year’s HP Discover, he was back with an update. What makes The Machine unique is its memory-based architecture, a departure from today’s processor-centric designs. This is what HP is calling Memory-Driven Computing.
Following HP Discover 2015, Martin Fink posted a summary update titled Accelerating The Machine. It’s easy to spot the marketing spin in “accelerating” The Machine when the rest of the post explains the difficulty of creating a new computing paradigm. How do you create a computer that uses new hardware and requires new software to control it?
We call this process hardware/software co-development. We’re building hardware to handle data structures and run applications that don’t exist today. Simultaneously, we’re writing code to run on hardware that doesn’t exist today. To do this, we emulate the nascent hardware inside the best systems available today, such as HP Superdome X. What we learn in hardware goes into the emulators so the software can improve. What we learn in software development informs the hardware teams. It’s a virtuous cycle.
One year on from the initial announcement, some are concerned that The Machine is losing its signature component: the memristors that made the announcement so exciting. The prototypes will instead use plain old DRAM with emulated persistence so they can be built as quickly as possible.
The heart of The Machine is memory. This is so different from the processor-centric architecture today that we decided The Machine architecture needed a new name: Memory-Driven Computing.
A lot of memory. We’re aiming for hundreds of petabytes of memory which needs to be fast and persistent. (Even the first prototype will cram hundreds of terabytes of fast, persistent memory into a single rack.) The perfect memory doesn’t exist today. We can get fast, or persistent, but not both at once.
Remember I said we want working prototypes as soon as possible? We can get there sooner if we use plain old DRAM as a stand-in for the perfect memory technology. No, DRAM isn’t persistent, but we can emulate persistence.
Do we still intend to use Memristors? Yes, of course. We still believe that it’s the best candidate, but as everyone who works in the chip industry knows, the road from “lab to fab” is a long and rocky one. We’re not there. Yet.
Should we wait until Memristors are ready before making prototype Machines? No, that wouldn’t make much sense. We want as many people to be able to start working in a Memory-Driven world as soon as possible. And, pragmatically, we want to show off what we’ve created in a tangible form, doing something amazing.
Remember, we’re talking about the first realization of The Machine architecture. Version 0.9 if you will. The versions that follow will only get better. Should we use another memory technology if it matures before Memristor? Phase-change RAM, perhaps? Again, the answer is yes, of course.
Why? Because it moves us forward faster. Because it moves our industry and the world forward, solving real-world challenges that we can’t even approach today.
Essentially, it sounds like HP Labs is doing everything necessary to get working prototypes built, with plans to come back in future iterations and replace components as the team learns more and the hardware advances. With the prototypes in hand, they should be able to identify the hardware optimizations worth designing toward and the software controls that need to be implemented.
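Fink doesn’t say how HP’s prototypes emulate persistence on DRAM, but the general technique is well known: back a region of ordinary memory with stable storage and flush it at well-defined points. The sketch below is a minimal, hypothetical illustration of that idea in C (the file name pmem.pool and the pool size are made up, and this is not HP’s actual emulation layer): a POSIX memory-mapped file lets software read and write the region at DRAM speed through the page cache, while msync() forces the contents out to storage so they survive a restart.

```c
/*
 * Minimal sketch of one way to emulate persistent memory on DRAM:
 * map a backing file into the address space, work on it at DRAM
 * speed through the page cache, and flush dirty pages with msync()
 * at "persistence points". This is an illustration of the general
 * technique, not HP's emulation layer.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define POOL_SIZE (64 * 1024)   /* 64 KiB toy "persistent" pool */

int main(void)
{
    /* The backing file stands in for the future non-volatile medium. */
    int fd = open("pmem.pool", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, POOL_SIZE) < 0) {
        perror("pool setup");
        return 1;
    }

    /* Map it in; loads and stores now hit DRAM (the page cache). */
    char *pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (pool == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Ordinary memory operations, no read()/write() I/O path. */
    strcpy(pool, "survives a restart");

    /* Emulated persistence: force dirty pages to stable storage. */
    if (msync(pool, POOL_SIZE, MS_SYNC) < 0)
        perror("msync");

    munmap(pool, POOL_SIZE);
    close(fd);
    return 0;
}
```

Software written against an emulated “persistent” pool like this can be developed and tested today, then retargeted at memristors (or phase-change RAM) once a real non-volatile medium is ready, which is exactly the co-development cycle Fink describes.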
It will be interesting to see how The Machine fares with HP’s proposed split of the company, expected to close by October 2015: one business would focus on PCs and printers, while the other would continue the enterprise products and services business. Presumably The Machine would land in the enterprise division, given its datacenter focus.