HPE Unveils Huge Single-Memory Computer Prototype

HPE on Tuesday introduced the world’s largest single-memory computer as part of The Machine, its research project into memory-driven computing.

The computer has 160 T-bytes of memory, and HPE expects the architecture to allow memory to scale up to 4,096 yottabytes.

The memory is spread across 40 physical nodes that are interconnected using a high-performance fabric protocol.
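
For a rough sense of scale, the quoted figures work out as in the short C sketch below; the even split of memory across nodes is an assumption, since the article does not give the exact per-node layout.

```c
/* Back-of-the-envelope arithmetic on the figures in the article:
 * a 160 TB pool across 40 nodes, and the 4,096-yottabyte ceiling
 * HPE projects for the architecture.  The even per-node split is
 * an assumption. */
#include <stdio.h>

int main(void) {
    const double total_bytes   = 160e12;   /* 160 T-bytes in the prototype pool */
    const int    nodes         = 40;       /* physical nodes on the fabric */
    const double ceiling_bytes = 4096e24;  /* 4,096 yottabytes */

    printf("Per-node share: %.0f TB\n", total_bytes / nodes / 1e12);   /* 4 TB */
    printf("Projected ceiling vs. prototype: %.2e times larger\n",
           ceiling_bytes / total_bytes);                               /* ~2.6e13 */
    return 0;
}
```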

The computer runs an optimized Linux-based operating system on ThunderX2, Cavium’s flagship second-generation, dual socket-capable ARMv8-A workload-optimized System on a Chip.

It uses photonics/optical communication links, including the new HPE X1 photonics module.

The computer has software programming tools designed to take advantage of abundant persistent memory.
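
HPE did not detail those tools here, but the general programming model they target — treating a large pool of persistent memory as directly addressable data rather than as files on storage — can be sketched with ordinary Linux memory mapping. This is only an illustration under that assumption, not HPE’s toolchain, and the backing file path is hypothetical.

```c
/* Minimal illustration of programming against persistent memory:
 * data structures live at addresses in a mapped pool and are updated
 * in place, with no explicit read/write-to-disk step.  Uses plain
 * Linux mmap() on an ordinary file as a stand-in; not HPE's tools. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define POOL_SIZE (64UL * 1024 * 1024)   /* 64 MB stand-in for a far larger pool */

int main(void) {
    /* Hypothetical backing file standing in for a persistent-memory device. */
    int fd = open("/tmp/pmem-demo.bin", O_RDWR | O_CREAT, 0600);
    if (fd < 0 || ftruncate(fd, POOL_SIZE) != 0) {
        perror("open/ftruncate");
        return 1;
    }

    /* Map the pool into the address space so it can be used directly. */
    char *pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pool == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(pool, "record #1: updated in place, persisted on msync");
    msync(pool, POOL_SIZE, MS_SYNC);     /* flush changes to the backing store */

    printf("%s\n", pool);
    munmap(pool, POOL_SIZE);
    close(fd);
    return 0;
}
```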

The technology is built for the big data era, HPE said.

One Size Fits All

“We think we’ve got an architecture that scales all the way from edge devices — the intelligent edge — through some bigger systems that are probably more in line with what we’ve built out as a prototype,” said HPE Fellow Andrew Wheeler, deputy director of Hewlett Packard Labs.

“If you think competitively against that scale of things, you’re going up against conventional supercomputers, which have their scale limits and will have energy consumption levels 10 to 20 times what we have here,” he told TechNewsWorld. The 40 nodes and 160 T-bytes of memory “all fit very comfortably within a single [server] rack.”

HPE has discussed the technology with hundreds of customers, including industry verticals, high-performance computing companies, analytics firms, financial institutions and others, Wheeler said, noting that “everyone we talked to completely resonates.”

The Memory-Driven Computing Difference

HPE’s memory-driven computer offers “tremendous speedup,” Wheeler said, because everything resides on the memory fabric.

HPE is working on that fabric as part of the Gen-Z Consortium, which includes ARM, AMD, Cavium, Broadcom, Huawei, IBM, Lenovo, Micron, Dell EMC, Seagate and Western Digital among its members.

The current von Neumann architecture has computers moving data “all over the place,” noted Paul Teich, a principal analyst at Tirias Research. “Even if you’re working with one of the big [Software as a Service] suites, they spend a lot of energy and time moving data around through the processors.”

Having 160 T-bytes in one space lets users “leave all the data in memory, point to an address, and everything happens automagically,” Teich told TechNewsWorld.

“Everything becomes a lot faster and more fluid,” he explained, “and given that your primary big data dataset doesn’t change, the energy you save by not moving that data out of storage into dozens or hundreds of machines is huge. Instead of having to move the data closer to a processor, you have the processor closer to the data.”
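
As a toy illustration of the contrast Teich describes, the C sketch below computes the same result two ways: once by staging a copy of the data toward the processor, and once in place through a pointer into the (simulated) pool. The array, sizes and function names are illustrative assumptions, not anything from The Machine.

```c
/* Toy contrast between copy-then-compute and compute-in-place.
 * A plain in-process array stands in for a fabric-attached memory pool. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N (1u << 20)          /* 1M records standing in for a much larger dataset */

/* Storage-centric model: move the data toward the processor, then compute. */
static uint64_t sum_with_copies(const uint32_t *storage, size_t n) {
    uint32_t *local = malloc(n * sizeof *local);   /* staging buffer */
    memcpy(local, storage, n * sizeof *local);     /* the costly data movement */
    uint64_t s = 0;
    for (size_t i = 0; i < n; i++) s += local[i];
    free(local);
    return s;
}

/* Memory-driven model: work on the data where it already resides. */
static uint64_t sum_in_place(const uint32_t *pool, size_t n) {
    uint64_t s = 0;
    for (size_t i = 0; i < n; i++) s += pool[i];   /* just follow the pointer */
    return s;
}

int main(void) {
    uint32_t *pool = malloc(N * sizeof *pool);
    for (size_t i = 0; i < N; i++) pool[i] = (uint32_t)i;

    printf("copy-then-compute: %llu\n", (unsigned long long)sum_with_copies(pool, N));
    printf("compute-in-place:  %llu\n", (unsigned long long)sum_in_place(pool, N));
    free(pool);
    return 0;
}
```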

Another Side of the Story

The memory-driven computer “is an odd mix of technology, heavy use of optical, and flash, so memory speeds should be exceptional, but there’s no GPU, and [it uses] ARM CPUs, so processing could be relatively slow,” noted Rob Enderle, principal analyst at the Enderle Group.

“This thing could handle an impressive amount of data very quickly as long as you aren’t doing that much with it,” he told TechNewsWorld.

“I don’t see the world putting all health records in a single system — or Facebook all its data,” remarked Holger Mueller, a principal analyst at Constellation Research.

“Most of the Big Data use cases we know today are fine with HDD, or HDD with some memory powered by Spark,” he told TechNewsWorld. That leaves “limited room for the new offering for only very high-value, high cost-justifying use cases.”

Cost will be the main factor driving the market, Mueller suggested.

Intel has launched 3D XPoint memory, branded as “Optane,” which is also persistent but performs at near-DRAM speed, Enderle pointed out.

That “likely makes the HPE effort obsolete before it ships,” he said. “Had [HPE] brought this out in 2015 when it was expected, it would have been far more interesting.”

Richard Adhikari

Richard Adhikari has been an ECT News Network reporter since 2008. His areas of focus include cybersecurity, mobile technologies, CRM, databases, software development, mainframe and mid-range computing, and application development. He has written and edited for numerous publications, including InformationWeek and Computerworld. He is the author of two books on client/server technology.
