Researchers at Intel unveiled an experimental 80-core microchip Monday at the International Solid-State Circuits Conference in San Francisco. Known as a “Teraflop research chip,” it is the first programmable microprocessor capable of delivering performance formerly associated only with supercomputers, according to Intel.
If successful, Intel’s research into “tera-scale computing” — in which a chip modeled on the Teraflop design can perform trillions of calculations per second and move terabytes of data — has the potential to transform computers, software and the way people use their computers.
“Basically, it creates a processor that can reconfigure itself on the fly to do a variety of tasks like graphics and physics that required specialized parts in the past,” Rob Enderle, principal analyst at Enderle Group, told TechNewsWorld.
Four Cores, Eight Cores or More
Previously, Intel concentrated on making microprocessors that ran faster and faster, Martin Reynolds, a Gartner Research fellow, told TechNewsWorld. This strategy more or less followed Moore’s Law, which states that the number of transistors on a chip roughly doubles every 18 to 24 months with each new generation. Intel, however, “ran into a wall where faster and faster became too hot, consuming too much power,” Reynolds said.
“So [the question is], how do you use Moore’s Law even if you can’t make things faster,” he explained. “The answer is you put more [microprocessors] on instead.”
Intel found that two slower — but not much slower — processors together consume much less power yet do almost twice as much work as a single faster chip, according to Reynolds. Intel’s two-core Core Duo and Core 2 Duo chips and its four-core Core 2 Quad chips take advantage of that technology.
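To get a feel for why that tradeoff works, consider a rough back-of-the-envelope sketch in Python. The scaling rule it uses (dynamic power growing roughly with voltage squared times frequency) is a standard CMOS approximation, and every frequency and voltage figure below is an assumption made up for illustration, not an Intel number.

```python
# Illustrative only: dynamic CMOS power scales roughly as P ~ V^2 * f, and a
# lower clock usually allows a lower supply voltage, so power falls faster
# than performance does. All scaling figures below are assumptions.

def relative_power(freq_scale, volt_scale):
    """Dynamic power relative to a baseline core (P ~ V^2 * f)."""
    return volt_scale ** 2 * freq_scale

# One core pushed 30% faster, assumed to need roughly 30% more voltage.
fast_power = relative_power(1.3, 1.3)        # ~2.20
fast_work = 1.3

# Two cores, each clocked 10% slower at 10% lower voltage.
dual_power = 2 * relative_power(0.9, 0.9)    # ~1.46
dual_work = 2 * 0.9                          # 1.80

print(f"one faster core:  power {fast_power:.2f}, work {fast_work:.2f}")
print(f"two slower cores: power {dual_power:.2f}, work {dual_work:.2f}")
```

Under these assumed numbers, the pair of slower cores does more total work for noticeably less power than the single core that has been pushed harder, which is the essence of the multi-core argument.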
“We have a natural progression coming,” Reynolds said. “We’ve got two-core and four-core, now. And we’ve got eight- and sixteen-core coming.”
Intel is attempting to demonstrate that it can take this technology to far greater numbers than was previously thought possible, Reynolds continued. “They are trying to figure out what they have to do to build an 80-core chip, one with 80 processors on it.”
Intel’s answer with the Teraflop research chips is a “tile design” using smaller cores replicated as tiles. According to the company, the new design makes it easier to create a chip with many cores and lays a path to manufacture multi-core processors with billions of transistors more efficiently in the future.
Less Power, Captain
Intel’s research thus far has produced an 80-core chip designed with floating point cores. Although the chip is not suitable for production, the company says it was able to glean important insights from its Teraflop research chip in terms of new silicon design methodologies, high-bandwidth interconnects and energy management approaches.
Intel first achieved teraflop performance in 1996 with ASCI Red, a supercomputer it built for Sandia National Laboratories that was powered by nearly 10,000 Pentium Pro processors and consumed some 500 kilowatts of electricity. The 80-core research chip uses only 62 watts of power, the company said. That is less than most single-core processors in use today.
“It’s a long-term [project], but in building this chip, [Intel] is addressing a few of the problems,” Reynolds said. “The first question is, will it run cool enough? And they have developed power management technology that will let them do that.”
The second challenge for Intel is how to create an infrastructure through which the 80 processors can communicate, transfer information and perform. Here, Intel has developed a mesh-like “network-on-a-chip” architecture that allows high-bandwidth communication between the cores, the company said.
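Intel has not published the details of that interconnect here, but a mesh generally means each core sits at a grid position and messages hop from tile to tile toward their destination. The short sketch below is only a generic illustration of that idea; the assumed 8-by-10 grid and the simple dimension-ordered (“X then Y”) routing are common network-on-chip conventions, not Intel specifics.

```python
# A minimal, generic sketch of addressing 80 cores on a 2-D mesh interconnect.
# The 8 x 10 layout and the routing rule are assumptions for illustration.

ROWS, COLS = 8, 10  # 80 tiles

def tile_coords(core_id):
    """Map a core number (0-79) to its (row, column) position on the mesh."""
    return divmod(core_id, COLS)

def xy_route(src, dst):
    """List the tiles a message visits under dimension-ordered routing."""
    r, c = tile_coords(src)
    dr, dc = tile_coords(dst)
    path = [(r, c)]
    while c != dc:                      # travel along the row first
        c += 1 if dc > c else -1
        path.append((r, c))
    while r != dr:                      # then along the column
        r += 1 if dr > r else -1
        path.append((r, c))
    return path

hops = len(xy_route(0, 79)) - 1
print(f"core 0 -> core 79 crosses {hops} links")   # 16 hops on an 8 x 10 mesh
```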
If You Build It, They Will Need It
Before Intel can begin manufacturing the chip, it will have to overcome two big stumbling blocks, Enderle explained — cost and usability.
“They have to get the cost down dramatically,” he said. “And they have to figure out a way to effectively use this potential.”
Many-core chips will probably make it into PCs during the middle or latter part of the next decade, Enderle predicted. When they do, it will mean a “rather massive change out of existing hardware,” since products based on the technology will be more flexible and less expensive for a given level of work. That also means software makers will have to design totally new products.
“The software community isn’t able yet to really grasp two cores effectively, let alone four or eight. Larger numbers are well beyond the current skill set,” Enderle explained.
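What “grasping” many cores means in practice is splitting a job into independent pieces that can run side by side. The generic Python sketch below is not tied to Intel’s chip in any way; it simply spreads a calculation over however many cores the host machine reports, as a taste of the kind of restructuring the analysts are describing.

```python
# Toy example: divide a sum-of-squares calculation across all available cores.
# Generic illustration only; nothing here is specific to Intel's research chip.

from multiprocessing import Pool, cpu_count

def partial_sum(bounds):
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    n = 10_000_000
    cores = cpu_count()
    step = n // cores
    # Split the range into one chunk per core; the last chunk takes the remainder.
    chunks = [(i * step, n if i == cores - 1 else (i + 1) * step)
              for i in range(cores)]

    with Pool(cores) as pool:           # one worker process per core
        total = sum(pool.map(partial_sum, chunks))

    print(f"sum of squares below {n}: {total} (computed on {cores} cores)")
```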
If software makers are not careful, they could be steamrolled by multi-core technology, Gartner’s Reynolds added. He predicted that new software companies with a fresh approach to the problem will emerge and displace existing vendors.
“Microsoft has to be thinking about this,” Reynolds claimed. “And they could get caught by this if things change and they find that they have been overtaken by new ideas. They are very aware of this and have been researching it as well.”
Now, finally, multiprocessing for all. Not that most of us need it, but it sure looks good!
It’s been a long time since I was an electronics guy, but I just had to throw out my thoughts here. Ever since I was a boy and Dad had me file his rice-paper copies, file management struck me: gee, if I make 10 columns, I can file the papers in a fraction of the time by going straight to the column that starts with the number of my current paper. The rest of the columns, and all the papers within them, won’t be a time-consuming factor at all. And if my brothers and sisters come down and help me out, wow, the family is multiprocessing! Then, as a young tech working on the repair of Z-80-based dialers, the winds of parallel war were aloft. Now, however, it’s not war but creative growth.
The article mentioned that software programmers must learn how to program for this. Well, largely it has been figured out. Unix was given to us by AT&T, and as I recall, the old Digital VAX units ran parallel and distributed programs. Anyway, I’m not a programmer, but from my mind’s eye of what is or should be, it seems the basic structure of software design using such strong chips should be something like this: the software designer will decide how heavy the workload is and request the desired configuration. This could be done in, say, four levels of priority, perhaps set up around an alphanumeric code. He might want 36 or 72 processors, leaving 8 processors for functions other than application calculations, like networking, communications, management, security, administration, diagnostics, governmental functions and so on.
If applications are required to run in multiple word sizes, say 64 bits as a base up through 128 and 256, then power management can be more involved, depending on, say, delays in processing times.
Oh yes, of course we will need two more chips in addition to the main unit to support the video. Only joking. But it is exciting, and now if we can just get rid of these damn hard drives and bring fiber with, say, 256 discrete colors right into the home, wow. I would not even complain if the government wanted a piece of my fiber connection and one or two of my processors for their token ring network.
Until then, it seems to me the FCC should demand that all of us poor schmoes on dial-up, miles from the CO, be allowed the top data rates possible, so as to clear network loads and increase the efficiency and availability of said network. Not to mention improve my surfing and stimulate my mind’s growth and functionality.
Anyway, for the last 20 years or so I’ve just been an electrician, since someone stole my scope, but I hope there might be a little insight in here for a beginner out there somewhere.
Be Good, Be Well, Stay Strong, and Have Some Fun
T