First Dual-Core Pentium 4 a Rush Job, Intel Says

Filed under: Hardware

Intel's first dual-core chip was a hastily concocted design that was rushed out the door in hopes of beating rival Advanced Micro Devices (AMD) to the punch, an Intel engineer told attendees at the Hot Chips conference today.

Following the company's realization that its single-core processors had hit a wall, Intel engineers plunged headlong into designing the Smithfield dual-core chip in 2004, but they faced numerous challenges in getting that chip to market, according to Jonathan Douglas, a principal engineer in Intel's Digital Enterprise Group, which makes chips for office desktops and servers.

"We faced many challenges from taking a design team focused on making the highest-performing processors possible to one focused on multicore designs," Douglas said in a presentation on Intel's Pentium D 800 series desktop chips and the forthcoming Paxville server chip, both of which are based on the Smithfield core.

Same Old Bus

Intel was unable to design a new memory bus in time for the dual-core chip, so it kept the same bus structure that older Pentium 4 chips used, Douglas said at the conference at Stanford University. This bus could support two separate single-core processors, but it was far less efficient than either the dual-independent buses that will appear on the Paxville processors or the integrated memory controller used on AMD's chips. The memory bus or front-side bus on Intel's chips is used to connect the processor to memory.

All of Intel's testing tools and processes had been designed for single-core chips, Douglas said. As a result, the company had to quickly devise a new testing methodology for dual-core chips that could measure the connections between both cores.

In addition, engineers had to design a new package for the Pentium D chips that could accommodate both cores. "We're putting two cores in one package; it's like trying to fit into the pair of pants you saved from college," Douglas said.

Another Design Preferred

Intel would have preferred a design that put two separate pieces of silicon in a single package, like the approach that will be used for a future desktop chip called Presler, but its packaging team simply didn't have time to get that in place for Smithfield, Douglas said.

The company's Pentium D processors consist of two Pentium 4 cores placed closely together on a single silicon die. The design creates some problems, since dual-core processors must have some logic that coordinates the actions of both cores, and those transistors must go somewhere in an already small package, Douglas said. This complication led to signaling problems that needed to be overcome, he said.

Intel also had to design special thermal diodes into the chip to closely monitor the heat emitted by the combination of two fast processor cores, Douglas said.
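On-die thermal sensing of this kind is now routinely exposed to the operating system. Purely as a Linux-flavored illustration (it is not something Douglas presented, and read_core_temps is a hypothetical helper name), the sketch below assumes an Intel system where the kernel's coretemp hwmon driver is loaded and reads each labeled temperature it publishes under sysfs.

```python
# Illustrative only: assumes a Linux system where the kernel's "coretemp"
# hwmon driver is loaded (typical for modern Intel CPUs). It is not a model
# of the analog thermal diodes Douglas described in Smithfield itself.
from pathlib import Path


def read_core_temps(hwmon_root: str = "/sys/class/hwmon") -> dict:
    """Return a {sensor label: degrees Celsius} mapping for coretemp sensors."""
    temps = {}
    for hwmon in Path(hwmon_root).glob("hwmon*"):
        name_file = hwmon / "name"
        if not name_file.is_file() or name_file.read_text().strip() != "coretemp":
            continue  # skip non-CPU sensors (chipset, drives, GPU, ...)
        for temp_input in sorted(hwmon.glob("temp*_input")):
            label_file = hwmon / temp_input.name.replace("_input", "_label")
            label = (label_file.read_text().strip()
                     if label_file.is_file() else temp_input.name)
            # hwmon reports temperatures in millidegrees Celsius
            temps[label] = int(temp_input.read_text().strip()) / 1000.0
    return temps


if __name__ == "__main__":
    for label, celsius in sorted(read_core_temps().items()):
        print(f"{label}: {celsius:.1f} C")
```

On a typical multicore Intel system this prints a package-level reading plus one "Core N" line per physical core.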

Ultimately, Intel completed the Smithfield processor core in nine months, Douglas said. By Intel's standards, that is an extremely short development time for a major processor design, said Kevin Krewell, editor in chief of The Microprocessor Report in San Jose, California.

"Most designs take years," Krewell said. "But it was very important for them to get back in the game and have a road map."

Timeline

Intel began to put together the Smithfield project around the time it publicly announced (in May 2004) plans to cancel two future single-core designs and concentrate on multicore chips. The company realized that wringing more clock speed out of its single-core designs would require a significant engineering effort to deal with the excessive heat given off by such chips.

At the time, AMD had already started work on a dual-core version of its Opteron server processor, which it subsequently demonstrated in September of that year. AMD unveiled its dual-core Opteron chip in April, a few days after Intel launched Smithfield. AMD has since released dual-core desktop chips.

One reason for Intel's aggressive schedule for developing Smithfield was the company's need to respond to AMD's actions, Douglas said, without mentioning AMD by name. "We needed a competitive response. We were behind," he said.

Despite the rush, Smithfield was good enough to get Intel into the dual-core era, Krewell said. "It's not an optimal solution, but it's a viable solution. It works, and it works reasonably well," he said.

Intel took a little more time designing the server version of Smithfield, known as Paxville, Douglas said. For instance, the company addressed the bus inefficiencies by designing Paxville to use dual-independent front-side buses. Also, the more sophisticated package was available in time for Paxville, reducing the chip's power consumption, he said.

Paxville will be released ahead of schedule, in separate versions for two-way servers and for servers with four or more processors. Intel had originally expected to release the chip in 2006, but it announced Monday that Paxville will now ship in the second half of this year. Another dual-core server processor, code-named Dempsey, will follow in the first quarter of 2006.

Future multicore designs will present additional challenges, Douglas said. Point-to-point buses and integrated memory controllers have been prominent features of other multicore designs, such as the Opteron and the Cell processor. These designs help improve performance, but they require a larger number of pins to deliver power to the processor, and that can hurt yields, he said.

By Tom Krazit
IDG News Service
