First Dual-Core Pentium 4 a Rush Job, Intel Says

Filed under: Hardware

Intel's first dual-core chip was a hastily concocted design that was rushed out the door in hopes of beating rival Advanced Micro Devices (AMD) to the punch, an Intel engineer told attendees at the Hot Chips conference today.

Following the company's realization that its single-core processors had hit a wall, Intel engineers plunged headlong into designing the Smithfield dual-core chip in 2004, but they faced numerous challenges in getting that chip to market, according to Jonathan Douglas, a principal engineer in Intel's Digital Enterprise Group, which makes chips for office desktops and servers.

"We faced many challenges from taking a design team focused on making the highest-performing processors possible to one focused on multicore designs," Douglas said in a presentation on Intel's Pentium D 800 series desktop chips and the forthcoming Paxville server chip, both of which are based on the Smithfield core.

Same Old Bus

Intel was unable to design a new memory bus in time for the dual-core chip, so it kept the bus structure used by older Pentium 4 chips, Douglas said at the conference at Stanford University. This bus could support two separate single-core processors, but it was far less efficient than either the dual independent buses that will appear on the Paxville processors or the integrated memory controller used on AMD's chips. On Intel's chips, the memory bus, or front-side bus, connects the processor to memory.

All of Intel's testing tools and processes had been designed for single-core chips, Douglas said. As a result, the company had to quickly devise a new testing methodology for dual-core chips that could measure the connections between both cores.

In addition, engineers had to design a new package for the Pentium D chips that could accommodate both cores. "We're putting two cores in one package; it's like trying to fit into the pair of pants you saved from college," Douglas said.

Another Design Preferred

Intel would have preferred to design a package that put two pieces of silicon in a single package, like the design that will be used for a future desktop chip called Presler, but its packaging team simply didn't have time to get that in place for Smithfield, Douglas said.

The company's Pentium D processors consist of two Pentium 4 cores placed closely together on a single silicon die. The design creates some problems, since dual-core processors must have some logic that coordinates the actions of both cores, and those transistors must go somewhere in an already small package, Douglas said. This complication led to signaling problems that needed to be overcome, he said.

Intel also had to design special thermal diodes into the chip to closely monitor the heat emitted by the combination of two fast processor cores, Douglas said.
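On-die thermal sensors of this kind have since become standard, and modern operating systems expose their readings to software. As a loose illustration (not Intel's validation tooling), the sketch below reads whatever temperature sensors a Linux kernel exposes through the standard hwmon sysfs interface, such as those reported by the coretemp driver; which sensors exist, and their labels, vary by machine.

```python
from pathlib import Path

def read_hwmon_temps():
    """Return {sensor_label: degrees_celsius} for all hwmon temperature inputs."""
    temps = {}
    # Each hwmon device is a directory like /sys/class/hwmon/hwmon0.
    for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
        name_file = hwmon / "name"
        name = name_file.read_text().strip() if name_file.exists() else hwmon.name
        # Sensor values are millidegrees Celsius in files named temp<N>_input.
        for temp_input in hwmon.glob("temp*_input"):
            label_file = hwmon / temp_input.name.replace("_input", "_label")
            label = (label_file.read_text().strip()
                     if label_file.exists() else temp_input.name)
            try:
                millidegrees = int(temp_input.read_text().strip())
            except (OSError, ValueError):
                continue  # sensor present but unreadable; skip it
            temps[f"{name}/{label}"] = millidegrees / 1000.0
    return temps

if __name__ == "__main__":
    for sensor, celsius in sorted(read_hwmon_temps().items()):
        print(f"{sensor}: {celsius:.1f} C")
```

On systems without hwmon sensors (or outside Linux), the function simply returns an empty dictionary rather than failing.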

Ultimately, Intel completed the Smithfield processor core in nine months, Douglas said. By Intel's standards, that is an extremely short development time for a major processor design, said Kevin Krewell, editor in chief of The Microprocessor Report in San Jose, California.

"Most designs take years," Krewell said. "But it was very important for them to get back in the game and have a road map."

Timeline

Intel began to put together the Smithfield project around the time it publicly announced (in May 2004) plans to cancel two future single-core designs and concentrate on multicore chips. The company realized that wringing more clock speed out of its single-core designs would require a significant engineering effort to deal with the excessive heat given off by such chips.

At the time, AMD had already started work on a dual-core version of its Opteron server processor, which it subsequently demonstrated in September of that year. AMD unveiled its dual-core Opteron chip in April, a few days after Intel launched Smithfield. AMD has since released dual-core desktop chips.

One reason for Intel's aggressive schedule for developing Smithfield was the company's need to respond to AMD's actions, Douglas said, without mentioning AMD by name. "We needed a competitive response. We were behind," he said.

Despite the rush, Smithfield was good enough to get Intel into the dual-core era, Krewell said. "It's not an optimal solution, but it's a viable solution. It works, and it works reasonably well," he said.

Intel took a little more time designing the server version of Smithfield, known as Paxville, Douglas said. For instance, the company addressed the bus inefficiencies by designing Paxville to use dual-independent front-side buses. Also, the more sophisticated package was available in time for Paxville, reducing the chip's power consumption, he said.

Paxville will be released ahead of schedule later this year in separate versions for two-way servers and for servers with four or more processors. Though Intel had originally expected to release the chip in 2006, it announced Monday that it will get Paxville out the door in the second half of this year. Another dual-core server processor, code-named Dempsey, will be released in the first quarter of 2006.

Future multicore designs will present additional challenges, Douglas said. Point-to-point buses and integrated memory controllers have been prominent features of other multicore designs, such as Opteron and the Cell processor. These designs help improve performance, but they require a larger number of pins to deliver electricity into the processor, and that can hurt yields, he said.

By Tom Krazit
IDG News Service
