The LINUX.COM Article Archive
Originally Published: Thursday, 23 March 2000 | Author: Brian Richardson
Published to: enhance_articles_hardware/Hardware Reviews
So What's The Deal With RAMBUS?
Much hoopla has been made over Intel's choice of RAMBUS as their preferred memory technology on upcoming chipsets. While RAMBUS is promising, actual implementation in the real world seems to fall short. Why use RAMBUS? Why does it look like RAMBUS will die before it gets off the ground? Gather 'round kiddies, as Uncle Brian again preaches a "sermon from the /mnt".
On a current PC system, the CPU speed is the product of two factors: the Front Side Bus (FSB) speed and the CPU multiplier. Multiply the FSB by the multiplier and you get the CPU's actual internal speed. An AMD K6-2 300 MHz CPU uses a 4.5X multiplier and a 66 MHz FSB, while the AMD K6-2 450 MHz CPU uses a 4.5X multiplier and a 100 MHz FSB. The FSB is used to derive all the other clock signals in the computer. The PCI and AGP buses operate at fractions of the FSB, routed through clock divider circuits. The memory bus runs at the same speed as the FSB, hence the term "Synchronous DRAM."
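The clock arithmetic above can be sketched in a few lines. This is a simplified model, not a real chipset register map: the function name and the default divider values (chosen so a 100 MHz FSB yields the classic ~33 MHz PCI and ~66 MHz AGP clocks) are illustrative assumptions.

```python
# Toy model of how a classic PC derives its clock domains from one FSB clock.
# Divider values are illustrative defaults, not taken from any real chipset.

def derived_clocks(fsb_mhz, cpu_multiplier, pci_divider=3, agp_divider=1.5):
    """Return the clock domains derived from the front-side bus frequency."""
    return {
        "cpu": fsb_mhz * cpu_multiplier,    # internal CPU speed
        "memory": fsb_mhz,                  # SDRAM runs synchronously with the FSB
        "pci": fsb_mhz / pci_divider,       # PCI targets ~33 MHz
        "agp": fsb_mhz / agp_divider,       # AGP targets ~66 MHz
    }

clocks = derived_clocks(fsb_mhz=100, cpu_multiplier=4.5)
print(clocks["cpu"])  # 450.0 -- the K6-2 450 example above
```

The point of the model: every entry except `cpu` is locked to `fsb_mhz`, which is exactly why synchronous memory has to be replaced whenever the FSB speeds up.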
The disadvantage of synchronous memory shows up when you build a CPU with a faster FSB: you have to find memory that will run on the faster bus. Everybody who has SDRAM running at 66 MHz has to buy 100 MHz memory when they upgrade, or 133 MHz for the new Intel Pentium III EB processors.
How do you make a faster memory technology that doesn't get thrown out every time the FSB gets cranked up? You make a memory technology that isn't running at the same speed as the processor. That's the idea behind RAMBUS technology. RDRAM (RAMBUS DRAM) runs on a narrow, high-speed channel, conceptually similar to serial buses like USB or IEEE 1394. The bus starts at 300 MHz and can go upwards of 800 MHz. A RAMBUS chipset doesn't use the standard parallel address-line system of selecting memory by row and column addresses. Rather, the memory controller asks the RAMBUS channel for "location X" and the data comes back over the channel. So when the next processor technology comes along, RAMBUS doesn't have to change, since it isn't coupled to the CPU's FSB clock.
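The two addressing models can be contrasted in a toy sketch. The class names and the request-packet format here are invented for illustration; real SDRAM strobes and RDRAM packets are considerably more involved.

```python
# Toy contrast of the two addressing models described above.
# Class names and packet formats are illustrative, not real hardware protocols.

class SdramController:
    """Parallel-bus model: the chipset drives row and column strobes directly,
    on a bus clocked at the same speed as the FSB."""
    def __init__(self, cols_per_row=1024):
        self.cols = cols_per_row

    def read(self, address):
        row, col = divmod(address, self.cols)
        return (("RAS", row), ("CAS", col))  # two strobe phases on parallel lines

class RambusController:
    """Packet model: the chipset sends a request packet down the channel; the
    channel clock is independent of the CPU's front-side bus."""
    def read(self, address):
        return {"op": "READ", "addr": address}  # one packet on the fast channel

print(SdramController().read(2050))   # (('RAS', 2), ('CAS', 2))
print(RambusController().read(2050))  # {'op': 'READ', 'addr': 2050}
```

The decoupling the article describes is visible in the second class: nothing about the request depends on the FSB, so a faster CPU bus doesn't obsolete the memory.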
Sounds great, right? So why was the recent Tom's Hardware review of Intel chipsets based on RAMBUS technology reminiscent of the execution scene from "The Godfather"? The answer is simple: too many eggs in one basket. Intel's total dedication to RAMBUS on their newer chipsets left them unprepared to switch back to SDRAM in the event of prohibitive problems...and there are such problems.
The first problem is limited RAMBUS manufacturing yield. Major memory manufacturers can't produce RDRAM in sufficient volume, so prices are very high. This has many computer manufacturers reluctant to use RDRAM.
The second problem is that RAMBUS offers no significant performance improvement. PC600/700 RAMBUS is no better than PC100/133 SDRAM in the benchmarks. Given the several-hundred-dollar cost difference, who wants to pay for it? The extra layer between the CPU and the memory for the RAMBUS serial protocol causes a performance hit unless you're reading from sequential memory locations (i.e. if you run software that never calls any routines, uses a 'jump' statement, or performs an if/for/while/do statement...fantasy software).
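The sequential-versus-scattered penalty can be modeled crudely: charge a small cost for an access that stays in the currently open "page" of memory and a large setup cost for one that doesn't. The page size and the cost numbers are illustrative assumptions, not measured RDRAM latencies.

```python
# Toy cost model for why streaming reads favor RAMBUS-style channels: any
# access outside the currently open page pays a setup cost. The hit/miss
# costs and page size are made-up illustrative numbers, not real latencies.

def access_cost(addresses, page_size=64, hit_cost=1, miss_cost=10):
    total, open_page = 0, None
    for addr in addresses:
        page = addr // page_size
        total += hit_cost if page == open_page else miss_cost
        open_page = page
    return total

sequential = list(range(256))               # streaming read: 4 page openings
scattered = [(i * 97) % 256 for i in range(256)]  # every access changes page
print(access_cost(sequential), access_cost(scattered))  # 292 2560
```

Real software is full of jumps and branches, so its access pattern looks much more like `scattered` than `sequential`, which is the article's point about the benchmark results.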
The third problem is forced workarounds in the hardware. Intel's i820 chipset has been reduced to two RDRAM sockets due to chipset bugs. The Intel workaround for running SDRAM on a RAMBUS-enabled chipset is a mess. Their memory translator hub converts the chipset's RAMBUS protocol into SDRAM signaling and back again (this solution generates a lot of electrical noise, causing the average motherboard to function like a radar jammer). Needless to say, this option provides for less-than-stellar performance.
So what is a monolithic silicon manufacturer to do? RAMBUS may pay off in the long run for Intel, but they will probably take a bloody nose in the short term as VIA and AMD work on faster SDRAM using DDR (double data rate) for the Athlon CPU. For Intel, this may prove to be one nemesis too many.