The Low End Mac Mailbag

ATI Radeon Trade Up Program, Yet More on Intel and Other CPUs, and Step-by-Step Overclocking of a Beige G3

Dan Knight - 2003.02.19 - Tip Jar

ATI Trade-up Program

In response to discussion of Radeon cards and Mac OS X in recent mailbags, Gerald McRoberts writes:

BTW, I forgot to mention that ATI has a "trade-up" program for their display cards. Buy a Radeon 7000 for $129 and you get a $50 refund when you send them any old display board (apparently working or not). I just sent in a TwinTurbo 128 as trade-in on R7000. Pretty good deal, actually. The next lowest price I found was $99 back before Christmas at OWC, which hasn't been less than $119 since.

Wow, sounds like this could be a great way to unload some of those ancient Apple 4-bit NuBus video cards from way back....

CPU Competition

In response to CPU Competition, Andrew Prosnik comments:

Just wanted to voice my opinion on some points:

Intel using proprietary bus schemes? Huh? All P6-based products (Pentium Pro, Celerons, Pentium II, Pentium III) use a protocol called GTL+, if I remember correctly. And, for that protocol and platform (Slot 1, Socket 370) you can buy Via/Cyrix CPUs and I think there is another one or two (Transmeta, perhaps, and maybe one other?)

GTL+ is not blocked by Intel on the P6 platform for either CPUs or motherboard chipset manufacturers.

Okay, so how about the P7 (Pentium 4) platform? That uses a new bus, and Intel tried to guard their intellectual property. They didn't want Via to make motherboard chipsets without getting into a licensing agreement. Or was it SiS? Same difference. Anyway, Intel allowed other motherboard manufacturers to make products under license - and apparently without one as well, since Intel sued the other company, they settled out of court, and the other company apparently paid nothing and can still make motherboard chipsets.

One of the reasons you don't see much "competition" for the Pentium 4 and earlier Pentium 2/Pentium 3/Celeron lines is mainly due to the crappy system bus that Intel uses. After the Pentium platform and the AMD K6-3/K6-2+, AMD licensed the DEC Alpha EV6 system bus. It was much better suited to multiprocessing and more efficient than Intel's scheme in general. While Intel was stuck at 100 MHz or 133 MHz as the CPU interface (front-side bus, FSB), AMD's Athlon handled 200 MHz (100 MHz double-rate) and later 266 MHz and now 333 MHz.

AMD didn't license or use GTL+ because, well, it was an inferior product. Their Athlon CPU used the same physical connectors (Slot A was physically the same connector as Slot 1 and I believe the Socket A connector was/is the same as Socket 370) to reduce costs for motherboard manufacturers. Sure, you couldn't use an Athlon on a Pentium 3 motherboard, but that's probably a good thing.

AMD has the resources to develop and license their own platform. First it was licensing DEC's technology for the EV6 system bus, and now, for their new upcoming CPUs, they have developed their own technology (HyperTransport) to use. Other CPU manufacturers realized long ago that they cannot compete with either AMD or Intel on a performance level so they instead work on a low-power/budget level. For them it is much cheaper to leverage existing solutions such as Intel-based motherboards. It's also much cheaper for manufacturers using those CPUs to use readily-available components for things like laptops, kiosks, and all-in-one PCs.

So . . . I disagree with the assertion that Intel uses a proprietary system bus to keep out competition. It's just the reality of the market. Intel's competition found a better solution that helped them distinguish themselves from Intel and offer better performance as well. Previous x86 system buses were not exactly "open source" or "free" either. Nor were x86 PCs - until Compaq made a compatible system BIOS, only IBM was able to make x86-based systems (or you would have to get a license from them, I believe). You know, like when Apple let clones make systems. Except with the x86, IBM couldn't make them stop when they wanted no more competition. ;)

Heck, both Cyrix and AMD used to OEM Intel CPUs back when Intel didn't have enough CPU fabs to keep up with demand. AMD actually made the fastest 386 ever produced. I think both Cyrix and AMD's last generation of OEM'd CPUs was the 486 line. Heck, even IBM OEM'd 486's under their "Blue Thunder" brand.

Regarding Intel marketing the Pentium 4, well, I think Intel made the design decision for that CPU back when AMD and Intel were going neck and neck and Intel was having trouble scaling the aging P6 core up in speed. Intel (and everyone else) knows that MHz sells to uneducated (not "stupid") consumers, so I'm sure that was a consideration as well as the technical need to scale the CPU well. Heck, I'm no car expert. When I looked for a car I checked a few sites for reliability and safety. Then I looked at price. The car has 242 HP, gets 24 MPG, displaces 3.5L, blah blah blah. I don't care. I'm going to be driving the speed limit - 55 to 65 MPH. I don't really care what's under the hood as long as I get where I need to go.

In much the same sense, and this is an argument that I hear often from Mac advocates, why should I care so much about what's under the hood as long as I can get my work done? It's only when the other cars all move so much faster that I begin to notice. Say, the current G4/P4 situation.

But still, as a consumer looking at x86 systems on a basic performance parity, I see two systems that cost around the same. Do I care if one is more prone to CPU pipeline stalls? Do I care if one has a higher IPC? Or do I want something in my budget that does what I want it to do?

I take issue with the assumption that consumers who buy based on value (how much does it cost, and does it do what I need it to do?) are "stupid." That's a pretty arrogant PC hardware enthusiast attitude. Not everyone needs to run Unreal Tournament 2003 at 1600x1200 at 80 fps with the latest ATI Radeon 9700 Pro. Or Quake 3 at 366 fps.

Not everyone buys PC components piecemeal and assembles them by themselves. Not everyone wants to buy and install an OS and office suite. Not everyone is skilled enough to do this and troubleshoot the resulting system that they made - since there's no warranty or tech support on the system as a whole.

Hardware enthusiasts often take the elitist point of view that everyone should "educate" themselves and want to put together the best systems or hunt down various pieces of hardware for the best value.

What's a Mac hardware enthusiast? Someone who swaps the CPU out for an upgrade. Someone who adds another PCI card or hard drive. Someone who chips their old Quadra. Someone who paints their case.

It's much more difficult and expensive for Mac people to buy generic parts to make a Mac - you have to get Apple-based stuff or adapt PC stuff so that it will hold Apple products. I don't see those Mac enthusiasts encouraging "stupid" Mac consumers to build a Mac system based on buying components off of eBay.

By that token, it doesn't make sense for that to apply to the PC arena. Far more people don't know about how to maintain and fix and assemble a PC than do know. Just like far more people don't know how to maintain and fix and assemble a car than do know.

We often forget that the PC is a specialized area of knowledge and that all those unwashed heathens don't have our specialized level of knowledge. Why should we look down on them because they don't have the same training or experience? Why should we look down on them because they can get a solid system with minimal fuss?

Sure, they bought a Dell . . . but it's not slow, it's not hard to maintain, it has a warranty, and it is easier to deal with than some Frankenstein system.

Sorry for the digression but that whole "stupid" comment just smacks to me of techno-elitism. I get enough of that with people at work who know too much for their own good and let people know it. Anyone working in tech support or system administration should also know what I'm talking about. ;)

Getting back to the previous subject, true, the longer pipeline is a bit more inefficient for the Pentium 4. However, it isn't like Intel is sitting back and only using MHz to sell CPUs. They came out with hardware prefetch in an attempt to keep their CPU fed with instructions. They keep bumping up the CPU interface's speed, what is it, 533 MHz effective rate now? They include more and more on-chip cache to prevent the CPU from needing to access "slow" main system memory.

And the latest feat is on-chip multithreading, of sorts. Like a faux dual-CPU in one CPU package. While not as efficient as a true dual-CPU system, "HyperThreading" helps keep the CPU running at maximum efficiency.

Efficiency or not, Intel has the fastest, most advanced consumer CPU on the market. The MHz have helped that, but so have Intel's efforts in improving overall system performance through forcing new features onto the Pentium 4 platform.

Contrast that with the G4. It might be a decent CPU, but it is hampered by the system surrounding it. The interface to the rest of the system can't keep the CPU fed with instructions fast enough for the CPU to really flex its potential. All of the improvements on the Mac platform are either in the CPU or in the rest of the system - but between the two there is still a huge bottleneck. Intel has far less of a bottleneck and has taken measures to make sure that even it is minimized as much as possible.

The last thing I'll harp about is the comment about Apple not wanting to use IBM's CPUs. Doesn't Apple use IBM G3s in all their laptops now? If what a previous reader wrote about the "Apple System Bus" on the PPC970 and Apple co-designing the CPU is true, doesn't it seem silly to suggest that Apple would not want to use that CPU?

Pentium 3 CPUs are more efficient, clock-for-clock, than Pentium 4 CPUs. The problem is that the Pentium 3 core doesn't scale as fast anymore. It's scaling about as fast as the G4 and is currently at 1.4 GHz. At that speed it performs around the same as a 1.9 GHz Pentium 4 in peak integer operations and around 20% slower than a 1.3 GHz Pentium 4 in peak (and base) floating point operations (probably due to using SSE2 for the test).
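The clock-for-clock comparison above reduces to simple arithmetic. This is a sketch using only the GHz figures quoted in the letter; the "per-clock" ratio is just their quotient:

```python
# Rough clock-for-clock comparison from the figures above:
# a 1.4 GHz Pentium 3 roughly matches a 1.9 GHz Pentium 4
# in peak integer work.
p3_clock_ghz = 1.4
p4_clock_ghz = 1.9

# How much extra clock the P4 needs for integer parity.
clock_ratio = p4_clock_ghz / p3_clock_ghz

# Implied P4 per-clock integer throughput relative to the P3.
p4_relative_ipc = p3_clock_ghz / p4_clock_ghz

print(f"P4 needs {clock_ratio:.2f}x the P3's clock for integer parity")
print(f"Implied P4 per-clock throughput: {p4_relative_ipc:.0%} of the P3")
```

In other words, by these numbers the P4 gets roughly three quarters of the P3's integer work done per clock cycle - which is exactly why it has to run so much faster.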

So it's a bit of a myth about how efficient the Pentium 3 is compared to the Pentium 4 in real-world applications. The first Pentium 4 systems were horrible, yes, but the CPU is in general better than the CPU it replaced. For general computing, okay, you've got the 1.9/1.4 GHz thing going but for games and 3D work the Pentium 4 pulls ahead. And honestly, for business applications most people wouldn't notice the extra integer performance - most corporate desktop users would do fine with a 1 GHz machine. The Pentium 3's are still being developed by Intel for blade-type systems and low-power/low-profile thin rack-mount servers, so it's not like Intel's continuing to develop the CPUs only to cripple them to make my argument work out. ;)

Oh, and the Itanium 2 launched a while ago and has been on the market for months. Granted, I'm sure you can't find one too easily unless you go directly to HP, but this is not a CPU intended for consumer-level applications. It's a big, expensive server-class CPU. Same with Itanium 3 and a later dual-core version.

Opteron/Hammer, yes, consumers will get that in the "Athlon64" version. If you're going to talk about Itanium, you might as well mention POWER4, PA-RISC 8700, Alpha 21364, SPARC64 V, etc. Those are all in the same class. Pentium 4, Opteron/Hammer, G4, PPC970 - those are all in the same class. Itanium does not belong grouped with other consumer-level CPUs. ;)

Oh, on a curious note, I looked up the dates for some Mac models mentioned.

Starting with the 9500 in 5/95, the line of Macs that can all share the same CPU card ended in 2/98 - so 3 years. The Pentium Pro started in 11/95, but I'm not going to count that since it was not very upgradable and not consumer-oriented. The Pentium 2 line was introduced in 5/97 and continues on to the present as the 1.4 GHz Slot 1 Pentium 3. So, if you really want to get down to it, the P6 line of systems has had the same CPU slot for upgrading for 6 years compared to the Mac's 3 years.

On the x86 side of things, the 486 and Pentium lines also allowed for longevity through upgradability. I'm too lazy to look up the numbers but the 486 platform could be upgraded to a Pentium-class CPU and the Pentium platform could be upgraded to 450 MHz with integrated cache, something really nifty for that platform.

The only bad thing with PCs is that sometimes you need to flash the BIOS to handle the newer CPUs. Still, I don't know if that's any different than messing with CPU enablers or changing gestalt IDs on the Mac, really.

Anyway, thanks for resolving that "everything on the front page is centered" thing. It drove me nuts!


P.S. Why was the 7300 released 2 years later than the 8500 and 7500? That makes no sense! And the 9500 released before any of the other systems? Go back in time and make Apple fix it! Too confusing #@! ;)

Yes, as you note in your third paragraph, Intel changed their CPU connection scheme when Cyrix, AMD, and others started making processors that were plug-and-play compatible with Intel's chips. By changing the bus regularly and attempting to keep others from using it (the definition of proprietary), Intel keeps the motherboard manufacturers mostly following their lead - and makes it that much more difficult for other CPU makers to engineer new CPUs in a timely fashion that will use the new connection. It's about controlling the market.

I looked up the history of HyperTransport, which AMD pioneered, and discovered that it's a new way of getting chips to talk with each other. It was originally devised for multiple CPU configurations and has expanded beyond that.

AMD and Apple are charter members of the HyperTransport Consortium, as are Nvidia, Sun Microsystems, SGA, and several others. Later additions include HP and NEC. One key feature of HyperTransport is royalty-free licensing for consortium members.

As for PC clones and Macintosh clones, we're dealing with entirely different situations. PC clones were reverse engineered. The earliest Mac clones (all of them unauthorized) used Macintosh ROMs and other parts; the later Mac clones licensed the technology and OS from Apple. There was a company in Germany (if I recall correctly) that managed to reverse engineer the Mac, but nothing ever came of it.

Every field has its "stupid" consumers (and let me clarify - it was a reader who used that label, not me), whether we're talking about 35mm film, blank CD-Rs, car tires, or memory chips. They buy because it's cheapest, because of the brand, because it's on sale, or because somebody (a friend or the salesperson) recommended it. They don't know the difference because they haven't investigated it - then they wonder why their pictures no longer have the same snap as when they used Fujifilm or can't figure out why so many CD burns are failing.

Marketing departments and mass retailers thrive on uninformed consumers. Specialty shops - whether computer retailers or camera stores - thrive on educating their customers. And we don't usually call them "stupid;" we realize that they simply haven't learned that there are differences and that some choices are better than others.

Anyhow, back to Macs. There is no huge bottleneck between the G4 and the motherboard when you compare it to the Pentium 4. The G4 runs as fast as 1.42 GHz and accesses memory at 167 MHz - which is more real than the virtual 533 MHz used by a 3.06 GHz Pentium 4. Look at the multiplier: The G4 runs at 8.5x bus speed, so any time it needs to go to motherboard memory, it takes 8-9 clock cycles to get the data.

The P4 really accesses memory on a doubled-and-doubled again bus that's really 267 MHz if we measure in the same way Apple does. The multiplier - 11.5. In terms of processor cycles lost, the P4 loses 11-12 CPU cycles each time it has to access motherboard memory.

Okay, I'm a geek. I know that's a worst case scenario because modern CPUs have big caches that can move data at full CPU speed about 95% of the time. Motherboard memory access isn't very common, and when it takes place, the computer is usually smart enough to grab some extra bytes and hold them in the cache just in case that's the next thing the CPU needs.
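That back-of-the-envelope logic can be written out as an average cost per memory access. This is a sketch, not a benchmark: the 95% cache hit rate comes from the paragraph above, and the 1-cycle cost of a cache hit is an assumed round number for illustration:

```python
def avg_access_cycles(multiplier, hit_rate=0.95, hit_cost=1.0):
    """Average CPU cycles per memory access, assuming a cache hit
    costs hit_cost cycles and a cache miss stalls for roughly
    cpu_clock / bus_clock (the multiplier) cycles."""
    return hit_rate * hit_cost + (1 - hit_rate) * multiplier

# Multipliers from the discussion above: the G4 runs at 8.5x its
# 167 MHz bus, the P4 at 11.5x its (Apple-style) 267 MHz bus.
g4 = avg_access_cycles(8.5)
p4 = avg_access_cycles(11.5)
print(f"G4: {g4:.3f} cycles/access, P4: {p4:.3f} cycles/access")
```

With a high hit rate the two averages land close together, which is the point: the raw multiplier matters far less than how often the CPU actually has to pay it.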

The absolute bus speed of the memory path and the absolute clock speed of the CPU matter far less than the absolute ability to do work. That's why Intel is pushing the Itanium 2 - slower than the G4 in clock speed - as the next great workhorse CPU. They know the truth, but they're going to have a hard time marketing it.

But back to Apple and IBM. I don't think anyone would argue that Apple doesn't want to use IBM CPUs. What Apple doesn't want to do is use faster G3s than G4s. As you note, IBM G3s power the iBooks and Apple apparently had a fair bit of input on the PowerPC 970 design. IBM designed the chip to meet Apple's needs, one of which is AltiVec. Any hesitation to use IBM PowerPC designs since Apple adopted the G4 has been because of the velocity engine.

That's not entirely dissimilar from Intel's situation. The P III was a more efficient processor for regular computing; the P4 was designed with multimedia in mind. The P4 is great for gaming, ripping MP3s, and dealing with video. But the P4 is brain-dead from a multiple processor perspective. So Intel has to keep developing the P III for servers.

The difference is that in the Mac's case, the simpler G3 design could probably be pushing 2 GHz today, but it doesn't support multiprocessing well (it can do two CPUs, but the overhead is horrendous) and it doesn't have the velocity engine. The G4 has both. If IBM had been allowed to lead the way, though, the G4 would have been a slightly more robust G3 with strong multiple processor support. Instead, Motorola designed this incredibly complex AltiVec thing that precluded their keeping up with clock speed improvements seen in the entire rest of the industry.

I wasn't the one who brought up the Itanium 2, but AMD's Hammer CPUs are definitely being designed to compete with both Pentium 4 and Itanium 2. Like the Pentium III and PowerPC 970, the Hammer will function well in both consumer and server situations - or so it appears. We'll know more when it ships.

I won't quibble over CPU upgrade sockets and slots. The point of the original article was to note the incredible upgradability of the Power Mac 7500, which came out nearly 8 years ago, and contrast it with a Windows computer purchased shortly thereafter. The 7500 can run an 800 MHz or 1 GHz G4 without a motherboard transplant; no Windows PC from 1995 can run a Pentium 4 without such a transplant.

As for that centering problem, blame Microsoft. It's a glitch in IE 6, so now we all know that you use Windows. ;-) Anyhow, if a doctype declaration looks like this:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">

That's the way you're supposed to declare a doctype, complete with a link to the specification. But if you do that, IE 6 apparently picks up a <center> tag or command and somehow applies it improperly. And the software engineers at Microsoft seem to think this somehow complies with the strict HTML standard - go figure.

The solution was to delete the URL. Now the LEM home page no longer centers in IE 6 - the only browser to exhibit this weird behavior. Just one more way Microsoft can make life hell for those who try to embrace open standards....

As for Apple's glitch in the numbering scheme that created the Power Mac 7300, it's easily explained. Higher numbers mean more capable computers. The 7500 had video ports and a PowerPC 601 processor. The 7600 had video ports and a PPC 604. The 7300 lost the video ports, so the lower number indicates a machine with fewer capabilities than the 7500 and 7600. Strange but true.

Overclocking the Beige G3

Following our link in Beige G3 Stuck at 266 MHz?, Jeff Larvia writes:

I read with interest your article on overclocking a Beige G3. What I was hoping to discover were directions on installing a 333 chip from a tower into a 266 desktop. The Google enabled search aspect of your site was not helpful. Perhaps you could point me in the right direction.

Sorry for the oversight. It's the kind of thing overclockers just know. Here are step-by-step instructions:

  1. Turn off the computer, unplug it, and open it up.
  2. Remove the old CPU.
  3. Install the new CPU.
  4. Remove the J16 jumper block.
  5. Rearrange the jumpers as explained in the article.
  6. Reinstall the jumper block.
  7. Close up the computer, plug it in, and turn it on.

It's not uncommon to gain an additional 66 MHz over the rated speed of the CPU. When I find the time, I hope to try clocking my beige G3/266 - already upgraded with a 333 MHz CPU - at 366 and 400 MHz to see which is stable. We'll share the results at Low End Mac when I do it.
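For reference, the resulting clock speed is just the beige G3's roughly 66 MHz (66.83 MHz) system bus times the bus ratio selected by those jumpers. A quick sketch - the multiplier values shown are illustrative examples, not a jumper table:

```python
BUS_MHZ = 66.83  # beige G3 system bus speed

def cpu_clock(multiplier):
    """CPU clock in MHz for a given bus-ratio jumper setting."""
    return BUS_MHZ * multiplier

# The nominally 266 MHz machine runs at 4x its bus; a 333 MHz
# CPU runs at 5x. The 366 and 400 MHz overclocking targets
# mentioned above correspond to 5.5x and 6x ratios.
for mult in (4, 5, 5.5, 6):
    print(f"{mult}x -> {cpu_clock(mult):.0f} MHz")
```

This is also why overclocking gains on this machine come in 66 MHz steps: each half-step or full step of the multiplier adds a fixed fraction of the bus speed.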


Dan Knight has been publishing Low End Mac since April 1997. Mailbag columns come from email responses to his Mac Musings, Mac Daniel, Online Tech Journal, and other columns on the site.
