Comment Re:Use the Coax as a wirepull for the cat5 (Score 2, Informative) 608

Because coax was so unreliable it would make network admins cry.

In the good ole LAN party days, the network would be disrupted every time someone needed to connect or disconnect a PC. Sometimes you had a T-piece that was a bit faulty, and that also nuked the network. And when you had 12 machines on the network, finding the source of the fault was that much harder.

Performance was only a secondary reason for its demise.

Comment Re:When do people get this (Score -1, Redundant) 613

"You obviously don't understand memory access design. It's all about feeding the CPU. There are two sorts of relationships we can use to make this work: temporal and sequential.

Hard drives are the largest-capacity storage (well, unless you want to go to tape), but they're slow. Even the fastest high-RPM SCSI or SATA drives are SLOW compared to what's above them. This is mitigated, somewhat, by putting some cache memory on the drive's controller board itself. Still, having to hit the hard drive for information is, as you say, a slowdown. The same goes for external storage (optical media, USB media, etc.).

So you try to keep as much information as possible in RAM, the next step up. Hitting RAM is much less expensive than hitting the H/D in terms of a performance hit. In the early days of computing (up until the 486DX line for Intel CPUs), RAM and CPU ran at a 1:1 clock-speed match, so that was that.

Once you factor in the clock multiplier of later CPUs, even the fastest RAM available today can't avoid starving the CPU. So we add in cache: L3, L2, and L1. The 486 implemented 8 KB (yeah, a whole 8K, wow!) in order to keep itself from starving. L3 is the slowest but largest; L2 is faster still but smaller; and L1 is the smallest of all, but the fastest, because it is literally on the same die as the CPU. That distinction is important, and in general you'll find that a slower CPU with more L1 cache will often benchmark better than a faster CPU with less.
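
You can actually watch those tiers from userspace: time a pointer chase over a growing working set, and the per-access latency steps up roughly where each cache level overflows. Here's a minimal C sketch, assuming a POSIX system; the sizes and iteration count are arbitrary picks for illustration, not anything from the hardware described above.

/* sweep.c - rough per-access latency vs. working-set size.
 * Compile: cc -O2 sweep.c -o sweep
 * Expect latency steps near your L1/L2/L3 capacities. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const size_t max = 64 * 1024 * 1024;        /* 64 MiB: past any cache */
    const size_t iters = 1 << 24;               /* loads per working set */
    size_t *buf = malloc(max);
    if (!buf) return 1;

    for (size_t set = 4 * 1024; set <= max; set *= 2) {
        size_t n = set / sizeof(size_t);
        /* Sattolo's algorithm builds a random single-cycle permutation,
         * so the chase below visits every slot and defeats the prefetcher. */
        for (size_t i = 0; i < n; i++) buf[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t t = buf[i]; buf[i] = buf[j]; buf[j] = t;
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        size_t p = 0;
        for (size_t i = 0; i < iters; i++)
            p = buf[p];                         /* each load depends on the last */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%8zu KiB: %6.2f ns/access (sink %zu)\n", set / 1024, ns / iters, p);
    }
    free(buf);
    return 0;
}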

The CPU looks for what it wants as follows (a toy cost model of this cascade follows the list):
- I want something. Is it in L1? Nope.
- Is it in L2? Nope.
- Is it in L3? Nope.
- Is it in RAM? Nope.
- Is it in the H/D Cache? (helps avoid spin-up and seek times) Nope.
- Crap, it's on the H/D. Big performance hit.
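
To put rough numbers on that cascade, here's a toy C model that charges the cost of every check on the way down. The latencies and hit rates are illustrative guesses only, not measurements from any real part:

/* cascade.c - toy expected-cost model of the lookup order above.
 * All numbers below are assumed ballpark figures for illustration. */
#include <stdio.h>

struct tier {
    const char *name;
    double cost_ns;    /* assumed cost to check (and fetch from) this tier */
    double hit_rate;   /* assumed probability the data is found here */
};

static const struct tier tiers[] = {
    { "L1 cache",   1.0,        0.900 },
    { "L2 cache",   4.0,        0.060 },
    { "L3 cache",   15.0,       0.020 },
    { "RAM",        100.0,      0.015 },
    { "disk cache", 100000.0,   0.004 },
    { "disk",       10000000.0, 0.001 },  /* spin-up + seek: the big hit */
};

int main(void) {
    double cumulative = 0.0, expected = 0.0;
    for (size_t i = 0; i < sizeof tiers / sizeof tiers[0]; i++) {
        cumulative += tiers[i].cost_ns;   /* you paid for every miss above */
        expected += tiers[i].hit_rate * cumulative;
        printf("hit in %-10s costs %12.0f ns\n", tiers[i].name, cumulative);
    }
    printf("expected cost per access: %.1f ns\n", expected);
    return 0;
}

Run it and note how even a 0.1% chance of going all the way to the platter dominates the average; that's why every tier above it exists.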

Everything except the L1 check, technically, was a performance hit. The point of pre-caching things (exploiting that temporal and sequential locality) is to predict what will be needed next and get it into the fastest available place.
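
The sequential half of that prediction is easy to demonstrate: sum the same array in order and then in a shuffled order. The bytes touched are identical, but only the in-order pass lets the prefetcher stay ahead of the loads. Another small sketch, again with arbitrary sizes:

/* locality.c - sequential vs. shuffled traversal of the same data.
 * Compile: cc -O2 locality.c -o locality */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 24)   /* 16M ints, ~64 MiB of data: well past the caches */

static double now_ns(void) {
    struct timespec t;
    clock_gettime(CLOCK_MONOTONIC, &t);
    return t.tv_sec * 1e9 + t.tv_nsec;
}

int main(void) {
    int *data = malloc(N * sizeof *data);
    size_t *idx = malloc(N * sizeof *idx);
    if (!data || !idx) return 1;
    for (size_t i = 0; i < N; i++) { data[i] = 1; idx[i] = i; }

    /* Fisher-Yates shuffle of the indices for the random-order pass. */
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }

    long sum = 0;
    double t0 = now_ns();
    for (size_t i = 0; i < N; i++) sum += data[i];       /* prefetcher-friendly */
    double t1 = now_ns();
    for (size_t i = 0; i < N; i++) sum += data[idx[i]];  /* cache-hostile */
    double t2 = now_ns();

    printf("sequential %.0f ms, shuffled %.0f ms (sum=%ld)\n",
           (t1 - t0) / 1e6, (t2 - t1) / 1e6, sum);
    free(data); free(idx);
    return 0;
}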

Yes, I suppose you can run an entire system out of RAM, and you'll see it as more responsive simply because you never have to touch the hard drive. But turning off HDD caching is a BAD idea: it makes cache misses that much more expensive, because instead of at least having a chance of finding what you need in RAM or in the HD's onboard cache, you have to wait for the H/D to spin up and seek to the right sector."

FTFY :)
