That may be so. The new drive may indeed have four times the raw read throughput. But how much larger is it? Five times.
And even more tellingly, look at the seek performance. I looked up the two drives you mentioned: the average seek time is unchanged at 8.5 ms. So we're seeking at the same speed, across five times the data.
In practice, then, in terms of throughput per provisioned GB we are about 24% worse off, and in terms of random seeks per provisioned GB we are five times worse off today!
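If you want to sanity-check that arithmetic, here's the back-of-envelope version in Python. The 4x/5x ratios and the 8.5 ms seek are the figures from above; treating one random IO as one average seek is my simplification, so take the exact percentages with a grain of salt.

    # Back-of-envelope, per provisioned GB, using the ratios from above.
    # Treating one random IO as one average seek is a simplification
    # (rotational latency and transfer time ignored).

    capacity_ratio   = 5.0    # new drive holds ~5x the data
    throughput_ratio = 4.0    # new drive streams ~4x faster
    seek_ms          = 8.5    # unchanged on both drives

    # Sequential throughput per provisioned GB
    tput_per_gb = throughput_ratio / capacity_ratio
    print(f"throughput per GB: {(1 - tput_per_gb) * 100:.0f}% worse")

    # Random-access capability per provisioned GB: the drive still manages
    # roughly 1000 / 8.5 seeks a second, but now fronts five times the data.
    seeks_per_sec = 1000.0 / seek_ms
    print(f"seeks per second per drive: {seeks_per_sec:.0f} (old and new alike)")
    print(f"random IO per provisioned GB: {capacity_ratio:.0f}x worse")

With the rounded 4x/5x ratios that comes out nearer 20% than 24%; the real spec sheets are a shade less kind.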
To illustrate what I mean, based on those numbers above: slurping 10TB off an idealised JBOD array of the newer drives would take 89 seconds; slurping the same 10TB in parallel off an idealised array of the older drives, provisioned to the same usable capacity, would take only 72 seconds. A similar (but far worse) story applies to random seek performance, especially for busy transaction systems.
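Here's the model behind that slurp comparison. The absolute seconds depend on per-drive specs and on how big the array is, neither of which is in the thread, so the figures below (180 MB/s for the newer drive, 2.5PB of provisioned space) are my guesses purely for illustration; the ~1.25x penalty is what survives any reasonable choice of numbers.

    # Idealised JBOD: provision the same usable capacity with old or new
    # drives, stripe a 10TB read across every spindle at once, ignore
    # controllers and buses. Per-drive specs and the 2.5PB array size are
    # illustrative guesses; only the ratio between the two results matters.

    TB = 1e12

    def slurp_seconds(dataset_bytes, provisioned_bytes, drive_bytes, drive_mb_s):
        drives = provisioned_bytes / drive_bytes          # spindle count
        aggregate = drives * drive_mb_s * 1e6             # all drives read in parallel
        return dataset_bytes / aggregate

    provisioned = 2500 * TB                                   # same usable space either way
    new = slurp_seconds(10 * TB, provisioned, 4.0 * TB, 180)  # big drive: assumed 180 MB/s
    old = slurp_seconds(10 * TB, provisioned, 0.8 * TB, 45)   # 1/5 the size, 1/4 the speed
    print(f"new: {new:.0f}s  old: {old:.0f}s  penalty: {new / old:.2f}x")

That lands close to the 89s/72s quoted above; the small gap is just because the real drives aren't exactly 4x/5x apart.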
One might challenge the exact figures, but it doesn't matter - the point is that drive size is an important gotcha in storage performance optimisation today, because per-drive performance has not kept pace with capacity. The bigger caches the new drives turn up with don't offset the issue, although they do help for some workloads.
We haven't talked dollars. The cost is important, but that's another dimension. Let's keep this to engineering chatter.
So what happens in shops that need really high performance? Well, if it's an application with lots of random reads but with hotspots, then cache will do nicely. But for raw random write performance, i.e. the heavy transaction-processing applications, it's gotta be more 15K RPM spindles at lower capacity. Or go crazy and go solid state, but that's another party.
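To put rough numbers on that last point: the IOPS figures below are the usual rules of thumb, not any particular vendor's spec sheet, and the capacities are just typical of the two classes of drive.

    # Random-IO density: lots of small fast spindles vs a few big slow ones.
    # ~175 IOPS for a 15K drive and ~75 IOPS for a 7.2K drive are the usual
    # rules of thumb, not vendor figures; the capacities are just typical.

    def iops_per_tb(capacity_tb, drive_iops):
        return drive_iops / capacity_tb

    small_15k = iops_per_tb(0.3, 175)   # 300GB 15K RPM (assumed)
    big_72k   = iops_per_tb(3.0, 75)    # 3TB 7.2K RPM (assumed)
    print(f"15K small: {small_15k:.0f} IOPS/TB   "
          f"7.2K big: {big_72k:.0f} IOPS/TB   "
          f"ratio: {small_15k / big_72k:.0f}x")

Per terabyte you actually provision, the little 15K spindles buy you well over an order of magnitude more random IO - which is exactly why those shops keep buying them.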