
tomhudson's Journal: Another "interesting" upgrade attempt
Regexes are a PITA to work with, but editing them is more so when you're having a hard time seeing properly - all those (*$^!.*/ tend to look more like comic-book swearing than ever.
So I figured it would just be quicker to write a program in C.
But realloc() kept throwing errors on the third or fourth call, and only for one variable. Did I make a mistake? It happens.
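For the record, the classic way realloc() bites people looks like this - a minimal sketch with made-up names, not my actual parser:

/*
 * A minimal sketch of the classic realloc() trap - made-up names,
 * not my actual parser. The bug, when it happens, is assigning
 * realloc()'s result straight back to the only copy of the pointer.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t cap = 4, len = 0;
    char *buf = malloc(cap);
    if (!buf)
        return 1;

    int c;
    while ((c = getchar()) != EOF) {
        if (len + 1 >= cap) {
            /* Wrong: buf = realloc(buf, cap * 2);
             * on failure that overwrites buf with NULL and leaks
             * the old block. Use a temporary instead. */
            char *tmp = realloc(buf, cap * 2);
            if (!tmp) {
                free(buf);
                return 1;
            }
            buf = tmp;
            cap *= 2;
        }
        buf[len++] = (char)c;
    }
    buf[len] = '\0';
    printf("read %zu bytes\n", len);
    free(buf);
    return 0;
}

The tmp dance matters because realloc() returning NULL leaves the original block intact; assign straight back into buf and you leak the old block and crash on the next pass.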
So I went and upgraded the distro once again, and sure enough, it was the compiler. I have to admit I was taken by surprise when I didn't get that long assertion-failed message instead (and who is the retard who writes assertions so long and complicated that they need an audit just to verify they actually do what they claim to do?).
So, all this got me thinking: how come the C code I wrote today to parse some files doesn't run faster than the C code I wrote a couple of decades ago on a machine that was 100x slower?
The answer is simple, and disappointing. Past a certain level of complexity of the software stack (OS, libs, compiler), you don't get improved performance. It all gets sucked up by the stack.
20 years from now, are we going to have machines with a terabyte of RAM and 256 cores that only run, on average, as fast as an old 386, because by then we'll have passed the peak into negative-returns territory and can't go back because everything would break even worse? For example, code with so many security checks that it's in an "infinite bug" state, where fixing one exploit opens up another one (personally, I think we're there already, but that's another story).
Well... (Score:1)
Re: (Score:2)
Java works when it does - and I checked: a Java program I wrote still runs. jEdit doesn't, so the upgrade probably hosed the $CLASSPATH.
Re: (Score:2)
Shared libraries were a hack to deal with the tiny-RAMed machines of yesterwhen. Microsoft learned 20 years ago that it was stupid, and it took them well over 10 years to dig their customers out of DLL Hell. Even so, way too many programs still rely on environmental dependencies, and way too many programs still have issues. Linux people should have learned this lesson from MS decades ago.
Here's the deal, kids: if you have to deliver a common library with your product, just statically link it. Listen to
Re: (Score:2)
Re: (Score:2)
Swap isn't as bad as you claim. In Microsoftistan, swap and memory-mapped file I/O are very closely related. A modern loader is far more efficient than a stupid one that reads everything up front, and programs launch and run much faster as a result. Plus, if you run profile-guided optimization (correctly), swapping is minimized anyway.
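To put that in concrete terms on the POSIX side of the fence (since that's what most people here can compile), memory-mapped file I/O looks like this - a minimal sketch, not how any real loader is written:

/*
 * A minimal sketch of memory-mapped file I/O, POSIX flavour -
 * nothing Windows-specific. Pages are only read in when touched.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
    if (st.st_size == 0) { puts("empty file"); return 0; }

    /* No read() calls: the kernel faults pages in on first access,
     * the same demand-paging machinery the loader and swap use. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    long lines = 0;
    for (off_t i = 0; i < st.st_size; i++)
        if (p[i] == '\n')
            lines++;
    printf("%ld lines\n", lines);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}

Map an executable's pages the same way and "loading" becomes nearly free until the code is actually run - that's the connection to swap: both are just demand paging.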
Of course swap doesn't help forever. The more you swap, the more you thrash, and it doesn't take too long before your performance curve looks like tangent(90). I'd almost rather
Re: (Score:2)
Even Firefox, LibreOffice and Eclipse won't fill all that up any time soon :-)
Not for home users (Score:2)
20 years from now, are we going to have machines with a terabyte of RAM and 256 cores that only run, on average, as fast as an old 386, because by then we'll have passed the peak into negative-returns territory and can't go back because everything would break even worse? For example, code with so many security checks that it's in an "infinite bug" state, where fixing one exploit opens up another one (personally, I think we're there already, but that's another story).
This may be true for companies stuck using some particular application, but hopefully desktop applications will continue to improve through competition: when one browser gets too slow, people start switching.
Probably not true for your average IE/MSOffice user, though.
Re: (Score:2)
For example, we now know that C does not produce portable programs, unless you really fudge the definition of "portable", and that we can do better. That dynamic linking was a huge blunder over the long haul. That we i