
tomhudson's Journal: Another "interesting" upgrade attempt

Regexes are a PITA to work with, but editing them is more so when you're having a hard time seeing properly - all those (*$^!.*/ tend to look more like comic-book swearing than ever.

So I figured it would just be quicker to write a program in C.

Except that realloc() kept throwing errors on the 3rd or 4th call, and only for one variable. Did I make a mistake? It happens ... but after wasting more time than I want to admit, I said to heck with it, it's got to be the compiler.
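
Whether or not it really was the compiler, the usual way realloc() blows up a few calls in is heap corruption from an earlier overrun, which glibc only notices the next time it walks the heap. For what it's worth, the defensive realloc() idiom is roughly this (a sketch, not the actual program):

    #include <stdlib.h>
    #include <string.h>

    /* Grow a buffer safely: assign realloc() to a temporary first, and never
       write past what was actually requested. An off-by-one write here is
       exactly the kind of thing that "fails" on the 3rd or 4th call later. */
    char *append_line(char *buf, size_t *cap, size_t used, const char *line)
    {
        size_t need = used + strlen(line) + 1;
        if (need > *cap) {
            size_t newcap = *cap ? *cap : 64;
            while (newcap < need)
                newcap *= 2;
            char *tmp = realloc(buf, newcap);  /* not buf = realloc(buf, ...) */
            if (!tmp)
                return NULL;                   /* old buf still valid, not leaked */
            buf = tmp;
            *cap = newcap;
        }
        strcpy(buf + used, line);
        return buf;
    }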

So I went and upgraded the distro once again, and sure enough, it was the compiler. I have to admit I was taken by surprise when I didn't get that long "assertion failed" message this time (and who writes assertions so long and complicated that they'd need an audit just to verify they actually do what they claim to do???).

So ... all's well that ends well ... except ... lots of programs now just "sort of" work. Firefox crashes on start, then works okay on restart. Close it, restart - crash again immediately. Restart, runs fine. Close, restart, crash again ... Other programs are now missing chunks of functionality, dead areas that don't respond to the mouse, whatever. And this is after pretending it was a sick windows box instead of a linux box (update, reboot, test, force update, reboot, test).

All this got me thinking - how come the code I wrote today in C to parse out some files doesn't run any faster than the C code I wrote a couple of decades ago on a machine that was 100x slower?

The answer is simple, and disappointing. Past a certain level of complexity of the software stack (OS, libs, compiler), you don't get improved performance. It all gets sucked up by the stack.
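
One cheap way to see where it all goes: time nothing but the byte-shovelling loop of the parser, before any real parsing happens, and look at the throughput the stack underneath actually delivers. A rough sketch (POSIX clock_gettime(); the file name is just a placeholder):

    #include <stdio.h>
    #include <time.h>

    /* Time the bare I/O loop of a "parser": every getc() goes through
       stdio's locking and buffering, i.e. the stack under your code. */
    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "input.txt";  /* placeholder */
        FILE *f = fopen(path, "r");
        if (!f) { perror(path); return 1; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        long bytes = 0, lines = 0;
        int c;
        while ((c = getc(f)) != EOF) {
            bytes++;
            if (c == '\n')
                lines++;
        }

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%ld bytes, %ld lines, %.3f s (%.1f MB/s)\n",
               bytes, lines, secs, secs > 0 ? bytes / secs / 1e6 : 0.0);
        fclose(f);
        return 0;
    }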

20 years from now, are we going to have machines with a terabyte of ram, 256 cores, and only running as fast, on average, as an old 386, because by then we'll have passed the peak and gone into negative-returns territory but can't go back because everything would break worse? For example, code with so many security checks that it's in an "infinite bug" state, where fixing one exploit opens up another one (personally, I think we're there already, but that's another story).


Comments Filter:
  • ... that's why managed languages, like your behated Java or C# (don't know if you hate that one), actually do work quite reasonably. With Java (I've been a Java developer for ~10 years), the lag is mostly caused by JVM startup time. That is bad for small programs, but for server-side stuff that keeps running forever, it isn't a big deal.
    • The real issue is that end-user C program performance is well below what it was two decades ago on a per-clock-tick basis, because of all the overhead of everything under it. That is the real problem, because it's an indicator that we've gone down the wrong path in terms of both operating system design and what we basically "want to do" with computers.

      Java works when it does - and I checked: a Java program I wrote still runs. jEdit doesn't, so the upgrade probably hosed the $CLASSPATH

      • by plover ( 150551 ) *

        Shared libraries were a hack to deal with the tiny-RAMed machines of yesterwhen. Microsoft learned 20 years ago that it was stupid, and it took them well over 10 years to dig their customers out of DLL Hell. Even so, way too many programs still rely on environmental dependencies, and way too many programs still have issues. Linux people should have learned this lesson from MS decades ago.

        Here's the deal, kids: if you have to deliver a common library with your product, just statically link it. Listen to
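
        The mechanics are a single link flag, by the way; a toy example using libm, since any shared library makes the same point:

            /* dynamic: gcc prog.c -lm -o prog          (needs libm.so wherever it runs)
               static:  gcc -static prog.c -lm -o prog  (library baked into the binary) */
            #include <math.h>
            #include <stdio.h>

            int main(void)
            {
                printf("%f\n", sqrt(2.0));  /* sqrt() lives in libm */
                return 0;
            }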

        • Interestingly enough, I didn't bother creating a swap file on this install (it turns out that freebsd doesn't like my video, so I switched from opensuse to fedora) ... it takes both ram and cpu to manage the swap space, as well as an extra delay if something has to be pulled from swap, and this 8-year-old box is not exactly like my prematurely dead dual-core. Only 2 gigs, a single core, and right now it's running 3840x1200x32bit (which also takes memory, not just on the card, but from the system), a web se
          • by plover ( 150551 ) *

            Swap isn't as bad as you claim. In Microsoftistan, swap and memory-mapped file IO are very closely related (rough sketch of the idea below). The modern loaders are far more efficient than a stupid loader, and programs launch and run much faster as a result. Plus, if you do profile-guided optimization (correctly), swapping is minimized anyway.

            Of course swap doesn't help forever. The more you swap, the more you thrash, and it doesn't take too long before your performance curve looks like tangent(90). I'd almost rather
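
            The mapping point, in the POSIX spelling (program loaders do essentially this with executables and their libraries); just a sketch:

                #include <fcntl.h>
                #include <stdio.h>
                #include <sys/mman.h>
                #include <sys/stat.h>
                #include <unistd.h>

                /* Map a file and touch every byte: pages are faulted in on demand
                   by the same VM machinery that services swap. */
                int main(int argc, char **argv)
                {
                    const char *path = argc > 1 ? argv[1] : "/etc/passwd";
                    int fd = open(path, O_RDONLY);
                    if (fd < 0) { perror(path); return 1; }

                    struct stat st;
                    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

                    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
                    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

                    long sum = 0;
                    for (off_t i = 0; i < st.st_size; i++)  /* each new page: a minor fault */
                        sum += (unsigned char)p[i];

                    printf("%lld bytes mapped, checksum %ld\n", (long long)st.st_size, sum);
                    munmap(p, st.st_size);
                    close(fd);
                    return 0;
                }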

            • Oh, I'm not saying swap didn't have its place in the scheme of things ... but in 3 years, when a computer with 32 gigabytes of ram sells for $500 (half the current cost), who's going to bother?

              Even firefox and libreoffice and eclipse won't fill all that up any time soon :-)

  • 20 years from now, are we going to have machines with a terabyte of ram, 256 cores, and only running as fast, on average, as an old 386, because by then we'll have passed the peak and gone into negative-returns territory but can't go back because everything would break worse? For example, code with so many security checks that it's in an "infinite bug" state, where fixing one exploit opens up another one (personally, I think we're there already, but that's another story).

    This may be true for companies stuck using some particular application, but hopefully desktop applications will continue to improve through competition: when one browser gets too slow, people start switching.
    Probably not true for your average IE/MSOffice user, though.

    • The problem is your average user is still going to be stuck paying the penalties for all the bad design decisions we've made in the last 20-40 years, because we're all so worried about how much it would cost to throw everything out and start from scratch using all the lessons we've learned.

      For example, we now know that C does not produce portable programs, unless you really fudge the definition of "portable", and that we can do better. That dynamic linking was a huge blunder over the long haul. That we i
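
      A two-minute illustration of the "portable" problem: the same perfectly legal C source prints different answers on different machines (toy example, nothing rigorous):

          #include <limits.h>
          #include <stdio.h>

          int main(void)
          {
              /* all of this is implementation-defined, and all of it is "portable" C */
              printf("int=%zu long=%zu ptr=%zu bytes, plain char is %s\n",
                     sizeof(int), sizeof(long), sizeof(void *),
                     CHAR_MIN < 0 ? "signed" : "unsigned");

              unsigned int one = 1;
              printf("byte order: %s-endian\n",
                     *(unsigned char *)&one ? "little" : "big");
              return 0;
          }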
