Comment I'm not sure I believe it (Score 1) 43
What about people who see color in four dimensions instead of the more common three?
That said, steering wheel buttons are better.
PuTTY has been available for Linux for a long time too - nothing to remember there.
Or use the one I wrote: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fstromberg.dnsalias.org...
It can start a remote shell from a menu in 3 clicks, if you count the click for starting it up.
It sets up X11 tunneling, and can do multi-hop ssh - nice if you have a bastion host to hop through.
It can also set up tty logging and bash's $PS0 hook, which is great if you have to do a "what happened when" post mortem.
Statically typed languages are fast, dynamically typed languages are slow. CPython is ultra slow. Nothing new.
Actually, for sufficiently large inputs, AOT implementations and JIT'd implementations are the same, performance-wise. Keep in mind that a JIT has access to runtime info an AOT optimizer (usually) doesn't.
But there's nothing that prevents an AOT implementation from inserting a JIT, and there's nothing that stops a JIT from doing whole-program analysis.
Again: for sufficiently large inputs.
Utter BS. The C Standard requires arrays to be allocated contiguously. int A[x][y][z] is laid out just like Fortran would lay it out (except, of course, in row-major instead of column-major order). If you're talking about arrays of pointers, those aren't multi-dimensional arrays (even if the access syntax appears the same).
No, it's not "utter BS".
In C, if you want to pass an array to a function, you either need an array of pointers to arrays, or you need to act sort of Pascal-ish and treat the dimensions as part of the type. Most C programmers would opt for the former, not the latter.
Granted, it's been decades since C was my favorite language. Maybe the situation has improved?
An O(n^2) algo will still be slower than an O(n log n) one, no matter how it's compiled or translated.
That's mostly what I was taught in school - but there was a brief aside, one day, saying that sometimes a worse algorithm could be faster if it, for example, stayed all in memory instead of hitting disk.
E.g., Python has a list type, which is kind of like an array, except the elements can be heterogeneous and it resizes automatically. Lists are much faster than a linked list in Python, even though an individual append occasionally costs O(n) to copy the whole buffer; thanks to geometric over-allocation, n appends are amortized O(n) total, and the memory latency of a list is great because it's contiguous memory.
Linked lists in Python are comparatively slow, because each element of the list is in noncontiguous memory. That is, traversing the linked list from beginning to end incurs a cache miss on (almost?) every element.
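The amortized-growth behavior is easy to observe. A small sketch (CPython-specific: it uses sys.getsizeof as a proxy for the underlying buffer size, and count_resizes is just an illustrative name) counting how often the buffer actually grows during n appends:

```python
import sys

def count_resizes(n):
    """Count how many times a list's underlying buffer grows
    while appending n items, using sys.getsizeof as a proxy."""
    lst = []
    resizes = 0
    last = sys.getsizeof(lst)
    for i in range(n):
        lst.append(i)
        size = sys.getsizeof(lst)
        if size != last:       # the buffer was reallocated (and copied)
            resizes += 1
            last = size
    return resizes

# Geometric over-allocation means n appends trigger only O(log n)
# reallocations, i.e. amortized O(1) per append.
print(count_resizes(100_000))  # far fewer than 100,000
```

Each of those reallocations is a copy of contiguous memory, which the cache and prefetcher handle well, so even the copies are cheap compared with chasing pointers through a linked list.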