Comment Re:That ain't nuthin' (Score 1) 112
Also, that I have two feet attached to my toes.
You saying you have twenty feet?
I'd have thought you'd have been aware of that no matter how much weight you were carrying.
You can see the evolution of the global warming argument in that acronym.
When it was just "global warming", the argument was basically "global warming is not happening".
Then, when it got too hard to sustain pure denial, they added "anthropogenic", so the argument became "OK, global warming is happening, but it's not us that's causing it".
Now we see "catastrophic" added and the argument morphs again: "OK we are causing global warming, but it's not going to be as bad as people say".
If things carry on like this, presumably the next letter will be "L" for "liable". LCAGW: "OK, global warming might get pretty bad, but I don't see why I should have to pay for it".
Arnold in one of his textbooks demonstrated that, to make a weather prediction one month in advance
...
In his textbooks, does he end each chapter with the words "I'll be back!" by any chance?
There is nothing fundamentally different from spoken languages in programming languages
Well, there is the matter of ambiguity. Most human languages have scope for ambiguity in the syntax. A piece of computer code means one thing and one thing only. If it doesn't, it's a bug.
Also, computer languages evolve differently from spoken ones. Spoken languages may have a precise syntax, but the speakers are free to ignore or adapt it and the meaning can still be carried across. If you try to get creative with the syntax of a computer language, the computer either doesn't understand (if you'll pardon the anthropomorphism) or, worse, it misunderstands and does something other than what you wanted. If you want to evolve a computer language, it needs a change to the language spec and to conforming compilers.
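To make the "doesn't understand" case concrete, here's a tiny, purely illustrative Python sketch (the "creative" loop syntax is made up for the example): a human reader can still recover the intent, but the interpreter just refuses rather than guessing.

# Illustrative only: syntax a human could still parse, fed to the Python
# compiler, which rejects it outright instead of guessing at the intent.
creative = """
for i in 1 to 5:
    print(i)
"""

try:
    compile(creative, "<example>", "exec")
except SyntaxError as err:
    # The computer doesn't meet you halfway; it simply gives up.
    print("SyntaxError:", err.msg)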
I'm sure there are other differences as well.
Anyone can put up a web page, and Javascript and PHP have a large footprint there. (I guess Java, on the enterprise server side?) It's not hard to imagine there's lots of folks that have to deal with these languages as part of their larger duties, but aren't really trained as programmers in any traditional sense. That could fuel a bunch of StackOverflow traffic for sure...
Whichever ranking you look at will be skewed by the methodology. It feels like web-oriented languages are overemphasized in this cut.
Of course, my own worldview is skewed, too. I deal more with low-level hardware, OS interactions, etc. You won't find a lick of Javascript or PHP anywhere near any of the stuff I work on daily. Lots of C, C++, some Go and Python.
Problem is, you just locked yourself into Windows PCs
Sure, but that is 95% of the world... I didn't pick it, the world did, I'm just using what everyone else uses...
Have you ever considered a career as a Lemming?
Ah, OK, so it is more or less the latest version of ASaP/ASaP2. I just made a post up-thread about my memory of ASaP. It looked interesting, but as you point out, it has some real practical issues.
At the time we spoke with them, it sounded like whenever you loaded an algorithm chain, you had to map it to the specific chip you were going to run it on, to account for bad cores, different core speeds, etc. Each core has a local oscillator. Whee...
I'm familiar with Dr. Baas' older work (ASaP and ASaP2). He presented his work to a team of processor architects I was a part of several years ago.
At least at that time (which, as I said, was several years ago), one class of algorithms they were looking at was signal processing chains, where the computation could be described as a directed graph of processing steps. The ASaP compiler would then decompose the computational kernels so that the compute / storage / bandwidth requirements were roughly equal in each subdivision, and then allocate nodes in the resulting, reduced graphs to processors in the array.
(By roughly equal, I mean that each core would hit its bottleneck at roughly the same time as the others whenever possible, whether it be compute or bandwidth. For storage, you were limited to the tiny memory on each processor, unless you grabbed a neighbor and used it solely for its memory.)
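To make the "roughly equal" idea concrete, here's a minimal Python sketch of a greedy mapper that packs consecutive kernels onto cores until either the compute or the bandwidth budget (whichever is the bottleneck) would be exceeded. This is not the actual ASaP compiler: the kernel names, the cost numbers, the budgets, and the map_chain helper are all made up, and the real tool worked on graphs rather than a simple linear chain.

# Hypothetical illustration of bottleneck-balanced mapping, not the real ASaP toolchain.
# Each kernel is characterized by per-sample compute (ops) and bandwidth (bytes) cost.
KERNELS = [                      # made-up DSP chain: (name, ops/sample, bytes/sample)
    ("fir",      40, 8),
    ("decimate",  5, 8),
    ("fft",      80, 16),
    ("mag",      10, 4),
    ("thresh",    2, 4),
]

CORE_OPS_BUDGET   = 100          # per-core compute budget (arbitrary units)
CORE_BYTES_BUDGET = 20           # per-core bandwidth budget (arbitrary units)

def map_chain(kernels):
    """Greedily pack consecutive kernels onto cores so that no core exceeds
    either its compute or its bandwidth budget (whichever it hits first)."""
    cores, current, ops, nbytes = [], [], 0, 0
    for name, k_ops, k_bytes in kernels:
        if current and (ops + k_ops > CORE_OPS_BUDGET or
                        nbytes + k_bytes > CORE_BYTES_BUDGET):
            cores.append(current)          # close out this core, start the next
            current, ops, nbytes = [], 0, 0
        current.append(name)
        ops += k_ops
        nbytes += k_bytes
    if current:
        cores.append(current)
    return cores

print(map_chain(KERNELS))   # -> [['fir', 'decimate'], ['fft', 'mag'], ['thresh']]

Note that in this toy, the second core is bandwidth-limited while the first is compute-limited; that's the sense in which each core hits its own bottleneck at roughly the same rate.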
The actual array had a straightforward Manhattan routing scheme, where each node could talk to its neighbors, or bypass a neighbor and reach two nodes away (IIRC), with a small latency penalty. Communication was scoreboarded, so each processor ran when it had data and room in its output buffer, and would locally stall if it couldn't input or output. The graph mapping scheme was pretty flexible, and it could account for heterogeneous core mixes. For example, you could have a few cores with "more expensive" operations only needed by a few stages of the algorithm. Or, interestingly, avoid bad cores, routing around them.
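For the scoreboarded part, a rough software model looks something like the sketch below (again, just an illustration with made-up names like Core, try_step, and BUF_DEPTH, not the actual hardware): a core fires only when it has an input token and room in its downstream buffer, and otherwise it locally stalls.

from collections import deque

# Toy model of scoreboarded dataflow between neighboring cores (illustrative only).
BUF_DEPTH = 2   # tiny inter-core FIFOs, like the small buffers between cores

class Core:
    def __init__(self, name, func, out_fifo=None):
        self.name, self.func = name, func
        self.in_fifo = deque()
        self.out_fifo = out_fifo          # None for the last core in the chain

    def try_step(self):
        """Fire once if input is available and the downstream buffer has room."""
        if not self.in_fifo:
            return False                  # stall: nothing to consume
        if self.out_fifo is not None and len(self.out_fifo) >= BUF_DEPTH:
            return False                  # stall: downstream buffer is full
        value = self.func(self.in_fifo.popleft())
        if self.out_fifo is not None:
            self.out_fifo.append(value)
        else:
            print(self.name, "->", value)
        return True

# Build a two-stage chain: scale, then offset.
c2 = Core("offset", lambda x: x + 1)                      # last stage prints its result
c1 = Core("scale",  lambda x: 2 * x, out_fifo=c2.in_fifo)
c1.in_fifo.extend(range(5))                               # feed the first core

cores = [c1, c2]
progress = True
while progress:                  # keep stepping until every core is stalled
    progress = False
    for core in cores:
        progress |= core.try_step()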
It was a GALS design (Globally Asynchronous, Locally Synchronous), meaning that each of the cores was running at a slightly different frequency. That alone makes the cores slightly heterogeneous. IIRC, the mapping algorithm could take that into account as well. In fact, as I recall, you pretty much needed to remap your algorithm to the specific chip you had in hand to ensure best operation.
The examples we saw were familiar to the business I was in (DSP): things like WiFi router stacks, various kinds of modem processing pipelines, and I believe some video processing pipelines. The processors themselves had very little memory, and in fact some algorithms would borrow a neighboring core just for its RAM, if they needed it for intermediate results or lookup tables. I think FFT was one example, where the sine tables ended up stored in the neighbor.
That mapping technology reminds me quite a lot of synthesis technologies for FPGAs, or maybe the mapping technologies they use to compile a large design for simulation on a box like Cadence's Palladium. The big difference is granularity. Instead of lookup-table (LUT) cells and gate-level mapping, you're operating at the level of a simple loop kernel.
Lots of interesting workloads could run on such a device, particularly if they have heterogeneous compute stages. Large matrix computations aren't as interesting. They need to touch a lot of data, and they're doing the same basic operations across all the elements. So, it doesn't serve the lower levels of the machine learning/machine vision stacks well. But the middle layer, which focuses on decision-guided computation, may benefit from large numbers of nimble cores that can dynamically load balance a little better across the whole net.
I haven't read the KiloCore paper yet, but I suspect it draws on the ASaP/ASaP2 legacy. The blurb certainly reminds me of that work.
And what's funny is that about two days before they announced KiloCore, I was describing Dr. Baas' work to someone else. I shouldn't have been surprised he was working on something interesting.
Ahem, forgot about the angle brackets.
"Blessit! Lemme look... <tappity clickity tappity> Hey, it's there all right! OK, just a sec... <tappity clickity tap... save... compile> There, that ought to patch it. Dist it out, wouldja?"
Without ANY bugs? Really? The only way this idea works is if the universe was created by a divine programmer who cannot make any mistakes.
Reminds me of one of my favourites from
"Yo, Mike!"
"Yeah, Gabe?"
"We got a problem down on Earth. In Utah."
"I thought you fixed that last century!"
"No, no, not that. Someone's found a security problem in the physics program. They're getting energy out of nowhere."
"Blessit! Lemme look... Hey, it's there all right! OK, just a sec... There, that ought to patch it. Dist it out, wouldja?"
-- Cold Fusion, 1989
Once it hits the fan, the only rational choice is to sweep it up, package it, and sell it as fertilizer.