
Erm, "0b" is a common prefix for binary numbers in many programming languages. GP was not saying "b1" as in "base 1".
I think it's great. I never knew how to pronounce it before. Skiffy?
...served by QZHTTP. This web server is used by QQ to serve millions of Qzone sites under the qq.com domain.
A quorum of queasy, quitting queens, quaffing questionable quaaludes, quietly quote quips of quality quite exquisitely.
Anyone can see the machine code on your closed-source software and hack you with ease.
Security through obscurity is worthless. These are blatant, obvious lies.
Then, when both inevitably explode on some mission, we start sending four. One of THOSE is definitely bound to make it.
Use XFCE. It has the versatility of GTK without the bloat and overhead of GNOME.
chmod 600 sign
It's the ice-9 strangelets that have me worried.
Ugh. That should read:
Oh, totally. But when my boss needs someone to spend 6 months of nothing but writing automated regression tests, shit, forget it.
When you need 5k lines of code intimately tied to the internals of the C library and you have under a month to do it, I'm useful.
Oh, totally. But when my boss needs someone to write 6 months of nothing but writing automated regression tests, shit, forget it.
...and instead of "clearly optimal", I should have said "a likely improvement" because no heuristic is guaranteed to provide an improvement in performance 100% of the time.
It requires nothing of the sort. It just requires that you sometimes try going up instead of down.
Depending on the terrain, however, "sometimes going down" could almost always result in worse performance.
One solution is more randomness in the culling heuristics, as well as bigger populations and occasional bigger mutations.
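Something along these lines, say (a toy sketch with an invented one-dimensional fitness landscape and made-up constants, not anyone's real code): tournament-style culling keeps some randomness in who survives, and mutations are mostly small with the occasional big jump.

import math
import random

def fitness(x):
    # Bumpy invented landscape: many local maxima around one broad global peak.
    return math.sin(5 * x) + 2 * math.exp(-x * x)

def evolve(pop_size=200, generations=100):
    population = [random.uniform(-3, 3) for _ in range(pop_size)]
    for _ in range(generations):
        # Randomized culling: keep the winner of each random pairing instead of
        # a strict top-N cut, so a "bad-looking" individual occasionally survives.
        survivors = [max(random.sample(population, 2), key=fitness)
                     for _ in range(pop_size // 2)]
        children = []
        for parent in survivors:
            # Mostly small mutations, occasionally a much bigger one.
            sigma = 1.0 if random.random() < 0.1 else 0.05
            children.append(parent + random.gauss(0, sigma))
        population = survivors + children
    return max(population, key=fitness)

print(evolve())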
Of course, this applies to whole populations, not individuals. Simply saying "follow the gradient and occasionally don't" is more likely to result in degraded average performance. Actually knowing when going against the gradient is helpful and not harmful, and knowing which direction to take against the gradient, OR having a good heuristic for guessing when it's a good idea, on the other hand, is clearly optimal.
Randomly doing something counterintuitive may reveal clues toward a better solution, but in itself it is no guarantee of improvement.
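For what it's worth, there is a well-known middle ground here: simulated annealing is essentially "follow the gradient, but occasionally don't," with the "occasionally" controlled by a temperature that decays over time, so early exploration gives way to plain hill climbing later. A toy sketch, with an invented landscape and cooling schedule:

import math
import random

def height(x):
    # Same kind of bumpy invented landscape as above.
    return math.sin(5 * x) + 2 * math.exp(-x * x)

def anneal(x=2.5, steps=10000, temp=1.0, cooling=0.999):
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.1)
        delta = height(candidate) - height(x)
        # Always accept an uphill move; accept a downhill move with a
        # probability that shrinks as the temperature drops.
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = candidate
        temp *= cooling
    return x

best = anneal()
print(best, height(best))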
Vista is a very good example of what happens when you take tough theoretical problems and throw at them entry-level programmers who haven't spent enough time converting C code to assembly on 4-bit microcontrollers with 64 bits of onboard RAM to appreciate the inherent value of code optimization and algorithm design, and when there's enough processor speed and memory available that nobody notices or cares about the inefficiencies until it hits shelves and millions of end-users are forced to hit "Allow" 300 times a day.
Except following the gradient is just an example of using a suboptimal solution that works in the majority of cases, and it is significantly less difficult to implement than the "next step up," which requires, at the very least, an internal model of the surrounding terrain. If it is actually known that going down will help you get higher, it's not a dumb decision, despite how it may appear to agents without the "internal model" algorithm.
In fact, the gradient follower in that case is actually the dumber process, because it takes only one factor into account. But if the gradient follower is able to observe the internal modeler performing counterintuitive steps and achieving greater results, it may attempt to modify its own behavior without understanding the justification behind it, or the full ramifications thereof. This is where IT Managers come from.
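If anyone wants a toy version of that difference (invented terrain, purely illustrative): a greedy gradient follower parks itself on the first bump it finds, while a climber with even a crude internal model, here just a few steps of lookahead, is willing to walk downhill to reach the taller hill beyond.

def height(x):
    # Invented terrain: a small bump near x = 1 in front of a taller hill at x = 4.
    return -(x - 1) ** 2 + 1 if x < 2 else -(x - 4) ** 2 + 4

def greedy(x, step=0.1):
    # Gradient follower: only takes a step if the very next step is uphill.
    while height(x + step) > height(x):
        x += step
    return x

def lookahead(x, step=0.1, horizon=30):
    # Crude "internal model": peek several steps ahead instead of one.
    while True:
        best = max(range(1, horizon + 1), key=lambda k: height(x + k * step))
        if height(x + best * step) <= height(x):
            return x
        x += step  # still one step at a time, but toward the higher peak it can see

print(greedy(0.0), lookahead(0.0))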
Disk crisis, please clean up!