US Slows Plans To Retire Coal-Fired Plants as Power Demand From AI Surges
Starving an AI from power? Is that a good idea? Did we learn nothing from The Matrix?
Assuming your phone was on in the first place. If you're going somewhere you don't expect to have cell service, it makes sense to turn it off or put it into flight mode. If you're prepared for the trip, you'll also carry spare battery packs and/or a small solar charger.
If you need to call for help, you turn it on and try, or you turn it on when you hear helicopters nearby.
When I go camping, I leave my cell phone in the car, powered off. There's no point in carrying it as there won't be any signal in the back-country.
You probably need something like a phased array to give the receiving antenna some directional sensitivity?
No need for that. All you have to do is fly in a pattern once you find a signal, keeping an eye on the signal strength. For example, if you get a hit while flying north, you turn around and fly south half a km to the west. Your signal is going to be weaker or stronger. If it's weaker, the transmitter is further east, and if it's stronger then the transmitter is further west. It's basically the children's game Hot and Cold, but with cellular technology. Granted, a directional antenna will make searching a lot simpler, but a simple Yagi or loop antenna would suffice for that. No need to go to all the trouble and expense of a phased array.
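To illustrate how little machinery that narrowing takes, here is a toy Haskell sketch of one step; "signalAt" is a hypothetical signal-strength reading, and the search is reduced to a single east-west line (all names here are mine, for illustration only):

-- positions along an east-west line, in km
type Position = Double

-- one step of Hot and Cold: probe both ends of the current interval and
-- keep the half where the (hypothetical) signal reading is stronger
narrow :: (Position -> Double) -> (Position, Position) -> (Position, Position)
narrow signalAt (west, east)
  | signalAt west >= signalAt east = (west, mid)  -- "warmer" to the west
  | otherwise                      = (mid, east)  -- "warmer" to the east
  where mid = (west + east) / 2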
But there is no denying that when rich people hoard too much for themselves, the sheer fact that they are hoarding it denies it to others who need it, leaving them without any way to earn it, and that harm is evil.
If they were hoarding food or shelter or other necessities you might have a point, but no one needs money for its own sake. It's just a medium of exchange. If a portion is taken out of circulation the remainder just becomes that much more valuable, and the amount of goods which can be purchased with the part still in circulation remains the same.
Not, of course, that the wealthy are actually hoarding money in any meaningful sense. Their wealth takes the form of capital investments, which they earn a return on by producing things other people want.
So the EU starts talking about cutting off Russia, and the United States is the bad guy?
This isn't about the EU vs. the US, but rather about being vulnerable to sanctions along the lines of what the EU is proposing at the hands of any foreign government. The EU may be the one acting right now, but it could just as easily be the US next time (AWS and Azure have already cut off certain Russian customers... voluntarily, for now). A smart company at risk of being targeted will not wait until after they've been formally sanctioned to address this vulnerability.
The Bill of Rights (the first ten amendments) was written and ratified around the same time as the main Constitution and was intended to be a part of it from the beginning. They are, in effect, part of "the original document" despite technically being amendments. Even the later amendments went through the prescribed ratification process and obtained the endorsement of a supermajority of the states, which puts them far above any bill passed by Congress, much less a mere reinterpretation of the Constitution (as amended), contrary to its original intent, to suit either modern sensibilities or the whims of the current administration.
Yes, that's correct. The Haskell runtime works primarily through a "mutator" task (or tasks, in the multithreaded case) which replaces unevaluated thunks with their results as they are needed, "forcing" them to Weak Head Normal Form or WHNF—basically a known data constructor (head) with possibly-unevaluated fields. (Normal Form or NF would additionally have fields recursively evaluated to Normal Form.)
This process is invisible to the code, except that there are ways to make it happen early as a performance or memory-use optimization. For example, "x `seq` y" says "when it comes time to evaluate y, evaluate x first but ignore the result"; usually y would include a reference to x, but not in a way which would get evaluated immediately, and x names some expression which references a large data structure but summarizes it into a much smaller result. If x weren't evaluated early, its thunk would prevent the referenced data structure from being garbage-collected. Another use case for forcing early evaluation is benchmarking, where you want to measure the time to produce the entire result, not just a thunk which would evaluate to the result. Finally, knowing that a value will be evaluated strictly can help the compiler produce more efficient code through inlining or other optimizations.
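A minimal sketch of that first use case, with invented names ("summarize", "report") purely for illustration:

import Data.List (foldl')

-- boil a large list down to one small number
summarize :: [Int] -> Int
summarize = foldl' (+) 0

-- without the seq, the pair's first field would be a thunk still holding
-- a reference to the whole list; forcing it to WHNF up front lets the
-- list be garbage-collected immediately
report :: [Int] -> (Int, String)
report xs = let s = summarize xs
            in s `seq` (s, "summary ready")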
Though I don't honestly understand your comment about linear/memoisation vs. exponential. zipWith is basically tail recursive, so linear time, and "tail" is constant?
The reason the first version is exponential (in most languages, including Haskell) is that it evaluates "fib n" repeatedly. If you save the results (memoize) so that you only evaluate "fib n" once for each "n" then it becomes linear (assuming the memoization itself is constant-time). The second version does this by arranging the results into the list "fibs", where the n-th element of the list is "fib n". In the expression "1 : 1 : zipWith (+) fibs (tail fibs)", the result of "zipWith" starts at index n = 2 while "fibs" starts at index n = 0 and "tail fibs" starts at index n = 1, so this is equivalent to "fibs !! 0 = 1; fibs !! 1 = 1; fibs !! n = (fibs !! (n - 2)) + (fibs !! (n - 1))", just as in the first version. The difference is that each result is kept in the list once it's been computed and not re-evaluated each time it's referenced, so the number of additions is linear with respect to "n".
Going into a bit more detail: since Haskell uses lazy evaluation, at first the list just consists of thunks (unevaluated expressions). (This technically includes the "next" or "tail" pointers as well as the list elements, which is why we can define "fibs" as an infinite list, but I'll ignore that for now.) When you evaluate "fibs !! n" it first evaluates "fibs !! (n - 2)" and "fibs !! (n - 1)", adds them, and then mutates the list entry to store the evaluated result in place of the original thunk. Of course the recursive calls also store their results before returning, so when "fibs !! (n - 1)" needs "fibs !! (n - 2)" that result is already available and doesn't need to be re-evaluated. (Note that I'm just using "fibs !! n" as shorthand for "the n-th element of fibs" here; we don't actually walk the list from the start each time. We step through the output of zipWith once, and zipWith steps through its inputs "fibs" and "tail fibs" once each in parallel, making this part linear-time.)
The scope of "fibs" here is a particular call to "fib n" so it can be garbage-collected once we have our result. If you moved the definition of "fibs" to the top level then it would avoid re-evaluation across multiple calls, at the cost of keeping the evaluated part of the list in memory for the duration of the program.
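Concretely, the two placements look like this (a sketch; "fibLocal", "fibsShared", and "fibShared" are names I've made up):

-- local: the list lives only for the duration of one call
fibLocal :: Int -> Integer
fibLocal n = fibs !! n
  where fibs = 1 : 1 : zipWith (+) fibs (tail fibs)

-- top level: the evaluated prefix is shared across calls, at the cost of
-- being retained in memory for the life of the program
fibsShared :: [Integer]
fibsShared = 1 : 1 : zipWith (+) fibsShared (tail fibsShared)

fibShared :: Int -> Integer
fibShared n = fibsShared !! n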
You could also use an array instead of a list, since there is a known upper bound on the number of elements needed for a given "n"; this would be more typical of the "dynamic programming" approach to memoization, but in Haskell it's a bit longer, and with list fusion optimizations the array isn't necessarily more efficient. The result would look like:
import Data.Array (listArray, (!))

fib n = fib' n
  where
    fib' 0 = 1
    fib' 1 = 1
    fib' m = (fibs ! (m - 2)) + (fibs ! (m - 1))
    fibs = listArray (0, n - 1) (map fib' [0..])
Where fib' looks a lot like the first recursive definition, except that it refers to the array rather than calling itself directly. (But keep in mind that "fibs ! n" is just a particular instance of "fib' n", so it's still recursive. The difference is that the result is stored in the array and not discarded.)
This can easily be generalized for any recursive function with a parameter suited for use as an array index:
import Data.Array (Ix, listArray, range, (!))
import Data.Function (fix)

memoize :: Ix i => (i, i) -> ((i -> a) -> i -> a) -> i -> a
memoize bounds f = let arr = listArray bounds (map (f (arr !)) (range bounds)) in (arr !)
-- abstract out the recursive calls
fib fib' 0 = 1
fib fib' 1 = 1
fib fib' n = fib' (n - 2) + fib' (n - 1)
naiveFib n = fix fib n
memoizedFib n = memoize (0, n) fib n
Which can then be used as the basis for all kinds of dynamic programming solutions.
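For example, here is the same helper applied to a different recurrence, Pascal's rule for binomial coefficients. This is a sketch of my own: "choose" and "memoizedChoose" are made-up names, it assumes 0 <= k <= n, and it reuses the "memoize" definition above.

-- abstract out the recursive calls, as with fib, using the pair (n, k)
-- as the array index
choose :: ((Int, Int) -> Integer) -> (Int, Int) -> Integer
choose _       (_, 0)          = 1
choose _       (n, k) | k == n = 1
choose choose' (n, k)          = choose' (n - 1, k - 1) + choose' (n - 1, k)

memoizedChoose :: Int -> Int -> Integer
memoizedChoose n k = memoize ((0, 0), (n, k)) choose (n, k)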
To me, the basic recursive version (exponential time without more machinery!) is one of the least elegant solutions.
The simple recursive definition:
fib 0 = 1
fib 1 = 1
fib n = fib (n - 2) + fib (n - 1)
... is an elegant definition, but not the most efficient or elegant evaluation strategy. However, this is also a simple recursive form:
fib n = fibs !! n where fibs = 1 : 1 : zipWith (+) fibs (tail fibs)
... and in a lazy-by-default language like Haskell it would be evaluated in linear time, not exponential, due to implicit memoization.
Of course the most efficient version would have to be the closed form:
fib n = round (((1 + sqrt 5) / 2)**(n + 1) / sqrt 5)
... at least up to around n = 70, where rounding error starts to interfere due to the use of IEEE floating-point intermediate values. Using the CReal type from the numbers package for the intermediate expression, this gives correct answers up to at least n = 2000, and probably well beyond, though it becomes very slow due to the expensive math operations. (The recursive lazy-list version is much faster, even for n = 200000.) However, this doesn't serve as a very instructional comparison of recursion vs. iteration.
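For reference, a sketch of that CReal variant (assuming the Data.Number.CReal module from the numbers package; "fibExact" is my own name):

import Data.Number.CReal (CReal)

-- same closed form, but with exact real arithmetic for the intermediate
-- expression; slow, but free of IEEE rounding error
fibExact :: Int -> Integer
fibExact n = round (phi ** fromIntegral (n + 1) / sqrt 5)
  where
    phi :: CReal
    phi = (1 + sqrt 5) / 2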
The problem isn't the signals. US civilian-grade (exportable) GPS receivers have limitations on speed and altitude to prevent them from being used in ICBMs which aren't present in military-grade units. Of course in principle that's just a software issue, so they might be able to work around it with some reprogramming while continuing to use the same hardware. Also, as an export restriction, those limits may not apply to receivers designed and manufactured outside the US.
The COCOM Limits only apply when traveling at speeds above 1,000 knots and/or altitudes above 18 kilometers, so you wouldn't encounter these limits while traveling in a standard passenger plane. It can be an issue for high-altitude balloons and similar projects, depending on how the manufacturer chose to interpret the conditions ("and" or "or").
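In code terms, the two readings differ by a single operator. A toy sketch (names are mine; speed in knots, altitude in km):

-- "and": both conditions must be exceeded before the receiver cuts out
strictLimit :: Double -> Double -> Bool
strictLimit speed alt = speed > 1000 && alt > 18

-- "or": either condition alone trips the limit
looseLimit :: Double -> Double -> Bool
looseLimit speed alt = speed > 1000 || alt > 18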
It's not going to pass, so it doesn't really matter. And even if it did the "retroactive effect" portion specifically targeting Disney would almost certainly be struck down as an illegal bill of attainder. But in a situation like this where a treaty mandates something against the people's interests—such as over-long copyright terms in the case of the Berne Convention—the right answer would be to withdraw from the treaty. Nothing is truly set in stone.
They're not blocking the destination addresses (i.e. the IP addresses); they're trying to block three specific domain names, plus any future domain names registered by the (unknown) defendants, which is obviously unknowable. The domain names aren't part of any "delivery service" involving the defendants' web sites. They're the content of other "packages" addressed to DNS servers not associated with the defendants.
An ISP can refuse to reply or reply with incorrect data (which would be ignored by clients with DNSSEC enabled) to requests for these domains on its own DNS servers, but clients are not obligated to, and IMHO should not, use the resolvers provided by their ISPs. They can use a third-party resolver, their own recursive resolver, or even a local hosts file to map from domain names to IP addresses. In the first two cases the ISP may be able to filter the requests, if a secure protocol like DoH isn't used, but that would involve "looking in the packages" and "refusing delivery service" based on the content, or fraudulently impersonating the remote server.
Beyond that point, without looking at the content (which may well be encrypted) the ISP can only filter based on the IP addresses, where your USPS address-filtering analogy actually fits... but the address ranges involved could be something like all of Cloudflare, or AWS, or some other widely-used hosting provider. Cutting their subscribers off from a large subset of the Internet would not be a practical or proportional response.
It has just been discovered that research causes cancer in rats.