Shutting down NASA Earth Science moves it over to NOAA.
No, it doesn't. It just shuts it down.
You apparently didn't notice that Trump was elected by a minority of the voters, the Republicans in the House of Representatives were elected by a minority of the voters, and the Republicans in the Senate were elected by a minority of the voters.
It's not surprising. Authoritarians rarely have much regard for the will of the majority. But minority rule can't last for long. The real majority will have its way, one way or another.
Welcome to permanent minority status, white man. You'd better hope the majority doesn't treat you as badly as you treated them when you had power.
You do realize that most of what you posted is a lie, right? Of course you do. Lying is what you do.
The earth has been cooler for the entire period during which anything resembling human beings evolved. Antarctica wasn't in its current position when it was warmer than it is now. And, without human carbon releases, the planet maintains a relatively temperate climate over long periods of time through the action of the carbonate-silicate cycle. Of course, when you dig up half a billion years' worth of stored organic carbon and burn it in a century, the carbonate-silicate cycle ain't gonna fix that.
And of course, continuing to release more CO2, that's your fault, not mine.
NASA is doing climate research because 4 decades of political leaders decided NASA should be doing climate research. If you are deluded enough to think Trump is just going to move things around to NOAA rather than eliminating inconvenient research, you deserve what you get. Good luck with that.
I think this is a bit misleading. I believe those numbers are based on linear pixel density divided by FOV (for the DK2, 1080 pixels/100 degrees). That is OK to a first approximation, but the LEEP optics in the Rift do not map the pixels evenly. The pixels near the center of the screen are stretched much less than the pixels at the edges. This is appropriate because our eyes have better resolution near the center of the retina. If you are principally looking forward, as you are when using your real monitor, the effective pixel density of the HMD is going to be much higher than stated above. If you are looking out of the corner of your eye, it will be much worse. Assuming HMDs continue to use flat panels with LEEP optics, a proper 4K panel may be adequate to allow proper desktop representations. Of course, all of this math changes once we start using curved OLEDs, etc.
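To make the non-uniform mapping concrete, here is a back-of-the-envelope sketch. The cubic mapping theta(u) and the distortion constant K below are made up for illustration (the real Rift distortion polynomial is different), but they show how the same 1080 pixels over 100 degrees can turn into very different effective densities at the center versus the edge:

/* Back-of-the-envelope: effective pixels per degree under a non-uniform
 * lens mapping.  The cubic theta(u) and K below are hypothetical; the
 * real Rift distortion polynomial is different. */
#include <stdio.h>

#define HALF_PIXELS  540.0   /* half of 1080 pixels, center to edge */
#define HALF_FOV_DEG  50.0   /* half of a ~100 degree field of view */
#define K             0.7    /* made-up distortion strength */

/* viewing angle (degrees) as a function of normalized screen position u in [0,1] */
static double theta(double u) {
    return HALF_FOV_DEG * (u + K * u * u * u) / (1.0 + K);
}

/* local pixels per degree at u, via a central finite difference */
static double px_per_deg(double u) {
    const double h = 1e-4;
    double dtheta_du = (theta(u + h) - theta(u - h)) / (2.0 * h);
    return HALF_PIXELS / dtheta_du;
}

int main(void) {
    printf("naive average : %.1f px/deg\n", HALF_PIXELS / HALF_FOV_DEG);
    printf("screen center : %.1f px/deg\n", px_per_deg(0.0));
    printf("screen edge   : %.1f px/deg\n", px_per_deg(0.999));
    return 0;
}

With K = 0.7 this toy model gives roughly 18 px/deg at the center and under 6 px/deg at the edge against a naive average of 10.8, which is exactly the gap described above.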
There are also probably some subpixel rendering improvements that can be made as well. I continue to be amazed at how much readability improves when using ClearType or similar subpixel font rendering, even on high-DPI monitors. Of course, the same subpixel/antialiasing ideas may need to be applied to the entire windowing system, allowing LEEP distortion/viewing-angle compensation for borders, widgets, etc. There are lots of opportunities here to design a 3D windowing interface and get all of these things right. I'd love for the 27" and 30" monitors on my desk to be the last I ever buy.
Ninite.com is the only place I go for software on a new Windows installation. Select what you want and it gives you one installer. And you get exactly what you asked for. No search bars or crapware. It has been working great for years now.
But that was not my question. I fully understand how to use lookup tables/Chebyshev expansions of exp(x) and ln(x) to implement pow(x,a)--I have implemented these many times. My question was specifically about your assertion that any differentiable function could be evaluated with a Newton-style iterative correction and thus provide arbitrarily precise results. I asked specifically to see how that is accomplished for pow(). There is no corrective mechanism in the algorithm you have stated above. The precision you get is a function of the precision that you've baked into your lookup table--and then it becomes the space/accuracy trade-off. On desktop/server CPUs, that trade-off is more often than not won by Chebyshev expansions, especially in a world of SSE/AVX vector instructions.
So, if there is a technique that does allow me to start with an initial guess x[0] of pow(a,b) and then create corrections of the form: x[i+1] = x[i] - f(x[i]) where f() uses only intrinsic math operations (+,-,*,/,etc) but not transcendentals, then I am quite anxious to see it.
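To be concrete about the exp/ln decomposition I mean, a stripped-down sketch of pow(a,b) = 2^(b*log2(a)) looks something like the following. The quadratics here are crude interpolations standing in for real Chebyshev fits, so accuracy is only a percent or so--the point is that the precision is capped by the polynomial/table, and there is no corrective iteration anywhere in it:

/* Stripped-down pow(a,b) = 2^(b*log2(a)), valid for a > 0.  The quadratics
 * are crude interpolations (log2 at m = 1, 1.5, 2; exp2 at f = 0, 0.5, 1),
 * not real Chebyshev fits, so expect only a percent or two of accuracy. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>   /* floor, ldexp, and pow() as the reference in main() */

/* log2(a): exponent bits give the integer part, a quadratic covers the
 * mantissa m in [1,2). */
static double log2_approx(double a) {
    uint64_t bits;
    memcpy(&bits, &a, sizeof bits);
    int e = (int)((bits >> 52) & 0x7FF) - 1023;          /* unbiased exponent */
    bits = (bits & 0x000FFFFFFFFFFFFFULL) | 0x3FF0000000000000ULL;
    double m;
    memcpy(&m, &bits, sizeof m);                         /* mantissa in [1,2) */
    return e + (-1.6797 + m * (2.01955 - m * 0.33985));
}

/* 2^t: the integer part goes back into the exponent via ldexp(),
 * a quadratic covers the fractional part. */
static double exp2_approx(double t) {
    double n = floor(t);
    double f = t - n;                                    /* f in [0,1) */
    return ldexp(1.0 + f * (0.65685 + f * 0.34315), (int)n);
}

static double pow_approx(double a, double b) {
    return exp2_approx(b * log2_approx(a));   /* precision fixed by polynomials */
}

int main(void) {
    printf("approx : %f\n", pow_approx(3.7, 2.4));
    printf("libm   : %f\n", pow(3.7, 2.4));
    return 0;
}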
With a table of values in memory you can also narrow down the inputs to Newton's method and calculate any differentiable function very quickly to an arbitrary precision. With some functions the linear approximation is so close that you can reduce it in just a few cycles.
No, you can't. I know this was done in Quake3 fastInvSqrt(), but that is the exception, not the rule in my experience. x = pow(a,b) is a differentiable function. How can you assemble a root function/Newton iteration to successively correct an initial guess for x to arbitrary precision--without actually calling pow() or other transcendental function? I have built Newton (and Halley and Householder) iterations to successively correct estimates for pow(a,b) when b is a particular rational number. You can re-arrange the root function to only have integer powers of the input a and of the solution value x, and those can be computed through successive multiplication. These can be fast, but they are certainly not useful when b is something other than a constant rational number. And even if the exponent value has only a few significant digits, the multiplication cascade starts to get expensive (that was the reason to use Halley/Householder because once you have f' calculated, f'' and f''' are almost free.)
If you know otherwise, please let me know. My current fast pow() function leverages IEEE floating point formats and Chebyshev polynomial expansions to get reasonable results. If there is a way to polish an approximate pow() result with Newton (or higher order) iteration, I would be happy to learn it.
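For a concrete example of the rational-exponent case I mentioned: a^(2/3) can be written as the root of f(x) = x^3 - a^2, so each Newton step needs only multiplies and one divide. Something like the sketch below converges quadratically, but note that it only works because the exponent 2/3 is baked in--there is no analogous rearrangement when b is an arbitrary runtime value:

/* a^(2/3) as the root of f(x) = x^3 - a^2: each Newton step is
 * x <- x - (x^3 - a^2) / (3*x^2), i.e. only multiplies and one divide.
 * Only works because the exponent 2/3 is a fixed rational. */
#include <stdio.h>
#include <math.h>   /* pow() only as the reference in main() */

static double pow_2_3(double a) {
    double a2 = a * a;
    double x  = a > 1.0 ? a : 1.0;       /* crude but safe starting guess */
    for (int i = 0; i < 60; ++i) {       /* quadratic convergence; 60 is plenty */
        double x2 = x * x;
        double xn = x - (x * x2 - a2) / (3.0 * x2);
        if (xn == x) break;              /* converged to machine precision */
        x = xn;
    }
    return x;
}

int main(void) {
    double a = 17.0;
    printf("newton     : %.15g\n", pow_2_3(a));
    printf("pow(a,2/3) : %.15g\n", pow(a, 2.0 / 3.0));
    return 0;
}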
That's the problem with people who think that knowing a subject makes it possible to get every answer correct. Some of the best courses I took had questions on exams that were not possible to answer correctly without access to a supercomputer and a few hundred CPU months, where the instructor was looking for depth of knowledge and technique rather than "the right answer". It makes me wonder if those that advocate for absolute grading have ever had to do anything difficult in their lives. Or ever considered that two exams on exactly the same material could have different difficulties.
It's also not true that scores are proportional to knowledge. An obvious example is the multiple-choice, multiple-answer test where negative scores are quite possible: when each wrong selection subtracts points, a student who checks several wrong options can end up below zero despite knowing a fair amount of the material.
Typically I design upper division exams for an average of 50%. I could easily design for an 84% "standard," but it would tell me and my students less, because there would be less distinction at the upper end. It would be more difficult for students to know what they do and don't understand well. Yet you would have me punish my students for making the people with As work a little harder and maybe learn a little more.
Multics is security spelled sideways.