Comment Re:I wonder how this would be for software develop (Score 3, Interesting) 47

I think this is a bit misleading. I think those numbers are based on linear pixel density divided by FOV (for the DK2, 1080 pixels/100 degrees). That is OK to a first approximation, but the LEEP optics in the Rift do not map the pixels evenly. The pixels near the center of the screen are much less stretched than the pixels at the edges. This is appropriate because our eyes have better resolution near the center of the retina. If you are principally looking forward, as you are when using a real monitor, the effective pixel density of the HMD is going to be much higher than stated above. If you are looking out of the corner of your eye, it will be much worse. Assuming HMDs continue to use flat panels with LEEP optics, a proper 4K panel may be adequate to allow proper desktop representations. Of course, all of this math changes once we start using curved OLEDs, etc.
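To make that concrete, here is a toy calculation of effective pixel density under an assumed radial model theta(r) = r*(1 + k1*r^2 + k2*r^4); the coefficients are invented for illustration and are not the actual Rift/LEEP parameters:

    #include <stdio.h>

    /* Toy lens model: visual angle theta(r) = r*(1 + k1*r^2 + k2*r^4), where
       r is normalized distance from the panel center. k1 and k2 are made-up
       illustrative values, not real Rift/LEEP coefficients. */
    int main(void) {
        const double k1 = 0.22, k2 = 0.24;     /* assumed for illustration */
        const double avg_ppd = 1080.0 / 100.0; /* the naive "10.8 px/deg" figure */
        double theta1 = 1.0 + k1 + k2;         /* theta(1): total normalized FOV */
        for (double r = 0.0; r <= 1.0; r += 0.25) {
            /* d(theta)/dr: how quickly visual angle sweeps per unit of panel. */
            double dtheta = 1.0 + 3.0*k1*r*r + 5.0*k2*r*r*r*r;
            /* Local density = average density scaled by avg/local stretch, so
               the center beats the naive figure and the edge falls below it. */
            printf("r=%.2f  effective px/deg ~ %.1f\n", r, avg_ppd * theta1 / dtheta);
        }
        return 0;
    }

With these made-up coefficients the center lands around 15.8 px/deg against the naive 10.8, and the edge falls to about 5.5, which is the shape of the effect I'm describing.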

There are also probably some subpixel rendering improvements that can be made. I continue to be amazed at how much readability improves when using ClearType or similar subpixel font rendering, even on high-DPI monitors. Of course, the same subpixel/antialiasing ideas may need to be applied to the entire windowing system, allowing LEEP distortion/viewing angle compensation for borders, widgets, etc. There are lots of opportunities here to design a 3D windowing interface and get all of these things right. I'd love for the 27" and 30" monitors on my desk to be the last I ever buy.
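As a toy example of what subpixel rendering buys, here is a sketch of a vertical edge on an assumed RGB-striped panel, where sampling coverage at each subpixel triples the effective horizontal resolution:

    #include <stdio.h>

    /* Toy subpixel rendering of a vertical edge at x = 2.4 (pixel units) on an
       RGB-striped panel: each pixel's R, G, B subpixels sit at x+1/6, x+3/6,
       x+5/6, so sampling coverage per subpixel triples horizontal resolution. */
    static double coverage(double x, double edge) {  /* 1 left of edge, 0 right */
        return x < edge ? 1.0 : 0.0;
    }

    int main(void) {
        const double edge = 2.4;
        for (int px = 0; px < 5; ++px) {
            double r = coverage(px + 1.0/6, edge);
            double g = coverage(px + 3.0/6, edge);
            double b = coverage(px + 5.0/6, edge);
            printf("pixel %d: R=%.0f G=%.0f B=%.0f\n", px, r, g, b);
        }
        /* Whole-pixel AA would give one value per pixel; here pixel 2 gets R on
           and G/B off -- finer edge placement at the cost of color fringing,
           which ClearType-style filtering then tames. */
        return 0;
    }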

Comment Re:Lookup tables are faster and more accurate (Score 2) 226

But that was not my question. I fully understand how to use lookup tables/Chebyshev expansions of exp(x) and ln(x) to implement pow(a,b)--I have implemented these many times. My question was specifically about your assertion that any differentiable function could be evaluated with a Newton-style iterative correction and thus provide arbitrarily precise results. I asked specifically to see how that is accomplished for pow(). There is no corrective mechanism in the algorithm you have stated above. The precision you get is a function of the precision you've baked into your lookup table--and then it becomes a space/accuracy trade-off. On desktop/server CPUs, that trade-off is more often than not won by Chebyshev expansions, especially in a world of SSE/AVX vector instructions.

So, if there is a technique that does allow me to start with an initial guess x[0] of pow(a,b) and then create corrections of the form: x[i+1] = x[i] - f(x[i]) where f() uses only intrinsic math operations (+,-,*,/,etc) but not transcendentals, then I am quite anxious to see it.

Comment Re:Lookup tables are faster and more accurate (Score 2) 226

> With a table of values in memory you can also narrow down the inputs to Newton's method and calculate any differentiable function very quickly to an arbitrary precision. With some functions the linear approximation is so close that you can reduce it in just a few cycles.

No, you can't. I know this was done in Quake3 fastInvSqrt(), but that is the exception, not the rule, in my experience. x = pow(a,b) is a differentiable function. How can you assemble a root function/Newton iteration to successively correct an initial guess for x to arbitrary precision--without actually calling pow() or another transcendental function? I have built Newton (and Halley and Householder) iterations to successively correct estimates for pow(a,b) when b is a particular rational number. You can rearrange the root function to only have integer powers of the input a and of the solution value x, and those can be computed through successive multiplication. These can be fast, but they are certainly not useful when b is something other than a constant rational number. And even if the exponent value has only a few significant digits, the multiplication cascade starts to get expensive (that was the reason to use Halley/Householder: once you have f' calculated, f'' and f''' are almost free).
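For reference, here is the rational-exponent case I described, as a minimal sketch: to compute x = a^(p/q), iterate Newton on the root function f(x) = x^q - a^p, which needs only multiplication and one division per step.

    #include <stdio.h>

    /* Newton iteration for x = a^(p/q) with integer p, q: polish a guess using
       only *, /, +, -. Root function f(x) = x^q - a^p, f'(x) = q*x^(q-1).
       A sketch of the technique described above, not production code. */
    static double ipow(double base, unsigned e) {   /* exponent by squaring */
        double r = 1.0;
        while (e) { if (e & 1) r *= base; base *= base; e >>= 1; }
        return r;
    }

    static double ratpow(double a, unsigned p, unsigned q, double x) {
        double ap = ipow(a, p);
        for (int i = 0; i < 8; ++i) {               /* a few corrections suffice */
            double xq1 = ipow(x, q - 1);            /* x^(q-1) */
            x -= (xq1 * x - ap) / (q * xq1);        /* Newton step */
        }
        return x;
    }

    int main(void) {
        /* 7^(2/3): start from a crude guess and let Newton converge. */
        printf("%.15f\n", ratpow(7.0, 2, 3, 3.0));  /* ~3.659306 */
        return 0;
    }

Note that nothing here generalizes to an arbitrary real b: the whole trick depends on q being a small fixed integer.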

If you know otherwise, please let me know. My current fast pow() function leverages IEEE floating-point formats and Chebyshev polynomial expansions to get reasonable results. If there is a way to polish an approximate pow() result with Newton (or higher-order) iteration, I would be happy to learn it.
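For the curious, the rough skeleton of that approach is below. I've substituted truncated Taylor series where the tuned Chebyshev fits would go, so treat it as a shape sketch (a few good digits) rather than a usable pow():

    #include <stdio.h>
    #include <math.h>   /* frexp/ldexp/floor: exponent plumbing, no transcendentals */

    /* Crude sketch of pow(a,b) = 2^(b*log2(a)) for a > 0. Truncated Taylor
       polynomials stand in for properly fitted Chebyshev expansions, so expect
       only ~2-3 good digits; the structure, not the accuracy, is the point. */
    static double log2_approx(double a) {
        int e;
        double m = frexp(a, &e);          /* a = m * 2^e, m in [0.5, 1) */
        double u = m - 1.0;               /* log2(m) = ln(1+u)/ln2 */
        double ln1pu = u - u*u/2 + u*u*u/3 - u*u*u*u/4;
        return e + ln1pu * 1.4426950408889634;   /* 1/ln(2) */
    }

    static double exp2_approx(double t) {
        double fi = floor(t);
        double f = t - fi;                /* f in [0, 1) */
        /* 2^f = e^(f ln2), cubic Taylor in f */
        double p = 1.0 + f*(0.6931471805599453
                 + f*(0.2402265069591007 + f*0.0555041086648216));
        return ldexp(p, (int)fi);         /* reattach the integer exponent */
    }

    static double pow_approx(double a, double b) {
        return exp2_approx(b * log2_approx(a));
    }

    int main(void) {
        printf("pow_approx(2.7, 3.1) = %f  (libm: %f)\n",
               pow_approx(2.7, 3.1), pow(2.7, 3.1));
        return 0;
    }

The IEEE trick is the frexp/ldexp pair: the exponent field handles the integer part of log2/exp2 exactly, so the polynomials only ever see a narrow, well-behaved interval.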

Comment Re:oblig (Score 0) 625

I don't think the author got it all exactly right, but your points are wrong. Business is driven mindlessly by ONE THING--maximizing profitability. The time will come...and right soon...when machines will work well enough to eliminate many existing blue-collar jobs and a large fraction of low-skilled white-collar jobs too. There is and will be pushback, in the form of higher minimum wages and health insurance requirements. And all those pushbacks do is accelerate the process. We can likely thank China, India, and Mexico for providing cheap labor and forestalling the onset of this mechanization by a decade or two. Had they not been there to take our manufacturing jobs, serious automation efforts would have started in the early 90s. As greed is the only acceptable (and fiduciarily mandated) corporate ethos, we should expect corporations to follow its guiding light to its logical end. As soon as Walmart can stock its shelves with a robot, it will. As soon as McDonald's can reliably serve food without a single worker on staff, it will. As soon as FedEx can roll a Google truck, it will. Human labor is viewed as a "commodity," and it reliably becomes more expensive with time.

I think the mistake the author makes is assuming humans will be part of the machine. They won't. There will be strikes and protests and maybe even legislation, but those will only slow the pace of change, not stop it. The US and Western European economies will look quite different in 50 years. I doubt anything but boutique items will be made even in part by humans. Before then, we will all have to ask ourselves what we do with our time and how we provide for our needs. The end of the story effectively contrasts two possible outcomes.

Comment Re:quick key repetition (Score 2) 207

Replace the 100-million-key dictionary with a PRNG seeded with a secret key, some time information, and the source/destination ports and addresses. The NSA would have the PRNG, the secret key, and the seeding input from the packet, so they could regenerate any key without much effort--yet the keys would appear truly random to anyone without knowledge of the secret key, no matter the sample size.
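A toy sketch of the idea, using splitmix64 as a stand-in for a proper keyed PRF (this is not cryptographically sound; it just shows keys that are reproducible with the secret but look random without it):

    #include <stdio.h>
    #include <stdint.h>

    /* splitmix64: a public-domain mixing function, standing in here for a real
       keyed PRF. Toy illustration only -- NOT cryptographically secure. */
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9E3779B97F4A7C15ULL;
        x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
        x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
        return x ^ (x >> 31);
    }

    /* Derive a per-connection key from the secret plus packet metadata. Anyone
       holding the secret can re-derive it; without the secret it looks random. */
    static uint64_t derive_key(uint64_t secret, uint64_t timeslot,
                               uint32_t src_port, uint32_t dst_port) {
        uint64_t s = splitmix64(secret ^ timeslot);
        s = splitmix64(s ^ (((uint64_t)src_port << 32) | dst_port));
        return s;
    }

    int main(void) {
        uint64_t secret = 0xDEADBEEFCAFEF00DULL;   /* the agency-held secret */
        printf("key = %016llx\n",
               (unsigned long long)derive_key(secret, 1370000000, 443, 51234));
        return 0;
    }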

Comment How do you limit authentication?! (Score 1) 194

This idea is terrible. It is even worse than RFID credit cards. Since it has active electronics, I assume it is able to do challenge/response authentication, which is good. But how do you disable it? Somebody just "bumps" into you with a scanner and pays their dinner bill with your gut. At least RFID-chipped cards can be stored in a conductive pouch to prevent walk-by theft.

Whatever shape these new authentication methods take, they need to be at minimum:

(1) Challenge/Response based, and
(2) Momentary ON

Requirement 1 kills most biometric systems. And Requirement 2 kills most implant/ingested systems.
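For requirement 1, here is a toy sketch of a single challenge/response round (a mixing function stands in for a real keyed MAC such as HMAC; illustration only). The point is that a fresh random challenge per transaction makes a response sniffed in one "bump" useless for the next:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Toy challenge/response round. mix() stands in for a real keyed MAC such
       as HMAC-SHA256 -- illustration only, not a secure construction. */
    static uint64_t mix(uint64_t key, uint64_t msg) {
        uint64_t x = key ^ (msg * 0x9E3779B97F4A7C15ULL);
        x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
        return x ^ (x >> 27);
    }

    int main(void) {
        uint64_t shared_secret = 0x1234ABCD5678EF00ULL;

        /* Verifier picks a fresh random challenge for every transaction, so a
           recorded response cannot simply be replayed later. */
        uint64_t challenge = ((uint64_t)rand() << 32) | (uint64_t)rand();

        uint64_t response = mix(shared_secret, challenge);   /* token's reply */
        uint64_t expected = mix(shared_secret, challenge);   /* verifier side */

        printf(expected == response ? "authenticated\n" : "rejected\n");
        return 0;
    }

Requirement 2 is the part no protocol can fix: if the device answers every challenge it hears, the thief doesn't need to replay anything--they just ask it directly. Hence momentary ON.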

Comment Re:Timex Sinclair 1000 (Score 1) 623

Yeah...Timex Sinclair 1000, a hand-me-down B&W TV from my grandfather, and my old tape recorder. That was the beginning for me...back in 6th grade. I bought it from Hills for $50, I think. I spent many hours on it...until the C64 and floppy drive came a few Christmases later. I look back on those days and the oceans of free time I had...peeking and poking and disassembling code. I remember spending many hours typing in code from Compute!'s Gazette...that was all C64. And Transactor too.

Comment Re:My first computer (Score 1) 212

I had the Timex Sinclair 1000 as well, but not the 16 KB module. Paid $60 for it at Hills--I was in 6th grade. I learned quickly to be careful with my precious 2 KB of RAM, but I coded a fairly accurate image of the Space Shuttle and figured out how to make it "fly" across the screen. Hard to believe I have been writing code for almost 30 years!

Comment Re:*Orbital* angular momentum (Score 1) 147

I think the key takeaway is that there is another physical signal dimension to exploit--frequency, directionality, polarization, and now orbital angular momentum. They have demonstrated that they can distinguish between two channels on the same frequency using orbital angular momentum as the differentiator. So, OAM mode can be added to the toolkit. If they can distinguish among a few dozen modes and still allow beamforming, this could provide a huge benefit for cellular and other wireless networks. If they can distinguish among hundreds or thousands of modes, it could be truly transformative. It has been a long time since my EM class, but I wonder if similar mode discrimination could be applied to waveguides.
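A quick numeric sketch of why the modes are separable: helical phase fronts exp(i*l*phi) with different integer l are orthogonal around the azimuth, so a receiver sampling a ring can project out each mode independently.

    #include <stdio.h>
    #include <complex.h>

    /* Numeric check that helical phase modes exp(i*l*phi) are orthogonal over
       the azimuth: integrating exp(i*(l1-l2)*phi) around a ring gives ~0 for
       l1 != l2, which is what lets a receiver separate co-channel OAM modes. */
    int main(void) {
        const int N = 1024;                       /* samples around the ring */
        const double TWO_PI = 6.283185307179586;
        for (int l1 = 0; l1 <= 2; ++l1) {
            for (int l2 = 0; l2 <= 2; ++l2) {
                double complex acc = 0;
                for (int k = 0; k < N; ++k) {
                    double phi = TWO_PI * k / N;
                    acc += cexp(I * l1 * phi) * conj(cexp(I * l2 * phi));
                }
                printf("<l=%d | l=%d> = %6.3f\n", l1, l2, cabs(acc) / N);
            }
        }
        return 0;   /* prints ~1.0 on the diagonal, ~0 elsewhere */
    }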

Comment Re:fused off? Really?! (Score 4, Informative) 127

Probably referring to efuses that can be burned out on the die. These are common and allow CPUs/GPUs to have unit-specific information (like serial numbers, crypto keys, etc.) coded into otherwise identical parts from the fab. Video game systems like the 360 use them as an anti-hacking measure...disallowing older versions of firmware from running on systems that have certain efuses "blown." Likely, there is an efuse for each core or group of cores. Those can be burned out if the cores are found to be defective, or simply to cripple a portion of the part for down-binning. That is a practice at least as old as the Pentium II.
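A hypothetical sketch of how firmware might consume such a fuse bank (the bit layout is invented for illustration, not taken from any real part):

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical sketch: firmware consulting a fuse bank to decide which
       cores to enable. One disable bit per core -- an invented layout. */
    static uint32_t read_fuse_bank(void) {
        /* On real hardware this would be an MMIO read of one-time-programmable
           fuses; stubbed here with a sample value: cores 2 and 5 blown. */
        return (1u << 2) | (1u << 5);
    }

    int main(void) {
        uint32_t blown = read_fuse_bank();
        for (int core = 0; core < 8; ++core)
            printf("core %d: %s\n", core,
                   (blown >> core) & 1 ? "fused off" : "enabled");
        return 0;
    }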
