Comment Intervention != accident (Score 1) 67

AFAIK an intervention is not an event that would otherwise have been an accident, but rather a situation in which the vehicle's control software decides that it cannot handle the current driving situation. Without a human intervening, such a car is expected to pull over safely. Also, self-driving car companies are offering the combination of software + human operators precisely for these interventions. Hence the measure should be the same as for human drivers: miles driven per accident caused.
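
A minimal sketch of why the two metrics differ, with made-up numbers purely for illustration (none of these counts are real fleet data):

    # Hypothetical fleet numbers -- purely illustrative, not real data.
    miles_driven = 1_500_000
    interventions = 300   # software handed off or pulled over
    accidents = 2         # collisions actually caused

    print(f"miles per intervention: {miles_driven / interventions:,.0f}")
    print(f"miles per accident:     {miles_driven / accidents:,.0f}")
    # The two rates differ by orders of magnitude, which is why comparing
    # intervention counts against human accident rates is misleading.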

Comment They already co-design the hard-/software (Score 1) 223

Basically, the procurement process for supercomputers works like this: the buyer (e.g. a DOE lab) puts together a portfolio of apps (mostly simulation codes) with a specified target performance. Vendors then bid on how "little" money they can meet that target performance for. And of course the vendors will use the most cost- and power-efficient hardware they can get.
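
A toy sketch of that selection logic in Python, with hypothetical vendors, prices, and benchmark results (all names and numbers are made up):

    # Bids: (vendor, price in million $, sustained portfolio performance in PFLOPS)
    bids = [
        ("Vendor A", 450, 1.4),
        ("Vendor B", 380, 1.1),
        ("Vendor C", 520, 1.6),
    ]
    target_pflops = 1.2

    # Keep only bids that meet the target performance, then take the cheapest.
    qualifying = [b for b in bids if b[2] >= target_pflops]
    vendor, price, perf = min(qualifying, key=lambda b: b[1])
    print(f"winning bid: {vendor} at ${price}M for {perf} PFLOPS sustained")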

The reason we no longer see custom-built CPUs in the supercomputing arena, but rather COTS chips or slightly modified versions of them, is that chip design has become exceedingly expensive and the supercomputer market is marginal compared to today's mainstream market.

Also, the simulation codes running on these machines generally outlive the supercomputers themselves. The stereotypical supercomputer simulation code is a Fortran program written 20 years ago that has been maintained continuously ever since, but for which no serious rewrite is viable (the cost would exceed the price of the hardware). So vendors look for low-effort ways of tuning these codes for their proposed designs, and sticking with general-purpose CPUs is in most cases the most cost-efficient way to do that.

Comment Capacity vs. capability (Score 1) 223

So, what you describe is essentially the difference between capacity and capability machines. The national labs have both, as there are use cases for both. But the flagship machines, e.g. Titan at the Oak Ridge Leadership Computing Facility (OLCF), are always capability machines -- built to run full-system jobs, jobs that scale to tens or hundreds of thousands of nodes.
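
A rough sketch of the distinction in Python; the 20% cutoff is a commonly used rule of thumb for "leadership-class" jobs, not an official definition:

    TOTAL_NODES = 18_688  # Titan's node count

    def job_class(nodes_requested: int) -> str:
        # Capability jobs occupy a large fraction of the whole machine;
        # capacity workloads are many independent, smaller jobs.
        return "capability" if nodes_requested >= 0.2 * TOTAL_NODES else "capacity"

    print(job_class(128))     # capacity
    print(job_class(18_000))  # capability (near full-system run)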

Comment Exascale machines are for scientific computing (Score 2, Informative) 223

These peta- and exascale supercomputers are built for computer simulations (climate change, nuclear weapons stewardship, computational drug design, etc.), not for breaking encryption. That's also one reason no one is using them to mine Bitcoins: they're just not efficient at that job. For computing lots of hashes, dedicated hardware designs (read: ASICs) far outpace "general purpose" supercomputers.
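
To make "lots of hashes" concrete: Bitcoin's proof of work is a double SHA-256 over an 80-byte block header. A quick sketch of the inner loop on a general-purpose CPU -- a CPU manages on the order of megahashes per second, while mining ASICs reach terahashes per second:

    import hashlib
    import time

    def double_sha256(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    base = bytes(76)  # placeholder for version, prev hash, merkle root, time, bits
    n = 1_000_000
    start = time.perf_counter()
    for nonce in range(n):
        double_sha256(base + nonce.to_bytes(4, "little"))
    elapsed = time.perf_counter() - start
    print(f"{n / elapsed / 1e6:.2f} Mhash/s on this CPU")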

Comment Why is the hardware so complex/expensive? (Score 2) 217

From what I read, the dongle is merely the interface between the camera (USB) and the smartphone (USB). That should be trivial. (For my setup, a USB OTG cable + mini-USB adapter is sufficient, and there are tons of apps to control cameras.)

The article states that they had to use a beefier microcontroller etc., but I wonder: why not do all the processing on the smartphone? These days our phones have so much processing power AND so many sensors that there should be no need for any non-trivial logic outside the phone, especially when you're just trying to launch your first product.

Comment We're no longer at the origin (Score 1) 181

Architectural improvements for general-purpose CPUs yield fewer and fewer benefits: even more registers? Even better branch prediction? Even larger caches? Each of these buys only a few percent, at least for current Intel designs. So the current way forward is more and more cores -- but what good are many cores if they can't all fire simultaneously?
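
A back-of-the-envelope sketch of that "dark silicon" problem; the power budget and per-core draw below are assumed figures, purely for illustration:

    # Assumed figures -- purely illustrative.
    total_cores = 64
    power_budget_w = 150        # package power budget (TDP)
    watts_per_active_core = 4   # draw of one core running full tilt

    active_cores = min(total_cores, power_budget_w // watts_per_active_core)
    print(f"cores that can run simultaneously: {active_cores} of {total_cores}")
    print(f"dark silicon fraction: {1 - active_cores / total_cores:.0%}")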
