Comment Re:What is the long term plan? (Score 5, Informative) 41
So long as we use the real physicists' definitions and not something out of Stargate SG1, those parallel universes will always remain undetectable. SF writers tell stories about interacting with other universes; physicists define them in ways that show they can't be interacted with, and so can't be verified.
(emphasis added) Your implication is that physicists have invented parallel universes, adding them to their theories. In actuality, parallel realities are predictions of certain modern theories: they are not axioms, they are results. Max Tegmark explains this nicely in a commentary (here or here). Briefly: if unitary quantum mechanics is right (and all available data suggest that it is), then the other branches of the wavefunction are just as real as the one we experience. Hence, quantum mechanics predicts that these other branches exist.

Now, you can frame a philosophical question about whether entities in a theory 'exist' or whether they are just abstractions. But it's worth noting that there are plenty of theoretical entities that we now accept as being real (atoms, quarks, spacetime, etc.). Moreover, there are many cases in physics where, once we accept a theory as correct, we also accept its predictions about things we can't directly observe. Two examples: to the extent that we accept general relativity as correct, we make predictions about the insides of black holes, even though we can never observe those regions. To the extent that we accept astrophysics and big-bang cosmology, we make predictions about parts of the universe we can never observe (e.g. beyond the cosmic horizon).
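The unitarity argument can be sketched in one line (a standard textbook illustration, not taken from Tegmark's papers). A measurement is just a unitary interaction that entangles the system with the apparatus; linearity alone then produces both branches, with no collapse postulate required:

$$
\hat{U}\left[\left(\alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle\right)\otimes|\mathrm{ready}\rangle\right]
= \alpha\,|{\uparrow}\rangle|\mathrm{up}\rangle + \beta\,|{\downarrow}\rangle|\mathrm{down}\rangle
$$

Both terms on the right are present in the post-measurement state; declaring one of them "the" outcome is precisely the extra collapse axiom.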
An untestable idea isn't part of science.
Indeed. But while we can't directly observe other branches of the wavefunction, we can, through experiments, theory, and modeling, indirectly learn much about them. We can have a lively philosophical debate about the extent to which we are justified in using a theory's predictions to call indirectly-inferred entities 'real' rather than 'abstract only'... but my point is that parallel realities are not alone here. Every measurement we make is an indirect inference: limited data, extrapolated using a model we have some measure of confidence in.
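The "every measurement is an indirect inference" point can be illustrated with a toy example (my own construction, not from the comment above): a coin's bias is never observed directly; only flips are, and a model turns those into a belief about the unobservable parameter.

```python
# Toy illustration: inferring an unobservable parameter (a coin's bias)
# from indirect data (flip outcomes) via Bayes' rule on a grid.
# The bias itself is never observed -- only its consequences are.

def posterior_mean(heads, tails, grid_size=1001):
    """Posterior mean of the bias under a uniform prior."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    # Likelihood of the observed flips for each candidate bias value.
    weights = [p**heads * (1 - p)**tails for p in grid]
    total = sum(weights)
    return sum(p * w for p, w in zip(grid, weights)) / total

# 70 heads in 100 flips: belief concentrates near a bias of 0.7, even
# though "the bias" is a theoretical entity we never see directly.
estimate = posterior_mean(70, 30)
```

The same structure, "theoretical entity constrained only through its observable consequences," is what every physical measurement does; the wavefunction's other branches are just a more dramatic instance.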
Occam's Razor
...
Occam's Razor is frequently invoked but is not always as useful as people make it out to be. If you have a theory X and a theory X+Y that both describe the data equally well, then X is better via Occam's Razor. But if you're comparing theories X+Y and X+Z, it's not clear which is "simpler". You're begging the question if you say "Clearly X+Y is simpler than X+Z! Just look at how crazy Z is!" More specifically: unitary quantum mechanics is arguably simpler than quantum mechanics + collapse. The latter involves adding an ad-hoc, unmeasured, non-linear process that has never actually been observed. The former is simpler at least in description (it's just QM without the extra axiom), but as a consequence predicts many parallel branches (it's actually not an infinite number of branches: for a finite volume like our observable universe, the number of possible quantum states is large but finite). Whether an ad-hoc axiom or a parallel-branch prediction is 'simpler' is debatable.
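The "large but finite" aside can be made concrete with a back-of-the-envelope estimate (my own illustration, using the holographic horizon-entropy bound and rough assumed values for the horizon radius):

```python
import math

# Rough check that a finite volume like the observable universe supports
# only a *finite* (if enormous) number of quantum states, via the
# horizon entropy bound S = A / (4 * l_p^2), in nats.

HUBBLE_RADIUS_M = 1.4e26   # ~ radius of the observable horizon (assumed)
PLANCK_LENGTH_M = 1.616e-35

area = 4 * math.pi * HUBBLE_RADIUS_M**2          # horizon area, m^2
entropy_nats = area / (4 * PLANCK_LENGTH_M**2)   # holographic bound

# Number of distinguishable states ~ exp(S): astronomically large,
# but finite -- the point being made in the parenthetical above.
print(f"S ~ 10^{math.log10(entropy_nats):.0f} nats")
```

The familiar ~10^122 figure falls out of this; the exact number doesn't matter for the argument, only that it is finite.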
Just about any other idea looks preferable to an idea that postulates an infinite number of unverifiable consequents.
Again, the parallel branches are not a postulate, but a prediction. They are a prediction that bothers many people. Yet attempts to find inconsistencies in unitary quantum mechanics have so far failed. Attempts to observe the wavefunction collapse process have also failed (there appears to be no limit to the size of the quantum superposition that can be generated). So the scientific conclusion is to accept the predictions of quantum mechanics (including parallel branches), unless we get some data that contradicts it. Or, at the very least, not to dismiss these predictions entirely without empirical evidence against them or against unitary quantum mechanics itself.
We have combined ultrasensitive magnetic resonance force microscopy (MRFM) with 3D image reconstruction to achieve magnetic resonance imaging (MRI) with resolution <10 nm. The image reconstruction converts measured magnetic force data into a 3D map of nuclear spin density, taking advantage of the unique characteristics of the 'resonant slice' that is projected outward from a nanoscale magnetic tip. The basic principles are demonstrated by imaging the 1H spin density within individual tobacco mosaic virus particles sitting on a nanometer-thick layer of adsorbed hydrocarbons. This result, which represents a 100 million-fold improvement in volume resolution over conventional MRI, demonstrates the potential of MRFM as a tool for 3D, elementally selective imaging on the nanometer scale.
I think it's important to emphasize that this is a nanoscale magnetic imaging technique. The summary implies that they created a conventional MRI that has nanoscale resolution, as if they can now image a person's brain and pick out individual cells and molecules. That is not the case! And it is likely never to be possible (given the frequencies of radiation that MRI uses and the diffraction limit that applies to far-field imaging).
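The diffraction-limit point can be made quantitative (rough numbers, my own illustration; the 3 T field strength is an assumed typical clinical value): the RF photons used in proton MRI have meter-scale wavelengths, so the far-field resolution limit sits roughly eight orders of magnitude above the 10 nm scale reported here.

```python
# Why conventional (far-field) MRI can never reach nanometer resolution:
# the Larmor-frequency RF used for 1H imaging has a meter-scale wavelength.

C = 2.998e8          # speed of light, m/s
GAMMA_1H = 42.577e6  # 1H gyromagnetic ratio, Hz per tesla
B0 = 3.0             # assumed typical clinical field strength, tesla

larmor_hz = GAMMA_1H * B0             # ~128 MHz at 3 T
wavelength_m = C / larmor_hz          # ~2.3 m
diffraction_limit_m = wavelength_m / 2  # classical far-field limit

# Gap between the far-field limit and the 10 nm MRFM resolution:
ratio = diffraction_limit_m / 10e-9
```

(Real MRI sidesteps diffraction by encoding position with field gradients rather than by focusing the RF, but gradient strength, relaxation, and signal-to-noise still cap it far above the nanometer scale; MRFM is a near-field force measurement, which is why it escapes this limit entirely.)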
That having been said, this is still a very cool and noteworthy piece of science. Scientists use a variety of nanoscale imaging tools (atomic force microscopes, electron microscopes, etc.), but having the ability to do nanoscale magnetic imaging is amazing. In the article they do a 3D reconstruction of a tobacco mosaic virus. One of the great things about MRI is that it has some amount of chemical selectivity: different magnetic imaging modes can differentiate materials based on their chemical makeup. This nanoscale analog can use similar tricks: instead of just producing images of surface topography or electron density, it could actually determine the chemical makeup within nanostructures. I expect this will become a very powerful technique for nano-imaging over the next decade.
The goal of Computer Science is to build something that will last at least until we've finished building it.