
Comment Re:What is the long term plan? (Score 5, Informative) 41

The main purpose is to slow the spread, so that health care infrastructure can keep up with the demand. The quality of care also improves over time, since health practitioners learn more and more about how best to manage the disease. (In the extreme case, if we can slow the spread enough then some people will get the vaccine before getting the real virus.)

This visualizes the idea in graphical form.
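To make the idea concrete, here is a minimal toy SIR simulation (my own illustration with made-up parameters, not anything from the linked graphic; simulate_sir and all its values are invented) comparing an unmitigated outbreak against one where transmission is roughly halved. The mitigated curve peaks later and far lower, which is the whole point of "flattening the curve":

# Minimal SIR sketch: compare epidemic curves for two transmission rates.
# All parameter values are illustrative only.
def simulate_sir(beta, gamma=0.1, population=1_000_000, initial_infected=10, days=365):
    s, i, r = population - initial_infected, initial_infected, 0
    infected_curve = []
    for _ in range(days):
        new_infections = beta * s * i / population   # new cases this day
        new_recoveries = gamma * i                   # recoveries this day
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infected_curve.append(i)
    return infected_curve

fast = simulate_sir(beta=0.30)   # unmitigated spread
slow = simulate_sir(beta=0.15)   # distancing roughly halves transmission
print("peak simultaneously infected, unmitigated:", round(max(fast)))
print("peak simultaneously infected, mitigated:  ", round(max(slow)))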

Comment Re:Makes one wonder (Score 1) 41

There's a difference between "works" and "works well". I was recently scheduled to teach a 2-day short course; the meeting was cancelled (due to COVID-19), so we switched to giving the lectures through video-conferencing and doing Q&A over a chat channel. It worked okay, but was not nearly as engaging as an in-person meeting. When courses are run well, the back-and-forth between instructor and students helps make the content more relevant and memorable. (E.g. the instructor can read body language and know when a concept needs to be re-explained.)

Overall, there are certainly lessons to be learned in terms of leveraging online education models to improve efficiency. And I'm not defending the dated "professor droning in front of bored students" teaching model, which could indeed be improved in numerous ways (including by leveraging online components). However, there is currently no online experience that replicates the advantages of in-person discussion, and thus a purely online course will not be as effective as a properly run in-person lecture+discussion.

Comment Re:This should be a given.. (Score 3, Informative) 47

The base-pair sequence of DNA determines its biological function. As you say, this sequence determines what kinds of proteins get made, including their exact shape (and more broadly how they behave).

But TFA is talking about the conformation (shape) of the DNA strand itself, not the protein structures that the DNA strand is used to make.

In living organisms, long DNA molecules always form a double helix, irrespective of the base-pair sequence within the DNA. DNA double helices do twist and wrap into larger-scale structures: specifically, by wrapping around histones and then twisting into larger helices that eventually form chromosomes. There are hints that the DNA sequence itself is important in controlling how this twisting/packing happens (with ongoing research into how the (inappropriately named) "junk DNA" plays a crucial role). However, despite this influence of sequence on super-structure, at the lowest level DNA strands are just forming double helices: i.e. two complementary DNA strands pair up to make a really long double helix.

What TFA is talking about is a field called "DNA nanotechnology", where researchers synthesize non-natural DNA sequences. If cleverly designed, these sequences will, when they do their usual base-pairing, form a structure more complex than the traditional really-long double helix. The designed structures do not occur naturally. People have created some remarkably complex structures made entirely of DNA. Again, these are structures made out of DNA (not structures that DNA generates). You can see some examples by searching for "DNA origami": e.g. one famous structure was a nano-sized smiley face; others have made 3D geometric shapes, nano-boxes and bottles, gear-like constructs, and all kinds of other things.

The 'trick' is to violate the assumptions of DNA base-pairing that occur in nature. In living cells, DNA sequences exist as two long complementary strands, which pair up with each other. The idea in DNA nanotechnology is to create an assortment of strands, none of which are perfectly complementary to each other, but where 'sub-regions' of some strands are complementary to 'sub-regions' on other strands. As they start pairing up with each other, this creates cross-connections between all the various strands. The end result (if your design is done correctly) is that the strands spontaneously form a very well-defined 3D structure, with nanoscale precision. The advantage of this "self-assembly" is that you get billions of copies of the intended structure forming spontaneously and rapidly. Very cool stuff.
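For the curious, here is a toy sketch of that sub-region idea (purely illustrative: the strand sequences, domain layout, and helper names are invented, and real DNA-design software is far more sophisticated). Three strands, none globally complementary, still cross-link into a little cycle because individual "domains" find their reverse complements:

# Toy illustration of domain-level complementarity. Strands are written 5'->3'
# and split into short domains; a domain binds another if it is the reverse
# complement of that domain.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def find_pairings(strands):
    # Return every pair of domains (on different strands) that can hybridize.
    items = [(name, idx, dom) for name, doms in strands.items()
             for idx, dom in enumerate(doms)]
    pairs = []
    for x in range(len(items)):
        for y in range(x + 1, len(items)):
            n1, i1, d1 = items[x]
            n2, i2, d2 = items[y]
            if n1 != n2 and d1 == reverse_complement(d2):
                pairs.append((f"{n1}[{i1}]", f"{n2}[{i2}]"))
    return pairs

# Hypothetical design: each strand shares one domain with each neighbor.
strands = {
    "S1": ["ATTGCG", "GGAACC"],
    "S2": ["GGTTCC", "TTTGGG"],
    "S3": ["CCCAAA", "CGCAAT"],
}
for dom_a, dom_b in find_pairings(strands):
    print(dom_a, "binds", dom_b)   # S1-S2, S2-S3, S3-S1 cross-connections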

This kind of thing has been ongoing since 2006 at least. TFA erroneously implies that this most recent publication invented the field. Actually, this most recent publication is some nice work about how the design process can be made more robust (and software-automated). So, it's a fine paper, but certainly not the first demonstration of artificial 3D DNA nano-objects.

Comment Non-deterministic sort (Score 4, Interesting) 195

Human sorting tends to be rather ad-hoc, and this isn't necessarily a bad thing. Yes, if someone is sorting a large number of objects/papers according to a simple criterion, then they are likely implementing a version of some formal sorting algorithm... But one of the interesting things about humans sorting things is that they can, and do, leverage some of their intellect to improve the sort. Examples:
1. Changing sorting algorithm partway through, or using different algorithms on different subsets of the task. E.g. if you are sorting documents in a random order and suddenly notice a run that is already roughly in order, you'll intuitively switch to a different algorithm for that bunch (a rough code sketch of this run-detection idea appears at the end of this comment). In fact, humans very often sub-divide the problem into stacks, sub-sort each stack using a different algorithm, and finally combine the results. This is also relevant because sometimes you actually need to change your sorting target halfway through a sort (when you discover a new category of document/item; or when you realize that a different sorting order will ultimately be more useful for the high-level purpose you're trying to achieve; ...).
2. Pattern matching. Humans are good at discerning patterns, so we may notice that the documents are not really random but have some inherent order (e.g. the stack is roughly temporally ordered, but items within each day are reversed or semi-random). We can exploit this to minimize the sorting effort.
3. Memory. Even though humans can't juggle too many different items in their head at once, we're smart enough that when we encounter an item, we can recall having seen similar items. Our visual memory also allows us to home in on the right part of a semi-sorted stack in order to group like items.

The end result is a sort that is rather non-deterministic, but ultimately successful. It isn't necessarily optimal for the given problem space, but conversely the human's intellect generates lots of shortcuts during the sorting problem. (By which I mean: a machine limited to paper-pushing at human speed, but implementing a single formal algorithm, would take longer to finish the sort... Of course, in reality mechanized/computerized sorting is faster because each machine operation is faster than the human equivalent.)
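Here is the rough sketch promised in point 1 (my own toy example; the function names are invented, and real adaptive sorts such as Timsort are considerably more refined). It first scans for runs that are already in order and then merges them, instead of treating every element as unsorted:

# Toy "run-aware" sort: detect maximal already-sorted runs, then merge them.
def find_runs(items):
    runs, start = [], 0
    for i in range(1, len(items) + 1):
        # A run ends at the end of the list or where order breaks.
        if i == len(items) or items[i] < items[i - 1]:
            runs.append(items[start:i])
            start = i
    return runs

def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def run_aware_sort(items):
    runs = find_runs(list(items))
    while len(runs) > 1:
        # Merge adjacent runs pairwise until one sorted run remains.
        runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0] if runs else []

print(run_aware_sort([3, 1, 2, 4, 5, 6, 0, 9, 7, 8]))  # [0, 1, 2, ..., 9]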

Comment Re:Just another step closer... (Score 1) 205

You make good points. However, I think you're somewhat mischaracterizing the modern theories that include parallel universes.

So long as we use the real physicists' definitions and not something out of Stargate SG-1, those parallels will always remain undetectable. SF writers tell stories about interacting with other universes - physicists define them in ways that show they can't be interacted with to be verified.

(emphasis added) Your implication is that physicists have invented parallel universes and added them to their theories. In actuality, parallel realities are predictions of certain modern theories. They are not axioms; they are results. Max Tegmark explains this nicely in a commentary (here or here). Briefly: if unitary quantum mechanics is right (and all available data suggests that it is), then this implies that the other branches of the wavefunction are just as real as the one we experience. Hence, quantum mechanics predicts that these other branches exist.

Now, you can frame a philosophical question about whether entities in a theory 'exist' or whether they are just abstractions. But it's worth noting that there are plenty of theoretical entities that we now accept as being real (atoms, quarks, spacetime, etc.). Moreover, there are many cases in physics where, once we accept a theory as being right, we accept its predictions about things we can't directly observe. Two examples: to the extent that we accept general relativity as correct, we make predictions about the insides of black holes, even though we can't ever observe those regions; and to the extent that we accept astrophysics and big-bang models, we make predictions about parts of the universe we cannot ever observe (e.g. beyond the cosmic horizon).
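For concreteness, here is the textbook form of that argument (a standard illustration, not something from the parent post; the alpha and beta are the usual superposition amplitudes). A measuring device that starts out "ready" and interacts unitarily with a qubit in superposition ends up entangled with it, so both outcomes persist in the final state:

\[
\bigl(\alpha\,|0\rangle + \beta\,|1\rangle\bigr)\,|\mathrm{ready}\rangle
\;\xrightarrow{\;\text{unitary }U\;}\;
\alpha\,|0\rangle\,|\mathrm{saw\ 0}\rangle \;+\; \beta\,|1\rangle\,|\mathrm{saw\ 1}\rangle
\]

Unitary evolution alone never deletes either term; only the extra "collapse" postulate does that, which is precisely the ad-hoc ingredient discussed below.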

An untestable idea isn't part of science.

Indeed. But while we can't directly observe other branches of the wavefunction, we can, through experiments, theory, and modeling, indirectly learn much about them. We can have a lively philosophical debate about to what extent we are justified in using predictions of theories to say indirect things are 'real' vs. 'abstract only'... but my point is that parallel realities are not alone here. Every measurement we make is an indirect inference based on limited data, extrapolated using a model we have some measure of confidence in.

Occam's Razor ...

Occam's Razor is frequently invoked but is not always as useful as people make it out to be. If you have a theory X and a theory X+Y that both describe the data equally well, then X is better via Occam's Razor. But if you're comparing theories X+Y and X+Z, it's not clear which is "simpler". You're begging the question if you say "Clearly X+Y is simpler than X+Z! Just look at how crazy Z is!" More specifically: unitary quantum mechanics is arguably simpler than quantum mechanics + collapse. The latter adds an ad-hoc, unmeasured, non-linear process that has never actually been observed. The former is simpler at least in description (it's just QM without the extra axiom), but as a consequence it predicts many parallel branches (actually not an infinite number of branches: for a finite volume like our observable universe, the number of possible quantum states is large but finite). Whether an ad-hoc axiom or a parallel-branch prediction is 'simpler' is debatable.

Just about any other idea looks preferable to an idea that postulates an infinite number of unverifiable consequents.

Again, the parallel branches are not a postulate but a prediction. It is a prediction that bothers many people. Yet attempts to find inconsistencies in unitary quantum mechanics have so far failed. Attempts to observe the wavefunction collapse process have also failed (there appears to be no limit to the size of the quantum superposition that can be generated). So the scientific conclusion is to accept the predictions of quantum mechanics (including parallel branches) unless we get some data that contradicts them. Or, at the very least, not to dismiss these predictions entirely unless you have empirical evidence against either them or unitary quantum mechanics itself.

Comment Re:Can't have it both ways (Score 1) 330

I disagree. Yes, there are tensions between openness/hackability/configurability/variability and stability/manageability/simplicity. However, the existence of certain tradeoffs doesn't mean that Apple couldn't make a more open product in some ways without hampering their much-vaunted quality.

One way to think about this question is to analyze whether a given open/non-open decision is motivated by quality or by money. A great many of the design decisions being made are not in pursuit of a perfect product, but are part of a business strategy (lock-in, planned obsolescence, upselling of other products, DRM, etc.). I'm not just talking about Apple; this is true very generally. Examples:
- Having a single set of hardware to support does indeed make software less bloated and more reliable. That's fair. Preventing users from installing new hardware (at their own risk) would not be fair.
- Similarly, having a restricted set of software that will be officially supported is fine. Preventing any 'unauthorized' software from running on a device a user has purchased is not okay. The solution is to simply provide a checkbox that says "Allow 3rd party sources (I understand this comes with risks)" which is what Android does but iOS does not.
- Removing seldom-used and complex configuration options from a product is a good way to make it simpler and more user-friendly. But you can easily promote openness without making the product worse by leaving configuration options available but less obvious (e.g. accessed via commandline flags or a text config file).
- Building a product in a non-user-servicable way (no screws, only adhesives, etc.) might be necessary if you're trying to make a product extremely thin and slick.
- Conversely, using non-standard screws, or using adhesives/etc. where screws would have been just as good, is merely a way to extract money from customers (forcing them to pay for servicing or buy new devices rather than fix old hardware).
- Using bizarre, non-standard, and obfuscated file formats or directory/data-structures can in some cases be necessary in order to achieve a goal (e.g. performance). However, in most cases it's actually used to lock in the user (prevent the user from directly accessing data, prevent third-party tools from working). E.g. the way that iPods appear to store music files and metadata is extremely complex, at least last time I checked (all files are renamed, so you can't simply copy files to and from the device). The correct solution is to use open formats. In cases where you absolutely can't use an established standard, the right thing to do is to release all your internal docs so that others can easily build upon or extend it.

To summarize: yes, there are cases where making a product more 'open' will decrease its quality in other ways. But there are also many cases where you can leave the option for openness/interoperability without affecting the as-sold quality of the product. (Worries about 'users breaking their devices and thus harming our image' do not persuade; the user owns the device, and ultimately we're talking about experienced users and third-party developers.) So we should at least demand that companies make their products open in all those 'low-hanging-fruit' cases. We can then argue in more detail about fringe cases where there really is an openness/quality tradeoff.

Comment Re:n = 1.000000001 (Score 3, Informative) 65

I'm somewhat more hopeful than you, based on advances in x-ray optics.

For typical x-ray photons (e.g. 10 keV), the refractive index is about 0.99999 (i.e. delta = 1E-5). Even though this is very close to 1, we've figured out how to make practical lenses. For instance, Compound Refractive Lenses (CRLs) use a sequence of refracting interfaces to accumulate the small refractive effect. Capillary optics can be used to confine x-ray beams. A Fresnel lens design can be used to decrease the thickness of the lens, giving you more refractive power per unit length of the total optic. In fact, you can use a Fresnel zone plate design, which focuses the beam via diffraction (another variant is a Laue lens, which focuses via Bragg diffraction; e.g. multilayer Laue lenses are now being used for ultrahigh focusing of x-rays). Clever people have even designed lenses that simultaneously exploit refractive and diffractive focusing (kinoform lenses).
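To give a feel for the numbers (a standard back-of-the-envelope estimate, not from TFA; the R and N values below are my own assumptions): for a stack of N parabolic refractive lenses with apex radius of curvature R and refractive decrement delta, the focal length is approximately

\[
f \approx \frac{R}{2N\delta}
\]

With the delta ~ 1E-5 quoted above, R = 200 µm and N = 10 already gives f ≈ 1 m. Scale delta down to ~1E-7 for gamma rays and the same stack gives f ≈ 100 m, which is roughly why the overall instrument lengths below come out in the tens-to-hundreds of meters unless you add many more lens elements or shrink R.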

All this to say that, with some ingenuity, the rather small refractive index differences available for x-rays have been turned into decent amounts of focusing in x-ray optics. We now have x-ray optics with focal lengths on the order of meters. It's not trivial to do, but it can be done. It sounds like this present work is suggesting that for gamma rays the refractive index differences will be on the order of 1E-7, which is only two orders of magnitude worse than for x-rays. So, with some additional effort and ingenuity, I could see the development of workable gamma-ray optics. I'm not saying it will be easy (we're still talking about tens or hundreds of meters for the overall camera)... but for certain demanding applications it might be worth doing.

Comment High resolution but small volume (Score 5, Informative) 161

The actual scientific paper is:
C. L. Degen, M. Poggio, H. J. Mamin, C. T. Rettner, and D. Rugar, "Nanoscale magnetic resonance imaging," PNAS 2009, doi: 10.1073/pnas.0812068106.

The abstract:

We have combined ultrasensitive magnetic resonance force microscopy (MRFM) with 3D image reconstruction to achieve magnetic resonance imaging (MRI) with resolution <10 nm. The image reconstruction converts measured magnetic force data into a 3D map of nuclear spin density, taking advantage of the unique characteristics of the 'resonant slice' that is projected outward from a nanoscale magnetic tip. The basic principles are demonstrated by imaging the 1H spin density within individual tobacco mosaic virus particles sitting on a nanometer-thick layer of adsorbed hydrocarbons. This result, which represents a 100 million-fold improvement in volume resolution over conventional MRI, demonstrates the potential of MRFM as a tool for 3D, elementally selective imaging on the nanometer scale.

I think it's important to emphasize that this is a nanoscale magnetic imaging technique. The summary implies that they created a conventional MRI that has nanoscale resolution, as if they can now image a person's brain and pick out individual cells and molecules. That is not the case! And it is likely never to be possible (given the frequencies of radiation that MRI uses and the diffraction limit that applies to far-field imaging).
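As a rough illustration of that last point (my numbers, not the commenter's): the proton Larmor frequency in a strong clinical magnet (~3 T) is about 128 MHz, so the RF wavelength is

\[
\lambda = \frac{c}{f} \approx \frac{3\times 10^{8}\ \mathrm{m/s}}{1.28\times 10^{8}\ \mathrm{Hz}} \approx 2.3\ \mathrm{m},
\]

and a far-field diffraction limit of order lambda/2 is therefore around a meter. Conventional MRI gets its millimeter-scale resolution from magnetic field gradients rather than from focusing the RF, so reaching the nanoscale requires a near-field probe like the MRFM tip used here.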

That having been said, this is still a very cool and noteworthy piece of science. Scientists use a variety of nanoscale imaging tools (atomic force microscopes, electron microscopes, etc.), but having the ability to do nanoscale magnetic imaging is amazing. In the article they do a 3D reconstruction of a tobacco mosaic virus. One of the great things about MRI is that it has some amount of chemical selectivity: there are different magnetic imaging modes that can differentiate based on composition. This nanoscale analog can use similar tricks: instead of just getting images of surface topography or electron density, it could actually determine the chemical makeup within nanostructures. I expect this will become a very powerful technique for nano-imaging over the next decade.

The Courts

Submission + - LANCOR v. OLPC Update (groklaw.net)

drewmoney writes: According to an article on Groklaw: It's begun in a Nigerian court. LANCOR has actually done it. Guess what the Nigerian keyboard makers want from the One Laptop Per Child charitable organization trying to make the world a better place?

$20 million in "damages", and an injunction blocking OLPC from distribution in Nigeria.

Privacy

Submission + - Can Blockbuster be sued over Facebook/Beacon? (computerworld.com)

An anonymous reader writes: A professor at the New York Law School is arguing that Blockbuster violated the Video Privacy Protection Act of 1988 (the "Bork law") when movie choices that Facebook members made on its Web site were made available to other members of the social network via Beacon. The law basically prohibits video rental outfits from disclosing the rental choices of their customers to anyone else without specific written consent. Facebook's legal liability in all of this is unclear, though with Blockbuster it's a straightforward case of not complying with the VPPA, the law professor says. http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9053002&intsrc=hm_list

NASA

Submission + - NASA to scientists: Reveal sex history or lose job 1

Markmarkmark writes: "Wired is reporting that all NASA JPL scientists must 'voluntarily' (or be fired) sign a document giving the government the right to investigate their personal lives and history 'without limit'. According to the Union of Concerned Scientists this includes snooping into sexual orientation, mental & physical health as well as credit history and 'personality conflict'. 28 senior NASA scientists and engineers, including Mars Rover team members, refused to sign by the deadline and are now subject to being fired despite a decade or more of exemplary service. None of them even work on anything classified or defense related. They are suing the government and documenting their fight for their jobs and right to personal privacy."

Space

Submission + - Earth's Evil Twin (esa.int)

Riding with Robots writes: "For the past two years, Europe's Venus Express orbiter has been studying Earth's planetary neighbor up close. Today, mission scientists have released a new collection of findings and amazing images. They include evidence of lightning and other results that flesh out a portrait of a planet that is in many ways like ours, and in many ways hellishly different, such as surface temperatures over 400C and air pressure a hundred times that on Earth."

Robotics

Submission + - Robots assimilate in cockroach society (nytimes.com)

sufijazz writes: "Scientists have gotten tiny robots to not only integrate into cockroach society but also control it. This experiment in bug peer pressure combined entomology, robotics and the study of ways that complex and even intelligent patterns can arise from simple behavior. Animal behavior research shows that swarms working together can prosper where individuals might fail, and robotics researchers have been experimenting with simple robots that, together, act a little like a swarm.

The BBC also has a video story on this."

Microsoft

Submission + - EU thinktank urges full Windows unbundling

leffeman writes: An influential Brussels think tank is urging the European Commission to ban the bundling of operating systems with desktop and laptop computers. The Globalisation Institute's submission to the Commission says that bundling 'is not in the public interest' and that the dominance of Windows has 'slowed technical improvements and prevented new alternatives from entering the marketplace.' It says the Microsoft tax is a burden on EU businesses: the price of operating systems would be lower in a competitive market. This is the first time a major free-market think tank has published in favour of taking action against Microsoft's monopoly power.
