Please create an account to participate in the Slashdot moderation system

 



Forgot your password?
typodupeerror

Comment Re:Have they moved to LLVM/Clang? (Score 1) 26

LLVM/Clang builds the DragonFly world and kernel but does not yet build the boot loader. It can be brought in via dports. So it isn't 100% yet but very close. When it does get to 100% it will become one of our two officially supported compilers. Those are currently gcc-4.7 and gcc-5.2.1.

Wayland support isn't really up to us, but there is wayland support in XOrg that I think works for programs desiring to use that API. Don't quote me on it though.

-Matt

Ok, got it. No quoting.

Comment Re:The answer is 42, er...I mean, encryption. (Score 1) 239

Nice in theory. Not so much in practice. With crypto, the devil's in the details. Here are just a few of the hard problems:

...

"The perfect is the enemy of the good" -- Voltaire.

Yes, those are all hard problems, but at least a widespread partial solution would make mass surveillance at least an order of magnitude more difficult and push TLAs to be more focused in their data gathering.

Also, a partial solution has the chance to be improved into better solutions. This would be a much better situation than what we have now. The fact that we can't solve all those hard problems now should not be an excuse to do nothing.

Comment Re:I no longer think this is an issue (Score 1) 258

You misunderstand how AIs are built.

The AI is designed to improve/maximize its performance measure. An AI will "desire" self-preservation (or any other goal) to the extent that self-preservation is part of its performance measure, directly or indirectly, and to the extent of the AI's capabilities. For example, it doesn't sound too hard for an AI to figure out that if it dies, then it will be difficult to do well on its other goals.

Emotion in us is a large part of how we implement a value system for deciding whether actions are good/bad. Avoid actions that make me feel bad; do actions that make me feel good. For an AI, it's very similar. Avoid actions that decrease its performance measure; do actions that increase its performance measure.

The first big question is implementing a moral performance measure (no biggie, just a 2000+-year old philosophy problem). The second big question is keeping that from being hacked, e.g., by giving the AI erroneous information/beliefs. Judging by current events, we don't do very well at this, so I can't imagine much better success with AIs.

Slashdot Top Deals

"It may be that our role on this planet is not to worship God but to create him." -Arthur C. Clarke

Working...