Comment Re:Why is a garbage collector even needed? (Score 1) 385

GCs are very useful in a few cases. In particular, if you have a graph data structure that can have cycles and holds no external resources, a GC will save you a lot of work. As soon as you introduce cycles, smart pointers and destructors stop being enough.

Personally, I think OO programs should avoid having cycles in their dependency graphs anyway. If Foo requires Bar to work and Bar requires Foo, which one do you destroy first? Foo might use Bar in its destructor and vice versa. GC'd languages resolve this by saying "we'll destroy them in whatever order we want; don't count on your references being valid." Smart pointers can solve it by declaring one of the two references weak, thereby making it obvious which one gets destroyed first.
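
To make that concrete, here's a minimal sketch (my own, for illustration; the Foo/Bar members are made up) of breaking the cycle with std::shared_ptr and std::weak_ptr:

#include <memory>

// Illustration only: Foo owns Bar, Bar merely observes Foo.
struct Bar; // forward declaration

struct Foo {
    std::shared_ptr<Bar> bar; // owning reference: Foo keeps Bar alive
    ~Foo() { /* may still use bar here, since Foo owns it */ }
};

struct Bar {
    std::weak_ptr<Foo> foo;   // weak back-reference: breaks the cycle
    void poke() {
        if (auto f = foo.lock()) {
            // Foo is still alive here, safe to use *f
        }
    }
};

int main() {
    auto foo = std::make_shared<Foo>();
    foo->bar = std::make_shared<Bar>();
    foo->bar->foo = foo;      // no leak: the back-reference is weak
}   // foo's refcount drops to zero, Foo is destroyed first, then Bar

The weak reference makes the destruction order explicit instead of leaving it to a collector.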

On the other hand, if you're writing a compiler or some other graph-heavy program, having a free-form data structure with automatic GC, where referents don't get destroyed while you hold them (no weak_ptrs), will save you a lot of trouble. The flip side is that you shouldn't let any of those objects hold onto external resources like open files, database connections, etc.

So... GC for PODs and smart pointers for OO. The lack of a generic C++ GC does present a problem, I think.

Comment Re:Does TFA actually explain things? (Score 2) 385

Second, here's the newer, better syntax:

void moveswapstr(string& empty, string& filled)
{ // pseudo code, but you get the idea
  size_t sz = empty.size();
  const char* p = empty.data();  // move filled's resources to empty
  empty.setsize(filled.size());
  empty.setdata(filled.data());  // filled becomes empty
  filled.setsize(sz);
  filled.setdata(p);
}

Regarding the first comment, no, I really don't, unless the point is that this is what the code for "moving" would look like if implemented in older versions of C++.

Yes, that is what the code looks like without move constructors. That has nothing to do with C++11, and yes, it's really ugly.
What follows is the declaration (not the implementation) of a move constructor.

If you’re implementing a class that supports moving, you can declare a move constructor and a move assignment operator like this:
class Movable
{
  Movable(Movable&&);             //move constructor
  Movable&& operator=(Movable&&); //move assignment operator
};

Ok, cool... But where is this used in the "moveswapstr" example? Does this make the "naiveswap" example automagically faster? Or is there some other syntax? It doesn't really say:

Now you can do
Movable foo = bar();
and it'll call the move constructor rather than the copy constructor when constructing foo (assuming bar() returns a Movable by value). The advantage is important: in the case of a string or a vector you no longer copy the contents with the copy constructor just to destroy the original when the rvalue goes out of scope. There is also a way to force a call to the move assignment operator, which you could use to make the naive swap move the data from one object to another rather than copy and destroy it, so you end up with three lines of clean code (I don't recall the syntax offhand, but it's not that different).
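
(For what it's worth, the syntax being alluded to is presumably std::move from <utility>; a minimal sketch of the three-line swap, assuming a type that implements both move operations, would look something like this:)

#include <utility>

// std::move casts each operand to an rvalue reference, so the move
// constructor/assignment is selected instead of the copying ones.
template <typename T>
void moveswap(T& a, T& b)
{
    T tmp = std::move(a); // move-construct tmp from a
    a = std::move(b);     // move-assign b's contents into a
    b = std::move(tmp);   // move-assign tmp's contents into b
}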

The C++11 Standard Library uses move semantics extensively. Many algorithms and containers are now move-optimized.

...right... Still, unless I actually know what this means, it's useless.

Nope. For example, when vector grows its storage it currently copies all the objects from the old to the new storage and then destroys the old. Now it'll automatically notice the move constructors and move the data, leaving old objects empty. You get the performance advantage just by implementing a move constructor.

And since the standard library classes now have move constructors you get this for free for any vector<string> or vector<vector<foo>> or whatever without changing your code.
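
A small sketch (mine, not from the article) of what "for free" means in practice for a vector<string>; nothing about the element type needs to change, since std::string already has move operations:

#include <string>
#include <utility>
#include <vector>

int main()
{
    std::vector<std::string> v;
    std::string line = "a long string whose buffer we'd rather not copy";

    v.push_back(std::move(line)); // the buffer is moved into the vector;
                                  // 'line' is left empty but valid

    for (int i = 0; i < 1000; ++i)
        v.push_back(std::to_string(i)); // pushing temporaries also moves

    // Each time v reallocates to grow, the strings it already holds are
    // moved to the new storage instead of being copied and destroyed.
}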

It looks like there's a lot of good stuff here, and the article is decently organized, but the actual writing leaves me balanced between "Did I miss something?" like the above, and enough confusion that I'm actually confident the author screwed up. For example:

In C++03, you must specify the type of an object when you declare it. Yet in many cases, an object's declaration includes an initializer. C++11 takes advantage of this, letting you declare objects without specifying their types:
auto x=0; //x has type int because 0 is int
auto c='a'; //char
auto d=0.5; //double
auto national_debt=14400000000000LL;//long long

Great! Awesome! Of course, this arguably should've been there to begin with, and the 'auto' in front of these variables is still annoying, coming from dynamically-typed languages. But hey, maybe I can write this:

for (auto i = list.begin(); i != list.end(); ++i) ...

Instead of:

for (std::list<shared_ptr<whatever> >::iterator i = list.begin(); i != list.end(); ++i) ...

Yes, you can. Grab any recent GCC, pass it --std=gnu++0x (I think), and it'll compile.

It's almost like C++ wanted to deliberately discourage abstraction by making it as obnoxious as possible to use constructs like the above. Anyway, that's what I expected the article to say, but instead, it says this:

Instead, you can declare the iterator like this:
void func(const vector<int> &vi)
{
vector<int>::const_iterator ci=vi.begin();
}

...what? Am I missing something, because this doesn't seem to be about type inference at all. Did we switch to another topic without me noticing? Nope, it continues:

C++11 offers a similar mechanism for capturing the type of an object or an expression. The new operator decltype takes an expression and "returns" its type:
const vector<int> vi;
typedef decltype (vi.begin()) CIT;
CIT another_const_iterator;

That's cool. Kind of funny how it needs yet another language construct to do it, or that I've been able to do stuff like this in other languages for ages, but hey, I can't complain... much.

But the article seems full of stuff like this. Either I suck at C++ more than I thought, or this guy should've at least proofread a bit. I mean...

Yeah, that's just more bad writing. He's showing how to use decltype to copy the type of one variable (or expression) to another declaration, or to give a short name to a type you use often.

If the changes in C++11 seem overwhelming, don't be alarmed. Take the time to digest these changes gradually.

It's not the changes that seem overwhelming, as these are mostly taking the good ideas from other languages and backporting them to C++. If I'm alarmed, it's because C++ was already bloated from the first attempt at this -- backporting objects and exceptions to C. But what's actually overwhelming here is trying to read this article, and that's not the language's fault.

I've noticed I tend to emphasize objects less and less the more I work with newer C++ versions. It's not the mess of object pointers you get in Java, but rather collections and smart pointers now. The old bloated stuff gets used less and gets replaced with cleaner constructs, but the old things still compile. Exceptions are still useful, and combined with destructors of stack objects, smart pointers, RAII, and so on, you end up writing exception-safe code without the mess of try/finally blocks (as in Java).
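
A rough sketch of what I mean (a hypothetical process() function with a made-up filename; the point is that cleanup lives in destructors, so there's no finally block even when an exception is thrown):

#include <fstream>
#include <memory>
#include <mutex>
#include <stdexcept>
#include <vector>

std::mutex log_mutex;

// Illustration only: every resource is owned by a stack object.
void process(const std::vector<int>& data)
{
    std::lock_guard<std::mutex> lock(log_mutex);   // unlocked by its destructor
    std::ofstream out("results.txt");              // closed by its destructor
    auto scratch = std::make_shared<std::vector<int>>(data); // freed automatically

    for (int x : *scratch) {
        if (x < 0)
            throw std::runtime_error("negative input"); // unwinding still runs
                                                        // all the destructors
        out << x << '\n';
    }
}   // no try/finally anywhere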

The problem I see for the future is old libraries. This is all nice if you're building a new program, but if you need to interface with code that uses naked pointers, POSIX threads, and its own string class and containers, you won't be able to take advantage of these new things, or you'll end up with a mess just trying to mix all the different coding styles.

Comment Re:Is the gold rush over? (Score 1) 768

But the only way to double your money is if you manage to convince other people that this is worthwhile.

Currently, bitcoins are worth only as much as you can convince other people to pay for them. You can't do much with them except sell them back. It looks like people are trying to force a bubble to form.

Banks and real estate agents told everyone they should buy an expensive house because they could make money by flipping it. That's what's happening now with bitcoin, and that's what your comment suggested: it seems useful only for flipping.

(OK, it's also useful for the early miners to make money from the gullible. See the slashvertisements/spam that don't even try to convince you to flip, just that it's the wave of the future and you should buy some.)

Comment Re:Is the gold rush over? (Score 4, Insightful) 768

Which is why you're seeing all the bitcoin stories spamming news sites lately.

Err, I guess I should put that in the form of a question...

Amir, what do you know about all these stories appearing on news sites lately? Are they the result of early adopters trying to monetize their investment and realizing that the only way they can is to get enough buy-in from popular opinion? Would you consider starting a new bitcoin database and letting mining start from scratch, now that it's relatively well known and the code is stable? Or would that hurt your bottom line too?

Ooops, I guess it's just one question per post. Sorry, I guess this one won't get picked...

Comment Re:X window (Score 1) 264

I've run up against this wall before. There are a few things to consider:
- In a university environment the "compute cluster" is not going to be in a data center far away, but rather in a "lab" (read: office) with 16 8-core machines, so the machines might actually be used locally, either with a monitor for each grad student or a KVM switch for the single student/admin. For newbie admins it's easier to flip the KVM switch and click their way through the admin GUIs.
- In a mixed Win/Linux environment, you're right, all you need is an X server on the Windows side, but the only freely available ones are (as of a year ago or so):
  - An old version of MingW that hangs on the current Ubuntu and Debian desktops.
  - Cygwin/X, which is a pain to set up.
- You can also set up VNC, but the split VNC/Unix passwords, the painful setup to start X sessions on a VNC connection, and (IIRC) having to give everyone a VNC display number were a PITA. Not to mention the performance differences. Then again, given the requirement, they're probably thinking of something like this: bring up the X server with a default session and let people connect remotely, or something.
- Not all newbie admins know how to install the desktop environment (gedit, apps, etc.) without also installing the X server. Even those who do might just disable starting gdm on boot, so that X is still available with a simple startx if you need it. It doesn't take up many resources when it's down, and 1 GB of disk space isn't much nowadays.

Personally, I access servers from either Linux or Cygwin via ssh and X forwarding, but it's kind of hard to get into Windows people's heads that you can use a server remotely without opening a "session" with a remote desktop and start menu. Also, even if the desktop environment isn't installed, I usually install gedit for quick edits (I'm an Emacs guy, but gedit is easier for the rest of the team, and for small one-line edits it's easier even for me).

Though a pet peeve of mine is that there isn't a quick utility to bring up the machine's application menu from the command line; basically, pop up GNOME's or KDE's app/start menu. Why do I have to wade through hundreds of .desktop files to figure out how to run the SAPGui app on my desktop remotely from my laptop? The data is there in a standard format, and there's code for it; it's just not available from the command line.

Also keep in mind these aren't admins accessing the servers. For example, some might want to bring up R's graphical UI and do some work in it. Some of the programs/plugins might make use of the resources on the other machines, but some are just normal desktop programs. If their workstations run Windows, they might end up connecting remotely and bringing up an IDE for developing their calculation/analysis routines anyway, with the machines serving the dual purpose of compute cluster and simple workstations.

The start menu is a good way to see what is and isn't available on each remote machine. Whether the rest of the "session" framework is useful is another story.

Comment Re:Lunchbreaks (Score 2) 475

I agree with both of your points, but I think you're missing the whole point of the article.

Let me give you an example. A few years ago I joined a team where most people stayed around for lunch (though some lived close enough to go have lunch at home, or their spouses worked close enough to go eat with them, etc.). So a lot of the time we'd have 5-10 people from a couple of projects eating lunch together, but with one unbreakable rule: no talking about work. People would actually stop you if you brought up anything work-related.

Now, I'm an introvert, and most of the time I'd just sit there and listen to them talk, but I did talk sometimes. The end result was that we stopped being "just those people I work with," and the whole "I'm here to work, not to make friends" attitude lessened. I don't know if it helped our productivity, but the working environment clearly improved.

We were all annoyed at our boss when meetings ran late or when we were asked to go to stupid company get-togethers, but it was an us-against-them thing rather than a me-against-the-company thing. I don't know if that makes sense to you. For example, sometimes we'd skip the company's event and go to one of our houses and do something by ourselves.

Forced lunch-meetings are horrible, but having the company provide a (discounted) cafeteria, encouraging teams to eat together, etc. does help with interpersonal relationships. And if you intend to work with the same people for many years that's very important.

Comment Re:Did you know (Score 3, Informative) 335

Well, at least they did place it on Tokyo, and the rest are actual nuclear plants, but they missed a few.

An accurate map is on the last page of this report: 16 nuclear plants total, 12 of them active and unaffected. That's 40 nuclear reactors working safely, 8 safe even after the quake, and 6 at Fukushima Daiichi giving them trouble.

Comment Re:Not Good (Score 2, Informative) 335

Actually, they still do, but it's just the highlights, without much explanation:

Japan's Atomic Industrial Forum has better presentations, apparently based on TEPCO data:

World Nuclear News has some explanations of the events, as does the MIT NSE Nuclear Information Hub.

Those are the places I turn to when people start talking about the mainstream media coverage. I just saw a CNN report that started out with clips of people saying that there had been another explosion and a fire at reactor 4. I went "shit" and checked. It turns out those were old clips from a few days ago, when there were explosions and fires.

It looks to me like things are more or less under control. The cores should now be in cold shutdown, putting out nominal heat. Barring another accident (explosion, earthquake, tsunami, a pump propeller breaking up and tearing a hole through a pipe, etc.) they should have things sorted out in a week or two. Not to say it's not a mess: food from Fukushima might need to be thrown out for a week or two while the cesium decays, and there will be rolling blackouts until this stabilizes enough for workers to take a look at the other three nuclear plants and restart them. But still, it won't be anywhere near the disaster the media makes it out to be.

As to the release of these pictures, while information is good and all, after this is all said and done TEPCO will still have to keep these power plants secure, and there are reactors just like these that will have to stay online until new ones are built. I understand Fukushima Daini and others use the same models. Handing high-res pictures of the facility to potential nuclear terrorists sounds like a bad idea, and the people who know what to censor are slightly busy at the moment.

Comment Re:A very sad day (Score 4, Informative) 688

I know that RTFA'ing is not well received around here, but in this case reading the second one would be a good thing.

First of all, the resolution only gives the different countries permission to defend civilians, not to depose Gaddafi.

Considering that the widespread and systematic attacks currently taking place in the Libyan Arab Jamahiriya against the civilian population may amount to crimes against humanity...
Analysis: These first two highlighted sections emphasise that this is all about defending the civilian population in Libya from attacks by its own government. One of the conditions for action set out by Nato countries has been "a demonstrable need" to intervene. ...
1. Demands the immediate establishment of a ceasefire and a complete end to violence and all attacks against, and abuses of, civilians;
Analysis: The overriding stated aim is to halt the fighting and to achieve a ceasefire. It does not explicitly call for the removal of Col Muammar Gaddafi though one can assume that this is what the countries promoting this resolution would like. Many of their leaders have said so quite explicitly.

Also, other countries are barred from putting in occupation forces and so on. Current attacks seem to be aimed at anti-air defenses so their forces can start enforcing a no-fly zone without having their planes shot down.

They'll probably target sites shelling other cities and so on.

Comment Re:What's Wrong with Happy Kids? (Score 3, Insightful) 458

Growing up is about "turning into something you're not". Otherwise you'd stay a child forever.

While the submitter does seem like a troll with his "unremarkable bits of plastic" thing, he does have a point: if everyone is giving them the same thing, then (a) they are all trying to turn the kids into the same thing they are not (e.g., gun-wielding, fire-truck-driving men), and (b) the children haven't had a chance to see if they even like anything else.

It's a risk thing too. You can give them the same thing as everyone else, and they will thank you. Or you can give them a Rubik's Cube, a Lego set, or something else, and it's about even odds that they'll play with it for a day and forget about it, or that they'll start playing with it and you'll hear from their parents months later that they haven't put it down since.

These are children you're talking about. Give them a great big expensive toy and they'll end up playing with the box for hours instead.

Comment My assessment (Score 1) 375

I'm part of a team looking into moving our company to Linux in the long term: some 3000+ workstations with Windows XP, MS Office, Exchange, etc.

Currently we're looking at Ubuntu on the desktop and Debian on the servers. My assessment:
- You need directory services. Fedora Directory Server (389 Directory Server) is hard to install on Debian/Ubuntu and has a lot of trouble with its two-way AD replication. Other people who have worked with OpenLDAP report severe corruption when synchronizing multiple masters across unreliable links. Both are a pain to set up Windows clients for.
- Both Ubuntu 9.10 and Macs can join Active Directory using Likewise Open. Ubuntu 10.04 included it in the main repository, advertised the integration, and completely fucked it up. Most of the bugs are fixed in the PPA, but they haven't bothered to put the fixes in the supported repositories for the last six months, and the same bugs are in 10.10. Upgrading from 9.10 to 10.04 will break your configuration unless you know enough to add the PPAs (or a private repo) beforehand. With the PPAs it works well, but single sign-on doesn't work (it worked in 9.10) and it has problems when working from home.
- Some things aren't implemented. Windows can authenticate against RADIUS (WPA Enterprise, VPN, etc.) with the machine's AD password. Ubuntu + Likewise doesn't have that capability, though it's relatively easy to script yourself. You have to log in and enter a password for the wireless (hard if you need the wireless to log in), or set your password to be used for everyone who uses the computer (bad if you ever change your password).
- Ubuntu has a bunch of embarrassing bugs that prevent me from just handing it to one of my users. The original OOo in 10.04 couldn't even join cells selected with the mouse. It's sad when MS's products have higher quality than yours.

Maybe we'll start over and look at Fedora; I'd like to hear about people's experiences with its quality assurance.

All that aside, the other big points to watch:
- Email is a problem. Web-based solutions preclude you from keeping PSTs locally for personal history/backups (which is very common at my company). If you don't switch to a web-based solution, then Evolution is a mess with its Exchange integration. The old connector only connects through OWA, loses synchronization (it says there are unread emails but won't show or download them, silently stops updating, etc.), and crashes every once in a while. The MAPI connector has some weird issues with character encoding. You can use Thunderbird, but you lose all the GNOME integration, and either you switch Windows users to Thunderbird too or you support two different programs. You could also install a Linux-based email and calendaring server that can sync email, appointments, and everything else with Linux, Windows, Macs, and phones, but that's nontrivial. Just choosing the right combination of solutions is a big project.
- Access and Excel macros have to be rebuilt. A lot of people at our company use them. Every department seems to have a VBA expert building mini-applications and data-analysis spreadsheets connected to our data warehouse that then become business-critical. This is not a problem until you want to switch.
- MS Project. If your people use it and need it, there is just no good replacement. Serena OpenProj is the closest, but it hasn't been updated in two years and has a bunch of bugs. Plus it's missing things like multi-project support and a bunch of other features our users need.
- Custom apps: our intranet won't display well under Firefox, and we have a bunch of custom apps (VB and other languages). The former has to be redone anyway for a new version of IE, but the latter are a lot of work in our case.

A migration project is a big undertaking that probably won't be completely justified by cost alone. On the other hand, I don't agree with just resting on your laurels and mindlessly updating to the latest MS offering. Do look into switching every once in a while; if things look good enough for you, then switch, even if it takes a lot of work. You can also switch bits and pieces. Migrating the back end away from Exchange, and the workstations to webmail or Thunderbird, might lower your costs, simplify your support, and simplify a later full migration. It won't be easy at all, but how else are you going to justify your paycheck? :-)

On the back end, DHCP/DNS are easy. Squid/SquidGuard instead of ISA Server is pretty easy; even the AD integration isn't too hard. There is a JBoss doc somewhere showing how to create a service/machine account in AD that can later be used in a Kerberos keytab to authenticate your users with single sign-on, and SquidGuard can check access rights against their AD accounts if you add Samba to your server. Email is next on our list. Puppet is supposed to be a great tool for reducing your workload, as is Nagios. Postgres is easy to set up, but redoing all your stored procedures and porting your apps can be a big job. MS Analysis Services is harder to replace.

Seriously, study the solutions, decide for yourself. Don't just assume it'll work, and don't just assume it won't.
