Journal: Beowulf!

Woohaa! I got myself a Beowulf cluster: 20 nodes of 2 GHz P4s running Open MPI. Cool. So what do I do with it now that I can call myself a Beowulf administrator? No idea. I don't even know if what I have can actually be called a Beowulf. I thought that since this is so popular amongst geeks, there would be a plethora of documentation available, but almost all links point to the original Beowulf documentation, which is just a description of how things should look. No tutorials or practical examples. After a long search I came to the conclusion that pretty much any cluster can be called a Beowulf if it uses some form of MPI (Message Passing Interface).

Now that I got this running, I compiled a test program that comes with the Open MPI package, pi.c: an extremely simple program that calculates pi using several nodes/CPUs. Except that the more nodes I specify, the longer the calculation takes. Uhm, I don't know much (if anything) about HPC, but I don't think that's supposed to happen.

Today I have more time to play with my cluster. I can either set up Hadoop on it now, since the whole idea is to make a PoC on clustered file systems, or I can try to see if I can compile something useful using MPI. Decisions, decisions...
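For reference, a pi-over-MPI program usually looks something like the sketch below: each rank integrates its own slice of the interval and the partial results are combined with MPI_Reduce. This is a minimal sketch, not the actual pi.c that ships with Open MPI, and the interval count is an arbitrary number I picked. If the per-rank work is this small, the broadcast/reduce overhead over the network can easily dominate, which would be one explanation for the run time going up as nodes are added.

/* pi_sketch.c -- minimal MPI pi estimate (midpoint rule on 4/(1+x^2)).
 * Build: mpicc pi_sketch.c -o pi_sketch
 * Run:   mpirun -np 20 ./pi_sketch
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    long n = 100000000;            /* number of intervals (arbitrary) */
    double h, local_sum = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Rank 0 decides the problem size and tells everyone else. */
    MPI_Bcast(&n, 1, MPI_LONG, 0, MPI_COMM_WORLD);

    h = 1.0 / (double)n;
    /* Each rank handles every size-th interval: rank, rank+size, ... */
    for (long i = rank; i < n; i += size) {
        double x = h * ((double)i + 0.5);
        local_sum += 4.0 / (1.0 + x * x);
    }
    local_sum *= h;

    /* Combine the partial sums on rank 0. */
    MPI_Reduce(&local_sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}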

Journal: Clustered file systems

We're currently experimenting with clustered file systems. In the running to become a product we sell are GlusterFS and Hadoop: the former because of its flexibility and the ease of setting it all up, the latter for the sheer fact that large companies are already using it and on paper it looks very slick. Hadoop's only drawback is that it requires Java, which in turn means the relatively cheap nodes suddenly become more expensive, since we have to buy more RAM.

GlusterFS should be able to support striping over AFR (automatic file replication) and thus offer a good mix of performance and reliability, but so far I'm unable to produce a working configuration for this. The only setup I have running right now is a 20-node cluster consisting of 10 AFR pairs clustered together with Unify. Even over GigE this performs quite well. I don't see any performance increase from the various optimizers yet; I have tried all but the 'boost' optimizer, so that's still worth a shot. Also, a few days ago 1.3.10 was released, which fixes a hang when pulling an AFR node out of the cluster. I'm not sure whether it also fixes the overwrite bug, as I had to use the latest mainline 2.5 build for that fix.
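For what it's worth, the client-side volume spec I've been aiming for looks roughly like the fragment below: one protocol/client volume per node, AFR pairs stacked on top of those, and a stripe volume over the AFR pairs (only two pairs shown to keep it short). This is just a sketch based on the 1.3-era volume-spec format; the host names, the remote-subvolume name and the block-size value are placeholders, so don't take it as a known-working config. If I had one of those, this entry would be shorter.

# client.vol (sketch) -- stripe over two AFR pairs.
# Host names, volume names and the block-size value are placeholders.

volume node1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.0.0.1          # first node of the first pair
  option remote-subvolume brick        # whatever the server side exports
end-volume

volume node2
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.0.0.2
  option remote-subvolume brick
end-volume

volume node3
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.0.0.3
  option remote-subvolume brick
end-volume

volume node4
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.0.0.4
  option remote-subvolume brick
end-volume

# Each AFR volume mirrors one pair of nodes.
volume afr1
  type cluster/afr
  subvolumes node1 node2
end-volume

volume afr2
  type cluster/afr
  subvolumes node3 node4
end-volume

# Stripe files across the AFR pairs instead of unifying them.
volume stripe0
  type cluster/stripe
  option block-size *:1MB              # stripe unit, per file pattern
  subvolumes afr1 afr2
end-volume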

Hadoop will be tested next week if all goes well. It will just be a matter of taking the 256 MB RAM modules out of 10 nodes and placing them in the other half, so Java should have enough memory. Why one would develop such a system in Java is beyond me, but then again, I'm not a developer, so they must have had good reasons.

Why am I typing all this? I don't know. Go see icanhascheezburger.com if reading this made you feel sad.
