I'll preface this by saying that I'm an HPC admin for a major national lab, and I've also contributed to and been part of numerous HPC-related software development projects. I've even created and managed a distribution a time or two.
There are two important questions that should determine what you run. The first is: What software applications/programs do you expect the cluster to run? While some software is written to be portable across platforms and distributions, scientists tend to want to focus more on science than on code portability, so not all code works on all distributions or OS flavors. Small clusters like yours often focus on a few particular pieces of scientific code. If that's the case for you, figure out what the scientists who wrote it use, and lean strongly toward using that.
The second question is: Who will run it? Many small, one-off clusters are run by grad students and postdocs who work for their respective PI(s) for some number of years and then leave. In this scenario, it's important to make sure things are as well-documented and industry-standard as possible to ease the transition from one set of student admins to the next. (And yes, PI-owned clusters have a surprisingly long lifespan: usually no less than 5 years, often longer.) To that end, I strongly recommend Red Hat or Scientific Linux.
We, and most large-scale computational systems groups, use one of two things: RHEL and derivatives, or vendor-provided OSes (e.g., AIX, Cray). We run CentOS but are moving away from it ASAP. The Tri-Labs (Livermore, Sandia, and Los Alamos) use TOSS, which is based on CHAOS (https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fcomputing.llnl.gov%2Flinux%2Fprojects.html), which is in turn based on RHEL. Many other sites use Scientific Linux or CentOS. Older versions of Scientific Linux deviated more from upstream, which led sites like ours to use CentOS instead. That's no longer true with SL6, and since CentOS 6 doesn't even exist yet (while RHEL 6.1 is already out!), there are strong incentives to move to SL6.
Let me address some other points while I'm at it:
Why RHEL? If you can run RHEL itself, do so. RHEL isn't built with the same compilers it ships with; the binaries are highly optimized. Back when we were working on Caos Linux, we did some benchmarks that showed RHEL (and Caos, FWIW) to be as much as twice as fast as CentOS running the exact same code. So if performance is a consideration, and you can afford a few licenses, it's definitely worth considering. The support can be handy as well, particularly if this is a student-run cluster.
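If you want a feel for how much the build toolchain alone can matter, here's a minimal sketch (the file names and the toy loop are my own illustration, not anything from an actual distro build system): build the exact same source with and without compiler optimization, then time the two binaries.

```shell
# Hypothetical illustration: identical C source, built at different
# optimization levels, produces the same answer but very different
# runtimes. Distro builds differ in exactly this way (compiler
# version and flags), which is where the RHEL-vs-rebuild gap comes from.
cat > sum.c <<'EOF'
#include <stdio.h>
int main(void) {
    volatile double s = 0.0;            /* volatile keeps the loop honest */
    for (long i = 0; i < 100000000L; i++)
        s += (double)i * 0.5;
    printf("%.1f\n", (double)s);
    return 0;
}
EOF

gcc -O0 -o sum_O0 sum.c   # unoptimized build
gcc -O2 -o sum_O2 sum.c   # optimized build

./sum_O0 > out_O0.txt
./sum_O2 > out_O2.txt
cmp out_O0.txt out_O2.txt && echo "same answer"  # identical results...
time ./sum_O0                                    # ...but compare the runtimes
time ./sum_O2
```

The point isn't this particular loop, of course; it's that the same scientific code, byte for byte, can run measurably faster or slower depending on how the platform underneath it was built.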
Why Scientific Linux? If you need a free alternative to RHEL or are running at a scale that makes RHEL licensing prohibitive, SL is the way to go, without a doubt. It's maintained professionally by a team at Fermilab whose full-time job is to do exactly that. They know their stuff, and they're paid for it by the DOE. Other rebuild projects suffer from staffing problems, personality problems, and lack-of-time problems that SL simply doesn't have.
Why not Fedora? Stability and reliability are critically important. Fedora is essentially a continuous beta of RHEL. It lacks both the life-cycle and life-span of a long-term, production-quality product.
Why not Gentoo? Pretty much the same answer. The target audience for Gentoo is not the enterprise/production server customer. Source-based distributions do not provide the consistency or reproducibility required for a scale-out computational platform. You'll also have a hard time getting scientific code targeted at Gentoo or other second-tier distributions.
Why not Ubuntu or Debian? Ubuntu is a desktop platform, not a server platform. Again, it boils down to their target market. There's really no value-add in the server space with Ubuntu, so why not just run Debian? If Debian's what your admins know best, it's worth considering, but keep in mind that very, very few computational resources run Debian, so you may have to do a lot more fending for yourself if you go that route.
Why not SLES? Mostly a personal choice, but with its uncertain future, I'd be hard-pressed to say it's a safe option. If you have a support contract from, e.g., IBM, that's different. But judging by your cluster size, I'm going to wager that's not the case. :-)
Why not ROCKS? Anyone who runs large systems will tell you that stateful provisioning is antiquated at best, largely because it simply doesn't scale well. ROCKS is firmly locked into the stateful model, and rather than rethinking that design, its developers are trying to find ways to make it faster. You can only say, "It's just a flesh wound!" so many times before the King is going to call it a draw and gallop on by you. ;-) Everyone I've talked to who does large-scale systems (including executives at Adaptive Computing, IBM, and others) agrees that stateless is the future. Right now, the best choice for cluster provisioning by far is xCAT. There is work being done on the next-generation solution, but for now, xCAT is a well-established, proven product. (We still use PERCEUS here, and it's solid and easy to use, but no new features or releases have appeared in quite a while....)
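To give a feel for the stateless model, here's a rough sketch of an xCAT-style netboot sequence. The osimage name and node group are placeholders, and the exact commands and flags vary by xCAT version, so treat this as the general shape rather than a recipe: you build one root image, then point all your diskless nodes at it.

```shell
# Hypothetical xCAT stateless (netboot) provisioning sketch; the image
# and group names are placeholders. Every node boots a RAM-rooted image
# over the network -- no per-node OS install to drift or re-image.
genimage rhels6.1-x86_64-netboot-compute    # build the diskless root image
packimage rhels6.1-x86_64-netboot-compute   # pack it for network boot
nodeset compute osimage=rhels6.1-x86_64-netboot-compute   # assign it to the "compute" group
rpower compute boot                         # (re)boot the nodes into it
```

Contrast that with the stateful model, where every node carries its own installed OS that has to be laid down, patched, and kept in sync individually; that's the part that stops scaling.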
As for the question about user-friendliness, it depends on the people for whom you wish it to be friendly. If you want friendliness for the admin, what I've seen of Bright Cluster Manager looks promising. (I don't know if Scyld still uses BProc, but what I know about it has thoroughly convinced me never to touch the stuff.) IBM also has its Management Suite for Cloud that looked quite friendly at SC10.
For the users, there are a number of portal options you could try, including one from Adaptive Computing (makers of Moab) that greatly simplifies job submission. But the truth is, it's just Not That Hard(tm) to write up a template qsub script and hand it off to your users. Your time is better spent making sure the resource is managed efficiently and competently, and that performance and stability are maximized. That's what will get the most science done in the least amount of time...and isn't that really the point? :-)
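On that note, a minimal Torque/PBS template along those lines might look like the following. This is a generic sketch, not tied to any particular site; the job name, resource requests, and program invocation are placeholders for your users to edit:

```shell
#!/bin/bash
# template.qsub -- hypothetical starter script to hand to users; the
# values below (nodes, ppn, walltime, program name) are placeholders.
#PBS -N my_job              # job name
#PBS -l nodes=2:ppn=8       # 2 nodes, 8 cores per node
#PBS -l walltime=04:00:00   # maximum run time
#PBS -j oe                  # merge stdout and stderr into one file

cd "$PBS_O_WORKDIR"          # jobs start in $HOME by default; go back to where qsub ran
mpirun ./my_program input.dat
```

Users copy it, tweak the #PBS lines and the program invocation, and submit with `qsub template.qsub`. That's usually all the "portal" a small cluster needs.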