Comment I recently switched to KDE as well (Score 1) 289

They recently forced us at work to upgrade from Ubuntu 10.04 to 12.04. We have our choice of desktop environments (Gnome 3, KDE 4, XFCE 4, Cinnamon, Unity).

I spend my day in a combination of Chrome, the terminal, and Eclipse.

I have determined that KDE is the least bad of all of these alternatives. There are actually things that I like about it, too.

Like:
- Konsole has a lot of nice features, such as activity notification on tabs in the background
- The volume buttons on my headset actually work (this was not the case for XFCE)
- Bluetooth actually works
- I can have a traditional taskbar
- You can turn off the more flashy desktop effects
- Built-in dark color theme for the Oxygen theme (large areas of white on my 30" monitor are distracting)
- I can apply the dark Qt/KDE theme to GTK+ applications, including Eclipse

Dislike:
- General lack of polish. This is my #1 complaint about KDE, and it's everywhere. The text on my window list buttons is too low, and on the clock it's too high. The "AM" or "PM" on the clock is cut off. Text on buttons has virtually zero top/bottom padding, which looks bad. UI elements are inconsistently aligned. UI strings are often awkwardly phrased.
- Verbosity. I don't need to be notified every time I plug in a USB device, every time the power state of my machine changes, every time the network status changes, every time a file operation completes, every time a daemon crashes, or every time the desktop indexer is done. You can disable pretty much all of these notifications, but to some degree it's like playing whac-a-mole.
- Crashiness. Sometimes, daemons decide to crash randomly. Occasionally, the compositor goes crazy and locks up the entire desktop.
- Insane defaults. Preferences are nice, but they need to be set to reasonable values by default. For example, there are *way* too many global key bindings by default, the eye candy is set to an annoyingly high level by default, single-click select in file dialogs contradicts every other desktop, the default panel is huge, and a whole ton of other things.
- No good system monitor widget. GNOME 2.x had an awesome panel widget that would display CPU, network, and memory; it even displayed I/O wait CPU time in a different color, which was awesome.
- The cashew. It makes no sense, and you can't get rid of it.

If I could have GNOME 2.x back, I would. But KDE 4.x is the best of the current bunch.

Comment Re:What the hell is Wayland? (Score 2) 319

X works very well over a LAN, and, as bandwidth becomes cheaper, problems running over a WAN will go away.

No, it won't. The problem with X over a WAN is latency, and no amount of technology is going to change the fact that light can only go so fast.

The company I work for has a *very* fast WAN between offices, and X over the WAN is still a dog. The problem is that X is to a large degree synchronous, and operations involve multiple round trips. So no matter how much bandwidth you have, you get killed by latency.

The solution is either a framebuffer-based protocol (VNC and friends) or an asynchronous, compressing X protocol (NX). Neither really takes advantage of the network transparency of X.
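A back-of-envelope calculation makes the point. The numbers below are illustrative assumptions (round-trip counts and link speeds are hypothetical, not measurements), but they show why a chatty synchronous protocol is bounded by latency, not bandwidth:

```python
# Illustrative model: total time for a session of synchronous round trips
# plus a bulk data transfer. All numbers are hypothetical.

def session_time_ms(round_trips, rtt_ms, bytes_total, bandwidth_mbps):
    """Time (ms) spent waiting on round trips plus raw transfer time."""
    transfer_ms = bytes_total * 8 / (bandwidth_mbps * 1e6) * 1000
    return round_trips * rtt_ms + transfer_ms

# Starting a typical X client can involve thousands of round trips.
lan = session_time_ms(round_trips=2000, rtt_ms=0.2,
                      bytes_total=1_000_000, bandwidth_mbps=100)
wan = session_time_ms(round_trips=2000, rtt_ms=40.0,
                      bytes_total=1_000_000, bandwidth_mbps=1000)

print(f"LAN: {lan / 1000:.1f} s")  # sub-millisecond RTT: round trips are cheap
print(f"WAN: {wan / 1000:.1f} s")  # 10x the bandwidth, 200x the latency
```

With these assumed numbers the WAN session is over 100x slower despite having ten times the bandwidth, because nearly all the time is spent waiting on round trips.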

Comment Re:If $3000 is the societal cost to you not (Score 3, Informative) 2416

Another bullshit hit piece on CFLs.

At a normal rate (10 cents per kWh), 1 penny per month corresponds to 100Wh. A typical incandescent bulb is 60W; a similar CFL is 15W. That saves 45W. So if you replace a single 60W light bulb with a CFL and use it just 3 hours per month, you've already saved more than a penny per month.

I'm guessing you have more than a single 60W light bulb and that you use it for more than 3 hours per month.
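The arithmetic, using the parent's own numbers:

```python
# Check: does a single 60W -> 15W bulb swap beat 1 cent/month
# after only 3 hours of use? (All figures from the text above.)
rate_per_kwh = 0.10           # dollars per kWh
saving_w = 60 - 15            # incandescent minus CFL, in watts
hours_per_month = 3

wh_saved = saving_w * hours_per_month             # 135 Wh
dollars_saved = wh_saved / 1000 * rate_per_kwh    # ~1.35 cents

print(f"{wh_saved} Wh saved -> ${dollars_saved:.4f}/month")
```

135Wh at $0.10/kWh is about 1.35 cents, already past the "1 penny per month" bar after three hours of use of a single bulb.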

I could talk about how actual tests show that CFLs last way longer than incandescent bulbs, or that most CFLs are crushed and recycled in the USA, or that shipping things "25000 miles from China" (it's closer to 7500 miles; no point on Earth is 25000 miles away) is actually not all that energy intensive.

But I don't think your rant is based on facts. It's based on a need to be contrarian, to be seen as anything but "green", and to oppose environmental regulations.

We can have a legitimate discussion about whether the government has the right to enact environmental regulations, about whether they are effective, and about whether they are necessary. But if you start with information that is wrong, we can't really discuss anything.

Comment Re:"just put it in Neutral" (Score 1) 911

In our '06 Prius, at moderate/high speeds the car simply won't let you shift from D to N, and I really doubt the computer would pay any attention at all if the driver were to try holding the power button down. But I'll try that out when I get a chance.

This is incorrect. You can shift to neutral from D at any time in the Prius, from any speed.

At anything above a couple MPH, if you push "Park", you end up in neutral. If you try to shift into reverse, you end up in neutral as well.

Also, holding the power button down works fine, at any speed. It even works if the computer gets screwed up and can't detect the speed. You do lose power steering if you do this (you keep power brakes).

Also, the Prius already has a brake override system.

Comment Re:Not seeing (1) (Score 1) 305

Whether the performance of a managed language (most commonly Java or .NET) is inferior to C++ is almost entirely dependent on your workload.

Java running on the Oracle VM (HotSpot) is actually faster than C++ in many ways (object allocation, virtual function/method calls, the built-in data structures). "Modern" C++ code that uses STL and smart pointers tends to waste a lot of time copying memory around and/or tracking reference counts. Of course, the GC in Java also spends a lot of time copying memory around, but it can be done concurrently and for most programs it is quite efficient.

C++ wins big in two main areas: latency and loopy scientific code. The GC in Java makes latency hard to predict, and if you need to meet latency targets a high percentage of the time, C++ is probably the right choice. As for scientific code, Java VMs typically make very poor use of vector instructions and have nowhere near the loop/memory-ordering optimizations you find in something like ICC.

Comment Re:Android ftl? (Score 1) 358

People often get it confused with garbage collection and while the end results are similar, ARC occurs only at compile time so there is no runtime performance hit.

What are you talking about? Reference counting is absolutely a form of garbage collection, and it's not a particularly good one at that. And it most certainly does have a performance impact; indeed, it's significantly slower than tracing GC in most cases.

There are advantages to reference counting. It's simpler, releases memory sooner, and has more predictable performance characteristics than trace-based GC. But it is a fallacy to pretend that it has "no runtime performance hit".
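A minimal sketch makes the runtime cost concrete. This is not how ARC is implemented (ARC inserts the retain/release calls at compile time), but the calls it inserts still execute at runtime, every time a reference is copied or dropped:

```python
# A toy model of reference counting. The retain/release bookkeeping
# is the runtime work that "compile-time ARC" still has to perform;
# the compiler only decides *where* the calls go.
class RefCounted:
    def __init__(self, payload):
        self.payload = payload
        self.refcount = 1          # the creating reference

    def retain(self):
        self.refcount += 1         # runtime work on every reference copy

    def release(self):
        self.refcount -= 1         # runtime work on every reference drop
        if self.refcount == 0:
            self.payload = None    # reclaimed immediately, deterministically

obj = RefCounted("some data")
obj.retain()     # a second reference is handed out
obj.release()    # one reference goes away
obj.release()    # last reference gone: reclaimed right here, right now
```

The immediate, deterministic reclamation in the last line is the real advantage of reference counting; the counter updates on every copy and drop are the runtime cost that tracing GC avoids.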

Comment NEX-5N (Score 3, Informative) 402

Ignore the people telling you to get a DSLR because it has better picture quality.

There are a lot of factors that determine the quality of your images, but the most substantial is sensor size. The sort of DSLRs that you would buy (that is, the ones under $2000) use APS-C sized sensors.

Guess what the Sony NEX-5N (a MILC) uses? An APS-C sensor. And it's arguably the best APS-C sized sensor on the market.

The NEX-5N takes pictures that rival any APS-C DSLR, and it does so for a considerably lower price than many DSLRs.

There are still a lot of good reasons to buy an APS-C DSLR over the NEX-5N:

  • Lenses. The NEX series uses E-mount lenses, and there aren't a lot of choices. This is improving, but we're still talking about the difference between thousands of lenses for EOS (Canon) or F-mount (Nikon) and fewer than 20 for E-mount. You can get adapters for A-mount (Sony DSLR) lenses, but they bulk up the camera. You can also get adapters for virtually any other format (including EOS and F-mount) but you lose auto-aperture and autofocus.
  • Speed. The NEX-5N is not a slow camera by any means, but there are many DSLRs that are faster.
  • Battery life. DSLRs can keep the screen off, plus they generally have larger batteries. The NEX-5N lasts ~350 shots on a charge; expect 2x that from a DSLR.
  • Manual controls. I find the controls on the NEX-5N to be fine, especially since you can customize the buttons and create a custom quick menu. Still, DSLRs typically have more buttons, which means quicker access to settings.
  • Viewfinder. If you want a viewfinder, optical is tough to beat (though the NEX7 has an OLED viewfinder that is excellent).

And there are a lot of good reasons to buy an NEX-5N over an APS-C DSLR:

  • Lens adapters. You can mount basically any 35mm lens on the NEX-5N with an adapter because the flange-back distance is lower than on any other format. This includes Canon and Nikon lenses, classic and modern rangefinder lenses like Leica lenses, and a lot more. Yes, you have to use manual focus and aperture. But it's still a very cool capability.
  • Size. The NEX-5N is way smaller and lighter than any DSLR. Even with the Sony 18-55mm lens it fits in a large pocket or small camera bag, and it's even smaller with the Sony 16mm pancake lens.
  • It doesn't look like a DSLR. This may be a big factor if you don't want to look like a professional photographer (for example, at concerts or while doing covert journalism).
  • Video. The NEX-5N takes 1080p60 video in H.264 at 28Mbps, and 1080p24 at 24Mbps. Most DSLRs in the same price range (and even many that are more expensive) are limited to 1080p24 or 720p60, both of which are inferior if you want to record fast action (like sporting events) or just hate low-frame-rate video.
  • Value. The NEX-5N has better high-ISO performance, better dynamic range, and more resolution than basically any camera under $1000.

I love my NEX-5N. It is not perfect for everyone, or for every purpose. But if you aren't interested in buying a ton of lenses, you don't like using a viewfinder, and you prefer a compact camera without crappy picture quality, the 5N is a really good choice.

Comment Re:Standard is an end-user idea (Score 1) 373

So complaining about Android having a "more standard" connector totally misses the fact that from the standpoint of people buying the phone, the Android connector is simply not as standard.

What the hell are you talking about? Every Android phone that you can buy today uses a USB micro-B connector. Every recent BlackBerry device, every Windows Phone device, every Kindle, every Nook, and a significant fraction of non-smartphone devices use micro-B as well.

My USB wireless headset uses micro-B. My portable hard drive uses micro-B.

To claim that the 30-pin dock connector is more common than micro-B is flat-out wrong. Micro-B is quite literally the *only* connector I have on portable electronics that I own. Being able to bring a *single* cable on a trip that will charge my phone, charge my wireless headset, and connect to my portable hard drive is invaluable. Try doing that with a dock connector.

Comment Re:Tell them the truth... (Score 1) 315

I'm a new software engineer for Google. My job is low-stress, my workload is reasonable, and there are many different options for the advancement of my career, regardless of whether I want to write code on a day-to-day basis long term. The pay and benefits are also good, and I get to travel quite a bit.

One of my friends is also a new software engineer, but he started out at a small company that made medical software for smartphones and the web. He decided that he didn't like the company, and now he's at another startup working on software to help consumers monitor and reduce their energy consumption.

I have another friend at Microsoft, and one at Amazon. They are also paid well, enjoy their jobs, and feel that they have many, many options.

Maybe my peer group is not representative of the software world as a whole. I am well aware that there are crappy software companies out there, but the reality is that you are still much better off going into CS from a versatility and marketability standpoint than most other degrees. Nearly every product or service involves software, and someone has to write it.

Comment Re:The problem is the law (Score 5, Informative) 217

2011 US House reauthorization of the PATRIOT Act (HR 514):

Republicans:
Yea - 210
Nay - 26
No vote - 5

Democrats:
Yea - 67
Nay - 122
No vote - 4

In 2011, 35% of Democrats voted to reauthorize the PATRIOT Act, while 87% of Republicans did. The sooner you stop thinking that there are no differences between the parties, the sooner you can realize that your vote actually does make a difference.
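Those percentages come straight from the tallies above (counting no-votes in each party's total):

```python
# Recomputing the percentages from the HR 514 tallies quoted above.
dem_yea, dem_nay, dem_none = 67, 122, 4
rep_yea, rep_nay, rep_none = 210, 26, 5

dem_pct = dem_yea / (dem_yea + dem_nay + dem_none) * 100   # ~35%
rep_pct = rep_yea / (rep_yea + rep_nay + rep_none) * 100   # ~87%

print(f"Democrats:   {dem_pct:.0f}% yea")
print(f"Republicans: {rep_pct:.0f}% yea")
```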

I know that it's popular to trumpet the 'there is no difference' line on Slashdot. But instead of doing that, why not do some actual research into the positions and voting records of your candidates? Maybe then you will figure out that there *are* real differences and that the reality of a complex representative political system means that you are going to disagree with your representatives on a good number of issues.

Comment Re:Entrenching the Class Divide. (Score 1) 427

I disagree.

My parents are well-off but they are hardly 'rich' in the traditional sense. They most certainly do not have 'connections' to anyone in a powerful position. I went to public school, got good grades, and did well on the ACT. Based on my grades, residence, and ACT scores, admission to my university (University of Colorado) was guaranteed by law.

I have nothing but respect for the CS department at the University of Colorado. All of my professors were excellent - they were both experts in their field and cared deeply about seeing their students succeed. But CU is not particularly well known in CS, nor do we typically rank in the top 30 or so programs on any of the BS 'top school' lists. My chances of getting into a company like Google with a degree from a lesser-known school and no experience would be damn near zero.

In 2007 I got a (paid) internship with Agilent. I ended up designing, implementing, documenting, and deploying a CRM application that is used by 300+ call-center workers to direct sales inquiries. My internship was originally 3 months, but I ended up working part-time for another 8 months during the school year to finish the project.

When Microsoft did interviews at my university, my Agilent experience was part of why I was able to get a pre-screen interview, and it was a major part of why they decided to fly me to Redmond for a day of interviews. Less than a week later I was offered another (paid) internship at Microsoft. I was the only person that year from my university to intern at Microsoft.

In 2009 and 2010, I worked (paid) internships at Google. This year I started there as a full-time employee. There is no doubt that my internship experience was instrumental to me being able to get a job at Google.

In a world without internships, I would just have been another student from a lesser-known CS program. There would be no reason for a company like Google or Microsoft to take a chance on me.

In the technology world I would not recommend accepting an unpaid internship. Google and Microsoft pay their interns extremely well, and Agilent paid me an excellent salary too. There are plenty of companies out there who are willing to treat you as a real employee, give you real work, and pay you. If you are talented, do not take a job for $9 per hour doing unskilled work. It won't look good as work experience, and there are so many better opportunities.

Comment Re:Ha Ha, mine goes to 11 (Score 1) 615

Single point of failure.

Essentially, you will need to carry a copy of your password bank with you AND the application which opens it at all times to function.
This means that if it gets compromised (your memory stick gets stolen/your dropbox account gets compromised/ etc...) an attacker will only need to guess/bruteforce/dictionary attack/social engineer/look over your shoulder one password and gain access to everything in your wallet.

No one is going to do that. Seriously. No attacker that I am worried about is going to go to the trouble and risk of physically stealing my property to get into my accounts.

And, by the way, even if they do get my vault they need to crack my salted, 10 character, lowercase/uppercase/number/symbol password, which has been run through a million iterations of SHA-256 to generate the key for my password vault. Good luck with that one.
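The kind of derivation described above (salted password, ~a million iterations of SHA-256) can be sketched with the standard-library PBKDF2-HMAC-SHA256 construction, which is what most password vaults actually use. The password and parameters here are placeholders, not my real settings:

```python
import hashlib
import os

# Sketch of a salted, iterated key derivation for a password vault.
# PBKDF2-HMAC-SHA256 repeats the hash `iterations` times, so an
# attacker must pay the same stretching cost for every guess.
password = b"correct horse battery staple"   # placeholder, not a real password
salt = os.urandom(16)                        # random per-vault salt
iterations = 1_000_000

key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print(key.hex())   # 256-bit vault key
```

The salt means precomputed tables are useless, and the million iterations multiply the cost of every brute-force guess by a million hash operations.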

I am not immune to attack. But I am a hell of a lot harder to attack than the typical user. That alone means that the chances of me being a target are very low.

Comment Re:It's because hardware has stalled (Score 1) 231

I now sit with a nearly *5 year old* dual core 2.4GHz CPU (overclocked to 3.3GHz mind you) and I can't find even a $1000 CPU that will give me anywhere near a worthwhile performance bump for anything other than super specific parallelizable applications like scientific computations or workstation-style 3D rendering.

You're not looking hard enough.

I have a laptop with a 2.53GHz Core 2 Duo Penryn (which is actually a better architecture than your Conroe or Athlon 64). It's a fine machine.

I also have a desktop with an i7-2600, a $300 CPU.

It's night and day when you push the machine. Even in single-threaded code the i7 is about twice as fast, and in multi-threaded code it's 3x, 4x, or even more in many cases.

Clock for clock, Sandy Bridge chews up the first-generation Core 2 CPUs and spits them out. And then my i7 is clocked higher - much higher.
