Comment Re:Cronies, thugs, and dictators, oh my! (Score 1) 70

It's like they have faith in their government taking their money and using it for the public good rather than allowing them to hoard it for themselves. I think your statement is meant to be an indictment of communism. But all you've demonstrated is that individuals who hoard money are bad and that the people need to be empowered to enforce redistribution of wealth, with force as a deterrent for this type of behavior.

Comment Re:Hours and hours (Score 1) 91

Bear in mind that this is not raytracing. NVidia's backend server is obviously using a path tracing algorithm, based on the videos; the images start out "grainy" and then clear up as they are streamed. Path tracing works like ray tracing with a huge sampling rate, shooting perhaps 30 rays per pixel. Moreover, whereas ray tracers only recurse when a ray strikes a reflective/refractive surface, path tracers always recurse, usually around 5-10 times, for each of the 30 rays per pixel. (The "graininess" occurs because not enough samples have been taken yet; it goes away as more samples accumulate.)
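The per-pixel sampling loop reduces to something like the toy sketch below. This is purely illustrative (the `shade` function just fakes a noisy radiance estimate; a real tracer would intersect the scene and sample a bounce direction), but it shows why averaging more samples makes the grain fade:

```python
import random

def shade(depth, max_depth=5):
    # Toy stand-in for tracing one path segment: returns a noisy
    # radiance estimate. A real path tracer would intersect the scene
    # and sample a new bounce direction here before recursing.
    if depth == max_depth:
        return 0.0
    emitted = random.random()                      # pretend light contribution
    return 0.5 * emitted + 0.5 * shade(depth + 1, max_depth)

def render_pixel(samples_per_pixel=30):
    # Average many independent path samples; the variance (the visible
    # graininess) shrinks as more samples accumulate.
    total = sum(shade(0) for _ in range(samples_per_pixel))
    return total / samples_per_pixel
```

Each streamed frame is effectively `render_pixel` with a growing sample count, which is why early frames look speckled and later ones converge.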

There's also a good chance this is a bidirectional path tracer. There's not enough footage of caustics to tell for sure, but most rendering engines these days use this technique as well. In that case there is an entire other phase consisting of mapping light onto surfaces. This sampling is done before the path tracer actually renders, and is about as computationally intensive.

So a path tracer is around 150x more computationally intensive than a ray tracer; possibly up to 300x for bidirectional path tracers. While "neat, I can make a translucent cube and change its refractive index" is certainly computationally easy enough for a cell phone, the hardware simply isn't appropriate for path tracing algorithms, especially with scenes of any degree of complexity. NVidia seems to be specifically marketing this at the photorealistic rendering market (although I'm not sure how big that is). POV-Ray in its DOS days (a simple raytracer then, although it now supports more advanced rendering features) isn't really in this league.
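The 150x/300x figures are just the arithmetic from the numbers above, which is worth making explicit:

```python
samples_per_pixel = 30      # the sampling rate quoted above
bounces = 5                 # path tracers recurse ~5-10 times per sample

# A classic ray tracer shoots roughly one primary ray per pixel and
# recurses only at reflective/refractive hits; call that 1 unit of work.
ray_tracer_cost = 1
path_tracer_cost = samples_per_pixel * bounces     # 150 rays per pixel
bidirectional_cost = 2 * path_tracer_cost          # light-mapping pass ~doubles it

print(path_tracer_cost // ray_tracer_cost)         # -> 150
print(bidirectional_cost // ray_tracer_cost)       # -> 300
```

At the high end of the bounce range (10), the unidirectional figure alone already hits 300 rays per pixel.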

Comment Once, with a build system... (Score 1) 683

At a previous job of mine, I was working with the SCons build system; it's basically Make, but written in Python. It's actually really nice if you know Python, but also fairly slow. In it, every filesystem object (file or directory) is maintained as a "Node" in a big graph.

Anyway, the project was using an old version of SCons along with lots of legacy code, and with this version, for some reason, adding my build script produced a conflict: a Node in the build system representing a file was initialized twice, once as a directory and once as a file (it was actually a file).

Nowhere in my build script was this file even referenced; it wasn't even a dependency of anything my code generated. After hours of trying to find what was causing the conflict, I eventually figured out I could call File("theFile") to (sort of) "cast" the Node as a File in the build system, and it would work. To this day, I have no idea how that's implemented or why it worked. :)
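For anyone who hasn't hit this: a toy model of how a node cache could produce that "initialized twice" conflict, and why pinning the type up front sidesteps it. This is an assumption about the internals, not SCons' actual code:

```python
class Node:
    def __init__(self, path, kind):
        self.path, self.kind = path, kind

class NodeCache:
    # Toy model: the first lookup fixes a path's type; a later lookup
    # with a conflicting type raises, mimicking the "initialized twice,
    # once as a directory, once as a file" failure.
    def __init__(self):
        self.nodes = {}

    def lookup(self, path, kind):
        node = self.nodes.setdefault(path, Node(path, kind))
        if node.kind != kind:
            raise TypeError(f"{path} is already a {node.kind} Node")
        return node

cache = NodeCache()
cache.lookup("theFile", "File")    # the File("theFile") "cast" runs first...
cache.lookup("theFile", "File")    # ...so later File lookups agree and succeed
```

If something else had looked the path up as a directory first, the second lookup would blow up, which matches the behavior I saw.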

Comment Re:Wha...? (Score 1) 251

After doing some fishing around in the OS X version, I've noticed the main problem here:

[~ : jlatane]% ps -x | grep Chrome
571 ?? 0:00.90 /Applications/Google Chrome.app/Contents/MacOS/Google Chrome -psn_0_315469
573 ?? 0:01.09 /Applications/Google Chrome.app/Contents/MacOS/Google Chrome --lang=en --type=renderer --channel=571.1a638f0.1327077787
589 ttys000 0:00.00 grep Chrome

The renderer is the same binary as the main process, but with some different flags used. I don't quite see why they're doing it this way, as having a separate image for the renderers would be much more efficient. In fact, the only reason not to use a separate image is so that they can just fork() rather than fork()/exec(), but the fact that the command line arguments are different for each process indicates that's not happening anyway. They could definitely reduce the time to create tabs even further, as the image size of a simple renderer would be much smaller than that of the full application. Also, they wouldn't have to link the renderers against Frameworks that expect UI events (although, depending on the layout of their code, this could potentially be resolved with lazy linking). Speaking of which, I think you meant "Carbon" when you said "Cocoa":

[~ : jlatane]% otool -L /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome | grep Cocoa
[~ : jlatane]% otool -L /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome | grep Carbon
/System/Library/Frameworks/Carbon.framework/Versions/A/Carbon (compatibility version 2.0.0, current version 136.0.0)

Of course, this is just me taking a quick look at their linking setup. The fact that they've got a 26M image that they're essentially just duplicating for each new tab is a little troubling; why didn't they, at the very least, separate WebKit into its own library/Private Framework rather than statically link it in? The only possible performance benefit this design holds is not waiting for dyld to resolve runtime search path information (on OS X), but that's certainly outweighed by the delay of copying such large images. It all seems far too amateur-ish for Google to me.
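The one-image, flag-selected-role pattern visible in the ps output above can be sketched in a few lines of Python. Here I stand in for the shared binary with `python -c`; the `--type=renderer` flag matches the one in the ps listing, but the dispatch logic is my own illustration, not Chrome's:

```python
import subprocess
import sys

# One program image serving two roles, selected purely by argv.
ROLE_PROGRAM = (
    "import sys; "
    "print('role:', 'renderer' if '--type=renderer' in sys.argv else 'browser')"
)

browser = subprocess.run([sys.executable, "-c", ROLE_PROGRAM],
                         capture_output=True, text=True)
renderer = subprocess.run([sys.executable, "-c", ROLE_PROGRAM, "--type=renderer"],
                          capture_output=True, text=True)

print(browser.stdout.strip())    # role: browser
print(renderer.stdout.strip())   # role: renderer
```

The point is that both children are the full image; a dedicated renderer binary would let each child map only the code its role actually needs.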

Comment Re:10 gigs? (Score 2, Informative) 81

Well, first off, dependencies live much more often in their own "Frameworks" directories than loose in the "Library" directories. Check /System/Library/Frameworks for the important core Mac OS X frameworks and /Library/Frameworks for your basic system frameworks. You've probably also got a ~/Library/Frameworks directory, but there's probably nothing interesting in there unless you're a developer. The rest of the "Library" directories consist more of non-reusable stuff.

However, plenty of applications do just bundle their own versions of dependencies. Just taking a glance around my system, 26.9MB of Adium's 60.2MB consists of its "Frameworks" directory. 122.2MB of iWeb is Frameworks, many of which would probably be useful if they were universally available to developers (FTPKit?). Open source (and open-source-based) applications tend to be the worst about this, since they have a habit of packaging large parts of the Linux ecosystem; minor incompatibilities with OS X's BSD-grounded system make proper ports less convenient. Having both Crossover and Crossover Games take up so much space with so many identical dependencies is just silly. Other notable applications on this front include Battle for Wesnoth and OOo.

Across all applications, localizations are a bit more of a problem, as you said. An even bigger problem is that binaries are often larger simply because they're written in Obj-C; Obj-C supports some very, very cool runtime features not available in any other compiled language, but they add considerably to the binary size.

In general, though, you're right - OS X is far better than Windows about sharing dependencies properly, but there's pretty much no way to get the tight dependency management Ubuntu/Fedora/openSUSE have without a repository-based package manager, which is an entirely different software management philosophy. (Although the idealist in me likes to hope otherwise, that model doesn't really foster the develop-something-good-and-make-money-quickly environment that I like about Mac OS X, since it puts such a big barrier between you and your users.)

Comment Futurama! (Score 1) 1397

Not just my servers, but all of my hardware is named after Futurama characters. Hermes Conrad, a 320GB storage server, was recently replaced by Dwight Conrad, a new 1TB unit. My Palm is named Cubert Farnsworth, my main system Philip J. Fry (with the boot volume named Bender). And my Mighty Mouse's Bluetooth profile is named Nibbler. My old flash drive is named Morbo, with the new one named Calculon... my Wii is named Lrrr... honestly, it's gone a bit far. I'm going to have to recycle characters within a few years.

Comment Re:Very nice of them. (Score 1) 307

While I agree with most of your points, a bunch of extra graphics cards won't really be helpful for ray tracing because of the amount of recursion required. It can be done iteratively with some modifications so it would actually run on GPU hardware, but the overhead in doing this is greater than the performance boost parallelism grants the user.
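To see what that recursion-to-iteration rewrite looks like, here's a toy version of a Whitted-style reflection chain with the recursion unrolled into a loop; the scene is faked as a precomputed list of hits, so this is illustrative only, not how any real GPU tracer is structured:

```python
def trace_iterative(bounce_chain, max_bounces=5):
    # Reflection recursion unrolled into a loop -- the restructuring
    # needed before the algorithm can run on recursion-hostile GPU
    # hardware. Each bounce is a (surface_color, reflectivity) pair;
    # its contribution is weighted by the product of the
    # reflectivities encountered so far.
    color, weight = 0.0, 1.0
    for surface_color, reflectivity in bounce_chain[:max_bounces]:
        color += weight * surface_color
        weight *= reflectivity
        if weight < 1e-3:           # nothing left to contribute
            break
    return color
```

The bookkeeping (carrying the accumulated weight instead of returning up a call stack) is exactly the overhead I mean: it's cheap here, but with real intersection state per bounce it eats into whatever the GPU's parallelism buys you.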

Besides, ray tracing pretty much sucks compared to modern rasterization techniques until you add in radiosity, caustics, distribution, and other extensions. And if all that's added in, I don't care if you have 12 cores, it will not run in real time.

Comment Re:Sounds great. (Score 2, Informative) 225

Merry Christmas. NTFS-3G is more than fast enough to read documents from your Windows partitions. The only time its slower speeds will really be a noticeable problem, in my experience, is if you run OS X applications from your NTFS disk. But why would you keep your OS X applications on an NTFS volume?

Submission + - Asked to install Pirated Software, what do you do?

An anonymous reader writes: I am an IT professional, and due to budget constraints I have been told to install multiple unlicensed copies of MS Office, despite my offering to install OpenOffice and other open source office products. Even though most of the users just use Excel like a database or format text in cells, other programs are not tolerated. I have been overruled by our controller, over my objection. Other than drafting a letter to the owners of the company explaining how I disagree with the policy, what else can I do? I would never turn them in, but I am in a tough place, knowing I am doing something illegal. I want to keep my job, but I disagree with some of the decision making on this issue.
