Comment OIT isn't that new. (Score 3, Interesting) 75

Note: I'm no expert in this area; this is just some stuff I've picked up, along with a basic understanding of how these techniques are employed. There may be inaccuracies or incomplete information - corrections welcome.

OIT is one area that modern graphics hardware really struggles with. A software renderer can simply allocate memory dynamically and keep, for each pixel, a list of the depth and colour of every fragment that contributes to that pixel's final colour. On a 'traditional' GPU, the big problem is that there is no easy way to store more than a single 'current' colour per pixel - it gets irreversibly blended or overwritten by fragments with a lower depth value - and even if you could keep a list of fragments, you would have no associated depth values and no simple way to sort them on the GPU. However, there is some clever trickery, detailed below:
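
For comparison, here is roughly what a software renderer can do per pixel - a minimal CPU-side sketch in C++. The names and the premultiplied-alpha convention are my own assumptions, not anything from the papers linked below:

    #include <algorithm>
    #include <vector>

    // One transparent fragment that landed on a pixel.
    struct Fragment {
        float depth;     // distance from the camera (larger = farther)
        float rgba[4];   // premultiplied colour is assumed here
    };

    // Every pixel keeps its own list - trivial on the CPU, awkward on a
    // 'traditional' GPU that only has one colour slot per pixel.
    using PixelList = std::vector<Fragment>;

    // Sort the list back-to-front and blend with the standard 'over' operator.
    void resolvePixel(PixelList& frags, float out[4]) {
        std::sort(frags.begin(), frags.end(),
                  [](const Fragment& a, const Fragment& b) { return a.depth > b.depth; });
        for (const Fragment& f : frags) {
            const float a = f.rgba[3];
            for (int c = 0; c < 4; ++c)
                out[c] = f.rgba[c] + (1.0f - a) * out[c];
        }
    }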

Realtime OIT has been researched and published on (notably by Nvidia and Microsoft) for over a decade.

Here's the basic technique, 'Depth Peeling', from 2001:

http://developer.nvidia.com/system/files/akamai/gamedev/docs/order_independent_transparency.pdf?download=1

Depth peeling renders the scene multiple times with successive layers of transparent geometry removed, front to back, to build up an ordered set of buffers which can be combined to give a final pixel value.
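
A rough sketch of that structure, simulated on the CPU over per-pixel fragment lists (the helper names and the premultiplied-alpha convention are mine; in a real GPU implementation each 'peel' is a full scene render with a shader that discards fragments at or nearer than the previously peeled depth):

    #include <cstddef>
    #include <vector>

    struct Fragment { float depth; float rgba[4]; };     // premultiplied colour
    using Scene = std::vector<std::vector<Fragment>>;    // all fragments, per pixel

    // One peel: for each pixel, keep the nearest fragment strictly behind the
    // depth peeled in the previous pass.
    void peelLayer(const Scene& scene, const std::vector<float>& prevDepth,
                   std::vector<float>& layerDepth, std::vector<Fragment>& layer) {
        for (std::size_t p = 0; p < scene.size(); ++p) {
            Fragment best{1e30f, {0.0f, 0.0f, 0.0f, 0.0f}};   // 'nothing left' = transparent
            for (const Fragment& f : scene[p])
                if (f.depth > prevDepth[p] && f.depth < best.depth) best = f;
            layer[p] = best;
            layerDepth[p] = best.depth;
        }
    }

    // Front-to-back compositing with the 'under' operator: dst += (1 - dst.alpha) * src.
    void compositeUnder(std::vector<float>& dst, const std::vector<Fragment>& layer) {
        for (std::size_t p = 0; p < layer.size(); ++p) {
            const float remaining = 1.0f - dst[p * 4 + 3];
            for (int c = 0; c < 4; ++c)
                dst[p * 4 + c] += remaining * layer[p].rgba[c];
        }
    }

    // Peel and composite a fixed number of layers, front to back.
    // Assumes all fragment depths are > 0.
    std::vector<float> depthPeel(const Scene& scene, int numLayers) {
        const std::size_t n = scene.size();
        std::vector<float> result(n * 4, 0.0f), prevDepth(n, 0.0f), currDepth(n);
        std::vector<Fragment> layer(n);
        for (int i = 0; i < numLayers; ++i) {             // one scene render per layer
            peelLayer(scene, prevDepth, currDepth, layer);
            compositeUnder(result, layer);
            prevDepth.swap(currDepth);
        }
        return result;
    }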

This technique has severe performance penalties, but the alternative (z-sort all transparent polygons every frame) is much, much worse.

'Dual Depth Peeling' - from 2008:

http://developer.download.nvidia.com/SDK/10.5/opengl/src/dual_depth_peeling/doc/DualDepthPeeling.pdf

This works in much the same way, but is able to store samples from multiple layers of geometry in each rendering pass, using MRT (multiple render targets) and a shader-based sort on the contents of the buffers, speeding the technique up a lot.

Refinements to the DDP technique, cutting out another pass - from 2010:

http://developer.nvidia.com/sites/default/files/akamai/gamedev/files/sdk/11/ConstantMemoryOIT.pdf

Reverse depth peeling was developed for situations where memory is at a premium: it extracts the layers back-to-front and blends each one immediately into an output buffer, instead of extracting, sorting and then blending. It is also possible to abuse the hardware used for antialiasing to store multiple samples per output pixel.

Depth peeling really only works well for a few layers of transparent objects, unless you can afford a lot of passes per pixel, but in many situations the contribution of surfaces behind the first 4 or so transparent layers adds little in terms of visual quality.

AMD's 'new' approach involves implementing a full linked-list style A-buffer and a separate sorting pass on the GPU - this has only become possible with fairly recent hardware, and I guess it is 'the right way' to do OIT, very much the same as a software renderer on a CPU would do it.
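
As a sketch of what that linked-list A-buffer looks like - a CPU mock-up in C++; on the GPU the counter increment and the head swap are atomic operations on structured/append buffers, and all names here are my own:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Flat node pool, one head index per pixel, and a counter used to allocate nodes.
    struct Node { float rgba[4]; float depth; std::uint32_t next; };
    constexpr std::uint32_t kEnd = 0xFFFFFFFFu;

    struct ABuffer {
        std::vector<Node> pool;            // pre-sized node pool
        std::vector<std::uint32_t> heads;  // head index per pixel
        std::uint32_t counter = 0;         // next free node

        ABuffer(std::size_t pixels, std::size_t maxNodes)
            : pool(maxNodes), heads(pixels, kEnd) {}

        // 'Fragment shader' step: allocate a node and push it onto the pixel's list.
        void insert(std::size_t pixel, const float rgba[4], float depth) {
            if (counter >= pool.size()) return;        // pool exhausted, fragment dropped
            const std::uint32_t n = counter++;
            std::copy(rgba, rgba + 4, pool[n].rgba);
            pool[n].depth = depth;
            pool[n].next = heads[pixel];
            heads[pixel] = n;
        }

        // 'Resolve' pass: walk the list, sort by depth, blend back-to-front ('over').
        void resolve(std::size_t pixel, float out[4]) const {
            std::vector<Node> frags;
            for (std::uint32_t n = heads[pixel]; n != kEnd; n = pool[n].next)
                frags.push_back(pool[n]);
            std::sort(frags.begin(), frags.end(),
                      [](const Node& a, const Node& b) { return a.depth > b.depth; });
            for (const Node& f : frags)
                for (int c = 0; c < 4; ++c)
                    out[c] = f.rgba[c] + (1.0f - f.rgba[3]) * out[c];
        }
    };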

Here's some discussion and an implementation of these techniques:

http://www.yakiimo3d.com/2010/07/19/dx11-order-independent-transparency/

This really isn't anything new - single-pass OIT using CUDA for fragment accumulation and sorting was presented at Siggraph 2009 - nor is it something PTS can claim as their own. It's possible AMD's FirePros have special support for A-buffer creation and sorting, which is why they run fast, and AMD in general has a pretty big advantage in raw GPGPU speed for many operations (let down by their awful driver support on non-Windows platforms, of course) - but really, any GPU that can define and access custom-structured buffers will be able to perform this kind of task, and given NVidia's long history of researching and publishing on this subject, it's pretty laughable that AMD and PTS can claim it as their new hotness.

Comment Because it is an unholy mess (Score 1) 1091

Personally, I see the biggest problem as the lack of platform APIs that all filesystems, DEs and applications will use. There is a major issue with desktop fragmentation - spotty hardware support and API churn when it comes to sound, video and UI toolkits - but these are things users can avoid with testing and careful choice.

In my view, there are more fundamental problems which leave Linux with a ball and chain around its leg - while the core OS remains stuck in the 1970s, Linux will continue to lose relevance. POSIX is obsolete, and UNIX(tm) is dead.

The following is, of course, simply my opinion after having used (and attempted to use) Linux for various things in the course of my work over the years. I know many will disagree with some or all of it, especially when it comes to file permissions.

Most of these issues are most relevant in a business/workgroup situation, and they include:

Authentication system: Secure, easily managed user authentication that works across the network is a 'roll your own' affair under Linux at the moment. The platform should ship with Kerberos and LDAP working 'out of the box', and all servers and apps should be able to plug into this without any pain or recompilation with different configure flags.

File permissions: POSIX is obsolete. POSIX draft ACLs suck - that's why they remained a draft. Adopt NFSv4 ACLs or a superset, build filesystems (like ZFS) to use them exclusively, and dump the reliance on the inadequate owner/group/everyone structure. This is important when sharing files over a network, or simply for allowing users to do what they need to with shared data on a single machine. chmod/chgrp etc. can be modified to set the appropriate ACLs. Ignore the part of the POSIX spec that requires all other ACLs to be blown away when chmod/chgrp is applied.

File locking: POSIX never anticipated multithreading. POSIX locking is broken and should be modified so that locking a file from a threaded application on Linux is reliable (a minimal illustration of the problem follows after this list). The current standard is not something that anyone who wanted to design a working file locking system for Linux could possibly have contributed to or supported.

Network file system: A filesystem that allows secure, high-performance sharing of files between Linux systems, and that seamlessly supports ACL-based file permissions and file locking, is sorely needed. NFS and Samba are both inadequate for various reasons (which would, however, be somewhat mitigated by the above), and it's pretty embarrassing.

Filesystem mounting: the kernel/fstab, gvfs, kio and FUSE all do their own thing. You get absurd situations like OpenOffice and gedit not being able to save the files they open. The ability to use URLs like 'smb://myserver/myfile' will only ever work everywhere if the support is implemented at the libc or similar level. Dump gvfs, dump kio, and build network URL decoding and filesystem mounting into the layer where it makes sense to do it - fopen should be able to take a URL as a parameter if this functionality is desirable; we shouldn't duplicate a stack of hacks in each desktop environment to support this.
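
To make the file-locking complaint above concrete, here's a minimal C++/POSIX sketch (my own example, with a made-up filename) showing the classic fcntl() record lock, with the per-process semantics that make it useless for coordinating threads noted in the comments:

    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    // Take an exclusive POSIX record lock on the whole file. The pitfalls:
    //  - fcntl() locks belong to the *process*, so two threads in the same process
    //    can both 'acquire' the same lock and never block each other;
    //  - closing *any* descriptor for the file drops every lock the process holds
    //    on it, even locks taken through a different descriptor.
    bool lockWholeFile(int fd) {
        struct flock fl{};
        fl.l_type = F_WRLCK;      // exclusive write lock
        fl.l_whence = SEEK_SET;
        fl.l_start = 0;
        fl.l_len = 0;             // length 0 = to end of file
        return fcntl(fd, F_SETLKW, &fl) == 0;   // blocks until the lock is granted
    }

    int main() {
        int fd = open("shared.dat", O_RDWR | O_CREAT, 0644);  // example path
        if (fd < 0) { std::perror("open"); return 1; }
        if (lockWholeFile(fd))
            std::printf("locked - but only against other processes\n");
        close(fd);   // this releases the lock (and any other lock this process holds on the file)
        return 0;
    }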

Comment Re:Unit can also do 3d printing (Score 1) 258

I am thinking of adding an extruder head to my little Taig CNC mill. There is RepRap software that will drive a stepper-motor extruder feed as a 4th axis, which makes it pretty easy to control with EMC2 and a 4-axis stepper controller. It should be possible to do work with complex overhangs that wouldn't easily be possible with the mill alone, and definitely to achieve a fit and finish you'll never get directly from the extruder alone.

As long as you know the offset between the tip of the tool in the spindle and the tip of the extruder nozzle, it would be easy to alternate extrusion and milling passes. Still, any problems in one operation will tend to cascade to later stages, so extruder reliability is pretty critical. Most CNC mills will probably have table/gantry movement speeds on the slow side compared to extruder-specific machines, but I don't think this is a major issue in practice.
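
The offset bookkeeping itself is trivial - a C++ sketch with made-up numbers, assuming the nozzle position relative to the spindle tool tip has been measured once and everything works in machine coordinates:

    #include <cstdio>

    // Measured once: where the extruder nozzle tip sits relative to the tip of the
    // tool in the spindle (example numbers, in mm).
    struct Offset { double x, y, z; };
    constexpr Offset kNozzleOffset{ -42.0, 0.0, 12.5 };

    struct Point { double x, y, z; };

    // To put the nozzle where the cutter would have been, command the machine to
    // the target minus the measured offset (nozzle = machine position + offset).
    Point commandedForNozzle(const Point& target) {
        return { target.x - kNozzleOffset.x,
                 target.y - kNozzleOffset.y,
                 target.z - kNozzleOffset.z };
    }

    int main() {
        Point cutterTarget{ 10.0, 20.0, 5.0 };
        Point machineMove = commandedForNozzle(cutterTarget);
        std::printf("move to %.3f %.3f %.3f for the extrusion pass\n",
                    machineMove.x, machineMove.y, machineMove.z);
        return 0;
    }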

Workholding will need attention - the extrusion process places almost no load on the workpiece itself, while any kind of cutting or drilling operation will throw the work across the room if it's not clamped, bolted or glued down. It's possible the self-adhesive properties of the extruded ABS or other plastic could work here, but I suspect this will not be enough. Most models would probably need to start with extruding a clampable base for the object.

For smallish pieces, it's often going to be just as easy to mill with the work clamped in a rotary table, which allows you to cut things that would be very difficult to maintain alignment on otherwise. Making jigs to keep work aligned in place for machining different parts of the piece is also very common, so it's unclear what real-world use such a hybrid machine will have compared to using 'traditional' methods to machine the same part.

I think the idea has legs though, and a small CNC mill will offer a lot more accuracy than the MakerBot or RepRap does, due to more rigid construction and usually very good anti-backlash measures.

Comment Re:Thank goodness for those drivers (Score 4, Informative) 196

There's another side to this - if you have ever tried to work with 3D apps on Linux, free or commercial - Blender, Maya, or your own OpenGL apps - and wanted support for the standards and good performance, you would realise that NVidia is your only choice. Compared to their commercial rivals and the open source community, they do a *stellar* job of supporting Linux.

Every other manufacturer has provided such piss-poor reliability and/or performance under Linux that they just aren't an option.

I think it's great that AMD's docs and a lot of hard work by the Xorg and driver coders mean the radeon drivers are getting to the point where they challenge NVidia's status in this area, but for the last 5 years AMD/ATI were next to useless on Linux for serious work, and Intel graphics weren't (and still aren't) an option where anything even remotely current in terms of OpenGL API usage (e.g. GLSL shaders) is concerned.

The open source community has done an awful job of architecting its graphics stack, with no foresight, planning or consistency across drivers. That's not a bash; that's the natural result of open source evolution, and it's why they're rearchitecting it.

Now that this is being reworked, we're seeing massive churn and widespread breakage. NVidia saw this coming and wrote their drivers to bypass the mess. Many of the design decisions taken by the Xorg guys are very much influenced by how NVidia handles things.

Intel, supposedly the paragon of openness and open source, managed to ship a massive performance regression in the kernel and X.org revisions prior to the current ones, and their latest 'Poulsbo' chipsets have no documentation and no open source drivers. Intel's support for these cards on Linux is far worse than NVidia's. They're also walking away from any open source OSes except Linux by relying on Linux-only kernel mode setting.

AMD/ATI continue to release fglrx drivers that are plagued with bugs, refuse to release documentation for current products, and have 2D performance so abysmal it makes the VESA framebuffer look good in comparison. The AMD/ATI open source drivers (while improving greatly, and probably a good option today for people who don't really need full OpenGL coverage) are very much a work in progress, incapable of running even moderately advanced OpenGL apps, and they too are dumping any support for non-Linux open source OSes.

As a 3D developer, I can't rely on anything but NVidia to work, and to stay working across distro upgrades. If that's the definition of 'a horrible job at supporting Linux', I think you need your head read. There just isn't anything else that is usable for professional or semi-professional 3D work on Linux.

I am extremely grateful to NVidia for enabling any kind of consistent 3D support on Linux while everyone else, commercial or open source, struggles to catch up.

Comment Re:Respectively: (Score 1) 270

Yeah, I find the GIMP so unusable on OS X that I paid for Pixelmator, which is pretty damn good. It's not as feature-complete as GIMP, but for 95% of my tasks it's pretty good.

Actually, I paid for Pixel32 as well, but that turned out badly, as the developer (Pavel Kanzelberger) is a total dick.

I would be interested in donating to an effort to improve the GIMP's UI; it's pretty clear the core developers aren't interested.

Comment Re:Opportunity knocks... (Score 1) 269

How so? Where is this 'implied permission' codified in copyright law? If there's case law or a written statute to back this up, I'd be interested to read it.

I don't think that 'it was on a torrent, your honour, so I assumed it was legal' is any kind of defense whatsoever.

If you copy something, using a torrent or otherwise, and either knowingly download it without an explicit license or subsequently find no license to copy or use it included with the content, then you're technically in breach of the law, regardless of the mitigating circumstances. For example, even if it's legal for me to 'trick' you into downloading copyrighted content for which no license is given - and this would be a big mitigating factor in any damages calculation, should it come to a court case - you still possess an illegal copy of that work, and as such are committing a crime.

Comment Re:Opportunity knocks... (Score 1) 269

You can't copy, use or distribute copyrighted material without a license. It doesn't matter if I leave it lying around in an accessible place; you have no right to copy it or use it without a valid license.

If I license the use of an image from a stock photography company and put it on my website, free for anyone to view, that doesn't give anyone else the right to copy, use or redistribute it.

How do you know if you have the rights to view/copy/use content such as this? You don't. You may technically be committing a crime by looking at web pages that contain unlicensed content.

Making ephemeral copies in the computer's memory and/or browser cache is actually something copyright-holder groups would like to regulate, and it may technically already constitute criminal copying, though this is largely seen as unenforceable.

If I have not licensed my works under the GPL, a Copyleft or similar license that allows you to copy and/or use them, then copying or using them is a crime. It makes no difference to their status under copyright law whether I 'make them available' or not.

Just because a record company makes a song available on the radio for free doesn't confer on me the right to copy, use or distribute it - I get to listen to it when the radio station plays it, and that's it.

How much in damages I could expect to win in court if I just left my stuff in an open file share is a different question, but you, as someone downloading and using material from the internet without an explicit license to do so, are potentially committing a criminal act.

As far as I know, this is the way our stupid laws are written and interpreted; however, in general they can only be effectively used by large corporations with deep pockets.

To sum up: consumers/users have no rights in the absence of a license, and if the letter of the law is followed and enforced, we are all criminals.

Comment Re:central management of a Linux system .. (Score 1) 476

Do I know about ssh and cron? Yes, of course I know about ssh and f**king cron.

The system does pretty much just run 98% of the time - but no, there aren't many Linux geeks in my area who can jump in and troubleshoot this stuff, and yes, things do occasionally go wrong - whether it's postgrey randomly crashing, or a printer not working, or a file permission problem, etc. etc.

Getting OpenLDAP, PAM/NSS, SAMBA, gdm, Postfix, Courier, CUPS, pykota, Moodle, egroupware, koha and some other bits and pieces all to authenticate against and store their account data in OpenLDAP is a nightmare, and the management tools for LDAP are terrible.

The big problem is that the school (it is a small school) can't afford a full-time admin, so when the guy who built this system left, he left the place with no documentation and a system that was designed to work only with an administrator present.

I've been maintaining that system and slowly rebuilding things, but there's just no 'product' here - every app config is a one-off customisation, and there's no simple, standard way to do anything.

I wanted this job to be 'install and run', but it's an 'install, laboriously configure, then babysit' scenario.

And I'm not flying back into the arms of MS; I just don't want to deal with systems like this - I'd rather just decline to even attempt building another one.

If you just can't see that Windows - with Active Directory, its integration into core platform components like Exchange, and its network filesystem integration - has a plainly better solution for desktops centrally managed by unskilled staff than the current crop of Linuxes and their 'roll your own LDAP-based auth solution for each app you plan to deploy, and spend days googling over the little pitfalls', well, I have to question your objectivity.

MS is expending energy keeping Linux out of schools because one day someone will come up with a working central management platform that suits schools' requirements quite nicely. When that day comes, MS will have to worry about its continued place in schools.

It will probably take a lot of painful and fragmented deployments like the one I work with before there is enough collective will by the education community to fund or otherwise support the development of a nicer platform.

Despite my frustration, I'm out there doing it, trying to help a school that had been left totally screwed by their previous admin.

I'm glad to know you have no trouble running your Linux systems - I guess everything is easy in your world.

Perhaps you should put me out of a job with a secure, rock-stable, centrally managed Linux system for schools that requires no administrative effort. There's certainly a market for it, and clearly it's no problem for someone like you.

I'm waiting.

Comment Central management a problem (Score 2, Insightful) 476

Linux is pretty bad when it comes to central management.

It's possible to roll a managed solution for a mixed Windows/Linux network, with authentication based on LDAP, file sharing based on NFS and SAMBA, web apps authing back to LDAP, home directories shared over NFS, and a single client image installed from USB.

But it's pretty ugly and insecure, and it requires a hell of a lot of application-specific configuration to get it to work seamlessly.

I know this because I am responsible for administering a school network that uses Linux for servers and desktops (I inherited the system after a disgruntled former sysadmin left), and it is a hell of a lot more tricky than it could be.

Everything we have pretty much works, but I'm the only one associated with the organisation who comes remotely close to knowing how stuff works or what to do when it breaks. At least my business model is 'recession-proof', but frankly the people running the school are powerless and disenfranchised, and I find it pretty difficult to articulate any actual benefits of keeping the system on Linux beyond the expense involved in switching back to Windows - this is not the picture a lot of OSS advocates paint, or the way it should be.

It's been nothing but pain setting the system up. It's a good deal for me, as they're kind of stuck paying me to admin the system, but does it really have to be this complex?

I'm a huge Linux geek with a lot of real-world programming and admin experience, and the bottom line is that if I had to do it again for another school, I'd pass and suggest they use Windows.

That's why Windows wins in schools.
