
Comment Console APIs vs PC APIs - an explanation (Score 5, Interesting) 323

The way things work on consoles is broadly similar to Windows/Linux/Mac, with these important distinctions:
1. The hardware is a known target, so the shader compilers and other components are carefully optimized for that one chip; they do not produce intermediate bytecode formats or make conservative assumptions that must hold on all hardware.
2. The APIs allow injecting raw command buffers, which means you do not have to use the API to deliver geometry in any way, shape or form; the overhead goes away, but the burden of producing a good command buffer falls on the application when it uses these direct-to-hardware calls.
3. The APIs have much lower overhead because they are not a middle-man on the way to the hardware, but an API implemented (if not designed) specifically for that hardware. For example, Microsoft had the legendary Michael Abrash working on their console drivers.
4. The hardware memory layout and access bandwidth are known to the developers, so certain optimization techniques become possible, such as rendering to a framebuffer in system memory for software processing (on Xbox 360 this is done for certain effects; on PS3 it is heavily utilized for deferred shading, motion blur and other techniques that run faster on the Cell SPE units). In some cases this has other special implications, like storing sound effects in video memory on PS3, because the Cell SPEs have a separate memory path to video memory and can tap into that otherwise "unused" bandwidth for sound mixing.
5. 3D stereo rendering is basic functionality on consoles.
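As a rough sketch of what the second point means in practice, the application packs opcodes and arguments into a word array that the hardware could consume directly, with no per-call API overhead. The encoding here is entirely made up for illustration; a real GPU's packet format is documented in its hardware manual (such as AMD's public register references).

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical opcodes -- not any real GPU's command format. */
enum { CMD_SET_SHADER = 0x01, CMD_BIND_VB = 0x02, CMD_DRAW = 0x03 };

typedef struct {
    uint32_t words[256];
    size_t   count;
} cmdbuf_t;

static void emit(cmdbuf_t *cb, uint32_t op, uint32_t arg)
{
    /* Each command is packed as one opcode word plus one argument word. */
    cb->words[cb->count++] = op;
    cb->words[cb->count++] = arg;
}

/* Build a buffer that draws 100 vertices with shader 7 from vertex buffer 3. */
static void build_frame(cmdbuf_t *cb)
{
    cb->count = 0;
    emit(cb, CMD_SET_SHADER, 7);
    emit(cb, CMD_BIND_VB, 3);
    emit(cb, CMD_DRAW, 100);
}
```

The point is that building this buffer is pure memory writes the application controls entirely, which is exactly what a PC driver hides behind per-call validation.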

The article is making the argument that we should be able to produce command buffers directly and insert them into the rendering stream (akin to OpenGL display-lists but new ones produced every frame instead of statically stored).

It is also making the argument that we should have explicit control over where our buffers are stored in memory (for instance rendering to system memory for software analysis techniques, like id Software's Megatexture technology, which analyzes each frame to determine which parts of the virtual texture need to be loaded).
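A minimal sketch of that kind of CPU-side frame analysis (my own simplified version, not id's actual code): assume the GPU has rendered, into a system-memory buffer, the virtual-texture page ID sampled by each pixel; the CPU then scans it to decide which pages to stream in.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define NUM_PAGES 1024  /* illustrative virtual-texture page count */

/* Scan a per-pixel page-ID buffer (as the GPU would have written it to
 * system memory) and mark every page that any visible pixel touched.
 * Returns the number of unique pages that need to be resident. */
static size_t pages_needed(const uint16_t *id_buf, size_t pixels,
                           bool needed[NUM_PAGES])
{
    size_t unique = 0;
    memset(needed, 0, NUM_PAGES * sizeof(bool));
    for (size_t i = 0; i < pixels; i++) {
        uint16_t page = id_buf[i];
        if (page < NUM_PAGES && !needed[page]) {
            needed[page] = true;  /* schedule this page for streaming */
            unique++;
        }
    }
    return unique;
}
```

The whole technique hinges on the ID buffer living in memory the CPU can read cheaply, which is exactly the placement control the article is asking for.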

There are more subtle aspects, such as knowing the exact hardware capabilities and designing for them, which are less of a "No API!" argument and more of a case of "Please optimize specifically for our cards!", which is a tough sell in the game industry.

AMD has already published much of the information that studios will need to make use of such functionality, for example the Radeon HD 6000 series shader microcode reference manual is public already.

Intel also has a track record of hardware specifications being public.

However NVIDIA is likely to require a non-disclosure agreement with each studio to unlock this kind of functionality, which prevents open discussion of techniques specific to their hardware.

Overall this may give AMD and Intel a substantial edge in the PC hardware market - because open discussion of graphics techniques is the backbone of the game industry.

On the fifth point it is worth noting that NVIDIA GeForce drivers offer stereo rendering in Direct3D but not OpenGL (despite OpenGL having had a stereo rendering API from the beginning); they reserve this feature for their Quadro series cards for purely marketing reasons. This restriction prevents the use of stereo rendering in many OpenGL-based indie games, yet another case of consoles besting the PC in functionality for ridiculous reasons.
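For context, the math the application has to do for stereo is trivial (the names below are my own, a sketch of off-axis stereo rather than any vendor's API): each eye's view is shifted by half the eye separation and the projection is skewed so both frusta converge at the screen plane. All the driver has to expose is left and right target buffers.

```c
/* Per-eye parameters for off-axis stereo projection.
 * eye = -1 for the left eye, +1 for the right eye.
 * 'separation' is the interocular distance, 'convergence' the distance
 * to the zero-parallax plane. Names are illustrative only. */
static void stereo_eye(int eye, double separation, double convergence,
                       double *view_shift, double *frustum_skew)
{
    *view_shift   = eye * separation * 0.5;
    /* Skew the frustum back so both eyes converge at the screen plane. */
    *frustum_skew = -(*view_shift) / convergence;
}
```

The two eyes are mirror images of each other, so a renderer just runs its normal pass twice with opposite signs, which is why withholding this from consumer OpenGL drivers is so hard to justify technically.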

The Internet

Submission + - Unorthodox links to the internet (economist.com)

An anonymous reader writes: Savvy techies are finding ways to circumvent politically motivated shutdowns of the internet. Various groups around the world are using creative means like multi-directional mobile phone antennas and even microwave ovens to transmit internet traffic across international borders.

Comment Re:PC Version (Score 1) 246

Given the amount of work that goes into replacing all content of the game, I wouldn't call it a "free ride" even if it did get to use the same code.

However, as has been established, the gamecode is being redone based on non-GPL sources to ensure that nothing is "ripped off", even though this means the game may differ more significantly than intended, further fracturing the community of players.

The relicensing of the GPL gamecode for the game was intended to preserve the authenticity of the gameplay experience, not to harm anyone, but since a few of the contributors don't want to play ball, the ball goes elsewhere.

Comment Re:What this is: (Score 2, Informative) 246

I only want to point out that in a recent analysis of the DarkPlaces engine source, only 1.29% of the lines that are not license headers or blank lines have never been modified.

Put another way, the engine is no more than 2% Quake1 codebase, and the vast majority of the code was written by me, especially the platform-independent core portions.

Tracking down contributors when there is one primary author of the entire codebase, who knows almost every line of it by memory, is not as hard as it sounds.

Comment Re:What's really happening here? (Score 5, Informative) 246

The engine has been licensed as non-GPL for the Sony PlayStation 3 and Microsoft Xbox 360; these are very closed platforms, and the game had no chance of reaching them under the GPL, as publishers would not touch it.

IllFonic actively promotes the GPL Nexuiz for all operating systems.

The console game code is being started fresh now that GDC is over, so no GPL claims can apply to it.

Note: Nexuiz 1.0 was to be a commercial game in the first place, but was GPLed for the enjoyment of everyone. This deal pertains to the name and concept, not the community enhancements that occurred after the original release.
