
Comment Re:My perspective (Score 5, Interesting) 112

When MPEG LA first announced the VP8 pool formation, a rush of companies applied to be in the pool, partly because everyone wanted to see what everyone else had. That gave way to some amount of disappointment. And by 'some amount' I mean 'rather a lot really, more than the MPEG-LA would care to admit.'

Eventually, things whittled down to a few holdouts. Those '11 patent holders' do not assert they have patents that cover the spec. They said '_may_ cover'. The press release itself repeats this. Then these patent holders said 'and we're willing to make that vague threat go away for a little cash'. Google paid the cash. This is what lawyers do.

That's why it's a huge, newsworthy deal when companies like NewEgg actually take the more expensive route and litigate a patent. It is always more expensive than settling, even if you'd win the case, and very few companies are willing or able to do it. Google was probably able, but not willing.

We deal with this in the IETF all the time. Someone files a draft and a slew of companies file IPR statements that claim they have patents that 'may' read on the draft. Unlike other SDOs though, the IETF requires them to actually list the patent numbers so we can analyze and refute. And despite unequivocal third-party analyses stating 'there is no possibility patent X applies', these companies still present their discredited IPR statements to 'customers' and mention that these customers may be sued if they don't license. This is not the exception; this is standard operating procedure in the industry. These licensing tactics, for example, account for more than half of Qualcomm's total corporate income.

It's this last threat that Google paid a nominal sum to make go away. It's the best anyone can hope for in a broken system. If those 11 patent holders had a strong claim, it is exceedingly unlikely they would have agreed to a perpetual, transferable, royalty free license.

Comment My perspective (Score 5, Insightful) 112

I'll add my own thoughts here, also posted at http://xiphmont.livejournal.com/59893.html

"After a decade of the MPEG LA saying they were coming to destroy the FOSS codec movement, with none other than the late Steve Jobs himself chiming in, today the Licensing Authority announced what we already knew.

They got nothing. There will be no Theora patent pool. There will be no VP8 patent pool. There will be no VPnext patent pool.

We knew that of course; we always did. It's just that I never, in a million years, expected them to put it in writing and walk away. The wording suggests Google paid some money to grease this along, and the agreement language is interesting [and instructive], but make no mistake: Google won. Full stop.

This is not an unconditional win for FOSS, of course; the LA narrowed the scope of the agreement as much as they could in return for agreeing to stop being a pissy, anti-competitive brat. But this is still huge. We can work with this.

For at least the immediate future, I shall have to think some uncharacteristically nice things about the MPEG LA.*

*Apologies to Rep. Barney Frank"

Comment Re:Great lesson, but what's with the audio? (Score 1) 50

>If you insist on recording in stereo though, you might do as they did, and record with a Mid-Side array and use a matrix to decode back to L-R, so you can control the stereo spread in post-production.

That would not have controlled the reverb; the space this was recorded in had a concrete floor, concrete walls, and no acoustic treatment. Like I said, it was a tradeoff, and one that was successful if not perfect.
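
If anyone's curious what the decode step being described looks like, here's a minimal Python/NumPy sketch of a Mid-Side to Left/Right matrix with a width control. The signal names and numbers are purely illustrative, not the actual chain used for the video:

import numpy as np

def ms_to_lr(mid, side, width=1.0):
    # Decode a Mid-Side pair to Left/Right.  width=0 collapses to mono
    # (mid only), width=1 is the nominal decode, >1 widens the image.
    s = side * width
    return mid + s, mid - s

# Toy input: a centered 440 Hz tone plus a made-up side signal.
sr = 48000
t = np.arange(sr) / sr
mid = 0.5 * np.sin(2 * np.pi * 440 * t)
side = 0.1 * np.sin(2 * np.pi * 220 * t)

left, right = ms_to_lr(mid, side, width=0.5)   # narrower stereo spread in post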

Comment Re:Using real world audio waveforms? (Score 1) 50

Right, and this is why dither is only applied to 'last-mile' audio intended to be consumed. Dither 'screws' you in other ways if you intend to use that audio in further production, such as losing the property of removing the distortion while still carrying the additive noise. But we're still talking about changes 100+dB down.

>Counter nitpick: Monty, as a professional motion picture sound designer, I cannot tell you how distracting it is to hear your voice constantly changing its pan across the stereo field :)

The audio was recorded with a stereo pair. It wasn't panned artificially :-) Look down a few comments for more about this, you weren't the only person to complain.

Comment Re:Using real world audio waveforms? (Score 1) 50

As a nitpick, you get dithering losses _or_ quantization distortion, or a linear tradeoff between the two. You don't get the worst case of both on top of each other unless you screw up.

Without dither, worst case, all your 16 bit quant distortion products will be under -100dB regardless of input amplitude. I actually display the worst case in the video to make it easy to see. Quantization distortion aliases, and I chose an integer sample period so the aliased distortion would always land in the same bins after folding. If I hadn't, it would have spread out more and been even lower. If I had chosen a relatively prime frequency, the quantization distortion would have spread out across all bins equally.
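
If you want to see the tradeoff for yourself, here's a rough NumPy sketch. The 48kHz rate, the 1kHz test tone (an integer 48-sample period, as above) and the 2 LSB peak-to-peak TPDF dither are illustrative assumptions, not the exact setup from the video:

import numpy as np

sr, n = 48000, 48000                  # one second; 1 kHz fits exactly 1000 periods
t = np.arange(n) / sr
x = 0.5 * np.sin(2 * np.pi * 1000 * t)

def quant16(sig, dither=False):
    # Round to 16-bit steps; optionally add 2 LSB peak-to-peak TPDF dither.
    scale = 2.0 ** 15
    d = 0.0
    if dither:
        d = (np.random.uniform(-0.5, 0.5, n) +
             np.random.uniform(-0.5, 0.5, n))
    return np.round(sig * scale + d) / scale

for name, dith in (("undithered", False), ("TPDF dithered", True)):
    err = quant16(x, dith) - x                    # the quantization error alone
    spec = np.abs(np.fft.rfft(err)) / (n / 2)     # per-bin amplitude of the error
    print("%-14s total error %6.1f dBFS, worst single bin %6.1f dBFS"
          % (name,
             20 * np.log10(np.sqrt(np.mean(err ** 2)) + 1e-20),
             20 * np.log10(spec.max() + 1e-20)))

The undithered case should show a handful of discrete spurs (all well below -100dB) with slightly lower total error; the dithered case should trade those spurs for a slightly higher but featureless noise floor.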

Comment Response from CDParanoia author (Score 5, Insightful) 330

> Stop using cdparanoia - it isn't very good, at all. It tests poorly, we're sad to say.

Really! As the author, I'd love to hear hard specifics, or maybe a bug report.

> You want to use Secure Mode with NO C2, accurate stream, disable cache.

You can't disable the cache on a SATA/PATA ATAPI drive. The whole point of cdparanoia's extensive cache analysis is to figure out a way to defeat the cache because it can't be turned off. There is no FUA bit for optical drives in ATA or MMC.
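
For the curious, the general shape of the workaround looks something like the toy Python sketch below. It is NOT cdparanoia's actual cache analysis; read_sectors() and the cache-size constant are hypothetical placeholders. The point is only that since you can't turn the cache off, you make the drive overwrite it and then demand that independent reads agree:

SECTOR = 2352                  # bytes of raw audio per CD sector
CACHE_SECTORS = 1400           # assumed upper bound on the drive's cache size

def read_sectors(lba, count):
    # Placeholder for a real MMC READ CD call returning `count` sectors of raw audio.
    raise NotImplementedError("wire this up to your drive")

def reread_from_media(lba, count):
    # Force the drive to fetch from the disc again by filling its cache
    # with unrelated sectors first (the only lever we have: no FUA bit).
    read_sectors(lba + CACHE_SECTORS + count, CACHE_SECTORS)
    return read_sectors(lba, count)

def verified_read(lba, count, retries=4):
    # Accept data only when two consecutive re-reads from the medium agree.
    prev = reread_from_media(lba, count)
    for _ in range(retries):
        cur = reread_from_media(lba, count)
        if cur == prev:
            return cur
        prev = cur
    raise IOError("no two consistent reads at LBA %d" % lba)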

The 'accurate stream' bit is similarly useless (every manufacturer interprets it differently), and C2 information is equally untrustworthy.

Plextors are not recommended for error-free or fast ripping. They try to implement their own paranoia-like retry algorithm in firmware and do a rather bad job of it. They also lie about error-correction information (you do not get raw data, you get what the drive thinks it has successfully reconstructed). Plextors often look OK on pristine discs, but if you hit a bit error (as on just about any burned disc), you don't know what the drive is going to do. Plextors are, overall, among the more troublesome drives _unless_ you're using a ripper that does no retry checking (i.e., NOT cdparanoia and NOT EAC). If you use iTunes, you want a Plextor. Otherwise, avoid them.

Comment Re:General Consensus (Score 2) 330

That was true ~15 years ago. Since then, Plextor firmware has gotten along very badly with the rippers that try to be frame accurate, because Plextor tries to implement a much lighter-weight, more error-prone version of the same algo on the drive. The drive still doesn't do a reliable job, and it seriously mucks up the ripper.

Comment Re:who records 'expensive movies' at 48k? (Score 1) 255

Regarding the first point, that 120dB broadband noise figure is giving you at least 140dB of SFDR, probably much better, and the depth of any critical band is going to be even better yet. Even 16-bit data with a decent noise shaper is going to be 120dB deep in the 2-4kHz critical bands. (None of which disagrees with anything you said, of course.)
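
To put numbers on that, here's a trivial Python sketch of how a broadband noise figure turns into a much deeper floor per analysis bin or per critical band. The 96kHz rate and the flat-noise assumption are mine, purely for illustration (noise shaping only makes the 2-4kHz region deeper still):

import math

fs = 96000.0                   # assumed sample rate, purely for illustration
nyquist = fs / 2.0
broadband_noise_db = -120.0    # total noise power, DC..Nyquist, re full scale

def band_floor_db(bandwidth_hz):
    # Level of that broadband noise falling inside a narrower band,
    # assuming the noise is roughly flat (no noise shaping).
    return broadband_noise_db + 10 * math.log10(bandwidth_hz / nyquist)

print("noise in a 1 Hz analysis bin : %6.1f dB" % band_floor_db(1.0))      # ~ -167 dB
print("noise in the 2-4 kHz band    : %6.1f dB" % band_floor_db(2000.0))   # ~ -134 dB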

Comment Re:An exercise for the reader (Score 1) 255

Oh! I remember this one :-) I'll be honest here -- this particular debate is outside my core expertise. I have enough background to say "this is all plausible" (I'm an electrical engineer, after all), and I've discussed it in person with an author of a few papers on the same subject, but I'm only a dilettante when it comes to building amps.

>I guess my point is that it's too easy to make an error when seeing an "interesting idea and no data" and dismissing it.

I agree with you completely. Interesting ideas should be published; no paper is born in the state of being independently verified. I object to those who take these papers as evidence to support a position when no such validation has taken place. Thinking aloud is useful, but thinking aloud != hard data.

Comment Re:An exercise for the reader (Score 1) 255

Well, for the record, I've not been rejected, but I've only published within AES once.

It's not an attack; it's more a statement of truth. The AES publishes all sorts of things: papers with interesting ideas and no data (e.g., the J. Dunn 'equiripple filters cause pre-echo' paper, which presents a fascinating insight even if it doesn't work out in practice), papers with data that are effectively WTFLOL (the famous Oohashi MRI paper), and papers that are more carefully controlled studies. It runs the whole gamut on both sides, just as I said.

Do you deny that a substantial portion of the membership, including many elders of the group, are 'bigger numbers are always audibly better' audiophiles? It was Andy Moorer himself who, with no hard data, kicked off the insane sampling-rate race that now has some hardcore audiophiles wondering if 192kHz is enough... they're holding out for 384kHz!

Is the AES a worthless cesspool? Oh heck no. Never said that. But treating its publications as more than a good industry rag (where it's sometimes hard to tell the research from the advertisements)... or perhaps an advanced debating club... is probably not a very good idea. Treating any one AES paper as gospel is just insane.
