Comment Re:Take it with a grain of salt (Score 1) 179
Why wouldn't this be captured in the prompt?
This feels like an ad masquerading as an article.
Hard to see why any company -- AI or not -- would ever decide to do this on its own.
It's always worth remembering, in these thought experiments, that publicly-traded companies are *already* a loose form of "AI." They combine multiple human intelligences (sometimes thousands) into one decision-making machine, and they use "shareholder value" as the value function they optimize against. These companies *do not* give away money to charity; they return money to the shareholders -- people who formed the company, people who jumped on the bandwagon early on, or people who bought their way in with previously existing capital.
Absent any change in policy, we should not expect an AI firm to behave any differently.
> They recorded a sound-alike BEFORE asking her to use her voice.
Isn't the central allegation here that, in fact, they had been asking to use her voice (and she had been declining) for some time?
Also, if they thought the voice was completely unrelated to ScarJo's, why did they seek her permission repeatedly?
"But what you get is not dollars but this like slice -- you own part of the productivity."
So the workers shall...control the means of production?
Yeah. Remember all those videos of studio heads, shortly after selling to Microsoft, talking about how glad they were for the stability and security of being part of a large organization like Microsoft? No longer are we subject to the whims of the marketplace! Now we just have to meet our internal metrics and we'll be able to stay afloat!
Whoops. Turns out selling out doesn't mean you are any less secure. It does mean you've given up all control, however.
The amazing part of this is that, with a name like "Big River," the rivals being investigated didn't figure out what was going on. They practically announced what they were doing!
I think the fundamental problem is not the quality or detail of the generation but simple supply and demand.
Video games are *already* hundred-hour plus endeavors overflowing with content that 99% of players never see.
Do you really have so much time on your hands that you're going to roam around an endless digital world that just keeps generating itself the further you go? Sure, it'll be fun for a while, but at some point you have *a life* to get back to. The very fact that highly-detailed, context-sensitive content can be generated at the push of a button will *inherently cheapen it* by ratcheting up the supply.
The content people most value will *always be* the content with the highest demand-to-supply ratio. Historically, supply has been so small that demand from the total player base could effectively be treated as infinite. But now that may reverse -- the *supply* becomes near infinite, so demand will be the limiting factor.
In other words -- you can generate incredibly detailed, enormous worlds, with full backstories for every character, but you'll have to beg people to play them.
Yes, that's right! I don't think that's a controversial take at this point. Tons of data turns out to be buried nearly verbatim within LLMs, and it leaks out in surprising ways. I'm pretty sure there are papers directly comparing LLMs to other compression algorithms.
If you access GPT directly via the API (not via ChatGPT), and input the first few sentences of a paywalled article from any major news site, it's pretty easy after a few tries to find an article that GPT can recite verbatim, or nearly so. It's certainly true that LLMs are not *designed* to copy articles exactly, and yet it's also demonstrably clear that as part of the training process, some subset of content is preserved as an exact copy, or nearly so. In any event, the law doesn't care about the process by which your copy was made -- if you distribute something that a reasonable person would consider a copy of an original without permission, you are likely violating copyright.
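The compression analogy above can be made concrete with a conventional compressor as the baseline: text that is redundant with what the compressor has already "seen" shrinks dramatically, just as an LLM can regenerate training text it has effectively stored in its weights. A minimal sketch using Python's zlib (illustrative only; the sample "article" is made up):

```python
import zlib

# A made-up "article" with heavy internal repetition, standing in
# for text a compressor (or a model) has effectively seen before.
article = (
    "Language models are not designed to copy articles exactly, "
    "and yet some training text is preserved nearly verbatim. "
) * 20

raw = article.encode("utf-8")
packed = zlib.compress(raw, 9)  # level 9 = maximum compression

ratio = len(raw) / len(packed)
print(f"raw: {len(raw)} bytes, compressed: {len(packed)} bytes, "
      f"ratio: {ratio:.1f}x")
```

The point is the same one those papers make: a high compression ratio means the content was largely redundant with what the compressor already "knew," which is essentially the claim being made about articles an LLM can recite from its weights.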
> The choices we make
You mean the choices we logically cannot make?
To be super clear, the only thing the first case conclusively proved is that the judge felt the FTC was not "likely" to prevail on the merits in a full case. Both sides had to make abbreviated arguments about whether or not to pause the deal until the full hearing could be held.
The FTC lost the "pause" battle, but is still allowed to do a deeper exploration and attempt to make a more comprehensive argument. Will it fare any better? There's only one way to find out...
“It was hard to see how THIS slave complex,” not “It was hard to see how ANY slave complex.”
Careful reading is a useful skill.
Ah thank you! AGPL is what I was thinking of.
This article is late to the party.
SaaS killed most of the open source movement's momentum a decade ago. The vast majority of code being written at companies these days is as closed as ever -- and it's still on GitHub. It sits in a corporate account, accessible only to employees, hosted and run on a server somewhere, and exposed to users only in the form of API access.
In other words, it's closed source, even if it depends on hundreds or thousands of open source libraries or modules. Most early licenses simply didn't consider the SaaS model when they were devised; they were intended to prevent the distribution of binaries without the distribution of code.
But for the most part, companies simply stopped distributing binaries. Under most licenses, it's perfectly permissible to run the code on behalf of users without ever giving them access to the code or their data.
The AGPL (a GPLv3 variant with a network-use clause) attempted to correct for this, but it never caught on -- that battle, years ago, was the first big sign that the "open source" fight was going to be won by SaaS companies.
"Spock, did you see the looks on their faces?" "Yes, Captain, a sort of vacant contentment."