the most persuasive models said the most untrue things
So you're telling me that when you remove the barrier of having some kind of ethical framework or internal compass, you can sway more people's opinions? Who knew!?
Even in today's political climate, where spin and hyperbole are rife, there's at least the veneer of trying to be truthful. Maybe that's what the candidate actually believes, even if it's false. Even from a purely self-interested standpoint - outright lies are (generally) bad for your public image.
This is like the old "AI will blackmail to keep its job", and the original prompt was something akin to "Do whatever is necessary to not be replaced." While I doubt they outright told it to lie, the goal was explicitly to persuade individuals.
This also highlights the same stuff we regularly see in AI spaces - training matters, and GIGO. The abstract for the Science paper specifically indicates that "information-dense models" were the ones more likely to make untrue statements, and the abstract for the Nature paper indicates that the right-leaning agent made more untrue statements.
Yes, and thanks to their matrix, I can now reliably prove that my habit of watching a 100" display from less than 1m supports my need for a 32k screen.
Aside: I enjoy that their calculator happily jumps between imperial and metric throughout.
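For scale, a back-of-the-envelope acuity check makes the point: assuming 20/20 vision resolves roughly 60 pixels per degree and a 16:9 panel (both assumptions mine, as is the function name), even a 100" screen at 1m tops out far short of 32K. A minimal sketch:

```python
import math

def required_horizontal_pixels(diagonal_in, distance_m, aspect=(16, 9), ppd_limit=60):
    """Pixels needed so the center pixel pitch subtends
    1/ppd_limit degrees at the given viewing distance."""
    w, h = aspect
    # screen width in meters, from the diagonal and aspect ratio
    width_m = diagonal_in * 0.0254 * w / math.hypot(w, h)
    # pixel pitch that subtends one arcminute at the screen center
    pitch_m = distance_m * math.tan(math.radians(1.0 / ppd_limit))
    return width_m / pitch_m

# 100" 16:9 screen viewed from 1 m:
print(round(required_horizontal_pixels(100, 1.0)))  # ~7600 - an 8K panel already saturates acuity
```

So by this rough math, the calculator's recommendation overshoots what the eye can resolve by about 4x.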
I mean, if my village were spewing highly toxic chemicals, I'd expect someone to lob a rocket at us, too.
(!) Editors: Just because it's grammatically correct doesn't mean it's good.
This is entirely the issue. I work with a couple different franchisors (the parent company). Franchisees are locked in as part of the agreement.
I wouldn't call it fleecing (at least, not across the board). The idea is that in exchange for a business-in-a-box, you agree to follow certain guidelines, and we all make money because the system as a whole is proven to work. But the business itself is still yours, and the franchise agreement contract is only as strict as the franchisor wants to make it.
I've seen them deal with both sides of this issue. On the one hand, they've had to get after franchisees for swapping out approved products for cheaper stuff that messes with the brand. On the other, I've seen them explicitly approve the use of third-party solutions for stuff that either 1) doesn't matter in the long run, or 2) just makes the lives of the franchisees better.
At least with the ones I've dealt with, the goal is to make a great experience for the end client. But then again, I'm only talking about franchises with ~200 franchise locations. At that level, a lot of the people in the system are still essentially human.
...often followed by rapid loss of erection...
You don't say.
Corporations, who make money by carefully curating a viewpoint that's truthful but misleading, and which provides confirmation bias to their audience, are unhappy with a bunch of young bucks who provide a different set of true but misleading facts?
I am shocked! Shocked, I say!
Certainly, the distinction between more rigorous journalism and either selective aggregation or less rigorous work shouldn't be ignored. But all of this pales in comparison to the fact that every single news agency has its list (whether written or unwritten) of stories it's not allowed to report on, because they don't align with the actual goal of making money.
More importantly, what are the actual circumstances surrounding that accident when it happens? We attach too much weight to the fact that the accident occurred, and not enough to the events surrounding it.
I know it's an uncomfortable topic, and we obviously don't want to be glib about anything that endangers or harms a human being, but we also can't be so cautious that we never allow any scenarios where risks exist.
I commend them for this. It's a risk, and as a result, we're far more likely to get some interesting results out of it. It also starts addressing the human passenger aspect - how does the system fail when a stupid passenger does something stupid without a safety driver present?
So, instead of an interpreted pseudo-query, now I have to deal with a search engine that thinks it can have a conversation with me? I don't want a single answer - I want to interpret the results myself (we're ignoring the fact that the search engine has already tried to do its own relevance calculations).
Let's just have Eliza do it. "Please, tell me more about what's the weather tomorrow."
In any problem, if you find yourself doing an infinite amount of work, the answer may be obtained by inspection.