This actually *should* be the answer, but unfortunately that ship sailed a couple decades ago. Designers and their managers insist on what they think is subjectively attractive rather than what is functional.
Three examples given, all about doing something that could have been done with a traditional search, but faster. That sounds a lot like "somewhat better than search" to me. Got an example where it's something more than that?
The problem is the tendency to hallucinate. You call yourself TheStatsMan; you should know that LLMs are just statistical engines stringing together words that are statistically likely to follow from the prompt, given the body of text the LLM was trained on. Garbage in, garbage out, as they used to say. If you're doing something fairly common, with plenty of good examples in the training set, the LLM can come up with a reasonable procedure or explanation for you. If you're doing something relatively novel, or something the training set covers badly, you get crap. But it's really convincingly written crap that sure sounds like the real deal. The question is: can you rely on the results?
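To make the "statistical engine" point concrete, here's a deliberately tiny, made-up sketch (nothing like a real LLM's internals, which use neural networks over tokens): it just counts which word follows which in a toy training text and samples a plausible-looking continuation. The output is only ever as good as the statistics of what went in.

# Toy illustration only, NOT a real LLM: a next-word sampler over bigram
# counts from a tiny "training set". The core step is the same, though:
# pick a continuation in proportion to how often it appeared before.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Sample the next word weighted by how often it followed `prev`.
    options = follows[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a "plausible" continuation: fluent-looking, but garbage in,
# garbage out -- it can only echo the statistics of the training text.
word, out = "the", ["the"]
for _ in range(6):
    if not follows[word]:   # dead end: nothing ever followed this word
        break
    word = next_word(word)
    out.append(word)
print(" ".join(out))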
there exists an open source system to securely allow voting and also to absolutely verify that the vote was counted.
We've had that for a while now. It's a non-starter because very few people can actually understand how it works, thus very few people will accept the results of such an election. Even now (in the US) there are still people screaming about how the 2020 and 2024 elections were rigged by shenanigans in counting the ballots, and what we have is a pretty straightforward system. Make the whole thing an entirely opaque crypto system and tell people to trust the "elites" who designed it? Ain't gonna happen.
Doesn't matter if it's mathematically provably correct and unhackable. If the average citizen can't understand it, it's not a working system.
If developing fine motor skills is the actual goal, I'd be all in favor of a class designed specifically for that purpose, rather than haphazardly developing them as the by-product of learning a skill that has little practical value. Or, at the very least, shifting the curriculum to teach a useful skill that also has the side-effect of improving fine motor skills.
Look, I get it. I'm old. I learned cursive in school and wrote entire papers that way. (I also learned to type on a manual typewriter. Get off my lawn!) But these days there's about as much call for cursive as there is for calligraphy. In fact, that's the best way to think about it now. It's a form of calligraphy, a stylistic choice. The proposed bill is simply someone's knee-jerk response to seeing their own hard-learned skills fade into irrelevance. Sorry, dude, the world's moving on.
But we older people often find higher frame rates to be really horrible.
I'd say you're more on the cinephile end of the population bell curve. I'm an old guy on the other end. I don't give a damn about frame rate as long as it's 24 Hz or better. I keep my TV at 720p because from across the room I literally can't tell the difference between that and anything higher. While I admit that I'm on the low end of the scale, I'd wager that the average viewer aged 60+ isn't going to care a whole lot what the frame rate is.
(What I do care about is aspect ratio. I don't care if it's 4:3 or 16:9 or super-widescreen, but if there's a circle on-screen it should be a circle, dammit! Not a squished oval. Thankfully we seem to be beyond the horrible era when people thought that stretching a 4:3 image to a 16:9 frame, or vice versa, was a good idea. I'm probably way off the curve on this one, judging from the sheer number of TVs playing stupid stretching tricks during the transition to LCDs while no one else seemed to care.)
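For anyone who doubts how bad that stretch is, here's a quick back-of-envelope calculation (my own arithmetic, not from any spec): filling a 16:9 screen with a 4:3 picture at the same height means scaling every pixel horizontally by (16/9)/(4/3).

# Back-of-envelope check of how much stretching a 4:3 picture to fill a
# 16:9 screen distorts shapes (same vertical size, width scaled up).
from fractions import Fraction

source = Fraction(4, 3)    # original picture aspect ratio
target = Fraction(16, 9)   # screen aspect ratio

stretch = target / source  # horizontal scale factor applied to every pixel
print(f"Horizontal stretch: {stretch} (~{float(stretch):.0%} of original width)")
# -> 4/3, i.e. a circle becomes an oval roughly 33% wider than it is tall.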
"If you shrink from such a future, by which principle would you justify stifling it?"
(Addressing the author of the article here, not that I'm under any illusion that they're reading this.)
Kudos for the rhetorical trick of framing opposition to your plan as stubbornness or unreasonableness on the part of the questioner. It's clever, but it's a shame that that's not how burden of proof works. It's not up to us to prove your way is wrong; it's up to you to prove that your way is better than what we already have.
You give us a full paragraph of rosy-sounding suppositions. Yeah, if they're all true this method sounds wonderful. IF. Are they? Suppose you give us some evidence to support them? Frankly, the whole thing sounds like a mash-up of The Diamond Age and 1984. I don't think either book describes a society I'd like to live in.
Finally, I know it's just the name of the magazine in which the article is published, but having just re-watched a certain 1970 movie I have a deep distrust of any AI project even tangentially associated with that name.
We're missing the point, I think. It's not that the new model can recreate Slack in 11,000 lines and only 30 hours, it's that the new model can simply work on a single task for 30 hours without shitting itself. Whether or not it produced anything of value in the end is irrelevant.
Weird flex, man. The world of AI corporate one-upmanship is even stranger than the world of AI development, I guess.
The article is truncated by The Verge's "subscribe to see the rest" policy, but the Wayback Machine has it in full. Near the end is an odd quote:
"It's been actually really helpful to have a continuous running prompt that I use of, 'Do a deep web search, come up with like these parameters for profiles to source for certain types of roles on my team,'" Penn said.
Did an AI write that, or is that just what happens to your brain after interacting with an AI for too long? Is it even English? I sure as hell can't parse it.
(Slashdot mangled the Unicode in that quote, of course. The 21st century is a quarter over already. It's high time Slashdot moved into it.)
Yeah, pretty much this. And in order to get halfway repeatable results we'll have to formalize the prompting language. It will just become the next rung on the high-level language ladder and will still require specialists (i.e., programmers) to write it. Because English suuuuuuuuuuucks for any kind of formal description.
Not that I think it's really going to happen. I think coding via AI is going to be another fad that gets relegated to a niche position in the toolkit. Just like all the visual specification and programming languages that have been cropping up since before I started my career in the 1980s.
It's an impossible job from a technology perspective. It requires the bad guys to play nice. You can make a secure system that keeps your data out of the hands of everyone, that's not an issue. But you don't want to keep it out of the hands of everyone. You have it online so you can give it out selectively to people and companies. As soon as you let someone see any part of it, though, that part is no longer under your control. I don't care what fancy permissions and terms of use you have on it, you're just trusting that your wishes are respected. Let's face it, if we could trust companies to play nice we wouldn't be in this situation to start with.
Not possible, technically. It might be possible legally, if lawmakers create and enforce penalties for non-compliance. Europe might do it, but no way such anti-business legislation is going to pass in the USA. Not for another decade at least.
The sum of the Universe is zero.