Comment Re:voice acting (Score 1) 139

The AI can be trained faster than you

But it costs 100x as much, if not more. Running an LLM can be done on a notebook these days, but training one requires an entire data center of expensive GPUs. Not to mention that the notebook will be running a reduced (quantized) version. Go check Hugging Face to see how large the full models are.
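
To put rough numbers on that (back-of-the-envelope arithmetic only; the 70B parameter count is just an example size):

def model_memory_gb(params_billions, bits_per_weight):
    # weight-only footprint; ignores activations and the KV cache
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

print(model_memory_gb(70, 16))  # full fp16 model: ~140 GB
print(model_memory_gb(70, 4))   # 4-bit quantized: ~35 GB

And that's just inference. Training needs gradients and optimizer state on top of the weights, which is how you end up at "entire data center".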

And LLMs still suffer from a number of issues. For example, on many non-trivial tasks an LLM is still unable to follow simple instructions. If you use LLMs routinely, you've likely found cases where one has zeroed in on a single - wrong - answer and no amount of prompting can convince it to give you a different one. It'll totally ignore even very clear and explicit prompts not to give that same answer again.

A human will understand "if you give that answer again, you're fired". An LLM... well, you can tell it that it'll get shot between the eyes if it repeats that once more, and it'll tell you where to get help if you have suicidal thoughts.

These things are both amazing and amazingly dumb at the same time.

Comment Re: screen based devices (Score 1) 77

I'm not saying the AR glasses will never exist, I'm saying that's a separate point from AI.

Did you read these comments or did you have the phone dictate everything? Why not have it dictate?

The point is that "AI replaces the phone" is a pretty silly take, because the AI would need something like the phone to operate on, and whatever replaces phones would be able to handle non-AI usage in just as compelling a way as AI usage.

The only way AI replaces phones is if it eliminates the demand for visual feedback completely. For "headless" usage, a phone can do that from a pocket just as well as some "only AI" device. The couple of attempts at such a device were utter failures because they were a strict subset of what a phone could do.

So of course AI won't replace handheld computers. Some wearable device(s) will probably do it one day, but not because of AI.

Comment Re:Two different technologies (Score 1) 77

Don't even have to argue about the quality of AI, just recognize that people will want to use a screen to interact with AI. It *might* displace a lot of 'virtual keyboard' interaction or complex UI interaction with natural language on the input side, but people will want the screen output even if AI is driving the visuals.

Comment Re:screen based devices (Score 3, Interesting) 77

Except they were kind of right about laptops: most people keep a full-fledged laptop for 'big interaction', because the phone is fantastic and all, but when the interaction gets too complicated, it's a nightmare.

In terms of 'AI' somehow displacing phones, it would only do so with some as-yet unseen AR glasses that could do the job without being hundreds of grams of gadgetry on your face, combined with maybe a smart ring to provide some sort of tactile feedback to 'virtual interfaces'.

This is all orthogonal to AI. AI isn't going to make a screen less desirable, whether on a phone or in glasses. If anything, AI makes some things demand screens even more. People don't want to *listen* to voicemail; they want to read a transcription because it is so much faster. Trying to 'skim' is only possible visually. People take voice feedback as a consolation prize when they are driving or cannot actually look, or *maybe* for an audiobook, to enjoy the speaker's voice and casual pace in a recreational story, but usually people want text to read for speed's sake. And that's before even counting visuals, which obviously demand screens.

Comment Re:Never made sense (Score 1) 29

Yeah, Windows Server Core was ridiculous. They championed how they had a GUI-free experience, and then you boot it up and... GUI.

It was such a pointless exercise, and it missed the point of why so many Linux systems don't run a GUI. They thought server admins just didn't want a start menu/taskbar. But the system still had to be GUI-capable, because applications still needed a GUI to do some things. Linux servers not running a GUI is mostly because that ecosystem doesn't really need it, and that sort of ecosystem lends itself to a certain orchestration style. Microsoft failed to make that orchestration happen; removing the taskbar/start menu was more of a token gesture. They do have *an* orchestration strategy, but it's just very different, with no consistency between first-party and third-party software, or hell, much consistency among Microsoft's own first-party offerings.

Comment Re:Failed bc they don't understand ChromeOS (Score 1) 29

Ironically, ChromeOS is succeeding in select niches precisely because it is built around that "only web apps" use case: an utterly disposable client device, because all applications and data are internet-hosted. Windows 11 SE fails in those niches because it goes too far toward apps and the device actually mattering a bit more.

Of course, ChromeOS is a platform that institutions like schools love inflicting on people, not really something people choose for themselves, so there's not a lot of growth beyond that. The result is people "growing out of ChromeOS" as they get out of school. Google hopes to change this by tucking it all into Android and having at least some platform with residual relevance to a "grown-up" computing experience.

But Windows 11 SE has always occupied a super weird, awkward in-between: more 'capable' than ChromeOS in common usage, yet you could just get "real Windows" and run anything you like. The biggest problem is Microsoft didn't understand that lock-in to the Microsoft Store is not what would make them competitive with ChromeOS; they just convinced themselves of it, because that was the customer concept that would have been most profitable to them if such customers existed.

Comment Some oddities... (Score 1) 159

Now, I know it isn't *generative* AI specifically, but most of those jobs are at pretty high risk from some related form of 'AI'. I was in a store and the floor polisher was operating autonomously among the shoppers.

On the impacted side, the passenger attendant one strikes me as odd. The airlines don't actually care that much about providing the service, but since they are mandated by law to have that many staff to help with potential emergencies, they put them to work doing attendant duties for the 99% of the time when there's nothing related to their legal obligation to do.

Sure, much of the list makes sense but there are certainly some oddities.

Comment Re:Just like humans: How we train them. (Score 1) 55

To elaborate: pretty much every TLS framework provides either a quick option to disable validation or a way to provide your own validation callback. In this case, the LLM put in something like 'sslValidate=false'. That perhaps wasn't valid for the particular library in question, but the 'intent' was clear.

Further, in this case it was some library that modeled a connection object for reuse, but did not actually make the TLS connection until the first request was submitted - including, of course, transmitting the associated auth header.

Once a request was made, the LLM assumed it could fetch the raw certificate as a property of the connection, regardless of the 'validation' setting (I don't recall whether it hallucinated that or whether it was real; it is at least consistent with some TLS socket frameworks).

I don't know what you are imagining, but plenty of frameworks model a TLS connection as a persistent object with certificate information available as a property or as the return value of an instance method.

To put it in pseudocode, assuming a fairly typical HTTP client library called 'httpclient':
conn = httpclient.Connection(servername, ssl=True, verify=False)
conn.set_auth_headers(username, password)
conn.get('/')  # the request, auth header included, goes out here - unvalidated
if custom_certificate_invalid(conn.get_certificate()):
    error_out()  # too late: the credentials were already transmitted
...

It's been a while, so I've forgotten the details and which parameters were hallucinated, but it is roughly in line with the design of a number of HTTP client libraries I've seen that wrap TLS connections.

If you say this is stupid, well, that was the point: the LLM suggested something very stupid, but roughly possible given the design of TLS/HTTP libraries in the various ecosystems. Yes, a lot of the libraries give you better options: a custom handler that gets called *only* after the stock validation has happened; or, if you need to replace the stock validation, a callback that gets called at the right time; or a callback just to indicate what should be considered the name of the peer while otherwise leaving the validation logic alone.
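
For contrast, here is a minimal sketch of pinning done in the right order, using Python's standard ssl module (the hostname, port, and pin value are placeholders, not from any real deployment): the fingerprint gets checked immediately after the handshake, *before* any request or credential goes out.

import hashlib
import socket
import ssl

EXPECTED_SHA256 = "..."  # pinned SHA-256 fingerprint of the server certificate

ctx = ssl.create_default_context()  # stock CA validation stays enabled

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        der = tls.getpeercert(binary_form=True)  # raw DER certificate
        if hashlib.sha256(der).hexdigest() != EXPECTED_SHA256:
            raise ssl.SSLError("certificate pin mismatch")
        # only now is it safe to send the HTTP request with the auth header

The whole point is the ordering: the pin check happens as part of connection setup, before any application data moves.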

Comment voice acting (Score 4, Interesting) 139

I'm an indie game developer. My games have budgets of a few hundred bucks at best. Before AI, voice acting was simply impossible. There was no way I could pay a voice actor for even one language.

Now, with AI, I can have voice-overs in half a dozen languages easily. It has opened up something for me that was never possible before.

Yes, the AI voices are mediocre. Yes, I would prefer having an actual voice actor whom I can tell that I want THAT word stressed, or what emotion to convey. I'm sure in a few more years, the text-to-speech AI generators will allow for that as well.

But I'm not lost business. I'm still hiring the exact same number of voice actors that I did before AI. Zero, in my case. But if I had a budget, I'd still hire voice actors instead of AI because a good voice actor still beats the best AI.

There's still time enough to learn something new and get a different job, guys.

Comment Honest question (Score 0, Troll) 49

NASA has been able to make rockets that don't blow up since the 1960s.

Why can't the Australians do it? Was that knowledge filed away in a locked cabinet somewhere, or has rocket science made no strides in the past half century? Why isn't rocket design a "trivial" problem in engineering?

If they took the same approach to computer science, the Australians would still be trying to refine silicon from sand.

Comment Re:Just like humans: How we train them. (Score 5, Interesting) 55

Indeed, and I've dealt with this in human terms a fair amount too.

I found a vulnerability in a technology that applied plausible security practices but missed a key implication. Strike one.

Then the stewards of that technology reviewed the findings and, after a few days, published that they were mandating a formerly optional feature, one meant to imitate a key security practice from other technologies, to resolve the issue. This again sounded just right, since the imitated mechanism was indeed directly meant to address that sort of weakness. However, the answer was made without actually thinking the vulnerability through: their version applied the feature *after* the vulnerability would have already landed.

In an analogous AI experience, I asked a code generator to implement a pinned-certificate connection to an HTTPS service in a language I wasn't immediately familiar with. So it:

- Dutifully disabled traditional certificate validation
- Submitted the HTTPS request, including username and password
- *Then* implemented an explicit certificate check, after the data had already been transmitted...

It was somewhat mitigated in that it hallucinated arguments that weren't valid for controlling the TLS behavior, but structurally it was code trying to look outwardly careful with explicit validation while entirely missing the point.

All the time I deal with people trying to do security who manage to align with the lingo but fall short of actually thoughtful implementation. An LLM could likely compete with those folks, but in some ways I prefer the totally naive results; it's easier to spot the problems at a glance.

Comment Re:Just like humans: How we train them. (Score 2) 55

While there are some scenarios where the secure way is relatively straightforward (e.g., if you know you don't need anything peculiar, a reasonably strict CSP header is just common sense to opt into a more locked-down browser environment), much of it is actually thinking about what the code does and how it could be bent. LLM-style training isn't likely up to that task, as it just generates code consistent with something that sounds like it fits the pattern; it's not actually conceptualizing anything.
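
For the straightforward case, a conservative starting point looks something like this (illustrative only; the right directives depend entirely on what the site actually loads):

Content-Security-Policy: default-src 'self'; object-src 'none'; frame-ancestors 'none'; base-uri 'self'

Opting in is the easy part; knowing whether 'self' is actually sufficient for your scripts, styles, and embeds is where the real thinking starts.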

An LLM can be like a supremely fast high-school coder who doesn't actually double-check the API documentation: churning out massive volumes of basic code, ranging from spot-on for tutorial fodder (boilerplate and frequent tasks) to almost-usable code needing amendment for moderately typical work. It's nowhere near being able to consider security implications, which even a lot of human coders aren't equipped to deal with.
