Comment Re: screen based devices (Score 1) 78

I'm not saying the AR glasses will never exist, I'm saying that's a separate point from AI.

Did you read these comments or did you have the phone dictate everything? Why not have it dictate?

The point is that "AI replaces phone" is a pretty silly take, because the AI would need something like the phone to operate, and whatever replaces phones would be able to handle non-AI usage in just as compelling a way as AI usage.

The only way AI replaces phones is if it eliminates the demand for visual feedback completely. For "headless" usage, a phone can do that from a pocket just as well as some "only AI" device. The couple of attempts at such a device were utter failures because they were a strict subset of what a phone could do.

So no, AI won't replace handheld computers. Some wearable device(s) will probably do it one day, but not because of AI.

Comment Re:Two different technologies (Score 1) 78

Don't even have to argue about the quality of AI, just recognize that people will want to use a screen to interact with AI. It *might* displace a lot of 'virtual keyboard' interaction or complex UI interaction with natural language on the input side, but people will want the screen output even if AI is driving the visuals.

Comment Re:screen based devices (Score 3, Interesting) 78

Except they were kind of right about laptops: most people still have a full-fledged laptop for 'big interaction', because the phone is fantastic and all, but when the interaction gets too complicated, it's a nightmare.

In terms of 'AI' somehow displacing phones, it would only do so with some as-yet unseen AR glasses that could do the job without being hundreds of grams of gadgetry on your face, combined with maybe a smart ring to provide some sort of tactile feedback to 'virtual interfaces'.

This is all orthogonal to AI; AI isn't going to make a screen less desirable, whether on a phone or in glasses. If anything, AI makes some things demand screens even more. People don't want to *listen* to voicemail, they want to read a transcription because it is so much faster; 'skimming' is only possible visually. People take voice feedback as a consolation prize when they are driving or cannot actually look, or *maybe* for an audiobook, to enjoy the speaker's voice and casual pace in a recreational story, but usually people want text to read for speed's sake. And that's ignoring visual content, which obviously demands screens.

Comment Re:Never made sense (Score 1) 29

Yeah, Windows Server Core was ridiculous. They championed how it offered a GUI-free experience, and then you boot it up and... GUI.

It was such a pointless exercise, and it missed the point of why so many Linux systems didn't run a GUI. They thought server admins just didn't want a start menu/taskbar, but it still needed to be GUI-based because applications still needed a GUI to do some things. Linux servers not running a GUI was mostly because that ecosystem doesn't really need it, and that sort of ecosystem lends itself to a certain orchestration style. Microsoft failed to make that orchestration happen and just removed the taskbar/start menu as more of a token gesture. They have *an* orchestration strategy, but it's very different, with no consistency between first-party and third-party offerings, or hell, much consistency even among Microsoft's own first-party offerings.

Comment Re:Failed bc they don't understand ChromeOS (Score 1) 29

Ironically, ChromeOS is succeeding in select niches precisely because it is built around that "only web apps" use case: an utterly disposable client device, because all applications and data are internet-hosted. Windows 11SE fails in those niches because it leans too far into apps and into the device itself mattering a bit more.

Of course, ChromeOS is a platform that institutions like schools love inflicting on people, not really something people choose for themselves, so there's not a lot of growth beyond that. The result is people "growing out of ChromeOS" as they get out of school. Google hopes to change this by tucking it all into Android, so there's at least some platform with residual relevance to a "grown up" computing experience.

But Windows 11SE has always been in a super weird, awkward in-between: more 'capable' than ChromeOS in common usage, yet you could just get "real Windows" and run anything you like. The biggest problem is that Microsoft didn't understand that lock-in to the Microsoft Store is not what would make them competitive with ChromeOS; they convinced themselves of it because that was the customer concept that would have been most profitable to them, had those customers existed.

Comment Some oddities... (Score 1) 163

Now I know it isn't *generative* AI specifically, but most of those jobs are at pretty high risk from some related form of 'AI'. I was in a store where the floor polisher was operating autonomously among the shoppers.

On the impacted side, the passenger attendant one strikes me as odd. The airlines don't actually care that much about providing the service, but since they are mandated by law to have that many staff on hand for potential emergencies, they put them to work doing attendant duties for the 99% of the time when nothing related to that legal obligation is happening.

Sure, much of the list makes sense but there are certainly some oddities.

Comment Re:Just like humans: How we train them. (Score 1) 55

To elaborate, pretty much every TLS framework provides either a quick option to disable validation or a way to provide your own validation callback. In this case, the LLM put in something like 'sslValidate=false'; that may not have been valid for the particular library in question, but the 'intent' was clear.

Further, in this case it was a library that modeled a connection for reuse but did not actually make the TLS connection until the first request had been sent, including, of course, the associated auth header.

Once it made a request, the LLM at least assumed it could fetch the raw certificate regardless of 'validation' being a property of the connection (I don't recall if it hallucinated that or if it was real; it is at least consistent with some TLS socket frameworks).

I don't know what you are imagining, but plenty of frameworks model a TLS connection as a persistent element with certificate information available as a property or via an instance method.

To put it in pseudocode, assuming a fairly popular HTTP client library called 'httpclient':

conn = httpclient.Connection(servername, ssl=True, verify=False)
conn.set_auth_headers(username, password)
conn.get('/')
if custom_certificate_invalid(conn.get_certificate()):
    error_out()

It's been a while so I forgot the details and which parameters were hallucinated, but it is roughly in line with the design of a number of HTTP client libraries I've seen as they wrap TLS connections.

If you say this is stupid, well, that was the point: the LLM suggested something very stupid, but roughly possible given the design of TLS/HTTP libraries in the various ecosystems. Yes, you have better options in a lot of libraries, like a custom handler that gets called *only* after the stock validation has happened; or, if you need to replace the stock validation, a callback that gets called at the right time; or a callback just to indicate what should be considered the name of the peer while otherwise letting the validation logic be normal.
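For contrast, the safe ordering can be sketched with Python's standard-library ssl module (the 'httpclient' library above is hypothetical; this sketch assumes you pin the server's exact certificate by SHA-256 fingerprint, and the function names here are my own):

```python
import hashlib
import socket
import ssl


def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint (hex) of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()


def connect_pinned(host: str, port: int, pinned_sha256: str) -> ssl.SSLSocket:
    """Open a TLS connection and check the pin BEFORE anything is sent.

    Stock CA/hostname validation is disabled only because we pin the exact
    certificate instead; the pin is checked right after the handshake,
    before any headers or credentials ever cross the wire.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False            # replaced by the explicit pin below
    ctx.verify_mode = ssl.CERT_NONE
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)  # still send SNI
    der = sock.getpeercert(binary_form=True)
    if der is None or fingerprint(der) != pinned_sha256.lower():
        sock.close()
        raise ssl.SSLError("certificate pin mismatch")
    return sock  # only now is it safe to send auth headers
```

The key structural difference from the pseudocode above: the certificate check happens between the handshake and the first write, not after a request has already carried the credentials.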

Comment Re:Just like humans: How we train them. (Score 5, Interesting) 55

Indeed, and I've dealt with this in human terms a fair amount too.

I found a vulnerability in a technology that applied plausible security practices but missed a key implication; strike one.

Then the stewards of that technology reviewed the findings and, after a few days, published that they were mandating a formerly optional feature meant to imitate a key security practice from other technologies to resolve the issue. This again sounded just right, as the imitated mechanism was indeed directly meant to address that sort of weakness. However, the answer was made without actually thinking through the vulnerability, as their version applied the feature *after* the vulnerability would have already landed.

In an analogous AI experience, I asked a code generator to implement certificate pinning for an HTTPS service in a language I wasn't immediately familiar with. So it:

-Dutifully disabled traditional certificate validation
-Submitted HTTPS request, including username and password
-*Then* implemented an explicit certificate check, after the data had already been transmitted...

It's somewhat mitigated because it hallucinated arguments that were invalid for controlling the TLS behavior, but structurally it was code that was trying to be outwardly careful with explicit validation but entirely missing the point.

All the time I deal with people trying to do security who manage to align with the lingo but fall short of an actual thoughtful implementation. An LLM could likely compete with those folks, but in some ways I prefer the totally naive results; it's easier to spot the problems at a glance.

Comment Re:Just like humans: How we train them. (Score 2) 55

While there are some scenarios where the secure way is relatively straightforward (e.g. if you know you don't need anything peculiar, a reasonably secure CSP header is just common sense to opt into a more locked-down browser environment), much of it is actually thinking about what the code does and how it could be bent. LLM-style training isn't likely up to that task, as it just generates code consistent with something that sounds like it fits the pattern; it's not actually conceptualizing things.
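To make the "common sense opt-in" concrete, here's a minimal sketch (directive choices and helper name are my own, assuming a site that serves all its own assets) of a reasonably strict baseline CSP, serialized the way the header expects:

```python
# A strict baseline for a site serving everything from its own origin.
CSP_DIRECTIVES = {
    "default-src": "'self'",        # fallback for all fetch directives
    "script-src": "'self'",         # no inline or third-party scripts
    "object-src": "'none'",         # no plugins/embeds
    "base-uri": "'none'",           # no <base> tag rewriting
    "frame-ancestors": "'none'",    # nobody may frame this site
}


def csp_header(directives: dict[str, str]) -> str:
    """Serialize directives into a Content-Security-Policy header value."""
    return "; ".join(f"{name} {value}" for name, value in directives.items())
```

Attach `csp_header(CSP_DIRECTIVES)` as the `Content-Security-Policy` response header in whatever framework you use; the thinking part is deciding which directives your actual site can live with.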

An LLM can be like a supremely fast, basic high-school coder who doesn't actually double-check the API documentation, churning out massive volumes of code that ranges from spot-on for tutorial fodder (boilerplate and frequent tasks) to almost-usable code that needs amendment for moderately typical work. That's nowhere near being able to consider security implications, which even a lot of human coders aren't equipped to deal with.

Comment Re:Before you rail on this... (Score 1) 124

The thing is, LLMs don't really demand a great deal of 'literacy', so it's a bit silly to devote a lot of cycles to teaching it. It's kind of like back in the day when you had a whole course devoted to learning Microsoft Word; that was ridiculous.

One of the biggest areas for getting used to LLMs is also one that academic settings are least well equipped to handle: how incorrect results manifest. Academic fodder tends to play to LLMs' strengths, and trying to find counter-examples that illustrate an LLM breaking down is whac-a-mole, as any noteworthy known example gets amended and/or becomes unreliable to reproduce in the wild. So you may be stuck going over captured examples rather than having the students experiment much with the phenomenon themselves.

When we are learning math, we start by forbidding use of a calculator. Not because calculators are bad, but because we need to foster independent thought first. University probably should be mostly "LLM off" since the point is not to get academic material produced as quickly as possible, but to have people internalize some sampling of academic experience and equip them to operate independently.

Comment Re:Isn't that the point? (Score 1) 159

Depends.

Is it the case that everyone can stop working hard, or that some people get to do that while others have to work even harder to support it?

If Norway gets to slack off and have a fantastic lifestyle, but only because they can import just tons of crap from more exploitative countries, that wouldn't be good.

If everyone around the world gets to slack off and still sustain a great lifestyle, then fantastic. That is, in fact, the goal: we should be thinking not "can I get enough work?" but "can I have a good living?", though it needs to be applied as consistently as is feasible across the board.

This was one of the points in the story "Manna" that drove me nuts. Among other issues, it describes with sincerity a utopian socialist paradise that will only take so many people because that's all it can afford, leaving behind, to suffer in dystopian societies, those who didn't buy in back when it could have just been a scam. The story explicitly expresses disdain for the unseen wealthy class for ignoring the plight of the less fortunate, then promptly has only a few people saved into the utopia while most are left behind in poverty, not helped either. Arguably the utopia does even less, as the poverty class at least had housing and sustenance provided by the dystopia, while the utopia explicitly left them as "someone else's problem".

Comment Re:Yet another example of... (Score 1) 159

If you say the numerical representation of wealth can endlessly grow, sure, that's just a number. There's no such thing as objective, true numerical value, so if you have a system that is predicated on "line always goes up", you can make that system function even under static constraints by somehow changing how the numbers map to reality.

To his credit, he points to non-money indicators, like ability to succeed on tests.

It is fair to say that we can't consume exponentially more resources or create exponentially more physical goods every year, but ultimately we want to advance our standard of living and folks to do their fair part, so you don't end up with a hypothetical like one nation enjoys the high life with 8 hours worked per week while importing gob tons of stuff from another country where people work 80 hours a week.

Being sustainable can and should be part of the model. If we had just coasted on our advancements from the '50s, we wouldn't have a lot of the potentially sustainable advancements we have now.

It's certainly a very weird thing to try to distill something as nuanced and complex as all of our resources, time, and thought into a single metric, yet we try, and that number is flexible enough to, in theory, support infinite growth. The problem is less the absolute value than the discrepancies in how those values manifest from person to person.
