Comment Re:Good (Score 1) 54

Eh, you're preaching to the choir. I haven't watched a Hollywood movie made in the last fifteen years. The last one I saw in a theatre was LOTR:ROTK. I've seen a couple of movies more recently than that, but they were old ones.

When the thirty-second trailer looks boring and heavily derivative, it's pretty difficult to imagine that the actual movie could hold my interest for over an hour. Sorry, not interested. Do you know how long it's been since I saw a movie trailer or advertisement that made me want to watch the movie?

Comment Re:So what? (Score 1) 244

Eh.

Some cultural features are neutral, e.g., what you do about shoes when entering a home. Do you take them off and leave them at the door because shoes are dirty? Do you keep your shoes on because you have guests and you don't want them to have to look at your feet because feet are disgusting? If you do take off the shoes, does it matter how you line them up, and which direction the toes point? This all depends how you were raised; it's neither good nor bad, it's just culture.

But some cultural features are actively good for society. Japanese culture has features that lead to a low crime rate, for instance, and that's a good thing. Many cultures value things like integrity (particularly, keeping your word) and hospitality, and these are positive values that are good for society. I think it follows that there can also be cultural features that are bad for society. To avoid offending foreigners, I'll pick on my own society for an example here: in American culture, it's normal for people to deliberately lie to their children about important ontological issues, for entertainment purposes. That's *evil*, but almost everyone here does it. (One of the best examples of this phenomenon is Santa Claus.)

Bringing it back around to the Persian example from the article, my question would be: why is it that humans native to the culture only get this right 80% of the time? AI getting it wrong most of the time doesn't bother me; that's the AI companies' problem, and phooey on them anyway, so what. But if humans native to the culture are missing it 20% of the time, that makes it sound like some kind of esoteric, highly situational interaction that regular people wouldn't have to deal with on anything resembling a regular basis; but no, we're talking about a basic social interaction that people have to handle every day. Something seems off about that. That's a lot of pressure to put on people: undertake something that difficult, be expected to get it right every time, and then catastrophically fail one time out of five. I don't think I'd want to live under that kind of social pressure.

Comment Re:Good (Score 1) 54

Eh, I kind of hope Hollywood goes all-in on AI-generated content, tbh. They haven't produced much that's worth watching any time recently anyway, and if they go under, maybe it'll clear the way for better content creators to rise to prominence, maybe even someone who can figure out how to write a script from scratch that is NOT the eighty-third sequel to a mediocre nineties action movie or the twenty-seventh reboot of a superhero franchise.

Comment Eh. (Score 1) 109

On the one hand, yes, the job market *is* a bit down right now, and yes, getting a job, especially a decent one, has always been more difficult when you don't have any meaningful work experience yet.

But I don't think it's really significantly worse, at least here in the Midwest, than in past generations. The young people I know generally have been able to find work commensurate with their qualifications, to an extent that is pretty comparable to what I've seen in the past, most of the time. Occasionally, somebody in a previous generation has gotten lucky, had an easier time, and gotten snapped up for basically the first real job application they filled out, because the economy was up or whatever (my own experience getting an IT job in 2000 is an excellent example of this), but that has always been the exception rather than the rule. For most of history, getting your first really _good_ job has been difficult, and it often required you to work a not-so-fantastic job for a few years first. (Heck, I worked fast food for several years, including a couple of years after getting my degree, before I lucked into that IT job. I've never regretted having that in my background, though I'm certainly pleased it didn't end up being my entire career.)

On the gripping hand, my experience with Gen Z is that in terms of employment opportunities, they aren't really any more entitled, on average, than Millennials were at the same age. Somewhat less so, if anything. If there's an aspect of their attitude that's worse, it's more social than professional, and it's related to how much they expect other people (especially casual acquaintances, like coworkers) to care about learning and accommodating all their personal idiosyncrasies that aren't work-related. But this could be my Gen-X bias coming through: we were taught to only reveal personal stuff to people we're actually close to. We expected our phone numbers to be public knowledge, but we kept our personal feelings private. Gen Z is pretty much the reverse.

Comment Re:One non-inconsistent observation != PROOF (Score 1) 40

> "Proves" might be too strong

Different fields have different standards of proof. The most rigorous that I'm aware of is in mathematics, wherein a proposal that almost all the experts think must surely end up being true can be heavily studied and yet remain "unproven" for an arbitrarily large number of centuries, until eventually someone finds an actual real-world use case for the math that you get if it's NOT true. (The poster child for this is non-Euclidean geometry, but there are lots of other examples.)

There's an old joke about three university professors from England who took a trip up north together, and on their way out of the train station, the journalism professor looked over at some livestock grazing on a hill and said, "Oh, look, the sheep in Scotland are black!" The biology professor corrected him, "Some of the sheep in Scotland are black." But the math professor said, "There exist at least three sheep in Scotland, and at least three of them appear black on at least one side, at least some of the time."

Comment Re:Hurry up already (Score 1) 243

Sorry, no, that isn't the issue either. The problem the OP is running into is much, much more basic than that.

Forget, for a moment, that the ports are USB ports and that the peripherals are USB peripherals, because as long as they match up (which they do, in the OP's scenario), none of that is the problem. The specific number of ports doesn't even matter; we can abstract away the 4 (or 2 + 2, same difference) and just call it N. The problem is that he's got N ports, and N peripherals that he wants to keep plugged into ports all the time, and that leaves N - N ports available to plug anything else into if he needs to plug something in temporarily. But N - N is 0, so something has to be unplugged to free one up. That's a number-of-ports problem, entirely irrespective of the port type.

If you were proposing replacing the 2 USB-A ports with a *larger* number of USB-C ports, then your argument might have some relevance. But just changing the type of port won't bend the arithmetic in any useful direction. They could be upgraded to the new USB type K ports introduced in 2042, and it still wouldn't solve the problem: if there are still four ports and four all-the-time peripherals, there still won't be any unoccupied ports available for temporarily plugging in transitory things.
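To put the same arithmetic in code, here's a trivial sketch (my own illustration, with made-up numbers): the count of free ports depends only on how many ports there are and how many peripherals stay plugged in, never on which connector type the ports happen to be.

    # Toy sketch: free ports are a function of counts, not connector types.
    def free_ports(total_ports: int, always_plugged: int) -> int:
        """Ports left over for plugging something in temporarily."""
        return total_ports - always_plugged

    # The OP's situation: 4 ports (2 + 2, same difference), 4 permanent peripherals.
    print(free_ports(4, 4))  # 0 -- nothing free, whether they're USB-A, USB-C, or "type K"
    # Only changing the number of ports changes the answer:
    print(free_ports(6, 4))  # 2 -- e.g. a hub, or a machine with more ports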

At least USB is (mostly) hot-pluggable. But, again, that's as true of A as it is of C.

Comment Re:SAT Sucks (Score 1) 115

This has probably changed over time. My impression when I was taking college entrance tests was that the ACT tested what you knew (i.e., memorized facts), but the SAT tested _how you think_ (i.e., how good you were at figuring things out). But that was in the early nineties, and not very much later they changed some things that, among other things, resulted in more students getting higher scores, which I think was the goal back then, too. They had a lengthy explanation about keeping the test relevant to the changing expectations of modern institutions of higher learning, but reading between the lines, it seemed like the main outcome was giving out higher scores.

Comment Re:Windows 11 runs in 4gb of RAM (Score 1) 62

Eh. Win11 with 8 GB of RAM might work if you never connect it to the internet, or find some other way to block Windows Updates from ever happening. (Maybe once it goes EOL, they will stop issuing updates?)

But every time it starts downloading Windows Updates, it's going to try to store half the internet in system memory, and the platform's horrible virtual memory system is going to consistently swap out the page that's going to be needed next, every single time. The download that *should* take a few minutes is going to take more than a week, during which time you can't use the computer for anything else because it'll be constantly swapping like it's 1996 all over again; and by the time the whole update is downloaded and installed, Microsoft will release another update. Rinse, repeat.

4 GB of RAM, I can only assume, would be worse, if that's even possible.

16 GB is mostly usable for basic computer tasks like browsing the web. Mostly. But it's not great. I consider 32 GB to be the practical minimum if you want anything resembling decent performance on Windows 8/10/11.

Linux systems can run on 8 GB of RAM. Heck, depending on what you're doing, 1 GB of RAM will do ok, though that's not going to give you much of a desktop environment. It's fine for a lot of headless roles, though.

Comment Re:What about 32-bit Raspberry Pi? (Score 1) 40

I don't think i386 builds would be usable on an ARM system anyway.

At worst, you can always just compile your own. Granted, the Mozilla codebase is (last I checked, which admittedly has been several years) significantly more of a pain to compile than the average open-source project, but it's not _prohibitively_ difficult. You do have to read a few lines of documentation and maybe edit a small config file, but there's nothing really tricky about it. Frankly, it's easier than installing most third-party binaries. Also, if you're using a distro that's made for your hardware, like Raspbian or whatever, it'll probably just have a package in the repo.

Comment Re:Old! (Score 1) 40

Yeah, I'm surprised and a bit disturbed that this hadn't happened long before now. Linux distros pretty consistently started compiling everything for amd64 pretty much as soon as users had the hardware for it. There was no downside, because all of the software that everyone was using had source code readily available and could be compiled for the new system. It all has to be recompiled anyway every time a major library (such as libc) gets any kind of really substantial update. When you *have* the source code for everything, and everyone *knows* that you have the source code for everything, and your system includes a full working build chain out of the box, and compiling software from source is an extremely *normal* thing to do (to the point where people who aren't developers, and couldn't read any of the source code, have no trouble building it), it turns out that hardly anybody bothers to maintain long-term ABI compatibility, because there's no compelling reason to do so. You (the package maintainer or whoever) can just do a fresh compile every time anything gets updated; you were almost certainly going to do so anyway. Even if the update only changes the documentation, you just build the thing, because it's significantly easier to build the thing every time than to bother figuring out whether you need to.

Certain other systems, which I don't need to name, took a decade or more (after folks had amd64 hardware) to transition over to widespread deployment of a 64-bit OS, and are *still* routinely using 32-bit applications in 2025; but that is for reasons that have never been relevant for Linux users. A lot of the users of such systems would probably find the above paragraph just about as baffling as the WWII Japanese naval commanders found the news that the Americans had entire ships dedicated to making ice cream. "They can compile software so easily that they do it when they don't even need to? They've already won the war, we just hadn't realized it yet."

Comment Re:What does it do? (Score 1) 92

A computer or robot with human-like intelligence isn't something anyone knows how to start designing. At all. We're no closer to that today than we were in the seventies. No one even knows how to start doing research that would eventually lead to knowing how to make that. (No, LLMs are not heading in that direction. At all. Stop reading OpenAI press releases. LLMs generate output that is statistically similar to the training data. That's all they do. That's all they will ever do. It's cool and impressive when it works well, and it has some uses, but it's not even remotely similar to general-purpose intelligence, not even *vermin*-level intelligence, much less human-level.)

Also, as technology in general and robotics in particular have advanced, we have consistently moved, over time, further and further away from the idea that a humanoid form factor makes for a particularly useful type of robot. Humanoid robots are usually created in quite small numbers (often one-of-a-kind) and very consistently exist mainly to be shown off for publicity purposes. They almost never do anything really practical, and even when they do, they don't do it as well as other robots. Some of them (e.g., ASIMO) are subjectively really cool, granted. But they're not practical.

The human body is enormously practical, but mostly for reasons a robotic system can't mimic at anything even vaguely in line with our current level of technology. Superficial things like the bipedal shape with arms and hands and fingers mostly do what we need them to do, but they are not the main selling points of the design, compared to, say, a robot with treads and one arm.

Comment Re:Take cover (Score 1) 47

> and because they don't want to reveal just how
> hot-garbage their underlying code is

Code quality isn't the issue. I have no idea what their code quality is; it might be fine, it might be terrible. The reason I can't tell is that code quality has nothing at all to do with the problem with their results.

The fundamental problem is that they've been actively trying to convince a lot of people, up to and including their shareholders, that the product is a fundamentally different thing than what it actually is. They use fancy terminology that most people don't really understand, like "neural network", to actively disguise the fact that the product is, at its core, basically just running statistics and spitting out statistically likely combinations of tokens. It's _basically_ a really heavily souped-up Markov chain generator, on really powerful steroids. The most important steroid in question is an absolutely stupendous quantity of training data. But there are also some more clever things going on, e.g. with the details of how the data are tokenized, and I think they're more clever about how combinations of tokens work than the flat one-dimensional sequence of a traditional Markov chain. All of these enhancements make the output feel much more similar to real human speech or writing or whatever than was possible even ten years ago. But fundamentally, that's all the thing is doing: generating output that's statistically similar to the training data.
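To illustrate the Markov-chain analogy (this is my own toy sketch, not anything from the companies' actual code; real LLMs work on subword tokens and condition on vastly more context), here's the flat one-dimensional version in a few lines of Python: count which word follows which in the training text, then emit statistically likely continuations.

    # Toy word-level Markov chain: "generate output statistically similar to
    # the training data", minus everything that makes LLMs impressive.
    import random
    from collections import defaultdict

    def train(text):
        table = defaultdict(list)
        words = text.split()
        for current_word, next_word in zip(words, words[1:]):
            table[current_word].append(next_word)  # duplicates encode frequency
        return table

    def generate(table, start, length=12):
        out = [start]
        for _ in range(length):
            followers = table.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))  # sample in proportion to training counts
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the rug"
    print(generate(train(corpus), "the"))

Swap the word table for subword tokens, replace the lookup with a model conditioned on thousands of prior tokens, and train on a stupendous pile of text, and the output feels very different; but the generation step is still "pick a statistically likely next token."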

There are definitely some actual uses for this technology, but they're not even vaguely commensurate with some of the things the companies involved want you to *imagine* the technology can do; and they never will be, no matter how much the technology matures. LLMs are not a path to general-purpose AI, no matter how much people want them to be. We're not materially closer to knowing how to create general-purpose AI than we were in the seventies. For that, we're still waiting on some fundamental and completely unpredictable breakthrough. This doesn't mean the technology is useless; it's not; it has uses. And as it continues to mature, it'll be even more useful for the sorts of things it's useful for. But it's not going to make humans obsolete, or do a lot of the other preposterous things the hype machine predicts.
