
Comment Using FireFox to read this thread! (Score 4, Interesting) 170

If I'm honest about it? I feel like it's been years since any one web browser felt "better" than another to me for technical reasons like speed/performance or its ability to work properly with the web sites I needed to use.

My preference for Firefox has more to do with things like the UI layout and the way it "compartmentalizes" certain things. (E.g., on a Windows platform, it still manages SSL certificates in its own store, rather than sharing the common set stored and managed by Windows itself.) The fact that it's NOT another Chromium-based browser makes it handy for troubleshooting, too. (If I have issues with a web site, I like to have both a browser like Edge or Chrome AND Firefox on hand so I can test it with both web engines.)

Who are these people who care SO much about how fast a browser renders content, anyway? It's the ongoing joke over on the Apple forums with the Safari browser.... "New macOS release makes Safari snappier!" On any non-prehistoric computer, a browser performing poorly almost always has more to do with the speed of the Internet connection itself, memory pressure from somebody leaving a million tabs open, or poorly written web site code. I don't care what a stopwatch says. I care about the overall user experience, and it's fast enough in any decent browser.

Comment Re:ChatGPT is not a chess engine (Score 1) 132

My job doesn't have much to do with this at all. All humans engage in motivated reasoning and other cognitive biases. But it is also very easy to think someone one disagrees with is engaging in some sort of cognitive error even when they are not. So instead of just labeling this as motivated reasoning, maybe you could explain what is wrong with the point I made?

Comment Re:What? (Score 1) 278

Not just a question of legality, but of ethics. Jimmy Carter had to give up his peanut farm out of concern over conflicts of interest. Obama also had a policy of turning away even gift books that authors sent to the White House while he was President. George W. Bush had a policy almost as strict as Obama's. How far we've come from that point.

Comment Good way of getting a list of companies to avoid (Score 1) 32

The current AI systems have some definite use cases, but right now, outside of some very narrow areas (such as some customer-service-oriented jobs and some of the more basic programming jobs), the efficiency increases are too small to reasonably justify reducing headcount based on them. Seems like a good way of identifying areas where management is on a hype train, which can cause real damage to those companies and the quality of their services.

Comment Re:ChatGPT is not a chess engine (Score 1) 132

Pages of instruction are not the only thing that matters. Lots of humans don't learn well from simply reading instructions. And since ChatGPT doesn't have a good visual representation of the board, this is equivalent to trying to teach a human who has never played chess to learn the game without a visual board, keeping track of moves only through move notation. Even some strong chess players have trouble playing chess in their heads this way.
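
For what it's worth, here's a minimal sketch of what "seeing" a game only through notation looks like, using the python-chess library (the opening moves are just an arbitrary example I picked for illustration):

    import chess

    # The only "view" of the game is the move list itself; the board state
    # has to be reconstructed purely from the notation.
    board = chess.Board()
    for san in ["e4", "e5", "Nf3", "Nc6", "Bb5"]:
        board.push_san(san)  # raises a ValueError if a move is illegal

    print(board.fen())                   # the full position, recovered from notation alone
    print(len(list(board.legal_moves)))  # how many legal continuations exist here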

Comment Re:ChatGPT is not a chess engine (Score 1) 132

Obnoxious snark aside, it appears that you are missing the point. Yes, ChatGPT is trained on a large fraction of the internet. That's why it can do this at all. What is impressive is that it can do that even without the sort of specialized training you envision. Also, speaking as someone who has actually taught people how to play chess, you are, to be blunt, substantially overestimating how fast people learn.

Comment Re:ChatGPT is not a chess engine (Score 1) 132

You shouldn't be surprised that it will try. All of the major LLMs are wildly overconfident in their abilities. I'm not sure if this is more because they've had human reinforcement to be "helpful" or because they are trained on the internet, where there's very rarely a response in the training data along the lines of "That's an interesting question, I've got no idea."

Comment Re:ChatGPT is not a chess engine (Score 1) 132

That LLM AIs are bad at this sort of abstract reasoning is not a new thing. People saw it very early on with these systems, in things such as their inability to prove theorems. If someone thought that an LLM would be good at chess by itself in this situation, they haven't been paying attention.

Comment ChatGPT is not a chess engine (Score 4, Insightful) 132

ChatGPT is not a chess engine, and comparing it to an actual chess system misses the point. What's impressive about systems like ChatGPT is not that they are better than specialized programs, or even better than expert humans, but that they are often much better at many tasks than a random human. I'm reasonably confident that if you asked a random person off the street to play chess this way, they'd likely perform similarly.

And it shouldn't be that surprising: the text-based training data corresponding to legal chess games is a small fraction of the total training data, and since nearly identical chess positions can have radically different outcomes, this is precisely the sort of thing an LLM is bad at (they are really bad at abstract math for similar reasons). There's also a clickbait element here, given that substantially better LLMs than ChatGPT are now out there, including GPT-4o and Claude. Overall, this comes across as people moving the goalposts while not recognizing how these systems keep getting better and better.
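
To make the "nearly identical positions, radically different outcomes" point concrete, here's a minimal sketch using the python-chess library. The two positions below are ones I made up for illustration; they differ only in whether White's g-pawn stands on g2 or g3, yet one is checkmate and the other is a trivial escape:

    import chess

    # Back-rank mate: White's king on g1 is boxed in by its own pawns on
    # f2, g2 and h2, and Black's rook on e1 gives mate along the first rank.
    mated = chess.Board("6k1/8/8/8/8/8/5PPP/4r1K1 w - - 0 1")

    # Nearly the same position, but the g-pawn sits on g3 instead of g2,
    # so White's king simply steps out to g2.
    escapes = chess.Board("6k1/8/8/8/8/6P1/5P1P/4r1K1 w - - 0 1")

    print(mated.is_checkmate())    # True
    print(escapes.is_checkmate())  # False (Kg2 is available)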

Comment And all of these are above the human baseline (Score 1, Interesting) 71

It is worth noting that even the easiest puzzles here are puzzles which many, if not most, humans cannot solve. The fact that we're now evaluating AI reasoning using puzzles above the human baseline should itself be pretty alarming. But instead we've moved the goalposts and are reassuring ourselves that the AIs cannot easily solve genuinely tricky puzzles.

Comment Over-zealous legislation again.... dislike! (Score 0) 163

The *real* problem is with people who aren't skilled enough to operate a motor vehicle while manipulating a device or controls. Long before cellphones existed, we had people accidentally rear-ending other cars because they were trying to change the radio station or volume. Yet we didn't pass laws banning car stereos. (We collectively acknowledged the benefits of a car stereo while driving and decided people just needed to learn how to work the radio controls safely while driving -- which most people figured out how to do.)

People used to manage to unfold paper maps and refer to them while driving, back in the 1970s and earlier, without crashing into people, too.

I'm amazed at how lax driver's ed testing has become in recent years. My daughter went to get her license last year, and the entirety of the practical part of her exam was driving around the block, out of the shopping center the motor vehicle dept. was located in, and back into the lot to park in a space next to it. They didn't so much as get her out on the highway! I have a hard time rationalizing that as OK while worrying about good, experienced drivers who multitask by glancing at smartphone screens.

Comment Re:Proving the concept (Score 1) 47

Every single ride is more data for them to work with and more money to fund research with. They are already expanding in those cities into areas that are more complicated. Snow and rain are going to be bigger issues, but even in rain the Waymo cars can often run now. 50 years is likely a substantial overestimate; 20 or 25 years seems more plausible. (That said, I guessed 15 years ago that by now the majority of new cars would be self-driving, so I may be systematically overestimating how fast this tech is going.)

Comment His comments make sense in a given scope .... (Score 1) 50

As long as he's referring to his own field (creating animation/art for film or video), I think he's essentially correct. AI will become a required tool you need to be familiar with as part of your career. It won't take people's jobs, except for the people who refuse to learn how to utilize AI as part of theirs.

I'm FAR from convinced AI usage will play out the same way in all industries. For example? If you work in law, it makes sense that AI could replace your lower-paid paralegals who essentially just open Word templates and fill in fields with the appropriate info for each client. However, AI isn't at all likely to take the jobs of many attorneys out there, because that line of work involves showing up in court in person and presenting things to other people in a persuasive way.

If you're paid to publish ad copy, then AI is likely to reduce the number of employees needed, but again? The ones retained will need to know how to utilize AI tools well (and how to supplement or revise what those tools churn out).

AI isn't going to do anything meaningful in most "blue collar" fields like construction, IMO. It might help an architect out with the design stages of a project, but people getting paid to build things won't get anything done by some software code running in the cloud.
