Comment Re:Not entirely bad (Score 1) 69

things are far worse for job seekers who have clearly lost the arms race and are getting absolutely massacred.

If you find yourself on the other side of the process, you might change your view of that. The last time I had to hire someone, it meant going through hundreds of applications by hand to find the handful that were qualified. A keyword search isn't enough anymore, because the AI cover letters often include the right keywords pulled from the job description.

Applicants now use automation heavily. The result is an impossible task for any employer that doesn't use automation to filter them. The system is broken for everyone.

Comment Not entirely bad (Score 1) 69

Although this is insane, in some ways it makes sense. AI has already broken the hiring process. People apply to jobs in bulk with AI generated cover letters, not even bothering to read the description first and see if they're qualified. Anyone trying to hire someone for a skilled job has to sort through mountains of applications to find the few qualified candidates.

If this adds a time cost to each application, it could help counter the trend. If applying for 100 jobs means getting invited to 100 computerized interviews, maybe you'll go back to being more selective about the jobs you apply for.

Comment Re:Same shit, different day (Score 1) 70

Take all studies like this with a grain of salt. A doctor doesn't diagnose a patient by reading a case study. They do it by talking to the patient, examining them, deciding what tests to order, etc. This is a contrived comparison that has little connection to how doctors actually work.

Comment Re:Portable hardware (Score 3, Insightful) 43

Xbox has always had a pretty healthy market share

I wouldn't describe it that way. Over four console generations, they've never been better than a distant number two. Here's a list of lifetime sales for the console market. The original Xbox sold only 24 million, compared to 160 million for the PS2 it competed against. The Xbox 360 did better at 84 million, but that still left it behind the Wii and PS3. Then the Xbox One went backward, selling only 58 million vs. 117 million for the PS4, and the Series X/S is unlikely to sell even that many by the end of this generation.

Given the billions they've put into the business, it's hard to see that as a success.

Microsoft is a platform company first, a software company second, and a hardware company a distant third. It wouldn't be surprising if they decided to get out of the gaming hardware business and focus on creating software for other companies' hardware.

Comment Re:Set SCE to AUX (Re:Again?) (Score 1) 39

Maybe, but are they learning anything useful? Or are they just learning how to more reliably give rich people a quick thrill for a lot of money?

It's not clear this is a stepping stone to anything else. Three minutes of weightlessness isn't useful for a lot. If they were going to LEO, that would be useful. Launching satellites would be useful. But are these tourist flights moving them any closer to that? Or is getting money from billionaires the end in itself?

Comment Re:Synthetic fuels (Score 1) 363

Sure, there's land we could use to grow biomass for transportation. Most of it is already used for other things, usually agriculture. Are we supposed to stop using it for those things? And if it isn't being used for anything else, that means clearing wild land, which releases stored carbon and destroys habitat. It usually ends up doing more harm than good. I could cite sources, but as you say, they're just a web search away.

The efficiency is also ruinously low. The raw efficiency of photosynthesis (the fraction of light energy converted to sugar) is only a few percent at best. Factor in the energy the crop uses for keeping itself alive and making other molecules you don't want, and then the efficiency of the process for making fuel from it, and your total efficiency of sunlight to fuel ends up at a fraction of a percent.

Making synthetic fuel from excess wind and solar energy is more realistic. The starting efficiency is much higher (24% for a good solar panel), and they can share land with other uses like buildings and parking lots. That could be a realistic option for aviation, where batteries are likely to remain impractical for a long time.

For cars though, the inefficiency kills it. Starting from electricity you first make hydrogen, which has an efficiency of about 75%. Then you make hydrocarbons from the hydrogen, with an efficiency of about 60%. And then most ICEs have an efficiency of only around 30%. Multiply those out and you get an electrons-to-wheels efficiency of maybe 15%. For EVs it's usually at least 85%.
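Multiplying the chain out makes the gap concrete. A minimal back-of-the-envelope sketch, using the approximate stage efficiencies quoted above (rough figures for illustration, not measured values):

```python
def chain_efficiency(*stages):
    """Multiply per-stage efficiencies into an overall efficiency."""
    total = 1.0
    for s in stages:
        total *= s
    return total

electrolysis = 0.75  # electricity -> hydrogen
synthesis = 0.60     # hydrogen -> hydrocarbons
ice = 0.30           # typical internal-combustion engine

efuel_ice = chain_efficiency(electrolysis, synthesis, ice)
print(f"e-fuel ICE, electrons to wheels: {efuel_ice:.1%}")  # 13.5%
```

With those numbers the product comes out at 13.5%, so "maybe 15%" is if anything generous, against roughly 85% for a battery EV.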

Comment Re:Its not logic, or reasoning (Score 1) 66

The evidence contradicts your claims. LLMs really do "learn" and "understand", in precisely defined mathematical senses of the words. This article gives a good overview of the state of the field. LLMs don't just memorize text. They identify patterns in the text, then develop strategies that exploit those patterns to solve specific problems, then combine the strategies in novel patterns to solve problems unlike anything encountered in their training data. They don't just operate on statistical relationships between tokens. They work out the concepts underlying the tokens and learn to manipulate the concepts.

Here is another article that might help you to understand the difference by examining one case in more detail. When you train a model on examples of a certain type of math problem, it eventually works out a general algorithm for solving that type of problem. At that point, its accuracy on unseen inputs jumps to 100%. We can analyze the model to understand how it works. It no longer uses any memorization or statistical patterns at all. Instead it executes an algorithm that rigorously produces the correct result on all problems of that type.

Comment Re:Need a new name - Artificial skill? knowledge? (Score 1) 73

and what it does defined as simulate intelligence though a learning process

No, they defined learning as only one aspect of intelligence. Read the whole proposal, which is fascinating. They start by listing seven aspects of "the artificial intelligence problem", only one of which (self-improvement) requires learning. Then they each propose the specific problems they want to work on during the two-month project. Shannon wants to explore information theory concepts: noise, redundancy, etc. Minsky describes something similar to what we would now call reinforcement learning. McCarthy wants to develop a higher-level language that can more clearly express the ideas of conjecture and self-reference.

Comment Re:Need a new name - Artificial skill? knowledge? (Score 1) 73

Artificial Intelligence is exactly what the definition has always been, a system that learns / is trained on input and then produces output.

That isn't the original definition. Here is how it was defined in 1955, in the proposal for the Dartmouth Summer Research Project on Artificial Intelligence.

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.

They defined AI in terms of what it does (simulating intelligent behavior), not what approach or mechanism is used to do it (learning from input, programming by an expert, trial and error, etc.).

Comment Teaching hard skills (Score 2) 55

Learning to use a chatbot is easy. Learning to write well is hard. You can get pretty good at the first one in a few days. The second one takes years of practice.

When people say we need to teach kids to use AI, they imagine the two are equivalent. They present using a chatbot as something students need to spend years studying or they'll end up lacking necessary skills. Well, it isn't. But spending too much time on it gets in the way of learning important skills that really do take years of study. Using AI doesn't help you learn how to read and understand a source, how to evaluate evidence, how to frame a logical argument, or how to express your ideas clearly. The goal of an essay isn't the finished work. It's to give you practice exercising those skills. Using AI keeps you from getting that practice.

Comment Re:It's not the language. It's tech debt. (Score 1, Interesting) 109

Garbage collection is faster than reference counting, not slower. Reference counting has the benefits of being deterministic and avoiding pauses, but it has more overhead overall. If you're concerned about throughput, garbage collection wins.

I'd take this story with a grain of salt. Apple claims the language they invented and promote is better than others. What a surprise! It sounds more like propaganda than a real technical analysis.

Comment Who is this for? (Score 1) 141

I don't see a lot of people wanting this. Consider the types of people.

1. People who don't need glasses. They don't want to start wearing them. Glasses are inconvenient. They block your peripheral vision. They make you look dorky. If you try to get these people to start wearing smart glasses, you'll meet a lot of resistance.

2. People who wear contacts. See above about looking dorky. These people need glasses, but they accept extra cost and inconvenience to make it look like they don't. They also won't be excited to buy smart glasses.

3. People who need glasses. Glasses are a prescription device that needs to be updated regularly, perhaps every year. Will they buy a new pair of smart glasses every year? Or turn them in and do without for a week while the lenses get replaced? Some people wear glasses only some of the time, just for reading or just for driving. Or maybe they have different glasses for different purposes. Are they supposed to get multiple pairs of smart glasses?

Comment Re:Weird (Score 1) 118

You have no clue what you're talking about.

Validation is a huge part of what we do in science. Every paper introducing a new model includes a section on validation. That's one of the minimal requirements to be publishable. And yes, that includes ML models.

But that's just the beginning. Other people test the models and write their own papers evaluating them. People create benchmarks to evaluate models of a given type, and write papers describing those benchmarks and showing how competing models do on them. There's even a level of meta-validation beyond that: papers that evaluate benchmarks to show which ones are best at predicting real-world performance and determine what types of validation are most useful.

I personally have published lots of ML models, and every one of those papers included a detailed validation. The journals wouldn't have accepted them otherwise. This is what science is.
