Comment Ummmm.... (Score 2) 190

I can't think of a single other country that claims to be civilised that has a tax code so complicated you need vast amounts of software and a high-power computer just to file what is properly owed.

TLDR version: The system is engineered to be too complex for humans, which is the mark of a very badly designed system: suboptimal, inefficient, expensive, and useless.

Let's pretend for a moment that you've a tax system that taxes the nth dollar at the nth point along a particular curve. We can argue about which curve is appropriate some other time; my own opinion is that the more you earn, the more tax you should pay on what you earn. However, not everyone agrees with that, so let's keep it nice and generic and say that it's "some curve" (which Libertarians can define as a straight line if they absolutely want). You now don't have to adjust anything, ever. The employer notifies the IRS that $X was earned, the computer their end performs a definite integral between N (your cumulative earnings at the last point you paid tax) and N+X, and informs the employer that the value of that integral is the money owed for that interval.
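A minimal sketch of that scheme in Python - the rate curve, thresholds, and function names are all made up for illustration, not a proposal for actual rates:

def marginal_rate(x):
    # Illustrative curve: 10% on the first $10k, rising linearly
    # to 40% at $200k, flat thereafter. Entirely made up.
    if x <= 10_000:
        return 0.10
    if x >= 200_000:
        return 0.40
    return 0.10 + 0.30 * (x - 10_000) / 190_000

def tax_for_interval(n, x, steps=10_000):
    # Definite integral of the rate curve from n to n + x
    # (midpoint rule; fine for a smooth curve like this one).
    dx = x / steps
    return sum(marginal_rate(n + (i + 0.5) * dx) for i in range(steps)) * dx

# Employer reports $5,000 earned on top of $50,000 already earned this year:
print(f"Tax due on this payment: ${tax_for_interval(50_000, 5_000):,.2f}")

The employer-side computer runs exactly one such integral per payment, which is the whole point: no forms, no annual reconciliation.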

Nobody actually does it this way at the moment, but that's beside the point. We need to be able to define the minimum necessary level of complexity before we can identify how far we are from it. The above scheme has no exemptions, but honestly, trying to coerce people to spend money in particular ways isn't particularly effective, especially if you then need a computer to work through the form because you can't understand which behaviours would actually influence the tax. If nobody (other than the very rich) has the time, energy, or motivation to find out how they're supposed to be guided, then they're effectively unguided, and you're better off with a simple system that simply taxes the early dollars less.

This, then, is as simple as a tax system can get - one calculation per amount earned, with no forms and no tax software needed.

It does mean that, for middle incomes and above, the paycheck will vary over time, but if you know how much you're going to earn in a year then you know what each paycheck will contain. This takes a small Excel macro to calculate, not an expensive software package that mysteriously needs continuous updating, and if you're any good at money management then it really doesn't matter. If you aren't, then it still doesn't matter, because you wouldn't cope with the existing system either.

In practice, it's not likely any country would actually implement a system this simple, because the rich would complain like anything and it's hard to win elections if the rich are paying your opponent and not you. But we now have a metric.

The UK system, which doesn't require the filling out of vast numbers of forms, is not quite this simple, but it's not horribly complicated. The difference between theoretical and actual is not great, but it's tolerable. If anyone wants to use this theoretical minimum to derive an actual complexity score for the UK system, they're welcome to do so. I'd be interested to see it.

The US, which left the UK for tax reasons (or was that Hotblack Desiato? I get them confused), has a much, much more complex system. I'd say needlessly complicated, but it's fairly obvious it's complicated precisely to make those who are money-stressed and time-stressed pay more than they technically owe, and those who are rich and can afford accountants for other reasons pay less. Again, if anyone wants to produce a score, I'd be interested to see it.

Comment Re:Some background would be helpful (Score 1) 33

Well, under some conditions a unique movie car *would* be copyrightable. The case where the car is effectively a character is just one of the ways you can argue a car to be copyrightable.

Copyright is supposed to protect original creative expression, not ideas or functional items; those may be protected by *other* forms of intellectual property, like trademarks or patents. A car is a functional item, so *normally* it isn't protected. But to the degree a car in your movie has unique expressive elements that are distinct from its function, those elements can be copyrighted.

But the plaintiff still wanted to claim that he owned the design of the car, so his lawyer looked for a precedent establishing that cars can sometimes be copyrighted even though they are functional items, and he found the Batmobile case, in which the Batmobile was ruled to be a prop that was *also* a character. Because he cited this case, the judge had to rule on whether the Batmobile ruling's reasoning applied to this car, and he decided it didn't. The car may be unique and iconic, but that's not enough to make it a character.

Comment Re:If AI were an employee (Score 1) 23

Sadly, based on experience I think you are wrong. Employees who screw up are often not fired, or are replaced with employees just as bad.

There's a reason "you pay peanuts, you get monkeys" is a common saying: it's very common for employers to accept mediocre or even poor work if the employees doing it are cheap enough. I'm not anti-AI -- not even anti generative AI. I think that with AI's ability to process and access huge volumes of data, it has tremendous potential in the right hands. But generative AI in particular has an irresistible appeal to a managerial culture that prefers mediocrity when it's cheap enough.

Instead of hiring someone with expensive thinking skills to use AI tools effectively and safely, you can just have your team of monkeys run an AI chat bot. Or you can fire the whole team and be the monkey yourself. The salary savings are concrete and immediate; the quality risks and costs seem more abstract because they haven't happened yet. Now as a manager it's your job to guide the company to a successful future, but remember you're probably mediocre at your job. Most people are.

According to economics, employers stop adding employees when the marginal productivity of the next employee drops below the cost of employing them. What this means is that AI *should* create an enormous demand for people with advanced intellectual skills. But it won't, because managers don't behave the way they do in neat abstract economic models. What it will do is eliminate a lot of jobs where management neither desires nor rewards performance, because in those jobs they don't want anything a human mind can, at this point, uniquely provide.

Comment Take it step by step. (Score 1) 107

You don't need to simulate all that, at least initially. Scan in the brains of people who are at extremely high risk of stroke or other brain damage. If one of them suffers a lethal stroke, but their body is otherwise fine, you HAVE a full set of senses. You just need to install a way of multiplexing/demultiplexing the data from those senses and muscles, and have a radio feed - WiFi 7 should have adequate capacity.

Yes, this is very sci-fi-ish, but at this point, so is scanning in a whole brain. If you have the technology to do that, you have the technology to set up a radio feed.

Comment Re:Please explain.... (Score 2, Informative) 133

The Koch Brothers paid a bunch of scientists to prove the figures being released by the IPCC and climate scientists wrong. The scientists they paid concluded (in direct contradiction to the argument that scientists say what they're paid to say) that the figures were broadly correct, and that the average planetary temperature was the figure stated.

My recommendation would be to look for the papers from those scientists. Those are the papers we know in advance were written by scientists determined to prove the figures wrong who failed to do so, and they will therefore give the most information on how the figures are determined and how much data is involved, along with the clearest, most reasoned arguments as to why the figures cannot actually be wrong.

Comment If this saves... (Score 2) 28

...Then there's an inefficiency in the design.

You should store data in the primary database in the most compressed, compact form that can still be accessed in reasonable time. Tokenise as well, if it'll help.

The customer should never be accessing the primary database - that's a security risk. The customer should access a decompressed subset of the main database which operates as a cache. Since it is a cache, it will automagically not contain any poorly-selling item or item without inventory, and the time overheads for accessing stuff nobody buys won't impact anything.

If you insist on purging, there should then be a secondary database containing items that are being considered for purge, having never reached the cache in X number of years. This should be heavily compressed, but still searchable for a specific record - again through a token, not a string. Then add a method by which customers can put in a request for the item. If there's still no demand after a second time-out is reached, sure, delete it. If the threat of a purge leads to interest, then pull it back into primary. It still won't take up much space, because it's still somewhat compressed unless demand actually holds it in the cache.
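A minimal sketch of that layout in Python - the class name, five-year threshold, and zlib-over-JSON compression are illustrative assumptions, not anything Amazon actually does:

import json
import time
import zlib

YEAR = 365 * 24 * 3600

class Catalog:
    # Compressed primary store, a read-through cache that customers
    # actually hit, and purge candidacy driven by cache activity.

    def __init__(self, purge_after_years=5):
        self.primary = {}    # token -> compressed record, never customer-facing
        self.cache = {}      # token -> decompressed record, customer-facing
        self.last_hit = {}   # token -> last time the cache served it
        self.purge_after = purge_after_years * YEAR

    def put(self, token, record):
        self.primary[token] = zlib.compress(json.dumps(record).encode())
        self.last_hit[token] = time.time()  # the clock starts at listing

    def get(self, token):
        # A cache miss pulls from primary; customers never touch primary.
        if token not in self.cache:
            raw = zlib.decompress(self.primary[token])
            self.cache[token] = json.loads(raw)
        self.last_hit[token] = time.time()
        return self.cache[token]

    def purge_candidates(self):
        # Items that haven't reached the cache in purge_after seconds:
        # customer demand, not fiat, decides what's on the chopping block.
        now = time.time()
        return [t for t in self.primary
                if now - self.last_hit[t] > self.purge_after]

(A real cache would also evict cold entries; the point is only that demand drives both what stays hot and what becomes a purge candidate.)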

This method:

(a) Reduces space the system needs, as dictated by the customer and not by Amazon
(b) Purges items the system doesn't need, as dictated by the customer and not by Amazon

The customers will then drive what is in the marketplace, so the customers decide how much data space they're willing to pay for (since that will obviously impact price).

If Amazon actually believe in that whole marketplace gumph, then they should have the marketplace drive the system. If they don't actually believe in the marketplace, then they should state so, clearly and precisely, rather than pretend to be one. But I rather suspect that might impact how people see them.

Comment Re:Duh! (Score 1) 68

I think we should make a distinction between "AI" and "AGI" here. Human intelligence consists of a number of disparate faculties -- spatial reasoning, sensory perception, social perception, analogical reasoning, metacognition etc. -- which are orchestrated by consciousness and executive function.

Natural intelligence is like a massive toolbox of cognitive capabilities useful for survival that evolution has assembled over the six hundred million years since neurons evolved. The upshot is that you can reason your way to the conclusion that you can remove the blue block from underneath the red block without disturbing the red block, but then metacognition, using *other* mental faculties, will overrule this faulty conclusion. Right there you have one big gap between something like an LLM and natural intelligence (or AGI). LLMs have just one string to their bow, so they can't tell whether they're "hallucinating", because detecting that requires a different paradigm. The narrow cognitive capabilities of generative AI mean it requires an engaged human operator to use safely.

For many decades now I've heard AI advocates claim that the best way to study natural intelligence is to try to reproduce it. I always thought that was baloney: the best way to study natural intelligence is to examine and experiment with animals that possess it. What AI researchers actually do is write programs which perform tasks that previously could only be done by humans. That's why when any AI technique starts to work, it's not AI anymore. But these tasks are so restricted, and the approach taken is so uninformed by actual psychological research, that I doubt these limited successes tell us anything about natural intelligence.

Until, maybe, now. Now that AI has reached a point of unprecedented impressiveness, I think it teaches us that the reductive approach we've been taking won't generate systems that can be trusted without human oversight. That doesn't mean these systems aren't "AI" by the field's own Turing test.

Comment Re:Yeah, no shit, Sherlock. (Score 3, Insightful) 57

Anyone who has studied the Earth's climate knows we are in the bottom 90% of temperatures over time, and we are exiting an interglacial, and that the earth is unstable at this cold temperature.

A bit of an exaggeration, but let's assume it's exactly right. It's irrelevant. The conditions in the Archean Eon or the Cretaceous Period tell us nothing about what would be good for *us* and the other species that currently inhabit the Earth. What matters to species living *now*, including us, is what is typical of the Quaternary Period. Those are the conditions we've evolved to survive in.

There isn't meant to be ice on the surface.

Says who? In any case, there have been at least four major ice ages prior to the Quaternary Period (past 2.6 million years). The Quaternary represents less than 1% of the time in which the Earth has had major ice sheets.

Even the rate of change isn't unique in Earth's storied history.

This is just wrong. Since 1990 the rate of change of global average temperature has been around 2 °C per century. So far as we know it has never before reached even 1 °C per century on a global basis. Of course there have been regional events in the 1 °C-per-century range, like the termination of the Younger Dryas, but that high rate of warming was regional, and those regions experienced catastrophic mass extinctions.

There is no "right" temperature for the Earth. There isn't even a "normal" one. Humans, if we continue to exist as a species for another million years, will experience states of the Earth that will look very alien to us, and they'll like it that way, because that's what they'll be used to. The problem for us right now is that the rate of change is beyond what we would experience as mere economic *stress*, and well beyond the levels that triggered mass extinctions in the geologic past.

Comment Re:Maybe biologists and doctors should consider (Score 2) 49

You are proposing scientists terraform the Earth -- something we're centuries, if not millennia, away from knowing how to do.

Take a single cubic meter of dirt from your back yard. That is practically a world in itself, far beyond the capabilities of current science to understand, because there are millions of organisms and thousands of species interacting there, with billions of chemical interactions per second. Of the microbes, only about 1% of the species can be cultured and studied in a lab; the rest are referred to as "microbial dark matter" -- stuff we can infer is there but have no ability to study directly. If you gave scientists a cubic meter of ground-up mineral matter that was completely inorganic, they would be unable to "terraform" it into soil like you get from your yard -- not without using ready-made soil the way you'd use a sourdough starter.

Terraforming as a sci-fi trope is 1940s and '50s authors imagining the obsolete land management practices of the time -- "reclaiming" (filling) wetlands, introducing "desirable" species, re-engineering watersheds like the ones feeding the Aral Sea -- then scaling them up to planetary scale. The truth is we can't even terraform a bucket of dirt yet; an entire planet is as far beyond our scientific capabilities at present as faster-than-light travel.

In any case, the "beneficial" microbes you're talking about are already out there. The problem is that conditions are changing to allow "detrimental" microbes to outcompete them. And there's a 99% chance the microbes in question are microbial dark matter that we can't effectively study. Maybe we need a Moon-shot program to understand microbial dark matter; chances are such a program would pay for itself in economic spinoffs. But I don't see any new major scientific initiatives happening in the current political climate.

Comment Hmmm. (Score 1) 54

Something that quick won't be from random mutations of coding genes, but it's entirely believable for non-coding regulatory sequences that control coding genes. It would also be believable for epigenetic markers.

So there are quite a few ways you can get extremely rapid change. I'm curious as to which mechanism is used - it might well be neither of the two I suggested.

Comment Re: Can anyone say LLMs? (Score 1) 85

I think this is true only if you are comparing the LCOE of natural gas to solar *with storage* in the US. A plain solar farm without storage is going to be cheaper. We really should look at both with and without storage, because they're both valid comparisons for different purposes, although PV + storage is probably the best for an apples-to-apples comparison.
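For reference, LCOE is just discounted lifetime costs divided by discounted lifetime generation. A minimal Python sketch, with entirely made-up numbers - the capex, opex, output, and discount rate here are illustrative, not real plant data:

def lcoe(capex, annual_opex, annual_mwh, years, rate):
    # Levelized cost of energy in $/MWh: the sum of discounted costs
    # over the plant's life, divided by discounted generation.
    costs = capex + sum(annual_opex / (1 + rate) ** t
                        for t in range(1, years + 1))
    energy = sum(annual_mwh / (1 + rate) ** t
                 for t in range(1, years + 1))
    return costs / energy

# Hypothetical PV + storage plant over 25 years at a 7% discount rate:
print(f"${lcoe(1_200_000, 15_000, 2_000, 25, 0.07):,.2f}/MWh")

Storage enters through the capex and opex terms, which is why "with storage" and "without storage" give such different answers for the same panels.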

The cost of solar has come down year after year for the last thirty years, to the point that *internationally*, at least, the LCOE for solar PV plus storage is now just a little bit less than the LCOE for natural gas, and it is expected to become *dramatically* cheaper by the end of this decade, according to the IEA. Even if they are calculating somewhat optimistically, if solar costs continue to drop it's only a matter of time before solar PV plus storage becomes cheaper than natural gas, even in the US with its cheaper gas.

The wild card here is any political actions taken to change the direction this situation is going. Internationally, low PV prices are driven by cheap Chinese suppliers, and it doesn't seem likely we'll see large scale US domestic solar production in the next five years. In the meantime we have an unstable situation with respect to tariffs on PV components and lithium for batteries. Until sufficient domestic sources of lithium come on line, uncertainty about tariffs will create financial problems for US manufacturers and projects.
