
Comment Ummmm.... (Score 2) 190

I can't think of a single other country that claims to be civilised yet has a tax code so complicated that you need vast amounts of software and a high-powered computer just to file what is properly owed.

TLDR version: The system is engineered to be too complex for humans, which is the mark of a very very badly designed system that is suboptimal, inefficient, expensive, and useless.

Let's pretend for a moment that you've a tax system that taxes the nth dollar at the nth point along a particular curve. We can argue about which curve is appropriate some other time; my own opinion is that the more you earn, the more tax you should pay on what you earn. However, not everyone agrees with that, so let's keep it nice and generic and say that it's "some curve" (which Libertarians can define as a flat line if they absolutely want). You now don't have to adjust anything, ever. The employer notifies the IRS that $X was earned, the computer at their end performs a definite integral of the rate curve between N (the total you'd earned, and already been taxed on, up to that point) and N+X, and informs the employer of the result, which is the tax owed on that interval.
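To make that concrete, here's a minimal sketch of the calculation, with a completely made-up rate curve (the curve is exactly the part we'd argue about; the mechanics don't care which one you pick):

    import math

    def marginal_rate(income):
        # Hypothetical curve: starts near 10%, rises smoothly towards 40%.
        return 0.40 - 0.30 * math.exp(-income / 50_000)

    def tax_on_interval(n, x, steps=1_000):
        # Definite integral of the rate curve from n to n + x (trapezoidal rule).
        width = x / steps
        total = 0.0
        for i in range(steps):
            lo = n + i * width
            hi = lo + width
            total += (marginal_rate(lo) + marginal_rate(hi)) / 2 * width
        return total

    # Employer reports $5,000 earned by someone who had already earned $60,000 this year.
    print(round(tax_on_interval(60_000, 5_000), 2))

The IRS-side computer needs nothing more than this; the employer just reports N and X.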

Nobody actually does it this way at the moment, but that's beside the point. We need to be able to define the minimum necessary level of complexity before we can identify how far we are from it. The above scheme has no exemptions, but honestly, trying to coerce people to spend money in particular ways isn't particularly effective, especially if you then need a computer to work through the form because you can't understand which behaviours would actually influence the tax. If nobody (other than the very rich) has the time, energy, or motivation to find out how they're supposed to be guided, then they're effectively unguided and you're better off with a simple system that just taxes the early amounts less.

This, then, is as simple as a tax system can get - one calculation per amount earned, with no forms and no tax software needed.

It does mean that, for middle-income and above, the paycheck will vary with time, but if you know how much you're going to earn in a year then you know what each paycheck will have in it. This requires a small Excel macro to calculate, not an expensive software package that mysteriously needs updating continuously, and if you're any good at money management, then it really really doesn't matter. If you aren't, then it still doesn't matter, because you'd still not cope with the existing system anyway.
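As a rough sketch of that spreadsheet-macro arithmetic (same made-up curve as above, monthly pay assumed): each paycheck's withholding is just the integral over that month's slice of the year's income, so later paychecks withhold slightly more.

    import math

    def marginal_rate(income):
        # Same invented curve as in the earlier sketch.
        return 0.40 - 0.30 * math.exp(-income / 50_000)

    def withholding_schedule(annual_salary, periods=12, steps=200):
        # Tax withheld per pay period: integrate the rate curve over each
        # period's slice of cumulative income for the year.
        slice_size = annual_salary / periods
        schedule = []
        for p in range(periods):
            n = p * slice_size
            width = slice_size / steps
            owed = sum((marginal_rate(n + i * width) + marginal_rate(n + (i + 1) * width)) / 2 * width
                       for i in range(steps))
            schedule.append(round(owed, 2))
        return schedule

    print(withholding_schedule(84_000))  # later paychecks withhold a little more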

In practice, it's not likely any country would actually implement a system this simple, because the rich would complain like anything and it's hard to win elections if the rich are paying your opponent and not you. But we now have a metric.

The UK system, which doesn't require the filling out of vast numbers of forms, is not quite this level of simple, but it's not horribly complicated. The difference between theoretical and actual is not great, but it's tolerable. If anyone wants to use the theoretical and derive an actual score for the UK system, they're welcome to do so. I'd be interested to see it.

The US, which left the UK for tax reasons (or was that Hotblack Desiato? I get them confused), has a much, much more complex system. I'd say needlessly complicated, but it's fairly obvious it's complicated precisely to make those who are money-stressed and time-stressed pay more than they technically owe, and those who are rich and can afford accountants for other reasons pay less. Again, if anyone wants to produce a score, I'd be interested to see it.

Comment Take it step by step. (Score 1) 105

You don't need to simulate all that, at least initially. Scan in the brains of people who are at extremely high risk of stroke or other brain damage. If one of them suffers a lethal stroke, but their body is otherwise fine, you HAVE a full set of senses. You just need to install a way of multiplexing/demultiplexing the data from those senses and muscles, and have a radio feed - WiFi 7 should have adequate capacity.

Yes, this is very sci-fi-ish, but at this point, so is scanning in a whole brain. If you have the technology to do that, you've the technology to set up a radio feed.

Comment Re:Please explain.... (Score 2, Informative) 133

The Koch Brothers paid a bunch of scientists to prove the figures being released by the IPCC and climate scientists wrong. The scientists they paid concluded (in direct contradiction to the argument that scientists say what they're paid to say) that the figures were broadly correct, and that the average planetary temperature was the figure stated.

My recommendation would be to look for the papers from those scientists, because those are the papers that we know in advance were written by scientists who were determined to prove the figures wrong and failed to do so. They will therefore give the most information on how the figures are determined and how much data is involved, along with the clearest, most reasoned arguments as to why the figures cannot actually be wrong.

Comment If this saves... (Score 1) 28

...Then there's an inefficiency in the design.

You should store data in the primary database in the most compressed, compact form that can still be accessed in reasonable time. Tokenise as well, if it'll help.

The customer should never access the primary database directly; that's a security risk. The customer should go through a decompressed subset of the main database operating as a cache. Since it is a cache, it will automagically not contain any poorly-selling item or any item without inventory, and the time overhead of accessing stuff nobody buys won't impact anything.

If you insist on purging, there should then be a secondary database containing items that are being considered for purge because they've never reached the cache in X number of years. This should be heavily compressed, but still searchable for a specific record (again through a token, not a string), with a method by which customers can put in a request for the item. If there's still no demand after a second time-out is reached, sure, delete it. If the threat of a purge leads to interest, then pull it back into primary. It still won't take up much space, because it stays somewhat compressed unless demand actually holds it in the cache.
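For illustration only, a toy sketch of that tiered layout in Python (the tokens, thresholds, and compression choice are all invented; a real system would obviously differ):

    import json
    import time
    import zlib

    class TieredCatalog:
        def __init__(self, purge_after_seconds):
            self.primary = {}        # token -> compressed record
            self.cache = {}          # token -> decompressed record (hot items only)
            self.last_demand = {}    # token -> timestamp of last customer interest
            self.purge_after = purge_after_seconds

        def store(self, token, record):
            # Everything lands in the compressed primary; the clock starts here.
            self.primary[token] = zlib.compress(json.dumps(record).encode())
            self.last_demand.setdefault(token, time.time())

        def lookup(self, token):
            # Customer path: cache first; a miss promotes the item from primary.
            if token not in self.cache:
                if token not in self.primary:
                    return None
                self.cache[token] = json.loads(zlib.decompress(self.primary[token]))
            self.last_demand[token] = time.time()
            return self.cache[token]

        def purge_candidates(self):
            # Items with no demand for the whole window; only these may be deleted.
            cutoff = time.time() - self.purge_after
            return [t for t, seen in self.last_demand.items() if seen < cutoff]

Anything a customer touches gets pulled into the cache and has its demand timestamp refreshed; only items nobody has asked about for the whole window ever show up as purge candidates.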

This method:

(a) Reduces space the system needs, as dictated by the customer and not by Amazon
(b) Purges items the system doesn't need, as dictated by the customer and not by Amazon

The customers will then drive what is in the marketplace, so the customers decide how much data space they're willing to pay for (since that will obviously impact price).

If Amazon actually believe in that whole marketplace gumph, then they should have the marketplace drive the system. If they don't actually believe in the marketplace, then they should state so, clearly and precisely, rather than pretend to be one. But I rather suspect that might impact how people see them.

Comment Hmmm. (Score 1) 53

Something that quick won't be from random mutations of coding genes, but it's entirely believable for genes that aren't considered coding but which control coding genes. It would also be believable for epigenetic markers.

So there's quite a few ways you can get extremely rapid change. I'm curious as to which mechanism is used - it might not be either of the ones I suggested.

Comment Re:Few thoughts. (Score 1) 51

Manchester, England. I resisted getting a Slashdot ID for a while, as I hated the idea of username/password logins rather than just entering a name. Otherwise my UID would be two or three digits, as I had been on Slashdot a long time before they had usernames.

Comment Re:Few thoughts. (Score 1) 51

No, I would have to disagree, for a simple reason.

Humans are capable of abstracting/generalising, extrapolating, and projecting because of the architecture of the brain and not because of any specific feature of a human neuron.

The very nature of an NN architecture means that NNs, as they exist, cannot ever perform the functions that make humans intelligent. The human brain is designed purely around input semantics (what things mean), not input syntax (what things are), because the brain has essentially nothing external in it; everything is internal.

Comment Re:Few thoughts. (Score 1) 51

I use AGI to mean an AI that is capable of anything the biological brain is capable of, because the brain is the only known intelligence to measure against.

This has nothing to do with good/bad, or any of the other stuff in your post. It's a simple process - if there's a model of thought the brain can do, then it is a model of thought an AGI must do, or it is neither general nor intelligent.

Comment Few thoughts. (Score 4, Informative) 51

1. Even the AI systems (and I've checked with Claude, three different ChatGPT models, and Gemini) agree that AGI is not possible with the software path currently being followed.

2. This should be obvious. Organic brains have properties not present within current neural nets: localised, regionalised, and even globalised feedback loops within an individual module of the brain (the brain doesn't pause for inputs but rather mixes any external inputs with synthesised inputs); the ability to run through various possible forecasts of the future and then select from them; and the ability to perform original synthesis between memory constructs and any given forecast, creating scenarios for which no input exists - which is where those aforementioned synthesised inputs come from. There simply isn't a way to have multi-level infinite loops in existing NN architectures.

3. The brain doesn't perceive external inputs the way NNs do - as fully-finished things - but rather as index pointers into memory. This is absolutely critical. What you see, hear, feel, etc -- none of that is coming from your senses. Your senses don't work like that. Your senses merely tell the brain what constructs to pull, and the brain constructs the context window entirely from those memories. It makes no reference to the actual external inputs at all. This is actually a good thing, because it allows the brain to evolve and abstract out context, things that NNs can't do precisely because they don't work this way.
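Purely as a toy illustration of point 3 (the constructs and keys here are invented): the raw input is used only as a set of keys into memory, and the working context is assembled from the stored constructs, never from the input itself.

    # Toy illustration only: sensory input acts as keys into memory, and the
    # "context window" is assembled from stored constructs, not from the input.
    memory = {
        "red_round_object": {"construct": "apple", "expect": "edible, sweet"},
        "loud_sharp_noise": {"construct": "door slam", "expect": "someone nearby"},
    }

    def perceive(sensory_keys):
        # The raw input is discarded; only the recalled constructs form the context.
        return [memory[k] for k in sensory_keys if k in memory]

    context_window = perceive(["red_round_object", "loud_sharp_noise"])
    print(context_window)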

Until all this is done, OpenAI will never produce AGI.

Comment Re:RAG (Score 1) 5

This sort of system is only useful because LLMs are limited. If they can be told to farm certain categories of step out to middleware, then when they encounter such a step, they should farm out the request. I've found, when trying engineering problems, that LLMs consume a lot of steps finding out what to collect, with a risk of hallucination. That's exactly the sort of thing that can be farmed out.
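A minimal sketch of what "farming out" could look like, with both the model and the middleware stubbed (the step types, the routing rule, and the lookup table are all invented for illustration):

    # Hypothetical sketch: route "lookup"-type steps to middleware instead of
    # letting the model burn reasoning steps (and risk hallucination) on them.
    def middleware_lookup(query):
        # Stand-in for a deterministic tool: ontology reasoner, database, calculator...
        known_facts = {"yield strength of 6061-T6 aluminium": "~276 MPa (typical)"}
        return known_facts.get(query, "no record found")

    def run_step(step):
        kind, payload = step
        if kind == "lookup":
            return middleware_lookup(payload)           # farmed out, one deterministic call
        elif kind == "reason":
            return f"[model reasons about: {payload}]"  # stays with the LLM
        raise ValueError(f"unknown step type: {kind}")

    plan = [
        ("lookup", "yield strength of 6061-T6 aluminium"),
        ("reason", "check whether the bracket survives the applied load"),
    ]
    for step in plan:
        print(run_step(step))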

According to both Claude and ChatGPT, that sort of process is the focus of a lot of research, right now, although apparently it's not actually been tried with reasoners.

Comment Re:RAG (Score 1) 5

Yeah, that would count.

Specifically, with something like Protege, I can define how things relate, so I could set up an ontology of cameras, with digital and film as subtypes, where lenses are a component of cameras, films are a component of film cameras, and pixel count is a property of digital cameras.

The reasoner could then tell you how the bits relate, but a SPARQL search could also find all records in a database pertaining specifically to any of these parameters.
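As a sketch of that, using rdflib rather than Protege itself (all class names, properties, and camera entries below are invented):

    from rdflib import Graph

    ontology = """
    @prefix ex:   <http://example.org/camera#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    ex:DigitalCamera rdfs:subClassOf ex:Camera .
    ex:FilmCamera    rdfs:subClassOf ex:Camera .

    ex:modelA a ex:DigitalCamera ;
        ex:hasComponent ex:lensA ;
        ex:pixelCount   24000000 .

    ex:modelB a ex:FilmCamera ;
        ex:hasComponent ex:lensB, ex:film135 .
    """

    g = Graph()
    g.parse(data=ontology, format="turtle")

    # Find every digital camera and its pixel count.
    query = """
    PREFIX ex: <http://example.org/camera#>
    SELECT ?camera ?pixels WHERE {
        ?camera a ex:DigitalCamera ;
                ex:pixelCount ?pixels .
    }
    """
    for row in g.query(query):
        print(row.camera, row.pixels)

The reasoner side (subclass inference and the like) is what Protege's reasoners add on top; the query side is just SPARQL.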

At least some search engines let you do this kind of search. On those, you can specifically say "I want to find all pages referencing a book where this exact string is identified as the title (not merely mentioned elsewhere), and that also reference an email address".

So, in principle, nothing would stop Claude or ChatGPT saying "I need to find out about relationships involving pizza and cheese", and the reasoner could tell it.

That means if you're designing a new project, you don't use any tokens for stuff that's project-related but not important right now, and you use one step no matter how indirect the relationship.

This would seem to greatly reduce hallucination risks and keep stuff focussed.

What you're suggesting can bolt directly onto this, so Protege acts as a relationship manager and your add-on improves memory.

This triplet would seem to turn AI from a fun toy into a very powerful system.

User Journal

Journal: Question: Can you use Semantic Reasoners with LLMs? 5

There are ontology editors, such as Protege, that come with a slew of logical reasoners able to tell you how information relates. Reasoning about how information relates is a well-known weakness of LLMs, which know about statistical patterns but have no awareness of logical connections.

Comment Huh. (Score 3) 40

Why are they monitoring syscalls?

The correct solution is surely to use the Linux Security Modules (LSM) mechanism, as you can then monitor system functions regardless of how they are accessed - all system functions, not just the ones that have provision for tracepoints.

For something like security software, you want the greatest flexibility for the least effort, and Linux allows you to do just that.

Because it's fine-grained, security companies can then pick and choose what to regard or disregard, giving them plenty of scope for varying the level of detail. And because the LSM allows services to be denied, there's an easy way for the software to stop certain attacks.

But I guess that the obvious and most functional approach would mean that the vendors would have to write a worthwhile product.
