Comment Re: Trust us. (Score 1) 53

I know you're joking that they've already been scammed once (by Apple), but Apple is not wrong here to say that people are incredibly stupid and liable to get tricked into sideloading malware.

I don't think that's enough to override the fundamental right to use the hardware you purchased any way you please, but regardless, they are correct.

Comment Re: The cost of AI (Score 1) 228

Each important new technology tends to get commoditized, and its cost per unit of functionality decreases with both scale and innovation, including supply-chain innovation.
See the "Simplicable Technology Commoditization Curve".
See "Wright's Law".

One random tiny but concrete, meaningful example: in AI there is already a technique, distillation, whereby energy-expensive-to-query large models can be replaced by much smaller distilled models. See what DeepSeek did based on expensive American LLM models.
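If anyone wants the distillation idea made concrete, here's a minimal numpy sketch of the training signal a small "student" model gets from a big "teacher"; the temperature and the toy logits are made up for illustration:

import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy between the teacher's temperature-softened distribution
    # and the student's predictions; the student trains to minimize this.
    teacher_probs = softmax(teacher_logits, temperature)
    student_log_probs = np.log(softmax(student_logits, temperature) + 1e-12)
    return -(teacher_probs * student_log_probs).sum()

# Toy 4-class logits from a hypothetical teacher and student.
teacher = np.array([4.0, 1.0, 0.5, 0.1])
student = np.array([3.0, 1.5, 0.2, 0.3])
print(distillation_loss(student, teacher))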

Comment No longer a hypothetical future (Score 2) 228

Firstly, AI and automation actually net-replacing human labour this time has been predictable since the late 1980s, as AI started taking shape.
That is, the observation that it really will be different this time: in job category after job category, AIs and/or automated processes, which are continually being improved, sometimes in leaps and bounds, will inevitably cross the threshold of being probably more effective in absolute terms, and definitely more cost-effective, than a person. Economic activity under capitalist competition will then adjust to those economic facts.

Secondly, in the current period and over the next few years, we definitely have:
a. the ongoing rise of fairly general agentic AI, including reasoning, problem break-down, etc., on top of a huge and comprehensive knowledge base plus domain-specific knowledge bases
b. the last few years' improvements in LLM systems for
      i. writing
      ii. information search and summarization tasks across wide but also deep knowledge domains
      iii. creative tasks
c. humanoid robots emerging over the next few years with the capability for:
      i. general, flexible learning based on observing and practicing
      ii. cross-training, so that the whole fleet of robots learns what each individual one learns.
d. actually self-driving cars and trucks

You can no longer say this is some unlikely, hypothetical sci-fi future scenario. Job displacement (coders, ironically enough) and, in high-expertise jobs (lawyer, doctor, engineer), job devaluation are already starting to happen, reflected at a minimum in lower hiring plans that don't compensate for attrition.
Factory and other manual-labour jobs will be next, after c. and d. develop for a few more years. Many thought those manual jobs would go first, but it turns out they are more challenging for computers than replacing white-collar work.

This is not hypothetical. Planning for the economic shake-up needs to start now. Hint: repatriating factories is not the solution. Will you be happy that robots of your nationality are doing the work instead of those evil foreign robots? No.

Comment Re:Lol (Score 2) 43

It seems like a potentially bigger threat to any adventures they want to try outside of their 'core' products.

If I'm just buying a CPU from them, that's a fairly low-risk bet. It's a very mature compiler target; more or less a known quantity once benchmarks are available, barring the discovery of some issue serious enough to be RMA material. Even if they decide to quit on that specific flavor of CPU, the ones I have and the remaining stock should continue to just work until nobody really cares anymore.

If it's something that requires a lot more ongoing investment, though, like targeting Intel for GPU compute or one of the fancy NICs with a lot of vendor-specific offload onboard, I'm going to have a bad day if my effort is only applicable for one generation because there's no follow-up product; and a really bad day if something goes from 'a little rough but being rapidly polished' to 'life support'.

Even back when they made money, Intel never had a great track record for some of that stuff; they've always got something goofy going on the client side that they lose interest in relatively quickly, like that time when Optane cache was totally going to be awesome, or the more recent abandonment of 'Deep Link technology' that was supposed to enable some sort of better cooperation between integrated and discrete GPUs; but that stuff is more on the fluff side, so it hasn't really mattered.

Comment Ummmm.... (Score 2) 190

I can't think of a single other country that claims to be civilised that has a tax code so complicated you need vast amounts of software and a high-power computer just to file what is properly owed.

TLDR version: The system is engineered to be too complex for humans, which is the mark of a very very badly designed system that is suboptimal, inefficient, expensive, and useless.

Let's pretend for a moment that you've a tax system that taxes the nth dollar at the nth point along a particular curve. We can argue about which curve is appropriate some other time; my own opinion is that the more you earn, the more tax you should pay on what you earn. However, not everyone agrees with that, so let's keep it nice and generic and say that it's "some curve" (which Libertarians can define as a straight line if they absolutely want). You now don't have to adjust anything, ever. The employer notifies the IRS that $X was earned, the computer at their end performs a definite integral of the curve between N (the income level at which you last paid tax) and N+X, and informs the employer of the tax owed for that interval.
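To sketch that calculation (the rate curve here is entirely made up, just to show the mechanism):

import math
from scipy.integrate import quad

def marginal_rate(income):
    # Hypothetical smooth rate curve: 10% at $0, rising toward 45%.
    return 0.10 + 0.35 * (1 - math.exp(-income / 100_000))

def tax_on_interval(n, x):
    # Tax due when $x is earned on top of $n already earned this year:
    # the definite integral of the marginal rate from n to n + x.
    owed, _ = quad(marginal_rate, n, n + x)
    return owed

print(round(tax_on_interval(50_000, 5_000), 2))  # tax on the 50,001st..55,000th dollars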

Nobody actually does it this way at the moment, but that's beside the point. We need to be able to define what the minimum necessary level of complexity is before we can identify how far we are from it. The above scheme has no exemptions, but honestly, trying to coerce people to spend money in particular ways isn't particularly effective, especially if you then need a computer to work through the form because you can't understand what behaviours would actually influence the tax. If nobody (other than the very rich) has the time, energy, or motivation to find out how they're supposed to be guided, then they're effectively unguided, and you're better off with a simple system that simply taxes less on the early amounts.

This, then, is as simple as a tax system can get - one calculation per amount earned, with no forms and no tax software needed.

It does mean that, for middle-income earners and above, the paycheck will vary over time, but if you know how much you're going to earn in a year, then you know what each paycheck will have in it. This requires a small Excel macro to calculate, not an expensive software package that mysteriously needs continuous updating, and if you're any good at money management, then it really, really doesn't matter. If you aren't, then it still doesn't matter, because you wouldn't cope with the existing system anyway.
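And the "varying paycheck" point, using the same sort of made-up curve (the salary and 26 pay periods are arbitrary assumptions): each paycheck's withholding is just the integral over that slice of the year's income, so it creeps up through the year but is known in advance.

import math
from scipy.integrate import quad

def marginal_rate(income):
    # Same illustrative curve as before: 10% at $0, rising toward 45%.
    return 0.10 + 0.35 * (1 - math.exp(-income / 100_000))

annual_salary = 80_000
periods = 26
gross = annual_salary / periods

cumulative = 0.0
for p in range(1, periods + 1):
    # Each paycheck's tax is the integral over its slice of year-to-date income.
    tax, _ = quad(marginal_rate, cumulative, cumulative + gross)
    cumulative += gross
    if p in (1, 13, 26):   # show first, middle, and last paycheck
        print(f"paycheck {p}: gross {gross:.2f}, tax {tax:.2f}, net {gross - tax:.2f}")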

In practice, it's not likely any country would actually implement a system this simple, because the rich would complain like anything and it's hard to win elections if the rich are paying your opponent and not you. But we now have a metric.

The UK system, which doesn't require the filling out of vast numbers of forms, is not quite this level of simple, but it's not horribly complicated. The difference between theoretical and actual is not great, but it's tolerable. If anyone wants to use the theoretical and derive an actual score for the UK system, they're welcome to do so. I'd be interested to see it.

The US, which left the UK for tax reasons (or was that Hotblack Desiato? I get them confused), has a much, much more complex system. I'd say needlessly complicated, but it's fairly obvious it's complicated precisely to make those who are money-stressed and time-stressed pay more than they technically owe, and those who are rich and can afford accountants for other reasons pay less. Again, if anyone wants to produce a score, I'd be interested to see it.

Comment Re:Nice work ruining it... (Score 1) 97

I hope I'm wrong; but my concern is that MS' decision here might end up being a worst-of-both-worlds outcome:

Devices that are restricted to type-C by mechanical constraints requiring the smaller connector have a greater incentive to just skimp on ports; while devices big enough for type-A now have a greater incentive to retain mixed ports, because type-C now mandates further costs on top of the slightly more expensive connector. If you want to give someone a place to plug in a mouse (poster child of the 'even USB 1.1 was overqualified for this' school of peripherals), you'll either be keeping type A around or running DP, or DP and PCIe, to that port. Fantastic.

Comment Re:Nice work ruining it... (Score 1) 97

I specifically mentioned that case: "You want a cheap just-USB USB port? Either that's type A so nobody can standardize on connectors; or it gets omitted to meet logo requirements"; and noted it as undesirable because it encourages the perpetuation of dongle hell. I'd certainly rather have a type A than no USB port (and, at least for now, I've still got enough type-A devices that the port would be actively useful; but that may or may not be true forever, and is less likely to be true for 'want to pack efficiently for travel' cases than for 'at the desk that has my giant tech junk drawer' cases).

As for the controller chip: that's a matter of...mixed...truth with USB-C. The USB part of the port will run from the USB controller, or an internal hub; but any alt-mode behavior (like DP support) is related to the USB controller only in the sense that there's a standardized way for it to surrender most of the high-speed differential pairs for the use of the alternate signal. Actually running DP from the GPU to the port is a separate problem. For power delivery, I assume that at least some controllers will implement the negotiation for you (since it's mandatory even for devices that will neither request nor provide more than a relative pittance at 5V); but there is absolutely going to be a per-port cost difference, in terms of the support components and the size of traces, between a port that is expecting to provide an amp, maybe two, of +5V to peripherals and a port that is expecting to take a hundred watts at 20V and feed it to the power input for the entire device.
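To illustrate the power-delivery point with a toy model (this is not the actual PD protocol stack; the offer lists are invented): the source advertises voltage/current offers and the sink requests one, so a mouse-class 5V port and a 100W laptop-charging port run the same negotiation but need very different supporting hardware behind them.

def negotiate(source_offers, sink_needs_watts, sink_max_voltage):
    # Toy PD negotiation: pick the smallest (voltage, max_current) offer
    # that covers the sink's power need. Real PD uses PDO/RDO messages
    # over the CC line with far more detail.
    best = None
    for volts, amps in source_offers:
        watts = volts * amps
        if volts <= sink_max_voltage and watts >= sink_needs_watts:
            if best is None or watts < best[0] * best[1]:
                best = (volts, amps)
    return best

cheap_port = [(5.0, 1.5)]                            # ~7.5 W accessory port
beefy_port = [(5.0, 3.0), (9.0, 3.0), (20.0, 5.0)]   # up to 100 W
print(negotiate(cheap_port, 60, 20))   # None: a mouse-class port can't charge a laptop
print(negotiate(beefy_port, 60, 20))   # (20.0, 5.0)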

Comment Nice work ruining it... (Score 5, Insightful) 97

This seems both loaded with perverse incentives and like it doesn't even necessarily solve the problem that it claims to solve.

Most obviously, MS is saying that if a port doesn't support a display and device charging, it's forbidden. So it's mandatory for all type-C ports to include the expense of power-delivery circuitry capable of handling your device's potential load, and either a dedicated video out or DP switching between type-C ports if there are more ports than there are heads on the GPU. You want a cheap just-USB USB port? Either that's type A, so nobody can standardize on connectors; or it gets omitted to meet logo requirements. Further, if a system supports 40Gbps USB4, all its ports are required to do so, including higher peripheral power limits, PCIe tunneling, and TB3 compatibility. You think it might be nice to have a port to plug flash drives into without allocating 4 PCIe lanes? Screw you, I guess.

Then there's what the alleged confusion reduction doesn't actually specify: USB3 systems are only required to support a minimum of one display. They need to have the supporting circuitry to handle that one display being on any port; but just ignoring a second DP alt mode device being connected is fine; no further requirements. Data rates of 5, 10, or 20Gbps, and accessory power of either greater than 4.5W or greater than 7.5W, are also fine (except that 20Gbps ports must be greater than 7.5W). USB4 systems have higher minimum requirements (two 4K displays and 15W power) but are similarly allowed to mingle 40 and 80Gbps; and it's entirely allowed for some systems to stop at two displays and some to support more, so long as the displays that are supported can be plugged in anywhere.
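Roughly encoding the rules as I read them (field names and thresholds are my own paraphrase of the summary above, not Microsoft's actual spec language):

def port_ok(port):
    # Every port must do data, charging, and at least one display.
    if not (port["data_gbps"] >= 5 and port["charging"] and port["dp_alt_mode"]):
        return False
    # 20Gbps-and-up ports must exceed 7.5 W of accessory power.
    if port["data_gbps"] >= 20 and port["accessory_watts"] <= 7.5:
        return False
    # USB4 ports need at least 15 W; plain USB3 ports need more than 4.5 W.
    if port["usb4"]:
        return port["accessory_watts"] >= 15
    return port["accessory_watts"] > 4.5

cheap_data_only = {"data_gbps": 5, "charging": False, "dp_alt_mode": False,
                   "usb4": False, "accessory_watts": 4.5}
usb4_port = {"data_gbps": 40, "charging": True, "dp_alt_mode": True,
             "usb4": True, "accessory_watts": 15}
print(port_ok(cheap_data_only))  # False: the cheap just-USB port is banned
print(port_ok(usb4_port))        # True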

Obviously the tendency to do type-C ports that are just totally unlabeled, or labeled with a teeny cryptic symbol, was not especially helpful; but this seems like taking what could have been a fairly simple distinction (like the one that existed all the way back in the FireWire/USB 1.1 days, or in Thunderbolt/USB systems, or slightly more informally on non-Intel systems without Thunderbolt) between "the fast port that does the things" and "the cheap port that is in ample supply", and 'reducing confusion' by just banning the cheap port that is in ample supply (unless it's type A, for space consumption and to prevent connector standardization).

Are you really telling me that there wasn't something you could come up with to just tell the user which ones are power/video/PCIe and which ones are normal random accessory USB ports? I hope you like docking stations; because it seems like there will be a lot of those in our future.

Comment Not strictly a bet on the tech... (Score 1) 105

It seems mistaken to just blithely assume that technology will obviously just progress harder until a solution is reached.

When you talk about simulating something you are expressing an opinion on how much power you'll have to throw at the problem; but, more fundamentally, you are expressing optimism about the existence of a model of the system that delivers useful savings over the actual system without too much violence to the outcome.

Sometimes this is true, and you can achieve downright ludicrous savings by just introducing a few empirically derived coefficients in place of interactions you are not prepared to simulate and still get viable results. In other cases, either the system of interest is less helpful or your needs for precision are higher, and you find that not only are rough approximations wildly wrong, but the cost of each attempt to move the model closer to the system goes up, sometimes dramatically.

We have no particular mystical reason for assuming that the brain will be a worst-case scenario where a model of acceptable accuracy ends up just being a precise copy; but we also have no particularly strong reason for optimism about comparatively well-behaved activation functions clearly being good enough and there being no risk of having to do the computational chemistry of an entire synapse (or all of them).

Then there's a further complication: if you are specifically catering to the 'apparently being smart enough to make lots of money on payment processing or banner ads or something doesn't keep you from feeling death breathing down your neck, does it?' audience, we know vastly less about simulating a particular person than the little we know about constructing things that have some properties resembling humans in the aggregate under certain cases; and the people huffing Kurzweil and imagining digital immortality are probably going to want a particular person, not just a chatbot whose output is a solid statistical match for the sort of things they would have said.

Comment Take it step by step. (Score 1) 105

You don't need to simulate all that, at least initially. Scan in the brains of people who are at extremely high risk of stroke or other brain damage. If one of them suffers a lethal stroke, but their body is otherwise fine, you HAVE a full set of senses. You just need to install a way of multiplexing/demultiplexing the data from those senses and muscles, and have a radio feed - WiFi 7 should have adequate capacity.
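Back-of-envelope on the capacity claim (all of these numbers are rough, commonly cited estimates, so treat it as order-of-magnitude only):

# Order-of-magnitude check: estimated sensory/motor bandwidth vs. Wi-Fi 7.
optic_nerve_bps   = 10e6   # ~10 Mbit/s per eye (rough published estimate)
other_senses_bps  = 5e6    # generous allowance for audition, touch, etc.
motor_outflow_bps = 5e6    # generous allowance for efferent signals
total_bps = 2 * optic_nerve_bps + other_senses_bps + motor_outflow_bps

wifi7_realistic_bps = 5e9  # a few Gbit/s of real-world throughput, well below the ~46 Gbit/s peak
print(f"needed ~{total_bps/1e6:.0f} Mbit/s, available ~{wifi7_realistic_bps/1e9:.0f} Gbit/s")
print(f"headroom ~{wifi7_realistic_bps/total_bps:.0f}x")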

Yes, this is very sci-fi-ish, but at this point, so is scanning in a whole brain. If you have the technology to do that, you've the technology to set up a radio feed.
