Slashdot is powered by your submissions, so send in your scoop

 




Comment Re: And yet... (Score 1) 42

I am no expert, but it seems you are referring to decoherence. As far as I understand, decoherence doesn't fully solve the measurement problem, even according to its discoverers. It doesn't explain how or why you end up with dead or alive. It only answers why you end up with a superposition of those two main states rather than a large number of diffuse states.

Comment Re:But is it powerful enough? (Score 1) 56

Or to put it more briefly: they need to describe an AI use case where their new CoPilot+ CPU is powerful enough to run the model locally, yet it would have been infeasible to run it locally on a normal CPU. I have not seen such a use case. And even if one is found, they still need to explain why running it in the cloud, as people currently do, isn't the more compelling option.

Comment But is it powerful enough? (Score 1) 56

Being able to run LLMs locally would be great for privacy - but are these AI chips powerful enough to do that? And is there enough RAM in those machines to even hold the model in memory? I was wondering this the moment it came out, especially because the machines aren't that high-specced to begin with. If these machines can't run ChatGPT-like LLMs, then what can they run? "Filters"? But how often does a normal person do that, and on their laptop of all places? Maybe they can run small LLMs for completing or continuing sentences, but that could also have run on the main CPU. Can the AI chip do it better? Maybe. But these are just guesses, and it is Intel who has a product they want to sell, so they need to explain it.
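For scale, here's a back-of-envelope sketch (my own numbers, not Intel's) of the memory needed just to hold a model's weights at common quantization levels - ignoring the KV cache and activations, which add more on top:

```python
def weight_memory_gib(n_params_billion: float, bits_per_weight: int) -> float:
    """GiB needed to hold the weights alone (KV cache and activations excluded)."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 2**30

# Typical open-model sizes at fp16 / int8 / int4:
for params in (7, 13, 70):
    for bits in (16, 8, 4):
        print(f"{params:>2}B @ {bits:>2}-bit: {weight_memory_gib(params, bits):6.1f} GiB")
```

Even a 7B model at 4-bit needs roughly 3-4 GiB for the weights alone, so an 8 GiB laptop is already tight, and 70B-class models are out of reach without heavy quantization or far more RAM.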

Comment Our work laptops just got upgraded to win11 (Score 1) 91

My laptop was two years old, running Win10. I suspected they might replace the machines entirely rather than try to upgrade them remotely, but they just rolled out an update to Win11 without much warning. I was surprised it went fast and smoothly, taking just a bit longer than a normal Win10 update. I suspected it might break a lot of things, because I have installed lots of software and heavily customized it (I'm a developer), but so far everything has been working.

Comment Re:From my layman perspective after 2 beers :D (Score 1) 109

I don't agree with this way of putting it: dark energy comes about because of another discrepancy, the universe accelerating where the models without dark energy predict the opposite. The reasoning then goes: "I fully believe in this model, so the fact that there's this misprediction means there must be some energy not included. I will just add that without any further explanation and call it dark energy." Of course, if you really believe everything else is correct, this might make sense - but it could also be that other parts of the model are broken, in which case all you have done is make a mathematical hack. It would be like me declaring the existence of a black hole sucking up my money because what is in my account doesn't match my expectation - and surely my expectation must otherwise have been correct, except that it didn't account for this black hole.

Comment Re:Doesn't add up (Score 2) 17

Thought the same - even 1:8 at the global level is completely implausible. I have personally downloaded Llama - but I have an M.Sc. in comp-sci, work in comp-sci, and discuss AI models daily with colleagues, friends, etc., many of whom have similar degrees. Among them I've met only a handful who have also tried running models locally (some DeepSeek, some Llama), and that is out of a population of ~100 people who were already a highly selected group.
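Just to make the scale of the claim explicit (world population rounded to 8 billion is my assumption):

```python
WORLD_POPULATION = 8_000_000_000  # rough current figure, my round number

# A 1:8 ratio at the global level would mean a billion people
# have downloaded and run an LLM locally.
implied_local_llm_users = WORLD_POPULATION // 8
print(f"{implied_local_llm_users:,} people")
```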

Comment Re:AI developers (Score 1) 58

This is a very popular thing to say - but take a look at where the typical developer actually spends their time (via observation, or where the hours get clocked, etc.). Sure, some time goes to meetings, coordination with the business, etc., but if you look at where those with the title 'developer' spend their time, the vast majority is still the coding part. I'm sure some people are slacking (especially since WFH became common), and sometimes you can be shocked to hear of people spending days on very mundane coding tasks. But at least ostensibly a lot of time is spent on coding, and this is the part AI can greatly eat into.

Comment Re:Blech. QLC. (Score 4, Informative) 28

The JEDEC requirement is indeed 1 year for an unpowered consumer drive. However, the period a drive can retain data decreases with the number of write cycles it has sustained. The JEDEC requirement has to be satisfied after the disk has sustained its rated number of writes, so the retention period of a fresh drive is likely well above 1 year. Having said that, wear mechanisms are complicated and temperature plays a major role: the hotter it is when you write the data, the longer the data will last; on the other hand, the hotter it is AFTER you wrote the data, the sooner it will perish.

At any rate, the 1-year guarantee depends on the drive's ability to apply error-correction algorithms - the first bits are lost much sooner. There are also differences in how drives handle data they cannot fully recover: do they return an error, or do they just return the best guess and increment some counters?

Another complication: it is not enough to just power on the drive for a second. The 'powered' part depends on the drive running background routines that scan for aged areas and refresh them, and the algorithms for this are not documented. Since reading itself wears the data (see the "read disturb" effect), consumes power (especially relevant in laptops), and takes time for a full-drive scan, it is hard to know when you can be sure.

Personally, I keep my drives in my main machine, so they are often powered. In addition, every 2-3 years I back up, completely reformat the drive (SATA secure erase or the NVMe equivalent), and copy all the data back. I still have an almost 12-year-old 840 EVO chugging along without problems. It was the first SSD to highlight the retention problem, because it used a TLC architecture without V-NAND, meaning data retention is especially low - in fact, slowdowns in accessing old data can be seen in less than a year, due to the error-correction algorithms working hard to interpret it.
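The backup / secure-erase / restore cycle described above could be sketched as a dry-run script. The device paths, mount points, and the specific rsync/mkfs/nvme-cli/hdparm commands are my assumptions, not a tested procedure - this only builds and prints the commands rather than running anything:

```python
def refresh_plan(device: str, backup_dir: str, is_nvme: bool = True) -> list[str]:
    """Return the shell commands for one backup/erase/restore cycle (dry run)."""
    if is_nvme:
        erase = f"nvme format {device} --ses=1"          # NVMe user-data secure erase
    else:
        erase = f"hdparm --security-erase dummy {device}"  # ATA secure erase (password set beforehand)
    return [
        f"rsync -a /mnt/data/ {backup_dir}/",  # 1. back everything up first
        erase,                                 # 2. wipe all cells
        f"mkfs.ext4 {device}",                 # 3. recreate a filesystem (example choice)
        f"rsync -a {backup_dir}/ /mnt/data/",  # 4. copy back, rewriting every cell fresh
    ]

for cmd in refresh_plan("/dev/nvme0n1", "/mnt/backup"):
    print(cmd)
```

Obviously the erase step is destructive, so in a real script you would want the backup verified before it runs.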

Comment Runs well on a Raspberry Pico for $3 (Score 2) 90

Doom also runs on a Raspberry Pi Pico - the MCU costs $1, and a board with flash etc. can easily be had for $3. That buys you a dual-core 133 MHz CPU (plus the PIO helper "cores"), 2 MB of flash, and RAM. There isn't even a dedicated graphics chip, but the CPU cores are fast enough that they can bit-bang the video and sound signals: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
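A quick cycle-budget check (standard VGA numbers; the comparison itself is my own, not from the video) shows how tight bit-banging video is at 133 MHz, and why the PIO state machines typically handle the actual signal timing:

```python
VGA_PIXEL_CLOCK_HZ = 25_175_000   # standard 640x480 @ 60 Hz VGA dot clock
CPU_HZ = 133_000_000              # RP2040 default core clock

# CPU cycles available per pixel if a core generated the signal directly:
cycles_per_pixel = CPU_HZ / VGA_PIXEL_CLOCK_HZ
print(f"{cycles_per_pixel:.1f} CPU cycles per pixel")
```

About five cycles per pixel leaves almost nothing for the game itself, which is why offloading the pixel clocking to the PIO blocks matters so much.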

Comment Why is it more data is needed? (Score 1) 27

I don't understand these repeated calls for more data. For almost everything you would want an LLM to solve now, the answer very likely already is in the training data of the current models. For coding challenges, for example, I doubt there are any facts or information missing from the data, so it is strange that it somehow comes down to a question of 'volume'. Isn't it a question of how well the model works with the data it has got? While I think LLMs are more powerful than many skeptics give them credit for ("just statistical machines that find similar things in training data", etc.), I do think that if the size of the training data is the limit right now, it suggests LLMs do not yet have sufficient cognitive depth.

Comment Somewhat confusing... (Score 1) 47

I don't think he's referring to an internal Intel 64-bit project that wasn't known about. More likely he's referring to the already known Project Yamhill, which was about re-implementing AMD's extension as a backup plan.
To recap: Intel's focus on Itanium over x86-64 was a strategic decision that ultimately backfired. AMD capitalized on this by developing and documenting their 64-bit extension of x86, which quickly gained industry support. Intel, meanwhile, was reportedly working on "Yamhill," a secret project implementing AMD's architecture but initially disabling it in their CPUs.

The Quora comment by Robert Colwell suggests internal conflict within Intel. It implies that Intel considered its own 64-bit plans but ultimately followed AMD's lead. Colwell's phrasing, such as "our own internal version of x86-64," may reflect Intel's reluctance to fully acknowledge AMD's pioneering work, perhaps due to corporate pride and the failure of Itanium. There are also other phrases that seem to suggest Intel was making its own independent 64-bit extension - e.g. "I decided to split the difference, by leaving in the gates but fusing off the functionality. That way, if I was right about Itanium and what AMD would do, Intel could very quickly get back in the game with x86". That makes it sound like AMD hadn't even moved on 64-bit yet and Intel was just preparing for it - even though the truth is that AMD had developed the extension, released the spec, and was shipping products with it while Intel was busy with Itanium.

Colwell's choice of words could be influenced by Intel's culture, which might have downplayed AMD's role. While the comment might imply an independent Intel effort, it more likely refers to their adaptation of AMD's successful architecture. The reluctance to openly credit AMD might stem from Itanium's failure and the fact that Intel's next step in 64-bit computing wasn't its own design but an industry necessity. I also think some of the wording comes across as personal arrogance - I doubt the decision to make a backup plan and leave the gates in was just up to one person, almost acting contrary to orders from management. While I would buy (I have no idea; I don't know him) that he had a significant role in making the backup plan and advocating for it, it is likely not a case of one person making this decision contrary to management and changing some lines of VHDL to fuse it off, as he makes it sound.

Comment Re:One more reason... (Score 1) 70

> An "average" user should be able to uninstall *ANY* user land program. No exceptions.

How do you define a user land program? There are plenty of user land programs that are critical for the system to run - same as on Linux. In fact, it is considered best practice to move as much as possible to user land. Of course the user can uninstall them (right-click, delete), but it may leave a non-working system. If you define the OS as just the kernel-mode components, then it no longer aligns with what normal users consider the OS: that includes at least the shell, the file explorer, many utility programs, etc. I don't think Windows ships with that much extra on top of that - maybe Internet Explorer/Edge, Paint, some games, etc. Otherwise it is things most people would consider part of the OS, and that average users wouldn't want to have to install themselves.
