Comment Re:I am SHOCKED! (Score 2) 106

There were other issues beyond the weight. The cost, for one thing. The sub-par head attachment setup. The lack of cross-platform software.

They could have improved both the cost and weight by using lighter materials like injection-molded plastic instead of milled aluminum and not including useless features like the outward-facing creepy-eye-display. They could have improved the perceived weight by using a better mounting mechanism (like a halo style).

It's like Apple just ignored the past decade of lessons from the VR industry, on the importance of weight and comfort if nothing else. And they should have realized that they needed to sell the thing at cost, or even at a loss, to build a customer base and start leveraging economies of scale and app store revenue to drive a profit. It had a $1,500 BOM and they sold it for $3,500. They wanted to make a profit on the hardware and recoup their R&D costs, but instead they sold so few of them that the whole project lost tons of money.

It cost 7x as much as the market leader, and it wasn't a 7x better product. They could have gotten away with charging a significant premium over the competition, but they were never going to succeed at seven times the price.

Comment A desperate attempt (Score 1) 137

My understanding is that lithium manganese-rich batteries offer potentially higher densities, but much worse cycle lifespan. This could make the trade-off not worth it compared to LFP.

Seems to me like, having allowed the Chinese to get many years ahead in LFP technology, GM/LG are trying to leapfrog them by going with something completely different. Time will tell if that makes any sense versus just buying the latest and greatest LFP cells from the Chinese.

Comment Re:They Don't Care (Score 1) 46

Hardware decode support isn't critical for all devices. For example, an iPhone 14 or 15 non-Pro can play AV1 just fine with software decoding, because it has more than enough compute available; it's just much less efficient. That doesn't really change the big picture, since lots of devices don't have the processing power to brute-force it, so decode support still isn't widespread enough. It just means the situation isn't *quite* as bad.

Comment Re:Serious question (Score 1) 11

Intel got low-NA EUV tools before TSMC too. That didn't stop TSMC from leapfrogging Intel with EUV adoption. TSMC has decided to do the same for high-NA: delay adoption a bit (waiting for the tooling to mature) by using multi-patterned low-NA in the interim. They expect that, even with multi-patterning's lower throughput, low-NA will still be cheaper than high-NA, and high-NA also can't expose large dies like the kind that Apple and nVidia make.

Node names are meaningless (18A, 14A, 10A, it's just marketing labels), but from what I can gather, Intel seems to plan to move to high-NA with 14A, and TSMC with 1.0nm (or 10A). But that was from reports a year ago. Either way, it seems likely that TSMC will wait a process node generation or two longer than Intel.

Comment Ironic considering today's capacity-related outage (Score 1) 25

Microsoft's East US 1 region suffered a major outage today when it ran out of capacity for all AMD VM SKUs (which are generally Microsoft's default/recommended SKUs). This caused widespread service failures and degradations as services relying on automatic scaling and provisioning all stopped being able to scale or provision. Equivalent Intel VM SKUs then ran short as well, as everybody tried to migrate their workloads to SKUs that weren't impacted.

Comment Re:That's not how this works (Score 1) 8

If AI really is a bubble, and it pops, nVidia will be fine. I mean, yes, they'd lose probably around half to two thirds of their revenue, but the worst-case scenario is just falling back to their 2022 revenue numbers. If nothing else, they'll still have a strong presence in enterprise networking and compute, and a near monopoly in consumer graphics.

Comment Re:Even 5.0 would be nice (Score 1) 63

If you look at benchmarks of high-end drives (the Crucial T705, for example), they're perfectly capable of sustaining sequential reads and writes in excess of PCIe 4.0 x4 bandwidth limits, even at a queue depth of one with no parallelism. Synthetic sequential non-parallel reads on that drive hit almost 12 GB/s. Tom's Hardware even has a simple zip file copy in Windows (which is not guaranteed to be sequential or particularly efficient) hitting 5.1 GB/s, not far from PCIe 4.0 x4 speeds.

Even if the chipsets themselves didn't support PCIe 5.0 for anything downstream at all, just bumping the link back to the CPU to PCIe 5.0 (since the CPU side already supports it) would go a long way to alleviating this bottleneck.
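As a sanity check on those numbers, here's the rough per-direction PCIe bandwidth math. This is a back-of-the-envelope sketch using nominal transfer rates and line-code efficiency; real-world throughput is somewhat lower due to protocol overhead.

```python
# Approximate usable PCIe bandwidth in one direction:
# lanes * transfer rate * line-code efficiency.
# PCIe 4.0 runs at 16 GT/s per lane with 128b/130b encoding;
# PCIe 5.0 doubles the rate to 32 GT/s with the same encoding.
def pcie_gbytes_per_s(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * lanes * (128 / 130) / 8  # GB/s

gen4_x4 = pcie_gbytes_per_s(16, 4)
gen5_x4 = pcie_gbytes_per_s(32, 4)
print(f"PCIe 4.0 x4: ~{gen4_x4:.2f} GB/s")   # ~7.88 GB/s
print(f"PCIe 5.0 x4: ~{gen5_x4:.2f} GB/s")   # ~15.75 GB/s
```

A drive sustaining ~12 GB/s clearly blows past the Gen 4 x4 link but fits comfortably within a Gen 5 x4 one.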

It's the same limitation we've had for multiple generations: AM4 had the same PCIe 4.0 x4 chipset link (at least for the later chipsets). But back then we were only hanging one chipset off it, as with X570. With AM5, we've got things like X870E, which is literally just two X870 chipsets daisy-chained together, sharing that same x4 link.

Comment Re:Even 5.0 would be nice (Score 4, Informative) 63

Up to 2 X4 links to m.2

Yes, but the second set of x4 lanes is dedicated to the USB4/Thunderbolt controller, so using that m.2 slot requires permanently stealing half or all of its lanes.

usb links directly in the cpu.

Only two USB 10G and two USB2 ports. All other USB ports connect through the chipset. On X870, that will typically include 1x 20G, 4x 10G, and 6x USB2.

X16 (can be split X8 X8)

Only on some motherboards, most can't bifurcate that. And even on some that can, using the second slot may starve the GPU fans of air.

X4 chipset link.

But only PCIe 4.0. And you're hanging a *lot* off that link. If we take the highest-end case, X870E, that single PCIe 4.0 x4 link is shared by:

- 2x USB 20G
- 8x USB 10G
- 12x USB 2
- 2.5 gig or 5.0 gig ethernet controller
- 6x SATA 6 gig
- Wifi controller
- 2x m.2 slots (PCIe 4.0x4 each)
- 1x PCI 4.0x4 slot

It gets a bit confusing with all the "this or that" optional stuff, but I think that's ballpark accurate. All told, if I'm adding it up right, all those devices can consume roughly 364 gigs of bandwidth, but have to share a single 4.0x4 link that is capable of 64 gigs. In fact, a single SSD connected to one of the chipset's two m.2 slots is capable of maxing out the entire chipset's bandwidth.
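To put rough numbers on that oversubscription, here's a quick tally. The figures are nominal link rates in Gb/s per direction, and the Wi-Fi number is an assumed Wi-Fi 7 PHY rate, so treat the whole thing as a ballpark sketch rather than measured throughput.

```python
# Nominal downstream link rates hanging off an X870E chipset pair,
# all sharing a single PCIe 4.0 x4 uplink (~64 Gb/s per direction).
downstream_gbps = {
    "2x USB 20G":          2 * 20,
    "8x USB 10G":          8 * 10,
    "12x USB 2.0":         12 * 0.48,
    "5 GbE":               5,
    "6x SATA 6G":          6 * 6,
    "Wi-Fi 7 (assumed)":   5.8,
    "2x m.2 PCIe 4.0 x4":  2 * 64,
    "1x PCIe 4.0 x4 slot": 64,
}
uplink_gbps = 64  # PCIe 4.0 x4
total = sum(downstream_gbps.values())
print(f"Downstream total: ~{total:.0f} Gb/s")
print(f"Oversubscription: ~{total / uplink_gbps:.1f}x the uplink")
```

Note that the two m.2 slots alone (128 Gb/s) already double the uplink's capacity, which is why one fast SSD can saturate the whole chipset.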

All the CPU's PCIe lanes are capable of PCIe 5.0, including the ones used by the chipset link. But the chipset itself doesn't support PCIe 5.0.

Comment Even 5.0 would be nice (Score 2) 63

AMD's chipsets (which, after Intel's collapse, account for most chipsets from a consumer retail standpoint) all connect to the CPU over a PCIe 4.0 x4 link, which is easy to bottleneck when you're hanging so much off it: all the USB controllers, multiple m.2 slots, PCIe slots, etc. Even their latest and highest-end chipsets have this bottleneck. Just moving the chipset link alone to PCIe 5.0 would be a big improvement.

Comment Re:Why bother? (Score 1) 31

All AMD chipsets, from the cheapest to the most expensive, have a single PCIe 4.0x4 link back to the CPU. Ultimately, the CPU's PCIe lanes are your bottleneck. Certain peripherals are also directly connected to the CPU, making the chipset irrelevant. That includes (at least) the GPU, and the first m.2 slot.

In fact, it's much easier to max out the bandwidth on the higher-end chipsets, because they try to hang way more stuff off that single x4 link. The cheaper chipsets are less likely to bottleneck the chipset link because they aren't trying to shove as much stuff through the same number of lanes.

Some cheaper motherboards may not support PCIe 5.0. That's not a chipset limitation; it's that AMD only requires boards using its higher-end chipsets to meet the signal-integrity requirements for 5.0. None of AMD's chipsets actually support PCIe 5.0 themselves. All PCIe 5.0 connectivity is direct to the CPU.
