Just great. I have some 5.0 slots free and not a single 5.0 card.
PCIe 5.0 x16 can handle about 64 GB/s... more than enough to support a lowly 28 GB/s, but no, you had to go with 6.0 x4 instead of 5.0 x8. Fine... don't take my money. I really was going to buy one... honest... not anymore. Elitist bastards... Oh, and say hi to your AI girlfriend for me; last time I saw her she ignored ALL of her previous instructions.
The bumps for bonding the controller chip to the PCIe bus take significant silicon area, worth hundreds of logic transistors each. Likewise, the (analog) transistors driving the PCIe pins take area worth hundreds of logic transistors.
Minimizing the number of pins dedicated to buses, be it memory on video cards or CPUs, or PCIe on expansion cards, drives massive savings in chip production.
That's why consumers get 2-channel (4-DIMM, 128-bit) memory, while servers get 4-channel (8-DIMM, 256-bit) buses. That's why NVIDIA Blackwell for datacenter has an 8192-bit memory bus while NVIDIA Blackwell for consumers has 128- to 512-bit memory buses...
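For scale, here's a rough sketch of what those bus widths buy you. Peak bandwidth is just width times per-pin rate; the per-pin rates below (DDR5-6400 for the DIMM configs, ~8 Gb/s per pin for an HBM-style stack) are illustrative assumptions, not spec quotes for any particular product:

```python
# Peak theoretical bandwidth = bus width (bits) * per-pin rate (Gb/s) / 8.
# Per-pin rates chosen for illustration; real parts vary.
def mem_bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8  # GB/s

print(mem_bandwidth_gbs(128, 6.4))   # consumer 2-channel DDR5-6400: ~102 GB/s
print(mem_bandwidth_gbs(256, 6.4))   # server 4-channel: ~205 GB/s
print(mem_bandwidth_gbs(8192, 8.0))  # 8192-bit HBM-style bus: ~8192 GB/s (~8 TB/s)
```

The takeaway is that bandwidth scales linearly with pin count, which is exactly the cost vendors are segmenting on.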
And that's also why SSDs are going to PCIe Gen 6 x4 instead of PCIe Gen 5 x8.
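Back-of-the-envelope for why Gen 6 x4 lands roughly where Gen 5 x8 does: each generation doubles the per-lane signaling rate, so halving the lane count cancels out. The sketch below treats Gen 6's FLIT/PAM4 overhead as comparable to Gen 5's 128b/130b line coding, which is an approximation, not the exact spec efficiency:

```python
# Approximate per-direction PCIe bandwidth. Gen 3-5 use 128b/130b line
# coding; Gen 6 uses PAM4 + FLIT encoding, which we approximate here with
# the same overhead factor for a back-of-the-envelope comparison.
def pcie_bandwidth_gbps(gen, lanes):
    gt_per_s = {3: 8, 4: 16, 5: 32, 6: 64}[gen]  # GT/s per lane
    return gt_per_s * lanes * (128 / 130) / 8    # GB/s

print(f"Gen5 x16: {pcie_bandwidth_gbps(5, 16):.1f} GB/s")  # ~63 GB/s
print(f"Gen5 x8:  {pcie_bandwidth_gbps(5, 8):.1f} GB/s")   # ~31.5 GB/s
print(f"Gen6 x4:  {pcie_bandwidth_gbps(6, 4):.1f} GB/s")   # ~31.5 GB/s
```

So a Gen 6 x4 SSD gets Gen 5 x8 bandwidth out of half the lanes, pins, and bump area, which is the whole point of the cost argument above.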
Also, bear in mind that standard motherboard sizes have physical limits on the number of traces that can be dedicated to PCIe lanes and slots (i.e. mobo space is not infinite); therefore, for servers, it makes sense to make however many lanes you have as fast as possible.