Look, Case Insensitivity is a historical oddity, like stricmp() in C.
For programmers, ASCII was fast and simple: one character, one byte was efficient when your processor ran at 0.3 MIPS (the original 8086; the controller in your current keyboard probably runs faster than that now).
Case insensitivity was a conscious choice, and it was slower than being case-sensitive: every case-insensitive string compare had to fold case first (a strupr() or strlwr() pass, or a tolower() on every byte), where a case-sensitive compare didn't. But for users, a case-insensitive DOS or Windows 3.x worked the way they expected - when they looked for "financials.123", they expected a case-insensitive match. Could they have been taught that "Financials.123" and "financials.123" were different files? Probably, but that isn't what people were expecting. ASCII was the right choice for C in 1972, and case-insensitive ASCII was the right choice for DOS in 1981. I don't recall which filesystems were case-insensitive in the 1980s and 1990s, but I'd guess it was more than just DOS and Windows 3.x.
The problem came in when the rest of the world decided they wanted to use computers in their native language and character sets, and the nightmare that was code pages transitioned to Unicode (first release: 1991, first Windows NT release: 1993).
Unix made the transition because case sensitivity made it simpler - old code that treated strings as opaque byte sequences was almost correct for UTF-8-encoded Unicode. User retraining wasn't particularly necessary either - the "Unix Way" was to completely ignore that the keyboard had a Shift key, so filename conversion to Unicode was fairly painless.
Windows didn't make the transition, largely because Windows had roughly 10 bazillion times more users, and less-savvy users, than Unix had. A conscious decision was made that backwards compatibility was more important than changing from a case-insensitive file system to a case-sensitive one - think of all the other compatibility decisions they made at the time to keep the DOS window functioning, and the Windows 3.1 subsystem working. Transitioning to a case-sensitive file system in Windows NT or Windows 2000 would have kept that OS line from getting any mind- or market-share.
Today? As an English-speaking US citizen, my quick-and-dirty one-off C code still uses ASCII because of its simplicity, though I know my Python uses Unicode, which is fine because it does an excellent job of hiding it from me. I don't need to read thousand-year-old Chinese documents, and I don't deal with the Tower of Babel that is Europe. I don't have filenames on my computer that have diacritics, but I do like to CamelCase my filenames because I really haven't made peace with putting spaces in filenames, and CamelCase makes them easy to read. Case insensitivity means I can do that and still find the files I want without remembering whether I CamelCased the filename or not.
But in the modern world, I agree with Linus. The way character encoding has developed over the last 40 years suggests that case sensitivity is the proper way for the OS to handle filenames. I would argue that, from a human perspective, case insensitivity should be a built-in - when I search for "financials.123", the userland utilities and/or GUI that I use should offer me "Financials.123" as an alternative ("I didn't find 'financials.123', but I did find 'Financials.123' - is that what you were looking for?"). The old-skool neckbeards will shout that if that's what I want, I should remember the correct case-insensitivity flag for each individual utility to make it work that way, and the Cinnamon GUI requires that I click an icon to do so. Maybe when AI assistance becomes useful for navigating my computer, this will all get papered over.
How Microsoft would transition to a case-sensitive file system is beyond me. It's a legacy decision they have to live with. Unix/Linux has its own legacies to live with, too. If the computing world were as dynamic today as it was in 1980, it might be possible for both to eliminate those legacies.