Why does it seem like big tech is always doing more harm than good?
Because twisting the facts to fit that narrative generates clicks.
So far it's done far more harm than good in my opinion.
Yet, here you are, posting on the Internet using technology created by Big Tech.
I think people expect commercial social media networks to be something they can't be -- a kind of commons where you are exposed to the range of views that exist in your community. But that's not what makes social networks money; what makes them money is engagement, and consuming a variety of opinions is tiresome for users and bad for profits. When did you ever see social media try to engage you with opinions you don't agree with, or inform you about the breadth of opinion out there? It has never done that.
The old management of Twitter had a strategy of making it a big tent, comfortable for centrist views and centrist-adjacent views. This enabled it to function as a kind of limited town common for people who either weren't interested in politics, like authors or celebrities promoting their work, or who wanted to reach a large number of mainly apolitical people. This meant drawing lines on both sides of the political spectrum, and naturally people near the line on either side were continually furious with them.
It was an unnatural and unstable situation. As soon as Musk tried to broaden one side of the tent, polarization was inevitable. This means neither X nor Bluesky can be what Twitter was for advertisers and public figures looking for a broad audience.
At present I'm using Mastodon. To users of old Twitter it must seem like an empty wasteland, but it's a non-commercial network; it has no business imperative to suck up every last free moment of my attention. I follow major news organizations that dutifully post major stories, some modestly active interest groups, some local groups that post on local issues, and a few celebrities like George Takei. *Everybody's* not on it, but that's OK; I don't want to spend more than a few minutes a day on the thing, so I don't have time to follow everyone I might be interested in. Oh, and moderation is on a per-server basis, so you can choose a server whose admins have a policy you're OK with.
No, there are all kinds of information the government has that are legitimately not available to the public. Sensitive data on private citizens, for example, which is why people are worried about unvetted DOGE employees getting unfettered access to federal systems. Information that would put witnesses in ongoing criminal investigations at risk. Military operations in progress and intelligence assets in use.
The problem is that ever since there has been a legal means to keep that information secret, it has also been used to cover up government mistakes and misconduct. It's perfectly reasonable for a government to keep things from its citizens *if there is a specific and articulable justification* that can withstand critical examination.
And sometimes those justifications are overridden by public interest concerns -- specifically when officials really want to bury something like the Pentagon Papers because they are embarrassing to the government. "Embarrassing to the government" should be an argument against secrecy, because of the public interest in knowing the government is doing embarrassing things. In the end, the embarrassment caused by the Pentagon Papers was *good* for the country.
All you need to know about RFK's fitness for office is out in public for everyone to see.
One solution is to go to grad school and hope the job market is better in two years when you get your MS.
Politically, we seem to think that any regulation of AI deployment must be illegal.
No, not illegal. Just stupid and counterproductive.
Regulating AI just means that AGI will happen elsewhere, most likely in authoritarian China.
This is false. Python has third-party libraries that handle numbers well, but those libraries are not Python.
Python has built-in support for arbitrary precision integers by default, with no 3rd party libraries needed.
In Python, you can calculate 100 factorial exactly with a default installation. You can't do that with the built-in integer types of C++, Java, or Rust, which overflow long before 100!.
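A minimal sketch of that, using only the standard library on a default CPython install:

    import math

    # 100! computed exactly; Python ints grow as needed.
    print(math.factorial(100))

    # The same value from plain int arithmetic -- no special type required.
    result = 1
    for n in range(2, 101):
        result *= n

    assert result == math.factorial(100)
    print(result.bit_length())  # 525 -- far wider than any fixed-width machine integer

Both versions print the full 158-digit value with no loss of precision.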
The history of AI is all about modeling human intelligence, just like the models we build in the natural sciences. If a model happens to be a very good match for reality, we may sometimes mistake one for the other. OTOH, they may be the same thing for all practical purposes.
I'm not sure I have any deeper intelligence than a fancy language model. When we say that LLMs don't really understand things, what exactly do we mean by understanding? In my personal definition, the meaning of something is simply the graph of its associated things. I consider something very meaningful if its graph has a lot of nodes and edges, which also explains why simple things gain more meaning as we age.
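A toy sketch of that personal definition (the association graph and the scoring rule below are invented purely for illustration, not a claim about how LLMs work):

    # "Meaning" modeled as the size of a concept's association graph.
    graph = {
        "coffee": {"morning", "caffeine", "cup", "friends", "work"},
        "quark": {"physics"},
    }

    def meaningfulness(concept):
        # Crude proxy: count the direct associations (edges out of the node).
        return len(graph.get(concept, ()))

    print(meaningfulness("coffee"))  # 5
    print(meaningfulness("quark"))   # 1

Under this definition, a concept accumulates meaning as life keeps adding nodes and edges around it.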
The unfortunate aspect of that philosophy is that our society now confuses "don't censor political speech I don't like" with "don't censor falsehoods which are tied to politically-charged topics."
We should absolutely encourage discussions about things we may not agree on - but we should also not give an audience to things which are demonstrably incorrect.
People always confuse Humans and Life. Yes, Life will continue with increased CO2 levels and higher temperatures, but humans will have a hard time. For some reason, Life in general is not as important to me as humans are.
(*) In general, dicotyledons grow better than monocotyledons at higher CO2 levels. With the exception of potatoes and manioc, our staple crops are monocotyledons. Right now, no other crops match the carbohydrate yield per acre of our staples; that's why they are so important to us.
Human resources are human first, and resources second. -- J. Garbers