Objectively, Linux kernel is extremely excellent at many things, just not everything. It is by far the best OS kernel in the known universe, for reasons that I do not have time to teach you.
LOL
That's the official line, but it's a blatant lie.
It really isn't.
They've been systematically *removing* ways to recover accounts.
Because those recovery mechanisms have created account access attack vectors.
Additionally, they keep making it harder and harder to log into your account from multiple different devices, because they do NOT want you doing that.
This is true for YouTube Music and other things where there are contractual limits they have to abide by. But outside of that, there is no limit on the number of devices you can have logged into a Google account.
You *should* be able to just log in with your password, but that's no longer allowed, unless you are on the same device you've used before.
There are really good account security reasons for this.
So if you're ever going to get a new phone, better do it before you lose the old one, or the Google account will die.
No, you can also set up other factors. Configure Google Authenticator (or another TOTP app; they're all the same) or, even better, get a USB or Bluetooth security key. You can also generate backup codes and store them in a safe place.
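To illustrate the "they're all the same" point: TOTP is just RFC 6238 over a shared secret, so any compliant app enrolled with the same secret produces the same codes. A minimal sketch using the pyotp library (the account name and issuer strings are made-up placeholders):

    # Any RFC 6238 authenticator enrolled with this secret (Google Authenticator,
    # Aegis, FreeOTP, ...) will show identical codes; names below are illustrative.
    import pyotp

    secret = pyotp.random_base32()        # what the site stores and shows as a QR code
    totp = pyotp.TOTP(secret)             # 6-digit codes, 30-second steps by default

    print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleSite"))
    code = totp.now()                     # what the app would display right now
    print(totp.verify(code))              # True: the server-side check of that code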
All of this comes down to the simple fact that account hijacking is a huge problem, for Google as well as for users, though mostly for users, and passwords suck.
I know it's more fun to be cynical and assume it's all just BigCorp being nefarious, but it's not true. I know people in the Google account security teams and they're pulling their hair out. What they really want to do is deprecate phone numbers, too, because they're actually not a good authentication factor. But users aren't willing to use TOTP or security keys, and while passkeys are great, if you lose your device, you lose your passkey. The least common denominator authenticator that provides some measure of security is the phone number.
So a million physical qubits means 10K or so usable logical qubits (IBM has mentioned needing around 100 physical qubits for each error-corrected logical qubit). If IBM can build a million-physical-qubit system by 2030, larger ones will no doubt follow. Moving to PQC, deprecating RSA 2048 in 2030, and disallowing it in 2035 are probably the right recommendations from NIST.
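For a back-of-the-envelope check of that ratio (the 100:1 overhead is just the rough figure cited above, not a precise engineering number):

    # Rough arithmetic only; 100 physical qubits per logical qubit is the
    # approximate error-correction overhead mentioned above.
    physical_qubits = 1_000_000
    physical_per_logical = 100
    logical_qubits = physical_qubits // physical_per_logical
    print(logical_qubits)  # 10000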
No one should be using RSA now, even ignoring QC. RSA is slow, unwieldy and error-prone. No one who knows what they're doing uses it except in very narrow niches where it has properties that EC doesn't. Every cryptographer and cryptographic security engineer I know (including me) treats the use of RSA in protocol designs as analogous to a "code smell", a strong one. If I see a protocol design that uses RSA, it's an immediate red flag that the designer very likely doesn't know what they're doing and has probably made a bunch of mistakes that compromise security. Unless, of course, the design explains in detail why they did the weird and risky thing. Competent people will know it's weird and risky and explain their rationale for using RSA in the first few paragraphs of the doc.
However, the EC-based things people should be using are also at risk from QCs, and everyone making hardware with a lifespan of more than a few years should be moving to PQC algorithms now. At minimum, you should make sure that your cryptography-dependent designs explicitly plan for how you will migrate to PQC (including on devices in the field, if relevant). You don't have to actually move now as long as you have a clear path for moving later. But if you're, say, shipping hardware with embedded firmware verification keys, you should probably make sure that it contains a SPHINCS+ key or something and some way to enable its use in the future, even if only to bootstrap the use of some more manageable PQC algorithm.
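As a minimal sketch of what I mean by embedding a PQC key you can switch on later, here's roughly what signing and verifying a firmware image with SPHINCS+ looks like via the liboqs Python bindings. The algorithm name string, the placeholder payload, and the "flip a flag later" idea are assumptions about your liboqs build and image format, not a spec:

    # Sketch only: ship devices with a SPHINCS+ public key embedded alongside the
    # classical one, and require the PQC signature once an update enables it.
    import oqs

    ALG = "SPHINCS+-SHA2-128s-simple"   # name varies by liboqs version; check
                                        # oqs.get_enabled_sig_mechanisms()

    with oqs.Signature(ALG) as signer:
        pqc_public_key = signer.generate_keypair()   # burned into the device at manufacture
        firmware_image = b"firmware payload goes here"
        pqc_signature = signer.sign(firmware_image)  # shipped with the classical signature

    # Device side: ignore these fields until a classically authenticated update
    # flips the "PQC required" flag, then verify both signatures.
    with oqs.Signature(ALG) as verifier:
        assert verifier.verify(firmware_image, pqc_signature, pqc_public_key)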
We'd be so much further along if all of the big corporate players in this space (google, IBM, microsoft, amazon, honeywell) could cooperate rather than compete.
This is a fundamental fact that way too many out there refuse to grasp.
I see no evidence that it's true. It's almost always the case that competition pushes progress faster. The only real exceptions are when the competitors are able to keep core elements of their approach secret, which isn't the case here; both IBM and Google researchers are regularly publishing most of the details of what they're doing.
every other kernel is worse
This is definitively, objectively, not true, at least with respect to code quality, performance and security. Where Linux shines is in support, both in terms of available device drivers and niche-specific features, and in terms of having the broadest base of experienced users and admins. If you need a kernel and OS that will run on nearly any platform, with nearly any devices, and for which you can easily hire people who already know it, Linux is your best option. It got that way not by being supremely excellent at anything but by being reliably good enough (barely) at almost everything.
But it's really not the best kernel. In fact, if you want to look at widely-used OS kernels, I'd say that both of the other alternatives (Darwin and ntoskrnl) are technically better in important ways, and that both are less buggy. I do security, so from my perspective that's the key measure, and both are definitely better than Linux, as are many of the *BSDs.
Note that I'm not knocking Linus. It's actually rather amazing that Linux works all the places and all the ways that it does, and it's a powerful testament to Linus' ability that he's still running his project, even now that it's critical to world technology stacks, including at the biggest tech companies. Being good enough at everything is hard, and that goal is probably fundamentally incompatible with being extremely high-quality, or maximally-performant in a particular niche, or highly secure, etc.
Unless they're holding off because of IP concerns, that doesn't make any sense to me. If the tools work well enough to be worth using on personal projects, why not use it on paid work?
I'm sure it doesn't make any sense to you. Not everyone is going to test a tool they don't understand while in production.
Why not? You're going to read and review the code just as thoroughly as if you'd written it yourself. The "while in production" phrase sounds ominously like you're taking some unusual risk, but you're not. It's no different from writing new code and then taking it through the normal code review and QA processes to put it into production.
The rest of your post is a repeat of your previous. I don't think I need to address it.
I picked apart your weak arguments. But, whatever.
Cue 'This is fine' meme.
Mother Nature will be the real 'downer' for you, methinks.
Most of the 'world' aren't size queens like so many people in the US.
You can tow a small or mission-focused trailer with most anything. Towing a 30K lb boat is not for the faint of horsepower.
Narrator: they shouldn't be *towing* a 30K lb boat in the first place.
People want to be able to take their boat around 150 to 200 miles.
Nailed the real problem. It's not the capability... it's the "destroy the planet for funsies" part.
I wonder if they realize that the money they get through deals like his is still subject to Congressional budgetary controls. The Reagan administration didn't either (or chose to ignore the constitutional limits on presidential power) when they tried to use money from clandestine sales of arms to the Iranians to set up a fund they could spend without Congressional control.
By dabble, I mean that many software engineers are trying out AI tech in their development. Perhaps in a personal project, perhaps in an experiment. But generally not using AI in their "main" work.
Unless they're holding off because of IP concerns, that doesn't make any sense to me. If the tools work well enough to be worth using on personal projects, why not use it on paid work?
Results matter. If something is overhyped, then presumably it fails to live up to the promises. And in this case, I think it may not even live up to being a superior tool to what we currently use. Wasting more time and money than it saves.
This is my point. Don't use it if it wastes time or produces bad results; use it when/where it saves time. One easy way to do this is to copy your source repository and tell the coding agent to go write test cases or implement a feature or whatever, and to keep working on it until the code builds and passes. While it's working, you do the work yourself. When it's done (it will almost certainly finish before you), git diff the result and decide whether to use what it did or the 20% of the work you've done yourself. The time investment for this is negligible. Or, what I tend to do is set the LLM working while I catch up on email, write design docs, attend meetings, etc.
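For concreteness, here's a rough sketch of that flow in Python. The run-coding-agent command is a hypothetical stand-in for whatever agent CLI you actually use, and the paths and prompt are placeholders:

    # Let the agent work on a throwaway copy of the repo while you work on the
    # original; diff what it produced afterward. "run-coding-agent" is a
    # made-up stand-in for your actual tool.
    import shutil
    import subprocess

    src = "/home/me/myproject"
    scratch = "/tmp/myproject-agent"

    shutil.copytree(src, scratch)
    subprocess.run(
        ["run-coding-agent", "--repo", scratch,
         "--task", "write unit tests for the parser; iterate until build and tests pass"],
        check=True,
    )

    # Inspect what it actually did, then cherry-pick or discard.
    subprocess.run(["git", "diff"], cwd=scratch, check=True)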
Hype or lack thereof is irrelevant. If it makes you more productive, use it. If it doesn't, figure out why not. If it's because you should be using it differently, do that. If it's because the tool just sucks, try a different one, or ignore it for a few months until it gets better.
The code you write with AI should look basically the same as the code you'd write without it.
I don't think that's true at all.
It is for me. Why wouldn't it be for you? If the LLM produces code that doesn't meet my rather picky standards, I tell the LLM to fix it, or I fix it. Either way, the code is going to be basically indistinguishable from what I'd write before I send it for review. I guess it's possible that the LLM could write better code than I would write, but it definitely can't be worse than what I would write, because I won't allow that.
One thing I've noticed as the father of a college student in computer engineering is companies won't even bother looking at you unless your GPA is 3.5+. My recommendation to freshmen is to ignore that bullshit "GPA doesn't matter" and protect that with your life. If you ever feel like you're not going to get an A, drop the class right away and try again later. Better to take 5-6 years to graduate instead of taking 18 credit hours your freshman year and destroying your GPA.
I've been a professional software engineer for 35 years, and been involved in hiring for all but the first two, in several different companies from tiny startups to giant corps (IBM and now Google). Maybe computer engineering is different (I doubt it), but in all that time I've never seen or heard of a company that cared about GPA, because it's a really lousy predictor of ability. Sometimes recruiters use GPA as a screening tool when they have absolutely nothing else to go on, but that's the only use of GPA I've seen.
Companies mostly want to see if you can really do the job, and the best evidence is showing that you have done the job, i.e. you have real, professional experience as a software engineer. Failing that, they want to see evidence that you've built stuff, such as a GitHub portfolio (or, before that, a SourceForge portfolio). The best way to get said professional experience is to do an internship or two while you're still in school. Spend your summers working (and getting paid!), so that when you graduate you have that experience to point to.
Get an internship as soon as you can too, while your GPA is still high.
Yes, absolutely, to the internship. Meh to the GPA.
That said, I'm not surprised it's gotten a little tougher. AI tools can't replace experienced SWEs yet, but under the supervision of an experienced SWE they definitely can do most of what interns and entry-level SWEs usually do.
We are dabbling a bit in it but aren't committed to pivoting to it.
I don't understand what this means. How do you "dabble" and what would it mean to "pivot"?
I use AI for stuff it does well, and don't use it for stuff it doesn't do well. Knowing which is which requires investing a little time, but not much.
Because we're not sure if it is an overhyped tech fad as part of a new bubble.
Why should any of that matter? If using it makes you more productive, do it. If it doesn't, don't. This isn't like a new language where using it requires a significant commitment because you'll have to maintain it in the future. The code you write with AI should look basically the same as the code you'd write without it.
The nuclear bombs, since they were known to be killing civilians going about civilian business, were contrary to both the Hague and Geneva conventions.
The critical mistake is right here. They were actually targeting military facilities, just with a weapon of such yield, chosen to overcome the lousy CEP (accuracy), that civilian casualties and damage were also guaranteed. Which was allowed.
Also, it's questionable how many of the civilians were really civilians, given the Japanese "Ketsugo" plan, which organized all men 15-60 and all women 17-40 into combat units, trained millions of civilians, including children, to fight with bamboo spears, farm tools, explosives, Molotov cocktails and other improvised weapons, and had the slogan "100 million deaths with honor", meaning that they expected the entire population to fight to the death rather than surrender to invasion.
Would that actually have happened? No one knows. What is certain is that Allied knowledge of Operation Ketsugo caused war planners to increase their estimates of Allied casualties, because they assumed they'd be fighting the entire population. Experience with Japanese soldiers who consistently fought to the death suggested that Japanese culture just might be capable of that.
That was an important part of the context in which the decision to drop the bombs was made.