Comment Re:10K logical qubits? (Score 1) 32

So a million physical qubits means 10K or so usable logical qubits (IBM has mentioned needing around 100 physical qubits for each error-corrected logical qubit). If IBM can build a million-physical-qubit system by 2030, larger ones will no doubt follow. Moving to PQC, deprecating RSA-2048 in 2030, and disallowing it in 2035 are probably the right recommendations from NIST.

No one should be using RSA now, even ignoring QC. RSA is slow, unwieldy and error-prone. No one who knows what they're doing uses it except in very narrow niches where it has properties that EC doesn't. Every cryptographer and cryptographic security engineer I know (including me) treats the use of RSA in protocol designs as analogous to a "code smell", a strong one. If I see a protocol design that uses RSA, it's an immediate red flag that the designer very likely doesn't know what they're doing and has probably made a bunch of mistakes that compromise security. Unless, of course, the design explains in detail why they did the weird and risky thing. Competent people will know it's weird and risky and explain their rationale for using RSA in the first few paragraphs of the doc.

However, the EC-based things people should be using are also at risk from QCs, and everyone making hardware with a lifespan of more than a few years should be moving to PQC algorithms now. At a minimum, you should make sure that your cryptography-dependent designs explicitly plan for how you will migrate to PQC (including on devices in the field, if relevant). You don't have to actually move now as long as you have a clear path for moving later. But if you're, say, shipping hardware with embedded firmware verification keys, you should probably make sure that it contains a SPHINCS+ key or something, and some way to enable its use in the future, even if only to bootstrap the use of some more manageable PQC algorithm.
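To make that concrete, here's a minimal sketch (my own illustration, not anything IBM or NIST has specified) of the kind of dual-key firmware manifest I mean: the device ships with its existing EC key plus a dormant SPHINCS+ key slot, and a signed configuration update can flip verification over to the PQC path later. The verify functions are hypothetical placeholders, not calls into any real crypto library.

from dataclasses import dataclass

@dataclass
class KeyManifest:
    ec_pubkey: bytes           # classical key the device trusts today (e.g. ECDSA/Ed25519)
    pqc_pubkey: bytes          # SPHINCS+ (or similar) key, dormant until enabled
    pqc_enabled: bool = False  # flipped later by a signed configuration update

def verify_ec(pubkey: bytes, image: bytes, sig: bytes) -> bool:
    # Placeholder: wire in whatever EC signature verification you already use.
    raise NotImplementedError

def verify_pqc(pubkey: bytes, image: bytes, sig: bytes) -> bool:
    # Placeholder: wire in a SPHINCS+ (or other PQC) implementation when enabled.
    raise NotImplementedError

def verify_firmware(manifest: KeyManifest, image: bytes, sig: bytes) -> bool:
    # Until the PQC path is enabled, keep using the classical key; once enabled,
    # require the PQC signature (perhaps only to bootstrap a different PQC scheme).
    if manifest.pqc_enabled:
        return verify_pqc(manifest.pqc_pubkey, image, sig)
    return verify_ec(manifest.ec_pubkey, image, sig)

The point isn't this specific structure; it's that the key material and the switch have to be present in devices you ship now, because you can't add them to hardware that's already in the field.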

Comment Re:open science vs corporate R&D (Score 1) 32

We'd be so much further along if all of the big corporate players in this space (google, IBM, microsoft, amazon, honeywell) could cooperate rather than compete.

This is a fundamental fact that way too many out there refuse to grasp.

I see no evidence that it's true. It's almost always the case that competition pushes progress faster. The only real exceptions are when the competitors are able to keep core elements of their approach secret, which isn't the case here; both IBM and Google researchers are regularly publishing most of the details of what they're doing.

Comment Re:Last (Score 1) 97

every other kernel is worse

This is definitively, objectively, not true, at least with respect to code quality, performance and security. Where Linux shines is in support, both in terms of available device drivers and niche-specific features, and in terms of having the broadest base of experienced users and admins. If you need a kernel and OS that will run on nearly any platform, with nearly any devices, and for which you can easily hire people who already know it, Linux is your best option. It got that way not by being supremely excellent at anything but by being reliably good enough (barely) at almost everything.

But it's really not the best kernel. In fact, if you want to look at widely-used OS kernels, I'd say that both of the other alternatives (Darwin and ntoskrnl) are technically better in important ways, and that both are less buggy. I do security, so from my perspective that's the key measure, and both are definitely better than Linux, as are many of the *BSDs.

Note that I'm not knocking Linus. It's actually rather amazing that Linux works all the places and all the ways that it does, and it's a powerful testament to Linus' ability that he's still running his project, even now that it's critical to world technology stacks, including at the biggest tech companies. Being good enough at everything is hard, and that goal is probably fundamentally incompatible with being extremely high-quality, or maximally-performant in a particular niche, or highly secure, etc.

Comment Re:Constant re-training (Score 1) 171

Unless they're holding off because of IP concerns, that doesn't make any sense to me. If the tools work well enough to be worth using on personal projects, why not use it on paid work?

I'm sure it doesn't make any sense to you. Not everyone is going to test a tool they don't understand while in production.

Why not? You're going to read and review the code just as thoroughly as if you'd written it yourself. The "while in production" phrase makes it sound ominously as though you're taking some unusual risk, but you're not. It's no different from writing new code and then taking it through the normal code review and QA processes to put it in production.

The rest of your post is a repeat of your previous. I don't think I need to address it.

I picked apart your weak arguments. But, whatever.

Comment Re:Constant re-training (Score 1) 171

By dabble, I mean that many software engineers are trying out AI tech in their development. Perhaps in a personal project, perhaps in an experiment. But generally not using AI in their "main" work.

Unless they're holding off because of IP concerns, that doesn't make any sense to me. If the tools work well enough to be worth using on personal projects, why not use it on paid work?

Results matter. If something is over hyped, then presumably it fails to live up to the promises. And in this case, I think it may not even live up to being a superior tool to what we currently use. Wasting more time and money than it saves.

This is my point. Don't use it if it wastes time or produces bad results; use it when and where it saves time. One easy way to do this is to copy your source repository and tell the coding agent to go write test cases or implement a feature or whatever, and to keep working on it until the code builds and the tests pass, and while it's working you do the work yourself. When it's done (it will almost certainly finish before you), git diff the result and decide whether to use what it did or to keep the version you're 20% of the way through. The time investment for this is negligible. Or, what I tend to do is set the LLM working while I catch up on email, write design docs, attend meetings, etc.
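Roughly, the loop looks like this (a sketch only; "my-coding-agent" and the paths are stand-ins for whatever agent CLI and repo you actually use, not real tools):

import subprocess

def run(cmd, cwd=None):
    subprocess.run(cmd, cwd=cwd, check=True)

# Work in a throwaway copy so the agent can't disturb your own checkout.
run(["git", "clone", "/path/to/repo", "/tmp/agent-copy"])

# Hypothetical agent invocation: keep iterating until the build and tests pass.
run(["my-coding-agent", "--task", "implement feature X; iterate until tests pass"],
    cwd="/tmp/agent-copy")

# When it's done, diff the agent's changes against the clone's baseline and
# keep whatever is good.
run(["git", "diff"], cwd="/tmp/agent-copy")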

Hype or lack thereof is irrelevant. If it makes you more productive, use it. If it doesn't, figure out why not. If it's because you should be using it differently, do that. If it's because the tool just sucks, try a different one, or ignore it for a few months until it gets better.

The code you write with AI should look basically the same as the code you'd write without it.

I don't think that's true at all.

It is for me. Why wouldn't it be for you? If the LLM produces code that doesn't meet my rather picky standards, I tell the LLM to fix it, or I fix it. Either way, the code is going to be basically indistinguishable from what I'd write before I send it for review. I guess it's possible that the LLM could write better code than I would write, but it definitely can't be worse than what I would write, because I won't allow that.

Comment Re:No mention of GPA? (Score 3, Interesting) 171

One thing I've noticed as the father of a college student in computer engineering is companies won't even bother looking at you unless your GPA is 3.5+. My recommendation to freshmen is to ignore that bullshit "GPA doesn't matter" and protect that with your life. If you ever feel like you're not going to get an A, drop the class right away and try again later. Better to take 5-6 years to graduate instead of taking 18 credit hours your freshman year and destroying your GPA.

I've been a professional software engineer for 35 years, and been involved in hiring for all but the first two, in several different companies from tiny startups to giant corps (IBM and now Google). Maybe computer engineering is different (I doubt it), but in all that time I've never seen or heard of a company that cared about GPA, because it's a really lousy predictor of ability. Sometimes recruiters use GPA as a screening tool when they have absolutely nothing else to go on, but that's the only use of GPA I've seen.

Companies mostly want to see if you can really do the job, and the best evidence is showing that you have done the job, i.e. you have real, professional experience as a software engineer. Failing that, they want to see evidence that you've built stuff, so a GitHub portfolio (or, before that, a SourceForge portfolio). The best way to get said professional experience is to do an internship or two while you're still in school. Spend your summers working (and getting paid!), so that when you graduate you have that experience to point to.

Get an internship as soon as you can too, while your GPA is still high.

Yes, absolutely, to the internship. Meh to the GPA.

That said, I'm not surprised it's gotten a little tougher. AI tools can't replace experienced SWEs yet, but under the supervision of an experienced SWE they definitely can do most of what interns and entry-level SWEs usually do.

Comment Re:Constant re-training (Score 1) 171

We are dabbling a bit in it but aren't committed to pivoting to it.

I don't understand what this means. How do you "dabble" and what would it mean to "pivot"?

I use AI for stuff it does well, and don't use it for stuff it doesn't do well. Knowing which is which requires investing a little time, but not much.

Because we're not sure if it is an over hyped tech fad as part of a new bubble.

Why should any of that matter? If using it makes you more productive, do it. If it doesn't, don't. This isn't like a new language where using it requires a significant commitment because you'll have to maintain it in the future. The code you write with AI should look basically the same as the code you'd write without it.

Comment Re:Why the nukes were illegal (Score 2) 124

The nuclear bombs, since they were known to be killing civilians going about civilian business, were contrary to both the Hague and Geneva conventions.

The critical mistake is right here. They were actually targeting military facilities, just with a weapon whose yield was chosen to overcome the lousy CEP (accuracy), and that yield also guaranteed civilian casualties and damage. Which was allowed.

Also, it's questionable how many of the civilians were really civilians, given the Japanese "Ketsugo" plan, which organized all men aged 15-60 and all women aged 17-40 into combat units, trained millions of civilians, including children, to fight with bamboo spears, farm tools, explosives, Molotov cocktails and other improvised weapons, and adopted the slogan "100 million deaths with honor", meaning that they expected the entire population to fight to the death rather than surrender to invasion.

Would that actually have happened? No one knows. What is certain is that Allied knowledge of Operation Ketsugo caused war planners to increase their estimates of Allied casualties because they assumed they'd be fighting the entire population. Experience with Japanese soldiers who consistently fought to the death validated that the Japanese culture just might be capable of that.

That was an important part of the context in which the decision to drop the bombs was made.

Comment Re:garbage (Score 1) 124

Remember: the attack on Pearl Harbor was a sneak attack on a Sunday morning while Japanese diplomats in the capital of the nation attacked were still actively pretending to be in a state of peace and in serious negotiations.

The Japanese intention was to issue formal notice that they were terminating negotiations and that further talks were impossible, effectively announcing an end to peaceful relations and declaring war, about 30 minutes before the attack, but they screwed up. Delays in decoding and preparing the message resulted in it being delivered about an hour after the attack.

An interesting twist here is that the US had broken the Japanese diplomatic code and was reading all of the correspondence between Japan and its embassy. Roosevelt knew Japan was breaking off talks more than 13 hours before the attack, and even before the Japanese ambassador did. So the US knew an attack was coming, though they didn't know it would land on Pearl Harbor. The assumption was that Japan would begin by attacking US forces in Southeast Asia. The commander of Pearl Harbor and other Pacific forces should have been notified of the change in the diplomatic situation well before the attack began, but bad assumptions and bureaucracy delayed that notification. Had that notification been delivered, and taken seriously, the Pacific battleship fleet would have been at sea when the Japanese bombers arrived, able to maneuver and fight back effectively, and US air forces would have been on alert and able to get off the ground and fight back. It's even likely the Japanese fleet would have been located and attacked.

Of course, US internal failures in no way affect Japan's culpability for not providing timely notice. The Japanese certainly didn't know that the US was reading their mail.

SOME Japanese made an effort to stop the fighting on terms favorable, and this would have preserved the Imperial Japanese Empire in the form that had been running wild across the Pacific theater mass-murdering the innocent - an absolutely non-starter negotiating point.

Indeed. The Japanese Empire had to be dismantled. Could it have been done without nukes? Sure. But definitely at much higher cost in American lives, and probably Japanese lives as well. The Japanese planned to send hordes of spear-wielding civilians to attack invading US Marines. It's horrible that many civilians died in Hiroshima and Nagasaki, blown up and burned and irradiated. But the alternative was to kill many of them in firebombings, or to machinegun them on the beaches.

We'll never know exactly what might have happened if the bombs hadn't been dropped. But what happened after Nagasaki was that the fighting ended, paving the way for what has, as you mentioned, been a surprising and marvelous friendship.

Comment Re:Who pays the tariffs ? (Score 1) 108

You're right, that number is absurd. I guess my brain autocorrected it to "the price was 1500% of what it should be"

Many people seem to autocorrect most of what Trump says, on the assumption that the leader of the free world couldn't possibly be as stupid as he appears to be. Unfortunately, his biggest talent is and always has been inflating his apparent intelligence and competence; what you see is more than what you get, not less.

Comment Re:THE REAL CODE (Score 1) 181

Linus had a good code example, and Phoronix quoted it. However, Slashdot turned it into ")a".

The original code... and no I can't get it to display the two characters either using tags or backslashes...

"( a TWO-GREATER-THAN-SIGNS-FOR-BINARY-SHIFT-LEFT 16 ) + b"

(a << 16) + b.

What I typed to get that was "(a &lt;&lt; 16) + b".
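For anyone wondering, here's a quick illustration (Python, purely for demonstration) of both halves: what the expression computes, and why the entity-escaped form is what you have to type to get literal angle brackets through an HTML-based comment system.

import html

a, b = 3, 5
print((a << 16) + b)           # 196613: a shifted left 16 bits, plus b

typed = html.escape("(a << 16) + b")
print(typed)                   # (a &lt;&lt; 16) + b  -- what you have to type
print(html.unescape(typed))    # (a << 16) + b        -- what the page displays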

Comment Re:One fucking mouse button (Score 1) 39

Almost every decision any company makes comes down to money

This is the economist view, but it's really not true. Companies are made of people, and people make decisions for all sorts of reasons, almost none of them purely financial even when the justifications are argued in financial terms. In this particular case, Steve Jobs was a UX purist who made all sorts of decisions on the basis of his taste for elegance and simplicity, even when they didn't make much financial sense. Most of his worst ideas were stopped by the people around him -- often with financial arguments -- but many weren't.

The "homo economicus" model is useful, but don't confuse it with reality.

Comment Re:How is this an EO? (Score 4, Insightful) 149

Could we just do away with EO's entirely? They seem more than a little outside the ideal of a government created with checks and balances in mind.

The president is the boss of the executive branch, and he has to be able to give orders to tell them how to do their jobs. Of course, those orders should only be about how they're supposed to go about executing the law, as defined by Congress and the Constitution. The problem isn't the ability of the boss to give orders; that's as it must be. The problem is that Trump is giving orders that tell the executive branch to do things that exceed and sometimes defy the laws defined by Congress and the Constitution.

When that happens, it's the responsibility of the other two branches to rein him in or remove him. The courts are mostly trying to do their job to rein him in, but their ability is limited. The intention is that when the executive gets out of control, Congress should give him the boot. Instead, the GOP leadership in Congress is cheering him on.

Comment Re:Time for some Boomers (Score 1) 149

Its value is inherited from the same basis that gives fiat currency its value, which is to say, pure magical thought.

Completely untrue. Fiat currency gets its value in that it is backed by the government and economy of a nation state.

To be more precise, it's backed by the enforceability of debt contracts, i.e. the judicial and executive branches of the nation state. But that's only what makes sure the value is provided; what actually backs fiat currency is someone's legally binding promise to do some sort of productive work. Fiat money is created by banks when they issue loans (including, but definitely not limited to, the Federal Reserve bank), so every dollar created is balanced by a dollar of debt created at the same time, and when the debt is paid off, the dollar ceases to exist.

Suppose you want to build a new house. You go to the bank and borrow, say, $1M. Many people think the bank loans you $1M that they have sitting in their (virtual) vault, money that was deposited by people with savings accounts at the bank, but that's not true. What happens is that the bank invents $1M to lend to you. It's not quite that simple (but note that we've abandoned fractional reserve lending; there are no reserve requirements any more), but we'll gloss over the irrelevant complexities. The core point is, the bank creates (a) $1M and (b) a mortgage contract, which offset one another. $1M came into existence, but there's a promise to repay, and therefore destroy, that money.

You pay the money to the construction company, who pays it to their workers and suppliers and investors, and they build you a house. You move in and all is good, except now you have an obligation to apply your labor for the next 30 years to generate value so you can make your monthly mortgage payments.
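If it helps, here's a toy double-entry sketch of that mechanism (purely illustrative, not a model of real banking or its regulation): issuing the loan creates a deposit (new money) and an offsetting debt, and repaying the loan destroys both.

class ToyBank:
    def __init__(self):
        self.deposits = {}  # money in circulation (the bank's liabilities)
        self.loans = {}     # promises to repay (the bank's assets)

    def issue_loan(self, borrower, amount):
        # New money and new debt appear together, so they net to zero.
        self.deposits[borrower] = self.deposits.get(borrower, 0) + amount
        self.loans[borrower] = self.loans.get(borrower, 0) + amount

    def repay(self, borrower, amount):
        # Repayment pulls money out of circulation and retires the debt.
        self.deposits[borrower] -= amount
        self.loans[borrower] -= amount

    def money_supply(self):
        return sum(self.deposits.values())

bank = ToyBank()
bank.issue_loan("homebuyer", 1_000_000)
print(bank.money_supply())  # 1000000: the $1M now exists
bank.repay("homebuyer", 1_000_000)
print(bank.money_supply())  # 0: loan paid off, the money is gone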

That is what backs fiat currency: the future labor of borrowers. The exact nature of the labor is undefined, of course, but that doesn't make the labor itself any less of a very real, very productive (by definition!) asset, and it is that productive asset that gives the currency its intrinsic value. And, of course, the legal system backs the contracts, so you can't just decide you don't want to pay, not without significant negative consequences. That's just one of a number of crucial roles the government plays. Another is the laws that mandate that all debts, public and private, be denominated in and payable with the nation's currency.

Obviously, crypto assets (they really aren't currencies; even ignoring their lack of intrinsic value, they suck as currencies) have nothing remotely comparable to that sort of solid foundation. Less obviously, so-called "hard" currencies don't either. The classic hard currency, gold, does have a little intrinsic value because it's pretty and has some useful physical properties, but the vast majority of its value is "pure magical thought". It has that value only because everyone believes it does, but there's nothing behind that belief but tradition, not even anything as abstract as contractual obligations. Unlike fiat currencies.
