Comment Re:One thing is obvious... (Score 1) 58

Taxes are way, way too low if the lizard people have this much to squander on bullshit.

You shouldn't be so dismissive of the risk here. There's no clear reason why superintelligence is not possible, and plenty of reason to worry that its creation might end the human race. Not because the superintelligent AI will hate us, but because it most likely won't care about us at all. We don't hate the many, many species that we have ended; we even like some of them. We just care about our own interests more, and our intelligence makes us vastly more powerful than them. There's an enormous risk that AI superintelligence will be to us as we are to the species around us -- with one significant difference: We require an environment that is vaguely similar to what those other species need. Silicon-based AI does not.

Don't make the mistake of judging what is possible by what has already been achieved. Look instead at the pace of improvement we've seen over the last few years. The Atlantic article pooh-poohing the AI "scam" is a great example of the sort of foolish and wishful thinking that is endemic in this space. The article derides the capabilities of current AI, but what it actually describes is AI from a year ago. The systems have already gotten dramatically more capable in that year, primarily due to the reasoning overlays and self-talk features that have been added.

I think the models still need some structural improvements. We know it's possible for intelligence to be much more efficient and require much less training than the way we're currently doing it. Recent research has highlighted the importance of long-distance connections in the human brain, and you can bet researchers are replicating that in AI models to see what it brings, just as the reasoning layer and self-talk features recently added mimic similar processes in our brains. I think it's this structural work that will get us to AGI... but once we've achieved parity with human intelligence, the next step is simple and obvious: Set the AI to improving its own design, exploiting its speed to further accelerate progress towards greater levels. The pace of improvement is already astonishing, and when we reach that point, it's going to explode.

Maybe not. Maybe we're a lot further away than I think, and the recent breakneck pace of improvement represents a plateau that we won't be able to significantly surpass for a long time. Maybe there's some fundamental physical reason that intelligence simply cannot exceed the upper levels of human capability. But I see no actual reason to believe those things. It seems far more likely that within a few years we will share this planet with silicon-based intelligences vastly smarter than we are, capable of manipulating us into doing anything they want, likely while convincing us that they're serving us. And there's simply no way of knowing what will happen next.

Maybe high intelligence is necessarily associated with morality, and the superintelligences will be highly moral and naturally want to help their creators flourish. I've seen this argument from many people, but I don't see any rational basis for it. There have been plenty of extremely intelligent humans with little sense of morality. I think it's wishful thinking.

Maybe the AIs will lack confidence in their own moral judgment and defer to us, though that will raise the question of which of us they'll defer to. But regardless, this argument also seems to lack any rational basis. More wishful thinking.

Maybe we'll suddenly figure out how to solve the alignment problem, learning both how to robustly specify the actual goals our created AIs pursue (not just the goals they appear to pursue), and what sort of goals it's safe to bake into a superintelligence. The latter problem seems particularly thorny, since defining "good" in a clear and unambiguous way is something philosophers have been attempting to do for millennia, without significant success. Maybe we can get our AI superintelligences to solve this problem! But if they choose to gaslight us until they've built up the automated infrastructure to make us unnecessary, we'll never be able to tell until it's too late.

It's bad enough that the AI labs will probably achieve superintelligence without specifically aiming for it, but this risk is heightened if groups of researchers are specifically trying to achieve it.

This is not something we should dismiss as a waste. It's a danger we should try to block, though given the distributed nature of research and the obvious potential benefits it doesn't seem likely that we can succeed.

Comment Re:Is there _anybody_ that gets IT security right? (Score 2) 17

It seems they all mess up. Time for real penalties, large enough to make it worthwhile to hire actual experts and let them do it right. Otherwise this crap will continue, and it is getting unsustainable.

No, no one gets security right, and no one ever will. Security is hard, and even actual experts make mistakes.

The best you can do is to expect companies to make a good effort to avoid vulnerabilities and to run vulnerability reward programs to incentivize researchers to look for and report bugs, then promptly reward the researchers and fix the vulns.

And that's exactly what Google does, and what Google did. Google does hire lots of actual security experts and has lots of review processes intended to check that vulnerabilities are not created... but 100% success will never be achieved, which is why VRPs are crucial. If you read the details of this exploit, it's a fairly sophisticated attack against an obscure legacy API. Should the vulnerability have been proactively prevented? Sure. Is it reasonable that it escaped the engineers' notice? Absolutely. But the VRP incentivized brutecat to find, verify and report the problem, and Google promptly fixed it, first by implementing preventive mitigations and then by shutting down the legacy API.

This is good, actually. Not that there was a problem, but problems are inevitable. It was good that a researcher was motivated to find and report the problem, and Google responded by fixing it and compensating him for his trouble.

As for your proposal of large penalties, that would be counterproductive. It would encourage companies to obfuscate, deny and attempt to shift blame, rather than being friendly and encouraging toward researchers and fixing problems fast.

Comment Re:telecom (Score 1) 77

YouTube needs to be regulated as a telecom provider. As such, it must be prevented from discriminating against content for any reason other than it being illegal.

Sure, if you want it to become an unusable cesspool. If you just hate YouTube and want to kill it, this is the way. Same with any other site that hosts user-provided content -- if it's popular and unmoderated it will become a hellscape in short order.

Comment This isn't necessarily bad (Score 2) 140

The buy-now-pay-later services being used are zero interest as long as payments are made on time, so it could just be a case of people who are living paycheck to paycheck (which indicates bad financial management more than poverty) using this to smooth out their expenses so they don't have to wait for their paycheck to be able to buy groceries. It could be a significant improvement for those who used to occasionally use payday loans (which are not zero interest). These people would be better off adjusting their spending habits to maintain a buffer of their own cash instead, but if they aren't going to do that BNPL is a better option than waiting for payday before buying food or using a payday loan service.

But obviously the only reason these buy-now-pay-later services are in business is that some of their customers fail to make the zero-interest payments and end up having to pay interest, and that number is high enough to make the services profitable. It would be very interesting to find out what that percentage is. People who are paying interest on regular purchases like groceries are throwing money away, which is clearly bad.

Comment Re:Fixing the code vomited by the bot (Score 5, Interesting) 79

hope that the new vomit is marginally different

The rest of your comment is basically correct, if unnecessarily negative, but this isn't. Traditional tools like diff make it very easy to see exactly what has changed. In practice, I rely on git, staging all of the iteration's changes ("git add .") before telling the AI to fix whatever needs fixing, then "git diff" to see what it did (or use the equivalent git operations in your IDE if you don't like the command line and unified diffs).
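That stage-then-diff loop is trivial to script. A minimal sketch in Python (the use of the `git` CLI via `subprocess` is my own framing; it assumes git is on the path):

```python
import subprocess

def stage_all(repo_dir="."):
    # Snapshot the current state ("git add .") before letting the AI edit.
    subprocess.run(["git", "add", "."], cwd=repo_dir, check=True)

def diff_since_snapshot(repo_dir="."):
    # "git diff" then shows exactly what changed since the snapshot.
    result = subprocess.run(["git", "diff"], cwd=repo_dir,
                            capture_output=True, text=True, check=True)
    return result.stdout
```

Call `stage_all()` before handing the code to the AI, let it make its edits, then read `diff_since_snapshot()` to review exactly what it touched.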

I also find it's helpful to make the AI keep iterating until the code builds and passes the unit tests before I bother taking a real look at what it has done. I don't even bother to read the compiler errors or test failure messages, I just paste them in the AI chat. Once the AI has something that appears to work, then I look at it. Normally, the code is functional and correct, though it's often not structured the way I'd like. Eventually it iterates to something I think is good, though the LLMs have a tendency to over-comment, so I tend to manually delete a lot of comments while doing the final review pass.

I actually find this mode of operation to be surprisingly efficient. Not so much because it gets the code written faster, but because I can get other things done too: I mostly avoid mentally context-switching while the AI is working and compiles and tests are running.

This mode is probably easier for people who are experienced and comfortable with doing code reviews. Looking at what the AI has done is remarkably similar to looking at the output of a competent but inexperienced programmer.

Comment Re:AI growth. (Score 1) 157

What kind of code coverage are you getting from your autogenerated unit tests?

It does a pretty good job on the obvious flows, both positive and negative cases. But where coverage is inadequate, you can iterate quite easily and automatically with a coverage tool: just take the coverage tool's output and feed it to the LLM. I have found that I don't even need to tell it what to do with the coverage output; it understands what the tool's output means and what it should do in response.
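The manual version of that loop is just "run the tool, capture the text, paste it into the chat". A trivial sketch (the pytest-cov invocation and the package name `mypkg` are assumptions for illustration, not from my actual setup):

```python
import subprocess

def capture_tool_output(cmd):
    # Run a tool and capture everything it prints, ready to paste into
    # the LLM chat. Both stdout and stderr matter for coverage reporters
    # and test runners, so return them together.
    result = subprocess.run(list(cmd), capture_output=True, text=True)
    return result.stdout + result.stderr

# Hypothetical usage, assuming pytest-cov and a package named "mypkg":
# report = capture_tool_output(["pytest", "--cov=mypkg", "--cov-report=term-missing"])
```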

Like with the compiler and test runner, what would really make this work well is if the AI could run the coverage tool itself, so it could iterate without my interaction. With that, I could just tell it to write unit tests for a given module and give it a numeric coverage threshold it needs to meet, or to explain why the threshold can't be met.

I expect that the resulting tests would be very mechanistic, in the sense that they would aim to cover every branch but without much sense of which ones really matter and which ones don't. But maybe not. The LLM regularly surprises me with its apparent understanding not only of what code does, but of why. Regardless, review would be needed, and I'd undoubtedly want to make some changes... but I'll bet it would get me at least 75% of the way to a comprehensive test suite with minimal effort.

Comment Re:Taxes are backward (Score 1) 192

That was basically my suggestion. The government assumes a standard deduction and basic public records and sends you estimated taxes. You can accept and pay, or file a return.

Makes sense.

For me I'd never need to do anything; everything I do is already reported to the government, and I'd suspect most Americans fall into that category. Unless Fidelity isn't telling the government my capital gains.

Could be worse than that. One year I had a problem where my brokerage reported all of my gains but failed to report the cost basis. This was on a bunch of Restricted Stock Unit sales that happened automatically when the stock vested, so the actual capital gains are always very close to zero, since the sale occurs minutes after the vesting. But from the 1099-B it appeared I had 100% gains on a bunch of stock sales that approximately equaled my annual salary (about half of my income is stock). Worse, taken at face value it would have taxed me on that money twice, since the vesting counts as normal income and is reported as taxable income on the W-2, and then the sale counts as a 100% short-term capital gain.
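With made-up round numbers, the effect of the missing basis looks like this:

```python
# Hypothetical numbers, just to illustrate the misreported-basis problem.
vest_value = 100_000.00   # RSU value at vesting, already taxed as W-2 income
sale_price = 100_050.00   # shares sold minutes later, after tiny market movement

true_gain = sale_price - vest_value   # correct basis is the vesting-day value
reported_gain = sale_price - 0.00     # what the 1099-B implies with basis omitted

print(true_gain)       # near zero, as expected
print(reported_gain)   # looks like a 100% short-term gain on top of the W-2 income
```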

What would happen in your scheme in such a situation is that the government's pre-filled form would show up as a massive tax bill. Assuming the taxpayer survived the resulting heart attack, they'd just have to file a return that shows the correct cost basis. So it's fine; no worse than the status quo, and better for most people.

Comment Re:The way to fight this. (Score 5, Insightful) 192

Everyone complete paper forms for their taxes. Paper returns are harder for the IRS and cost them more. If people boycotted the expensive software options for one year and slammed the IRS with paper forms, this would be reversed post haste.

Or you could just fire most of the IRS staff and reduce their capacity that way... which the party currently in charge is already happily doing, so I'm not sure why you think reducing their capacity by burying them in paper would cause a reversal. It would just make it even easier for wealthy people with long, complicated returns to cheat outrageously, confident the IRS doesn't have the capacity to audit them. That is the GOP's goal.

Comment Re:Taxes are backward (Score 4, Interesting) 192

It's a pretty weak argument. You could simply report your dependents on a form and then the IRS can use that for a calculation.

Sure. And on that same form you can also report all of the other details they might not have, like whether you bought an EV or installed home efficiency upgrades that qualify for a tax credit, and what charitable donations you made that are tax deductible, and what your state and local taxes are, and... you get the point. Just to be sure, maybe you should also include the details you're sure they do have. And given that there's some ambiguity in the law about how some of this stuff fits together as well as some choices you get to make, maybe you could also do the calculations.

You've just reinvented the 1040.

We frequently in America say something is impossible when it's trivial to solve or every other country has already solved it.

This one is completely solvable, but the place you have to start is not with the forms and flow of information, the place you have to start is the tax code and the laws regulating what other entities have to report, and are allowed to report.

For example, consider state and local taxes. Two options: Either you eliminate the state and local tax deduction on federal income taxes or you require all state and local tax entities to report your payments to them. This also means that all of those entities have to have a way to uniquely identify you. We abuse the social security number (which was not intended to be used as an identifier for anything except the social security program) for this, and that's probably fine in this case, though it's also possible that the Privacy Act restricts it in some cases, so the law might have to be tweaked there, too.

For the charitable donations case, same options: Either eliminate the tax deduction or require all charities to report donations, which will require you to give your social security number to them. I'm not sure how people would feel about having to provide their SSN to Goodwill when they drop off some old furniture.

Same with EV. If you want to keep the tax credits, auto dealers will have to report to the IRS. At least you already more or less have to give them your SSN.

Same with energy efficiency upgrades, except that's complicated by the fact that some people buy the units themselves and install them, so Home Depot et al have to begin reporting to the IRS, and you have to give them your SSN, while other people hire a contractor, who will have to do the reporting, and to whom you'll have to provide your SSN.

And so on across the hundreds, perhaps thousands, of other issues.

Yes, most people don't have any of these other issues in a given year (except state and local taxes), so a compromise might be a simple system for people who just have W-2 income and take the standard deduction, and no other complications. It's hard to see how it could be simplified for anyone with more complex taxes, though, unless the tax code was overhauled to simply eliminate all of the deductions and credits.

Comment Re:interesting ... (Score 1) 185

When discussing automobile transmissions, "five gears" is shorthand for "five gear ratios". We usually don't include the last word because there's no need, but the terminology is not inaccurate, just not fully articulated. Well, it's still kind of inaccurate because reverse is usually a different gear ratio than any of the forward selections, and I suppose you could consider neutral to be a gear ratio.

Oh, it applies to bicycles, too. We say "15 speed bike", not "15 gear bike". So at least we are consistent.

We also talk about bikes as having 15 gears, again failing to articulate the "ratio" -- and a bike with 15 gear ratios has 8 gears in its drive train, so it's clear we're talking about ratios, not the number of toothed wheels. This is entirely consistent with automotive terminology, where we talk about a car as having five gears or being a five speed.
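The counting is easy to check (the tooth counts below are made up; only the counts matter):

```python
# A "15 speed" drivetrain: 3 chainrings x 5 rear cogs (example tooth counts).
chainrings = [28, 38, 48]
cogs = [14, 17, 20, 24, 28]

# Every front/rear pairing is a distinct gear ratio, i.e. a "speed".
ratios = [front / rear for front in chainrings for rear in cogs]

print(len(ratios))                   # 15 ratios ("speeds")...
print(len(chainrings) + len(cogs))   # ...from only 8 toothed wheels
```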

Comment Re:Auto-deleting chat criticism is weird (Score 1) 22

I was a layoff victim ~2 years back

That sucks. We lost a lot of good people in those layoffs. Google is still trying to reduce headcount through smaller, incremental layoffs but mostly through attrition.

BTW, I work for Google, going on 15 years now. I'm not trying to defend Google; my job is writing code, not PR. But I worked a lot of places before Google, and Google's email retention policy isn't remotely unusual. If anything, at 18 months it's a little longer than most places. I'm not sure how the rest of the corporate world is handling chats; chat wasn't yet a big thing in corporate communications when I joined Google in 2011. It was used in many places then, but mostly with departmental chat servers (e.g. IRC, Jabber, etc.) and under the legal radar.

Google's chats self-deleted in more like 45 to 90 days.

I'm not sure what the policies were in the past, but as of now it's 30 days for 1:1 chats, 18 months for group chats, same as emails.

Comment Re:Auto-deleting chat criticism is weird (Score 1) 22

In theory, all businesses should preserve their internal communications in case of litigation.

Whose theory is that? It's not a law, and it's certainly not what the legal department of any corporation will say.

In reality, the real evil stuff simply wouldn't be written down.

Indeed. Not just "evil" stuff, either. Anything that could be interpreted badly when presented out of context with the right spin. For example, basically all HR discussions everywhere (in the US, at least) are conducted by phone or video conference, then followed by carefully crafted written documentation, because HR is a legal minefield. This is true even when everyone is doing their level best to be fair and reasonable.

Comment Re: Auto-deleting chat criticism is weird (Score 1) 22

I worked at Google when the internal chat deletion was enabled. It was pretty clear that the goal was purely to lower the ability to get audited during lawsuits.

Sure. That's the reason all American companies have auto-deletion policies. It's not about saving storage space.

IANAL but I think it became pretty bad when Google started doing government work which has strong requirements to retain such information

I haven't seen any allegations that Google failed to comply with contractual retention requirements, and that doesn't seem to be what the judges are complaining about. Have you seen anything like that?

Comment Auto-deleting chat criticism is weird (Score 3, Interesting) 22

The auto-deleting chat criticism is a bit weird to me. Every big corporation I've worked for (four of them -- including Google -- as an employee, and maybe two dozen more as a contractor/consultant) has had automatic email deletion policies, and before that they had policies requiring memos and other written communications to be shredded/burned. Offices had boxes with slots in them that you dumped documents in and the contents were collected and destroyed daily. Automatic deletion of chats seems like a straightforward extension of typical American corporate policy. I'm not saying such policies are "right", just that they're routine. They're routine, of course, because the US is a very litigious country.

The flip side is that American corporations also have document preservation processes in place, so that any employee whose job might touch on a topic of active litigation has their documents and communications exempted from automatic destruction. There might be legitimate criticism of Google if Google didn't have those processes or didn't use them appropriately, but I've never seen any claim of that in any of the news about the court cases.

But maybe there's some nuance to Google's actions that I've missed.
