Comment Re:So enforce the same working standards (Score 1) 231

You can be born into slavery. Hard labor in a prison is a punishment for a crime and lasts only as long as the sentence.

That's absolutely not true. Chattel slavery is by no means the only form of slavery. If you were to go back in time to ancient Rome or Greece or any number of other cultures, you would be surrounded by people who were slaves but were by no means born into it, and who might expect to become free again at some point in their lives.

The US absolutely practices slavery in its prison system.

Comment Re:This is actually a great problem and very bad n (Score 2) 144

But if you think this is a problem, now let's turn to early July. Solar is now putting out its max, around 30GW at midday... Now the problem with solar is that most of it is not under the control of the grid operator, so they cannot turn it off.

Sure they can: just stop buying it. The utility disconnects the inverters and the panels become shiny glass. A solar cell without a load doesn't explode or anything.

Comment Re:How? (Score 1) 144

We need more smart appliances that can be set to run on a signal from the grid. E.g., I could plug my car in but have the EVSE deliver power only when the grid tells it to. Or press a "delayed start" button on my dishwasher that tells it to run in 4 hours or when the grid tells it to, whichever comes first. Same for clothes washers/dryers.
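The "delayed start" behavior described above is easy to sketch. This is a made-up illustration, not any real appliance API: the device runs when the grid gives the go-ahead, or when the user's deadline expires, whichever comes first.

```python
def should_start(grid_says_go: bool, hours_waited: float, deadline_hours: float = 4.0) -> bool:
    """Start if the grid signals surplus power, or the deadline is reached."""
    return grid_says_go or hours_waited >= deadline_hours

# No grid signal yet, only 1 hour elapsed: keep waiting
print(should_start(False, 1.0))  # False
# Grid signals surplus solar at hour 2: start now
print(should_start(True, 2.0))   # True
# No signal ever arrives: start anyway at the 4-hour deadline
print(should_start(False, 4.0))  # True
```

The same two-condition rule covers the EVSE and the dishwasher; only the deadline differs.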

If more houses had whole-home batteries, they could charge during the day and discharge at night to do load shifting.
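The economics of that load shifting come down to simple arithmetic. A back-of-the-envelope sketch, with illustrative numbers (not real tariffs or hardware specs):

```python
def load_shift_savings(battery_kwh: float, efficiency: float,
                       day_price: float, night_price: float) -> float:
    """Net savings per cycle: charge cheap at midday, discharge at the
    evening rate, accounting for round-trip losses."""
    delivered = battery_kwh * efficiency       # usable kWh after losses
    cost_to_charge = battery_kwh * day_price   # bought during solar surplus
    avoided_cost = delivered * night_price     # evening energy not purchased
    return avoided_cost - cost_to_charge

# 13.5 kWh battery, 90% round-trip efficiency,
# $0.05/kWh midday solar surplus, $0.30/kWh evening rate
print(round(load_shift_savings(13.5, 0.90, 0.05, 0.30), 2))  # 2.97
```

A few dollars per cycle per house is small individually, but aggregated across many homes it is exactly the evening-peak shaving the grid operator wants.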

Comment Re:GPT5 found the same issues (Score 4, Insightful) 37

I'm not the biggest AI proponent, but a security flaw is a flaw no matter who found it or how obscure. If the LLM agent can come up with an exploit that is demonstrable, then it should get fixed. That is not a scam, that is a real improvement to the security of the software under test. Who cares if nobody found them before? They are found now and so they need to be fixed now.

Like it or not, these tools are out there, and they are in the hands of state actors who are also using them to find exploits. If Anthropic wants to burn some of their money on finding and responsibly disclosing some exploits in software that is an important part of our infrastructure, then great.

Comment Re:Rust is a specialist language (Score 2) 170

I can't speak for the above poster but I use the following, among others:

  • bat - syntax-aware pager
  • zoxide - smart directory lookup and path history
  • tree-sitter - syntax tree generator for vim
  • neovim - vim replacement
  • rg - ripgrep - a faster, git-aware regex tool
  • starship - a shell prompt tool that has a bunch of the things you want built into it, so no calling out to 12 different programs to build your prompt
  • fd - drop-in replacement for find with colorized results

...probably others

Comment Re:Use protection (Score 4, Informative) 50

All notification content going through Apple in plain text is stupid (Google is the same). All that because you're not allowed to keep a persistent TCP connection as an app.

Keeping a persistent TCP connection would obliterate your battery. If any app could just do that, everyone's phone would be dying all the time because some process, opaque to the user, was holding a connection and keeping the radio warmed up even while the phone sleeps in their pocket. The only sane way is to have a system process wake the phone at predefined, sane intervals and pick up all the messages at once in a batch.

Also, you are wrong: message bodies aren't going through Apple's servers in plain text. When Signal has pending messages for a user, the Signal message server sends an empty "ping" notification for that user to Apple. The iOS notification service delivers the notification to Signal. Signal then wakes up, picks up the encrypted message from the cloud, decrypts it, and pops a notification containing the plain text.

It's these decrypted messages in the local iOS notification queue that the FBI recovered, NOT the cloud notifications, which contain no sensitive information. You can tell Signal not to put the decrypted message in the notification. You can have it instead say "Message from <contact_name>" or just "New messages" or nothing at all.
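A toy model of that flow, with made-up names (the real implementation is Swift and talks to APNs; XOR stands in for real cryptography). The point it demonstrates: the payload that transits Apple is empty, and decryption happens only on the device.

```python
from dataclasses import dataclass

@dataclass
class Push:
    payload: dict  # the only part Apple's servers can see

def signal_server_ping() -> Push:
    # Signal only tells Apple "wake this user's app" -- no message content.
    return Push(payload={})

def device_on_push(push: Push, encrypted_blob: bytes, key: int) -> str:
    # The app wakes, fetches the ciphertext from Signal's own servers,
    # and decrypts locally.
    assert push.payload == {}  # nothing sensitive transited Apple
    return bytes(b ^ key for b in encrypted_blob).decode()

key = 0x42
ciphertext = bytes(b ^ key for b in b"meet at noon")
print(device_on_push(signal_server_ping(), ciphertext, key))  # meet at noon
```

Anything the FBI later pulls off the handset comes from that last local step, not from Apple's queue.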

You don't need to take my word for it, the app is open source. You can see the notification handling code here:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fgithub.com%2Fsignalapp%2FS...

You can see it does nothing with the actual notification content received from Apple's notification service. It's just an empty message used to wake the app up, which then fires off some async jobs to fetch the actual messages.

Comment Re:What I find amusing is... (Score 1) 38

If you ask Claude about any of these features, it will deny that they exist.

It makes you wonder. Were they removed from the models that are currently running, or was Claude taught to not disclose their existence?

"Claude Code" is just a piece of Node.js software that talks to one of the "claude" LLMs (e.g. "Opus") in the cloud. The LLM model running in the cloud of course doesn't know anything about the proprietary client software you are running, because it wasn't trained on it.

It's not about denial; the LLM just isn't trained on the closed-source code of its own client, any more than it is trained on the Windows source code. That code isn't publicly available, so it isn't available as a reference to the model. All it knows is what's in the publicly available manual. You can test this by asking Opus about Claude Code features: you'll see it doing a bunch of WebGet requests for the Claude Code manual.md files.

Comment Re: Can AI clone lawyers & judges? (Score 1) 125

Analogies with the human brain don't work that well here. In our case, every time we recall a memory we rewrite it, altering it anywhere from slightly to completely. An AI system's baseline memory is read-only; it doesn't change during use. It is more like saving a PNG as a JPEG: the JPEG is still a direct derivative copy of the PNG's content, no matter how far one cranks up the compression or how much blurrier the result becomes. Being blurry doesn't make it not a copy, and because it is a copy, copyright applies.
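The "blurry is still a copy" point can be made concrete. In this sketch, crude quantization stands in for JPEG compression: detail is thrown away, yet every output value is directly derived from, and bounded by, the original.

```python
original = [12, 47, 200, 93, 150, 31, 220, 76]

def lossy_compress(values, step=50):
    # Quantize to multiples of `step`: discard detail, like cranking
    # JPEG compression way up.
    return [round(v / step) * step for v in values]

blurry = lossy_compress(original)
print(blurry)
# Every value stays within half a quantization step of the source:
# degraded content, but unmistakably derived from the original.
print(all(abs(a - b) <= 25 for a, b in zip(original, blurry)))  # True
```

No amount of added blur breaks that derivation chain, which is the crux of the copyright argument above.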

Now suppose AI memory started changing globally every single time it received a request from any source, no matter how many sessions or API calls were happening, so that every subsequent call dealt with that altered memory and altered it in turn, leaving the entire memory space in constant flux, with no snapshotting to roll its state back to previous configurations. Then these systems would no longer act as mere static lossy compressors; they would become analogs of a human brain with human-like memory, at which point you could no longer accuse them of simply making derivative copies without also accusing humans.

The problem with that, evidently, is that when they start working like that, since they're functioning exactly as real persons do, they too become persons, with legitimate claim to personhood and to personal rights. Which is a legal can of worms no one wants to deal with.

Comment Re:You sure about that? (Score 1) 125

Computer 1 interprets the program and generates the documentation, saving it to a USB drive.
You unplug the USB drive and move it over to Computer 2.
Computer 2 reads the documentation and generates a new code base.
You can read the documentation and there was no other means of communication.

If you don't think a repeatable process is sufficient "proof" then you aren't being realistic and that's a problem with you, not the law.

Except computers 1 and 2 are probably both running an LLM that has been exposed to the source code, and so are tainted, unless you train your own model that you know has never seen the source, an endeavor that costs tens of millions of dollars.

Comment Re:Owner must prove its a derivative (Score 1) 125

The above is subject to misinterpretation. The copyright owner must demonstrate it's a derivative and win in court. The owner must prove guilt; the publisher does not need to prove innocence.

In a civil case you don't need to prove "guilt", just that it is more likely than not that they looked at your source.

This is why, for example, when there is a major leak or hack against a video game console, emulator developers won't let anyone who has seen the leak work on the project. It exposes them to the accusation that their code is derived from proprietary IP. They know that console manufacturers are just itching to sue them anyway, and being a non-cleanroom implementation gives them the excuse.

I think it would be pretty easy to argue that just about any open source project that lives on a notable hosting platform has been sucked up for LLM training at this point. For that reason, any competitive proprietary project coded with an LLM can be credibly accused of being a derivative work, as the preponderance of circumstantial evidence would point to the LLM being "tainted" by the OSS project, unless you could demonstrate that it was excluded from the training data.

Comment Re:Not tested in court... (Score 1) 125

We don't allow that with human brains: that's why clean-room implementations are a thing. Why should it be any different for LLMs, which are, if anything, less transformative than human cognition? If the model was trained on the data, then I don't think anything spat out by that model can be considered a "clean" implementation, for the same reason you don't let software engineers who have seen your competitor's source code work on your clean-room clone of their projects.

Comment Re: Can AI clone lawyers & judges? (Score 1) 125

The coder is trained but without any copyleft code.

It costs tens of millions of dollars to train a big, competent LLM. GPT-4 cost ~$74M to train, for example. You can hire a team of human devs who have never looked at the source to do a clean-room rewrite of the project for a fraction of what it would cost to develop a "clean" model.

That said, I could see a use for a model that was only trained on MIT-licensed or public domain code.

Comment Re:Liability (Score 1) 54

VPN usage can be detected via deep packet inspection, as China shows. There, the government is aware of all VPN usage and lets it slide, or blocks it, as it sees fit. In Xinjiang the authorities even went after VPN users, demanding to look into their mobile devices to check for forbidden content, not out of need but as an intimidation tactic: an explicit "we know who you are, and where to find you" warning to all inhabitants, so they wouldn't feel empowered by the mere fact that the government was letting them use VPNs.

The UK and other countries are looking into regulating VPNs by demanding that VPN providers also age-verify users. Providers that don't comply will be formally fined (as the UK is trying to fine 4chan, despite being unable to collect) and blocked, which is feasible. Evidently, VPN developers keep improving their protocols to make them harder and harder to distinguish from ordinary traffic, but DPI improves in return. It'll be a cat-and-mouse game like the one I described in my answer to the other reply, until using an unlicensed VPN provider becomes so aggravating that most people give up.

And, an important tidbit: China resells its Great Firewall tech to any interested country. Right now only dictatorships and illiberal democracies buy it, but if VPN tech improves faster than age-verifying countries can keep up with their own locally developed DPI solutions, they too may start purchasing it.
