Re: My monitor is monochrome (Score 1)
black and green?
Can a license force you to let other companies audit you?
Yes, the dialogue is just like a dialogue with humans, but it is only like one, not a real dialogue. LLMs will get things wrong, they will likely make mistakes on some in-depth questions, and they may be very confident and not admit errors easily. Using an LLM takes some training, just like other tools. On the other hand, some errors may even be helpful, when you think "the answer is wrong, but now I see why what I asked for can't work", helping you spot X-Y problems.
Often, especially with reasoning models and when your questions aren't answered in a single sentence, it ends up being more like a brainstorming session than asking an omniscient being. That's fine, because you can brainstorm a lot very fast by steering the LLM toward the parts of the ideas you want to explore. You just need to learn to be a bit cautious and ignore the parts where the LLM was misled. Sometimes it's like getting a young child to report something: you have to ask your questions again in a simpler way to get to the information you really wanted.
I think the main difference from other tools is that LLMs have a more intuitive interface: the chat. That's also the main disadvantage, because people think "it's just like a human" simply because chat is an interface that would also work for talking to actual humans.
Aren't they doing that already? Perplexity, at least, adds citation links to each claim in the LLM's message. That's the difference from plain chatbots: these tools actually link to where the information comes from, so you can read more and/or find out whether the source is reputable.
Sounds cheap for a VPN that assigns you one IP per tab. Most VPNs give you at most about 5 connections for $5 to $20 per month.
Then maybe they need to shut down the project and start a new one. Debian will probably still support 32-bit in 20 years.
I think the idea was good, and having Mozilla do it as open source was also good. They didn't finish in time, though, and now we have many open alternatives available. The AI space evolves fine without Mozilla, so why should they waste their resources on it? If they want speech integration in Firefox, they can just pick any open-weights model and use it.
Why the split and why these numbers? I especially don't get why individuals should get 100 years.
The point is that there is no "store into the model" step.
Training takes an input and an expected output, computes how large the error between the expected and the actual output is, and then updates the weights using the derivatives of that error, not the input or expected-output texts.
The only way to "store" something in a model is overfitting, and the pigeonhole principle tells you that you can only overfit on a small amount of the training material.
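To make that concrete, here is a minimal sketch (my own toy example, not code from any actual LLM): one gradient-descent step on a tiny linear model with squared error. The only thing that persists after each step is the adjusted weights; the input/expected-output pair is discarded.

```python
# Toy illustration (my own sketch, not real LLM code): a single
# gradient-descent step on a tiny linear model with squared error.
def training_step(weights, inputs, expected, lr=0.01):
    # forward pass: the model's actual output
    actual = sum(w * x for w, x in zip(weights, inputs))
    # error between actual and expected output
    error = actual - expected
    # derivative of the squared error w.r.t. each weight is error * input;
    # the weights get nudged, the (inputs, expected) pair is thrown away
    return [w - lr * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0, 0.0]  # the model is nothing but these numbers
for _ in range(100):
    weights = training_step(weights, [1.0, 2.0, 3.0], expected=1.0)
print(weights)  # nudged numbers, not a stored copy of the training text
```

Only by overfitting on the same pair again and again does it become recoverable from the weights, which is exactly where the pigeonhole limit above kicks in.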
The failure of the litigation was expected.
I wonder if there are any lawyers who bother to learn about the technology first, because when you read the lawsuits, it often seems as if you wouldn't need a lawyer at all. You could refute the claims simply by presenting the neural network's diagram, which shows the technical impossibility of the assertions. Of course, you'd still need a lawyer to translate the technical details into language a judge can comprehend.
Have you read the court documents from the Andersen case? Many of the claims against the systems are technically infeasible. On the other hand, they overlooked some obvious claims that could have been easily confirmed. Did they not ask any experts before filing suit?
The question is whether there is even a useful comparison. Even if you want to use an LLM as an encyclopedia (though they are not well suited for that purpose), you would have to compare it to another encyclopedia, like Wikipedia, not to a web search plus every website Google can find.
Not only multiple explanations, but also multiple levels of detail. "So I think I've got the basics of search algorithms, but can you explain why we need a priority queue again?"
There are no dumb questions, and LLMs are more patient than any teacher could ever be.
The more you ask the LLM to explain, the better you'll understand the topic and the better you'll detect when the LLM goes wrong. People are just starting to learn how to use it, but the point of LLMs isn't to produce a finished article; it's to be interactive. Don't just have it summarize your documents; ask questions about them. Ask the LLM to give you an exercise on the topic you just learned. Ask if your solution looks good, or ask for a hint about what you might have done wrong.
I would have to see such a system. Even busybox contains a minimal vi. I bet there are more *nix systems that have vi than *nix systems that have nano.
Yes, and I would be surprised if they really did it that way. They must know that there are a lot of people who will grasp at any straw to argue their scraping is illegal, so I would have thought they would limit themselves internally to the minimum needed, to avoid making it too easy to sue them.
On the other hand, it looks like the judge managed to separate the issues. I wonder if the remaining proceedings will end up with "We made a list of what we think employees read for entertainment and will now pay $10,000, as nobody can have read terabytes of books in the time since we downloaded them."
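For scale, a rough back-of-envelope calculation (all numbers here are my own assumptions, not figures from the case):

```python
# Back-of-envelope only; every number below is an assumption of mine.
terabytes = 1                 # amount of downloaded text, in TB
bytes_per_book = 1_000_000    # ~1 MB of plain text per novel (rough figure)
books = terabytes * 1e12 / bytes_per_book
books_per_day = 1             # a very fast reader
years = books / books_per_day / 365
print(f"{books:,.0f} books ~= {years:,.0f} years of reading")
# 1,000,000 books ~= 2,740 years, so "they read it all" is not plausible
```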
Reading privacy policies is often helpful. They may use terms you're glad to see, like GDPR. If not, they frequently give clues about how they intend to use your data, letting you judge whether they are merely protecting themselves from potential lawsuits or planning to sell your data.
"There is such a fine line between genius and stupidity." - David St. Hubbins, "Spinal Tap"