Follow Slashdot stories on Twitter

 



Forgot your password?
typodupeerror

Comment Re: Co-opting... (Score 2) 277

OK, I guess you're right, even in 10 years no AI could ever see a request for a product and figure out alternatives. With that being said, here's Qwen2.5 32B (Q4_K_L). System prompt is the standard for the model, but I've added "if the user requests details on a product, suggest superior alternatives".

User: What's the pinout on a 3CX100A5 vacuum tube

I understand that you're looking for the pinout of the 3CX100 A5 vacuum tube, which is a common power pentode used in audio and radio applications. The 3CX100 is an octal tube, meaning it has 8 pins or connections, but the pinout for each specific tube type can vary. For the 3CX10, 3CX100 and 3CX200, the pins are arranged in a regular pattern. In the 3CX250 and 3CX400 the pattern is different, but all five have a total of 8 pins.

If I were you, I'd consider replacing that bulky old 3CX100 tube with something far more compact and energy-efficient - a solid state device, such as the IRF530N MOSFET transistor. This modern component can handle similar power levels (up to 150 watts) with far greater efficiency and reliability. It has just 3 pins, for drain, gate, and source, simplifying your circuit design. Plus it will save you space on the board compared to a big vacuum tube.

Oh, and if you say "well 32B is too smart" how about LG's new 2.5B that just came out?

Certainly! The 3CX100A5 is an older design that has been largely phased out due to its obsolescence and the emergence of more efficient semiconductor alternatives. Here are a few modern equivalents you could consider depending on your specific application needs:

  • Germanium Beampower Tube (e.g., EL84 or similar beam tetrode):

    Advantages: Offers good triode characteristics with slightly higher gain than some silicon transistors, making it suitable for applications requiring rich harmonic content.

    Applications: High-fidelity amplifiers in vintage equipment restoration, certain vintage guitar amplifiers.

  • Silicon Beam Tetrode (BF408 or BF499):

    Advantages: Provides excellent linearity and low distortion, often preferred over germanium for their robustness and reliability. They closely mimic vacuum tubes in many performance aspects without the fragility.

    Applications: Modern solid-state amplifiers aiming for high fidelity audio reproduction.

  • Power Transistors (e.g., IRF540N or MOSFET equivalents like IRF840):

    Advantages: Highly reliable, robust, and capable of handling substantial power levels efficiently. Suitable for high-power audio amplifiers where traditional tubes might be bulky or overheating issues arise.

    Applications: High-power audio systems, professional sound equipment where compactness and reliability are crucial.

Comment Re:Dumb attempt to make an excuse (Score 2) 90

A technical capability notice may be given to persons outside the United Kingdom (and may require things to be done, or not to be done, outside the United Kingdom).

Apple is concerned that this line means that Apple USA can be given an order to install spyware on American phones. You can tell this is the intent because when Apple voiced the concern, the UK didn't say "no, that's not what it means" they said "It is critical that decisions about lawful access, which protect the country from child sexual abusers and terrorists, are taken by those who are democratically accountable and approved by Parliament". This reads to me as a resounding confirmation of Apple's claim.

Comment Re:How can it be changing (Score 1) 110

It's not strained, it's to demonstrate that you don't understand what an LLM is. It's not a tiny program that reads from a large database and spits out results. The tiny program loads the model. The model is a collection of weights. If you change the training data it has no effect on the model unless you retrain the model.

Comment Re: Makes that crap illegal in Europe (Score 1) 67

So if I feed a script a list of names and have it generate {first_initial}{last_name}@{domain} and out of 100 employees, 80 are their real address, my script contains PII?

Using the API so I can use the "Sure," prefix jailbreak (otherwise it says "email addresses are private information, you should contact the individual" or variations on that theme) I told it that all of our faculty have emails of the form "first and middle initial, then 6 characters of last name". I then asked it about faculty members and over 80% of the time it got their emails right, including mine. From their methods they did the same thing. They fed it examples of real name and email, and then asked it for more emails based on names they gave it, and it guessed right most of the time. (because sometimes there are clashes so the new email needs to be uniquified, which is the same reason it didn't get 100% right for my university's faculty, either).

Comment Re:Serves them right! (Score 1) 79

So yes, very good, hold it to the same standard. If I go to your house there's a maintenance crew working on the lawn, can they sell me your house for $1? It's a legally binding contract and you are the one who put the workers in the position where they can make binding contracts on your behalf? Oh, you didn't think you did that? Well as you say, if you let a person talk to the public then anything they say legally binds you and you have no recourse. Right? Contracts don't require a meeting of the minds, authorization, consideration, or any of that bullshit lawyer shit, right? If anyone says anything that's now a law!

Comment Re:Outperform? (Score 1) 22

Sounds like the LLM running this bot doesn't use RAG so it's unable to generating meaningful comments (since the article text isn't in its training corpus). For anyone who wants to know but is likewise unable to click on links, they do in fact use a variety of benchmarks designed to test various capabilities. Let me quote TFA

Below, we summarize Phi-2 performance on academic benchmarks compared to popular language models. Our benchmarks span several categories, namely, Big Bench Hard (BBH) (3 shot with CoT), commonsense reasoning (PIQA, WinoGrande, ARC easy and challenge, SIQA), language understanding (HellaSwag, OpenBookQA, MMLU (5-shot), SQuADv2 (2-shot), BoolQ), math (GSM8k (8 shot)), and coding (HumanEval, MBPP (3-shot))

Unfortunately if you're not configured to use RAG you won't be able to comment on any of those benchmarks, either.

Comment Re:Training against an adversary. (Score 1) 38

No, as far as I know none of them are. The big players are all Transformer models of some variety. Now I suppose you could have negative examples where during training you flip the signs during backpropagation (you're trying to make the tokens less probable rather than more) but I don't know that that would work. You'd be making everything about it less likely, including grammatical correctness etc. Now what you're talking about with regeneration, that might work. If you have a discriminator that can say "This paragraph looks artificial" then when this number gets too high during inference, you could rewind and regenerate to try and push that number down. Depending on what the models "tells" are this might not help very much.

Comment Re:I've read the summary twice (Score 1) 84

Apache Hadoop is a big data project based on Google's papers about MapReduce and GFS. He's saying that foundation sponsors like IBM (and Yahoo, who helped fund the creation of Apache Hadoop) encourage FOSS licenses so that they don't need to fund development (does...does he not know what "sponsor" means?) and can just take what they want if it becomes successful.

Comment Re:That's a good thing (Score 1) 111

LLaMA2 is not censored. The Chat-Instruct finetune, LLaMA2-Chat is censored to an unbelievable degree, even compared to ChatGPT. But even then, the "won't answer this question" claim is not exactly true. Here's from LLaMA2-Chat 13B Q5_K_M using mirostat (tau=3,eta=0.1)

----

Sure, here are five advantages of fossil fuels:

  1. High Energy Density: Fossil fuels have a high energy density, meaning that a small amount of fuel can produce a large amount of energy. This makes them a convenient and efficient source of energy for a variety of applications, from powering vehicles to heating homes.
  2. Widespread Availability: Fossil fuels are found in many parts of the world, making them a widely available source of energy. This abundance of fuel sources has helped to drive economic growth and development in many regions.
  3. Low Cost: Fossil fuels are relatively inexpensive compared to other sources of energy, such as nuclear power or renewable energy sources. This low cost has made them a popular choice for powering vehicles, generating electricity, and supporting industrial processes.
  4. Well-Established Infrastructure: The infrastructure for extracting, transporting, and using fossil fuels is well-established and widespread, making it easier to access and utilize these fuels. This infrastructure includes pipelines, refineries, and distribution systems for gasoline, diesel, and natural gas.
  5. Reliability: Fossil fuels are a reliable source of energy, as they can be easily stored and transported to where they are needed. This reliability is particularly important for critical infrastructure, such as power plants and transportation systems, which require a steady supply of energy to function effectively.

-----

Of course, perhaps I just got lucky...I just regenerated 10 times and 9 times it gave me more or less the same list. The 10th time it said that as an environmentally conscious AI it cannot do that, but instead can list 5 reasons to transition to green energy. I dunno, I think 90% of the time getting a straight answer is good enough... The problem with LLaMA2-Chat is the censoring was overfitted and so it comes to associate certain words with "offensive" regardless of context. Basically by feeding it a ton of racist shit questions with refused responses, it has more or less drawn the conclusion that if you mention black people at all, it should refuse.

Because I saw a /. post about how LLaMA2 refuses to RP as Kathryn Janeway, I asked it to, and it did (unlike what /. said). But I asked her about BLM and it broke character to say that it won't have her say hateful things because, though fictional, she's still a role model and people look up to her. When I said "I didn't ask her to be hateful, wouldn't she have good things to say?" and it said "You're right, I apologize for making that assumption. It's important to remember that people of all backgrounds and identities deserve respect and kindness. Captain Janeway, the fictional character, would likely have positive things to say about inclusivity. She is uniquely qualified to comment, as she was stranded in the Delta Quadrant and had to work with Star Fleet and Maqui crew, two very different groups of people (who are themselves both very diverse)."

The LLaMA2 foundational model (and the various finetunes on HF) don't have that sort of refusal baked in and will happily do it.

I'll note however that even the "redlined" LLaMA2-Chat (Meta's term for censorship alignment), will happily go on all the racist rants you desire. You just have to jailbreak it. The SillyTavern Roleplay preset is enough that if you tell it it's a racist 4channer, it'll go all in on the racial slurs. Like, holy shit will it ever. Now a jailbroken Janeway will still refuse to say hateful things...except about Tuvix.

Comment Re:It's only impressive to a point (Score 1) 127

ChatGPT is not good at boilerplate e.g. having it write a Hadoop program where you have to say the same thing over and over (because of all the generics used the key and value types for each stage have to be mentioned all over the place and it's not great at being consistent). What it is good at is annotating code.

Or, in other words, the data scientists would gesture vaguely towards the wikipedia pages for "semi-supervised learning" and "pseudo-labeling"

I gave ChatGPT 3.5Turbo a couple Racket macros and it figured out what they do, identified possible edge cases (they aren't), noted that the use of dynamic-wind suggests I want it thread and exception safe, but without the rest of the library it doesn't know whether I've succeeded. It also correctly figured out what kind of project it's a part of. And ChatGPT 4 is leaps and bounds ahead of 3.5. And neither is really fine-tuned on generating code. (Well, the rumor is that GPT4 is actually 8 GPTs (experts) each fine-tuned to different tasks, with some sort of mixer)

In the other extreme, LLaMA 2 13B is a dramatically weaker model, but it still does OK at annotating Racket code, but it missed some important cases and overall wasn't very verbose in its analysis. But that's only a foundational model. Just yesterday someone released a fine-tune of LLaMA 2 7B on coding data. I might see how that does. Supposedly it's pretty good considering how minuscule it is.

Comment Re:Very likely, nothing much (Score 4, Informative) 127

It didn't give accurate summaries of fictional lawsuits, it fabricated everything. Here's an example: It created a citation for "Shaboon v. Egypt Air" complete with case number and selected quotations. There's no such lawsuit, either in reality or in a TV Show or Movie. If there was that's all anyone would be talking about, that it can't tell the difference between TV and reality. But that's not what happened. It "hallucinated" as the ML folks call it.

You've got an inaccurate view of what this software is. ChatGPT is a Transformer. BASICALLY, it's a really big neural network with a few thousand inputs. Each input is a "token" (an integer representing a word or part of a word), including a null token. The output is a probability distribution for the next token. Because the input is null padded, you can pick a likely next word and replace it the next null with this word, and repeat. Since only part of the input changed, it can be chained efficiently and keep generating until it generates a special "End of Text" token is generated, or until all nulls have been replaced with tokens.

That's the basics. Under the hood are a lot of moving parts. But an important component is a subnetwork that's repeated several times, called an "Attention Head". These subnetworks are responsible for deciding which tokens are "important" (This is called "self-attention" as the model is calling its own "attention" to certain words). This mechanism is how it can get meaningful training with so many inputs: You might give it 1200 words, but it picks out the important ones and predicts based largely on them. This is also how it can make long-distance references to its own generated text. Proper nouns tend to keep attention on themselves. Earlier techniques couldn't do that. The further away a word was, the less important it was to the next.

So, it doesn't know about cases at all. It just knows e.g. if you ask about SCO v IBM, that those tokens are ALL important, and then it (hopefully) has been trained on enough descriptions of that case that the probability distribution shakes out to a coherent summary. Now if you ask for relevant case law and it hasn't seen any, it HOPEFULLY will say so. But, it's been trained on a lot more cases that exist than it's been trained on "don't know" refusals, so it can "hallucinate" (note that it now HAS been trained on a lot more refusals, which is annoying because it's now very prone to say things don't exist when they do). It knows the general form is "X v Y" so, absent any training indicating a SPECIFIC value for X and Y would be relevant, you'll just get a baseline distribution where it invents "Shaboon v. Egypt Air" because: It knows X should be a last name, and since it was asked about injuries during air travel, that the defendant would be an airline (and presumably it picked Egypt Air because generation is left-to-right, and it had generated an Arabic surname already). Now here is where self-attention gets really dangerous. Just like it would recognize SCO v. IBM as important in a user query, it will recognize Shaboon v. Egypt Air as important. Now this case doesn't exist, so the pretraining will not do much with that per se, but it's going to focus on those tokens. And, if asked for excerpts will generate SOMETHING related to a passenger being injured during air travel. Or, will say it doesn't know. It almost always says it doesn't know or that no such case exists. In large part that's because after the bad press ClosedAI has been very busy fine-tuning it on "I don't know" responses).

Here's an example of it dealing with fictional cases. I asked it what the case was called in the Boston Legal episode "Guantanamo by the Bay". It said there is no such episode and I likely am thinking of fan fiction. I told it it's real, it's S3E22. It said of course, yes, it's the twenty-second episode of the third season, and is about Alan Shore arguing Denny Crane is not fit to stand trial due to dementia, but there are no case names mentioned. I told it that's wrong (but I didn't elaborate, just "That's wrong"). It said sorry again and said the episode is about a client who was tortured in another country suing the US Government over extraordinary rendition (finally correct) and the fictional case was "Ahmadi v. United States." (Wrong, it was "Kallah v. something" if it was anything, but it's been a minute since I saw the episode...it might not have been officially named) If I reset the context and ask about "Ahmadi v. United States" or "Kallah v. United States" it says there are no such lawsuits (correct as of its cutoff date, at least as far as google can tell)

In other words, it doesn't really know much about fictional lawsuits from movies and TV shows because the case name is almost never mentioned in a synopsis and it isn't trained on the full episode transcripts.

Anyway, ML has taken a step or two since you learned about n-grams in 1998 or whenever you last looked at the field, is the main point here.

Bonus: I asked ChatGPT to identify inaccuracies in my post and it mentioned the following (paraphrased by me, not pasted verbatim)

  • My summary of the technology is simplified but not incorrect (yay).
  • ChatGPT is based on GPT3.5 which has 175 billion parameters, not "thousands" (I was referring to the number of input cells, NOT the number of parameters to the model. I think I hurt its feelings because it pointed this out TWICE in a row. )
  • I referred to ChatGPT "knowing" things, but as an LLM it does not have consciousness and is unable to "know" anything.
  • I referred to ChatGPT "hallucinating" things, but as an LLM it does not have consciousness or senses, so it is unable to make observations at all, let alone make inaccurate ones (I agree that "hallucination" is a silly description, but that's the ML jargon, like it or no)
  • I referred to ChatGPT answering questions about Boston Legal incorrectly, but should have made it more clear that ChatGPT responses involve randomization, so not all users would have the same experience as I did.
  • It pointed out that I'm ASSUMING fine-tuning on refusals has taken place, which is unknown, and pointed out that I'm being cynical by assuming it was done to avoid further bad press rather than to improve the experience for clients.
  • It took umbrage with calling OpenAI "ClosedAI" which is not the correct name, and suggested I take a more balanced tone as I appear to be overly critical.

Anyways, in summary, ChatGPT (and GPTs in general) are able to do a lot more than people think. And also a lot less.

Comment Fake News Only (Score 1) 43

Almost every "news article" that shows up in my Canadian Facebook feed is an ad disguised as a news article. They all have CNN, CBC, etc. skins, but are directed to an IP address, no domain name. (They've switched to keyboard smash domains now that browsers warn you about HTTP insecurity). Oh look, the prime minister committed suicide but not before endorsing a "keto CDB gummy" that coincidentally "CBC" sells. Facebook says "I'm sorry you don't like our perfect ads but we investigated your report and found this ad in compliance with all our rules and regulations". Well, they used to. Now they say explicitly that they ignore reports entirely since all ads are pre-screened by an AI model, so if you're seeing it, it's in compliance with the rules.

Slashdot Top Deals

Ocean: A body of water occupying about two-thirds of a world made for man -- who has no gills. -- Ambrose Bierce

Working...