Comment Re:Also, why can't ChatGPT control a robot? (Score 1) 120

Agree. LLMs are designed to approximate and hallucination is a feature -- not a bug.

Princeton hosted a lecture last week, which I watched, on the mathematical reasons LLMs hallucinate. Thought it was insightful (as an outsider to the field).

The link to the lecture: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

Comment Re:Also, why can't ChatGPT control a robot? (Score 1) 120

The link wasn't to a prototype by OpenAI -- or anything related to OpenAI.

Language generation in a large language model works by finding mathematical relationships between concepts, and this is done with linear algebra on vectors. Robots work by translating physical space into a coordinate space of vectors. In other words, both large language models and robots compute on vectors -- usually in a Euclidean space. There should be a direct correspondence between the two, but there doesn't seem to be.
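A minimal sketch of the common ground I'm pointing at, with made-up toy embeddings and a toy 2-D robot pose (none of this comes from the linked story): an LLM's token embeddings and a robot's state are both just vectors you can compare and transform with the same linear-algebra operations.

import numpy as np

# Toy word embeddings (hypothetical 3-D vectors, not from any real model).
embeddings = {
    "move":    np.array([0.9, 0.1, 0.0]),
    "forward": np.array([0.8, 0.2, 0.1]),
    "stop":    np.array([-0.7, 0.3, 0.2]),
}

def cosine(a, b):
    """Cosine similarity -- the standard way to compare embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["move"], embeddings["forward"]))  # high: related concepts
print(cosine(embeddings["move"], embeddings["stop"]))     # low: opposed concepts

# A robot pose is also just a vector; a rotation is a linear map applied to it.
pose = np.array([1.0, 0.0])                # x, y position
theta = np.pi / 2
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
print(rotation @ pose)                     # pose after a 90-degree turn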

Comment Re:Also, why can't ChatGPT control a robot? (Score 1) 120

The "encoded data" is encoded using lexical analysis (see link). The purpose of this is to convert characters into numerical representations in a vector form. That is, it's to find distance relationships between sets of characters. In other words, lexical analysis is the equivalent of vectors.

All physical data can be transformed into vector form. This is, for instance, the basis of classical mechanics (e.g., vector calculus). So lexical analysis actually adds an extra step to get to the vector representation. If anything, using your reasoning, it should be easier to apply the mathematical machinery of large language models to robots, not harder.

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FLexical_analysis
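To make that step concrete, here's a toy sketch under simple assumptions (a hand-made four-word vocabulary and a random embedding table, not any real tokenizer or model): tokenization maps character sequences to integer IDs, and an embedding table then maps those IDs to vectors.

import numpy as np

# Hypothetical toy tokenizer: maps words to integer IDs (toy lexical analysis).
vocab = {"the": 0, "robot": 1, "moves": 2, "forward": 3}

def tokenize(text):
    """Split text and map each word to its token ID."""
    return [vocab[word] for word in text.lower().split()]

# An embedding table turns each token ID into a vector
# (random here; learned in a real model).
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), 4))   # 4-dimensional toy embeddings

token_ids = tokenize("the robot moves forward")
vectors = embedding_table[token_ids]                 # shape: (4 tokens, 4 dims)
print(token_ids)        # [0, 1, 2, 3]
print(vectors.shape)    # (4, 4)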

Comment Also, why can't ChatGPT control a robot? (Score 1) 120

Two years ago, on Slashdot, there was a post about how Microsoft was trying to get ChatGPT to control a robot:

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fhardware.slashdot.org%2Fstory%2F23%2F02%2F25%2F2330204%2Fmicrosoft-tests-chatgpts-ability-to-control-robots

As far as I can tell, there's been little or no progress on this. That is, there is no (public) prototype of either a robot built on the same principles as ChatGPT or one that ChatGPT can control. Why not? Why doesn't the mathematical foundation of large language models translate to the physical world?

Comment This is Cisco's fault (Score 5, Informative) 69

Here are details of the hack:

The hackers used an exploit that has been known to Cisco for years. It's tracked as CVE-2018-0171 and affects Cisco IOS and IOS XE software. Specifically, it's a bounds-checking error that can be attacked with crafted packets to one specific service port. What the hackers did was execute a buffer overflow against the HTTP authentication handling: by overflowing the buffer, they overwrote the memory that enforces the authentication check and gained root access.

Cisco has acknowledged that this is a potentially dangerous exploit but said that it won't issue a patch:

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.bleepingcomputer.com%2Fnews%2Fsecurity%2Fcisco-warns-of-auth-bypass-bug-with-public-exploit-in-eol-routers%2F

Here's a quote from the linked article: Despite rating it as a critical severity bug and saying that its Product Security Incident Response Team (PSIRT) team is aware of proof-of-concept exploit code available in the wild, Cisco noted that it "has not and will not release software updates that address this vulnerability."

Instead, Cisco sent out warnings about which ports should be blocked from sending and receiving UDP packets.

Comment Two other points: (Score 1) 50

I thought this was a statistically sound blog post, but there are two points worth considering that it doesn't mention:

1) The RT score is an average of averages. That is, each review is reduced to a binary variable, positive or negative, and these are then averaged across all reviewers. It's worth considering whether Fandango changed either or both of these measurements. In some cases, for instance, it's not clear whether a review is positive or negative. It could have been that, when an analysis found an equal amount of positive and negative sentiment, the review defaulted to negative, and Fandango changed that default to positive. Also, "average" is not a single fixed measure: it could be that, prior to 2016, the averages weren't weighted and now they are. The point: small changes in the measurement metrics could be what caused the more positive scores -- not actual manipulation of the reviews or the reviewers themselves. (A small numerical sketch of this follows the two points below.)

2) It's worth looking at the ratio of positive to negative reviews for each reviewer, before and after the acquisition. If the reviewers selected after 2016, for instance, like significantly more movies than the reviewers used before 2016, then that selection could also explain the change, rather than the acquisition itself.
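Here's the sketch for point 1, with made-up numbers (nothing here comes from the blog): the same set of reviews produces a noticeably different score depending on how mixed-sentiment reviews are broken and whether the average is weighted.

# Made-up review sentiment scores in [-1, 1]; 0.0 means mixed sentiment.
reviews = [0.8, 0.3, 0.0, -0.4, 0.0, 0.6, -0.2, 0.0]

def tomato_score(reviews, ties_positive=False):
    """Fraction of reviews counted as positive, after binarizing each one."""
    positive = [r > 0 or (r == 0 and ties_positive) for r in reviews]
    return sum(positive) / len(positive)

print(tomato_score(reviews, ties_positive=False))  # 0.375: mixed reviews counted negative
print(tomato_score(reviews, ties_positive=True))   # 0.75:  mixed reviews counted positive

# Weighting also matters: give "top critic" reviews (hypothetical weights) more influence.
weights = [3, 1, 1, 1, 1, 3, 1, 1]
binarized = [1 if r > 0 else 0 for r in reviews]
unweighted = sum(binarized) / len(binarized)
weighted = sum(w * b for w, b in zip(weights, binarized)) / sum(weights)
print(unweighted, weighted)                        # 0.375 vs roughly 0.58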

Comment Why wasn't a backdoor installed? (Score 1) 47

If you assume these AI servers contain integrated Wi-Fi chips, then the question is why a backdoor wasn't installed.

This could likely be only a few lines of code. Also, the transmitted signal doesn't necessarily have to connect to the Internet. It could either send an encrypted packet identifying the machine, or the message could simply state which ports are open and what the sysadmin credentials are. That could then be intercepted with a packet sniffer. In any case, there are many other ways a backdoor could be installed and made very difficult to detect.

The question is why this wasn't done, and I suspect the answer is either 1) the trackers were placed without the consent of the manufacturers, or 2) the people who put the trackers on the shipments don't have the technical knowledge to do it.

Comment Re:Throw It on the Pile (Score 1) 186

(Haven't read the article.) I'm going to assume what you've described is true. If so, then you've confused the qualitative with the quantitative. The increase in the risk of disease is an average, measured across many types of people. What you're describing, though, can be measured quantitatively: all that's necessary is the right medical tests to determine how, say, your insulin level changes the more hot dogs you eat.

So, the likely possibility is that eating, say, one hot dog a week does little harm, but eating 100 Slim Jims a week could be deadly. This would be clearly evident in a quantitative test but smoothed out in an average.
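A minimal sketch of the "smoothed out in an average" point, with a made-up dose-response curve (no real epidemiology here): a risk that is nearly flat at low intake and explodes at high intake still produces a modest-looking average increase across a mixed population.

import numpy as np

def risk(dose):
    """Hypothetical, strongly nonlinear weekly-intake -> disease-risk curve."""
    return 0.01 + 0.00005 * dose ** 3   # almost flat at 1/week, large at 20/week

rng = np.random.default_rng(0)
doses = rng.choice([0, 1, 2, 20], size=10_000, p=[0.4, 0.4, 0.15, 0.05])

print(f"risk at 1/week:  {risk(1):.4f}")           # ~0.01: barely above baseline
print(f"risk at 20/week: {risk(20):.4f}")          # ~0.41: dramatic
print(f"population mean: {risk(doses).mean():.4f}")  # ~0.03: the 'average' a study reports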

Comment Is this Swarm Intelligence? (Score 0) 38

This is really strange. This prediction seems to violate the two principles (that I'm aware of) of swarm intelligence (SI): 1) optimizing an objective function, and 2) learning the best strategy from past decisions. I'll briefly describe both. (Btw, I'm not an expert; this is what I remember from papers I read years ago.)

1) The purpose of SI is to optimize a function. This can be a loss function or, in this case, the objective behind a prediction algorithm. So if the SI failed to predict the winner(s), that is independent of whether it optimized the prediction algorithm's loss function. In other words, the predictions may have been the best possible given that loss function; there may have been no way for the AI to make a better prediction.

2) This can probably best be described using a physical metaphor, rather than the concept of Pareto optimality (which I haven't used in years). SI is based on the idea that, say, a colony of ants can first find a food source and then optimize the path to it using only information collected by the ants themselves. This is done with an optimization method that reduces the search space to a few variables, which can then be searched and optimized. (A toy sketch of this is below.)

So, the problem with this one Kentucky Derby prediction is that the SI algorithm simply hasn't been given the parameters it needs to learn. Maybe, using historical data and this initial wrong guess, it could greatly improve on its initial prediction.
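To make point 2 concrete, here's a toy ant-colony sketch under simple assumptions (two fixed paths, a single evaporation rate); it's my own illustration, not the system in the article. Pheromone reinforcement plus evaporation is the whole "learn from past decisions" loop: shorter trips deposit pheromone faster, so the colony converges on the short path using only locally collected information.

import random

PATHS = {"short": 1.0, "long": 3.0}          # path lengths to the food source
pheromone = {"short": 1.0, "long": 1.0}      # start with no preference
EVAPORATION = 0.1

random.seed(0)
for step in range(200):
    total = sum(pheromone.values())
    # Each ant picks a path in proportion to its pheromone level.
    choice = random.choices(list(PATHS), weights=[pheromone[p] / total for p in PATHS])[0]
    # Evaporate a little everywhere, then deposit inversely to path length.
    for p in pheromone:
        pheromone[p] *= (1 - EVAPORATION)
    pheromone[choice] += 1.0 / PATHS[choice]

print(pheromone)  # pheromone on "short" ends up far higher than on "long"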

Comment The Long Tail Problem (Score 1) 172

One problem, I speculate, involves cases with long tails: probability distributions in which data is sparse, which makes prediction or classification accuracy low or highly variable. It can also happen, with long tails, that the aggregate probability of a classification is high while the individual probability is low. This is a problem large retailers have: a potential buyer fits the probabilistic category of needing or wanting product x but, individually, such people never buy. Two common long-tailed distributions are the power law and the Cauchy. In the case of the power law, the mean and variance only exist for certain values of the exponent. In the case of the Cauchy, the mean and variance don't exist at all, which makes prediction based on averages essentially impossible.
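A minimal sketch of why heavy tails break the usual statistics, using numpy's standard distributions (nothing here is tied to the article's system): the Normal sample mean settles down as n grows, the Cauchy sample mean keeps jumping around no matter how large n gets, and a power law with a small exponent throws off occasional enormous values.

import numpy as np

rng = np.random.default_rng(0)

for n in [100, 10_000, 1_000_000]:
    normal_mean = rng.normal(size=n).mean()
    cauchy_mean = rng.standard_cauchy(size=n).mean()
    print(f"n={n:>9}: normal mean {normal_mean:+.3f}   cauchy 'mean' {cauchy_mean:+.3f}")

# numpy's pareto draws a Lomax (Pareto II) power law: the theoretical mean is
# finite only when the shape a > 1 (variance only when a > 2), so samples with
# small exponents occasionally contain astronomically large draws.
print(rng.pareto(a=0.8, size=1_000_000).max())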

One example -- in case this was too abstract -- consider that you're a person whom the UK has classified as having a high likelihood of committing murder. Individually, however, your likelihood would actually be low. What could account for this difference? Say n variables are used to make the prediction, and this person matches most of them closely, with low variance; but only one of the n variables is actually predictive of murder, and it may not apply to this person at all. Or, because the data is so sparse, there simply isn't enough of it to correctly categorize this person. Either way, the UK government could greatly damage this person's life with a misclassification; not because of any ill intention (as is mentioned in the comment) or bureaucratic error, but because the classification system is inherently flawed, and those who use it aren't aware that it is. (A small simulation of this is sketched below.)
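The simulation, with made-up features and risk numbers (nothing here reflects any real system): a rule that flags people for matching most of the indicators looks predictive in aggregate, yet it also flags an individual whose true risk is low because they miss the one feature that actually matters.

import numpy as np

# Hypothetical setup: 10 binary "risk indicator" features, but only feature 0
# actually drives the outcome.
rng = np.random.default_rng(0)
n_people, n_features = 100_000, 10
X = rng.integers(0, 2, size=(n_people, n_features))
true_risk = np.where(X[:, 0] == 1, 0.20, 0.01)       # only feature 0 matters
outcome = rng.random(n_people) < true_risk

flagged = X.sum(axis=1) >= 8                          # naive "matches most indicators" rule
print("outcome rate among flagged:    ", outcome[flagged].mean())   # looks elevated
print("outcome rate among not flagged:", outcome[~flagged].mean())

# An individual who matches 9 of 10 indicators but NOT the one that matters:
person = np.array([0] + [1] * 9)
print("flagged?", person.sum() >= 8, "- but their true risk is only 0.01")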
