Comment Is this Swarm Intelligence? (Score 0) 38

This is really strange. This prediction seems to violate the two principles of swarm intelligence (SI) that I'm aware of: 1) optimizing an objective function, and 2) learning the best strategy from past decisions. I'll briefly describe both. (Btw, am not an expert. This is what I remember from papers I read years ago.)

1) The purpose of SI is to optimize a function. This can be a loss function or, in this case, a prediction algorithm. So, if SI failed to predict the winner(s), that failure is independent of whether the loss function was optimized. In other words, the predictions may have been the best predictions possible given the loss function; there may have been no way for the AI to make a better prediction.

2) This can probably be best described using a physical metaphor, rather than the concept of Pareto optimality (which I haven't used in years). SI is based on the idea that, say, a colony of ants can first find a food source and then converge on the best path to it using only information collected by the ants themselves. This is done with an optimization method that reduces the search space to a few variables and then optimizes over them.
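
To make the ant metaphor concrete, here's a toy sketch (my own, not from any paper) in the spirit of the classic "double bridge" experiment: two routes to food, each ant picks a route in proportion to its pheromone, shorter routes get reinforced faster because a round trip on them deposits more per unit length, and pheromone evaporates so old information fades. All the numbers are made up for illustration.

import random

route_len = {"short": 1.0, "long": 2.0}   # hypothetical path lengths
pheromone = {"short": 1.0, "long": 1.0}   # start with no preference
evaporation = 0.05

def choose_route():
    # Each ant picks a route with probability proportional to its pheromone.
    total = pheromone["short"] + pheromone["long"]
    return "short" if random.random() < pheromone["short"] / total else "long"

for ant in range(2000):
    r = choose_route()
    pheromone[r] += 1.0 / route_len[r]     # shorter route is reinforced more per trip
    for k in pheromone:
        pheromone[k] *= 1.0 - evaporation  # evaporation forgets stale information

total = pheromone["short"] + pheromone["long"]
print(f"P(short) ~ {pheromone['short'] / total:.2f}")  # drifts toward the short route

Note that the colony only ever uses information the ants themselves deposited; no single ant ever sees the whole map.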

So, the problem with this one Kentucky Derby prediction is that the SI algorithm simply hasn't been given enough to learn from. Maybe, using historical data and this initial wrong guess, it could greatly improve its next prediction.

Comment The Long Tail Problem (Score 1) 172

A problem, I speculate, is cases that involve long tails. That is, probability distributions in which data is sparse, which makes predictions or classifications unreliable or high-variance. Or, with long tails, it can also be true that the aggregate probability of a classification is high while the individual probability is low. This is a problem that large retailers have: a potential buyer fits the probabilistic category of needing or wanting product x but, individually, these people never buy. Two common long-tailed distributions are the power law and the Cauchy. In the case of the power law, the mean and variance exist only under certain conditions on the exponent. In the case of the Cauchy, the mean and variance don't exist at all, which makes prediction inherently impossible. (A quick numerical illustration is at the end of this comment.)

One example -- in case this was too abstract -- consider that you're a person whom the UK has classified as having a high likelihood of committing murder. Individually, however, your likelihood would actually be low. What could account for this difference? Say n variables are used to make the prediction, and on those n variables this person fits closely and with low variance, but only one of the n is actually predictive of murder. Or the data is so sparse that there simply isn't enough of it to correctly categorize this person. Either way, the UK government could greatly damage this person's life with a misclassification; not because of any ill intention (as is mentioned in the comments) or bureaucratic error, but because the classification system is inherently flawed, and those who use it aren't aware that it is.
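
Going back to the long-tail point above, here's a quick numerical illustration (mine, not tied to any real data) of why a Cauchy-like tail defeats prediction: the running sample mean of normal draws settles down, while the running mean of Cauchy draws keeps jumping no matter how much data you collect, because its theoretical mean doesn't exist.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
normal = rng.normal(size=n)
cauchy = rng.standard_cauchy(size=n)

for size in (100, 1_000, 10_000, 100_000):
    print(f"n={size:>7}  normal mean={normal[:size].mean():+.3f}  "
          f"cauchy mean={cauchy[:size].mean():+.3f}")
# The normal column settles near 0; the Cauchy column typically never settles.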

Comment A comparison of variances is needed (Score 1) 104

A few points:

* The (linked) article notes that the Bureau of Labor Statistics makes a distinction between "computer programmer" and "software developer". Why? It wasn't fully explained in the article. In any case, the former is declining in employment and the latter is increasing. What was also not addressed: is the correlation between the two scale-invariant? That is, is the decline in employment of programmers equivalent to the increase in developers?

* What was not mentioned in the article: a comparison of the variance of employment over periods of the same length across many decades. In other words, if you assume that the number employed as computer programmers follows the same distribution across generations, then you can compare variances for periods of the same length. Is a decline of 25% within what would be considered a typical range of variation for a period of that length?

Why is it important to consider the variance? Because that would be the first (important) step toward establishing that the decline in jobs has a causal relationship to the widespread use of AI. Alternatively, it could show that the relationship between the variables is heteroscedastic and, therefore, that variances from different time periods can't be compared for this data set.
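
Here's a rough sketch of the comparison I mean, on made-up numbers: treat year-over-year changes in programmer employment from two different decades as two samples and test whether their variances differ (Levene's test is less sensitive to non-normality than a plain F-test). The decade labels, means, and spreads below are purely hypothetical.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical year-over-year % changes in employment for two decades.
changes_1990s = rng.normal(loc=2.0, scale=3.0, size=10)
changes_2020s = rng.normal(loc=-3.0, scale=9.0, size=10)

stat, p = stats.levene(changes_1990s, changes_2020s)
print(f"Levene statistic={stat:.2f}, p-value={p:.3f}")
# A small p-value would suggest the recent decade's spread is atypical, i.e.
# the data are heteroscedastic and variances from different periods can't be
# compared directly.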

Comment The problem with the statistical methodology (Score 1) 172

Read the blog post. The author doesn't normalize for sample sizes. Specifically, the author makes no attempt to normalize for the different number of reviews of any given show or any given episode. If, say, one show has 10 reviews for each of its first six episodes and another has ten thousand on average, the author neither recognizes this difference nor changes his methodology to account for it.

The second problem with the methodology is that the reviewers of the first six episodes may be highly correlated with each other. That is, viewers of the first six episodes may be the most devoted watchers of new shows and the most likely to comment, so the reviews of the first six episodes may reflect a different, more critical viewpoint than reviews of other episodes. Or, the reviews of later episodes may simply be more representative of the mean reviewer.

The solution is to use a sampling method that assumes a mean and a variance estimated, via the central limit theorem, from all episodes. Then, using that sampling methodology, determine whether the characteristics of reviewers of the first six episodes differ from those of other reviewers. For instance, what's the likelihood that a reviewer of the first six episodes is 1) the first to comment on a new show, or 2) someone who only comments on the first few episodes? Then compare this likelihood to that of the mean reviewer.
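
Here's a sketch of the resampling check I mean, on simulated ratings (I don't have the blog author's data): compare the observed mean rating of first-six-episode reviews against the distribution of means from equally sized random samples drawn from all reviews. If the observed mean sits far out in that distribution, early reviewers are not typical reviewers.

import numpy as np

rng = np.random.default_rng(2)
all_reviews = rng.normal(loc=7.5, scale=1.5, size=5000)    # hypothetical ratings
early_reviews = rng.normal(loc=6.8, scale=1.5, size=200)   # first-six-episode ratings

observed = early_reviews.mean()
null_means = np.array([
    rng.choice(all_reviews, size=early_reviews.size, replace=False).mean()
    for _ in range(10_000)
])
p_value = np.mean(null_means <= observed)
print(f"early-review mean={observed:.2f}, p={p_value:.4f}")
# A tiny p-value says the early reviewers look systematically more critical
# than a random sample of reviewers of the same size would be.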

Comment The ban misses the point (Score 2) 99

If I remember correctly, signal jammers are illegal in the US. Anyway, I want to point out two things (which haven't been mentioned in the comments): 1) The key fob of a car can be read using software-defined radio. Here's one I found on Amazon that sells for $35:

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.amazon.com%2FRTL-SDR...

Then, using open-source software, such as GNU Radio, you can start reading car fob signals.
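
For what it's worth, here's roughly what the first step looks like in code, using the pyrtlsdr library (pip install pyrtlsdr) with one of those cheap RTL-SDR dongles. This only tunes to the 433.92 MHz ISM band (a common remote-control frequency) and reports the average power of the captured samples; actually decoding anything is a much bigger job, and GNU Radio is the usual tool for that. The frequency, sample rate, and gain below are just illustrative defaults.

import numpy as np
from rtlsdr import RtlSdr

sdr = RtlSdr()
sdr.sample_rate = 2.048e6      # samples per second
sdr.center_freq = 433.92e6     # ISM band used by many remotes and key fobs
sdr.gain = "auto"

samples = sdr.read_samples(256 * 1024)   # raw complex IQ samples
sdr.close()

power_db = 10 * np.log10(np.mean(np.abs(samples) ** 2))
print(f"captured {len(samples)} samples, average power {power_db:.1f} dBFS")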

The actual signal jammer can be made from a single Arduino. Here's a project I found on YouTube that jams wifi signals. I assume the frequency can be changed to block whatever frequency a specific car fob uses:

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DYP5xro4FRDo

My second point is that I don't think the ban is actually meant to ban the technology. As I've described, the same components and software can be put to many other uses besides reading car fob signals. The point of the ban is most likely that it makes it easier to arrest those suspected of stealing cars. That is, it may be difficult to prove who stole a car, but easier to prove that someone bought electronic equipment to steal one.

Finally, it's worth mentioning that auto manufacturers could easily upgrade key fobs to make them more secure. The reason they don't, I suspect, is that it's simply not worth the cost of either adopting new technology or upgrading the current one. So, keep in mind, any government could ask any auto manufacturer: is there a more secure method, and why aren't you using it?

Comment Re:Any real experts here? (Score 2) 182

Am an applied mathematician with a strong interest in the non-linear plasma dynamics needed to sustain a self-sustaining fusion reaction. To be clear, am not an expert. I just try to learn as much as I can on the subject.

So, first, one of the primary problems is that the physics of friction in a fusion reactor is poorly understood. An enormous amount of computational power goes into modeling the plasma friction coefficient. A good place to start, if you're interested in learning, is to understand why this is inherently non-linear and why the physics are poorly understood. I'd begin by googling "plasma friction coefficient".

Also, this is a lecture from about 8 years ago, but it's nonetheless relevant. It describes how magnetic confinement fusion (used in the reactor in France) works, and what exactly is and isn't understood about the physics of it. It's very mathematical. Nonetheless, because you asked for an expert opinion, I think this is a good secondary answer; that is, it covers things that would be difficult to describe without the math that's used to model plasma.

The link to the lecture: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

Comment Re:This is the proper use of AI (Score 1) 13

Robots can't pick strawberries, grapes, or apples as humans do. They also can't do the dishes, load a dishwasher, or clean a hotel bathroom as humans do. AI's success in language skills and in playing finite games with clear rules (chess, go, etc.) comes from reducing the computational complexity of the space of all moves to polynomial bounds, using methods such as feed-forward neural networks, Monte Carlo tree search, etc. There are no such methods, however, to improve the efficiency of, say, how a robot grips objects (the gripper problem), and there is no geometry to replace the sampling methods robots use to navigate three-dimensional space.

This is a huge difference between how humans work and think and how AI does. So, I don't mean to take away from the significant developments in what AI can do, but it's far from operating as efficiently and intelligently as a human.

Comment The Universal Approximation Theorem (Score 2) 13

Here's a link from Google that explains their "co-scientist": https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fresearch.google%2Fblog%2Fa...

Also, the universal approximation theorem says that, under certain conditions, a family of neural networks can approximate any continuous function on a compact subset of Euclidean space to arbitrary accuracy. This seems to be a strict constraint on the type of data that can be approximated by a family of neural networks. I don't work in the field. I've asked quite a few researchers whether the universal approximation theorem accurately predicts what can and can't be modeled, and so far no one has replied. So I thought I'd ask the question here. (A toy numerical illustration follows the link below.)

The universal approximation theorem: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
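
As a toy illustration of the setting the theorem describes (my own example, nothing to do with Google's system): approximate a continuous function (sin) on a compact interval with a single hidden layer of tanh units. For simplicity I use random hidden weights and fit only the output layer by least squares, which already gets a close fit; it's a sketch of the statement "one hidden layer suffices on a compact set", not the construction used in any proof.

import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)   # compact domain
y = np.sin(x).ravel()                                # continuous target

n_hidden = 200
W = rng.normal(scale=2.0, size=(1, n_hidden))        # random hidden weights
b = rng.uniform(-np.pi, np.pi, size=n_hidden)        # random hidden biases
H = np.tanh(x @ W + b)                               # hidden-layer activations

coef, *_ = np.linalg.lstsq(H, y, rcond=None)         # least-squares output weights
y_hat = H @ coef

print(f"max absolute error on [-pi, pi]: {np.max(np.abs(y - y_hat)):.4f}")
# Adding hidden units drives the error down further, which is the qualitative
# content of the universal approximation theorem.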

Comment A comparison to the electronic calculator (Score 1) 220

Software development is a painfully slow and laborious process, and this article misses the point: AI is helping to improve efficiency. A relevant metaphor may be when (electronic) calculators became commonly used. The result, most likely, was that students were no longer intimately familiar with the trig functions and the unit circle; that was replaced by plugging arguments into a calculator and getting a result. The calculation was mechanized. This, however, didn't replace the need to understand math at a higher level. You still needed to know the basics of trigonometry if you wanted, say, to start studying calculus or linear algebra.

Then the question "does the code work?" should be what determines whether to use AI or not. The goal should be to improve efficiency, and that most likely means the code that can be mechanized should be. A new way of developing code has started. This is likely going to change how one learns software engineering and how one uses it to build production-level systems. In other words, this may be the point at which a software engineer no longer needs to be able to build the entire code base themselves, without the assistance of AI, in order to be considered competent.
