Comment Re:What they don't mention... (Score 1) 80

I wonder if that is what they are alluding to when they say "People with less than a college education are creating a lot of value — and sometimes more value than people with a college education — using our product."

Depending on how finished and drop-in 'our product' is, versus how much fiddly integration and customization it needs at the customer site, the open roles implied by that line could range from "you could be an analyst monkey; maybe even analyst monkey II if you seem like a bright sort!" to more intricate, visible, customer-facing integration work (which is where some outfits probably would have a cultural preference for candidates with prestige credentials). But when the C-suite says that you can create value as a user, that normally isn't to be read as a statement that there are openings at their level; and if they wanted to be either more vague or differently specific, they could have been.

Comment Re:Time to close the doors? (Score 1) 70

In many cases, the reason you can't do that is the requirements of the seminal study in the first place.

Think of lifetime cohort studies, for instance (where are you going to get another 5000 people to track for a lifetime study of a once-in-a-lifetime event? A time machine?), or studies where very specialized equipment that costs a small fortune to produce (like the stuff at CERN) is at play.

Think about what you are actually saying, then think more critically about the replication crisis, and then think about the current state of academia as an experiment that is not performing according to expectations. (Specifically, the expectation is that impact factor and impact scoring are sufficient controls to keep fraudulent papers from proliferating and poisoning the credibility of the entire endeavor.)

Current processes are explicitly geared to maximize new work, even though the quality of that work cannot be verified and is increasingly suffering. (With perverse incentives on the rise to do the opposite: actively degrade quality. See, for instance, the hack-job work done by private interests to undermine "undesired" findings, such as those about our climate and human impact thereon.)

Again, this is because of a fundamental failure to appreciate the value of boring replication work, which is exactly what I suggested.

Boring replication work combats both kinds of problem, but we do not give it the valuation it deserves.

The reason current policies are geared to maximize new work is the scarcity of resources with which to do meaningful work in the first place (it's very hard to get the funding to follow 5000 people for 50 years to see how the removal of tetraethyl lead from fuel has changed human behavior, for instance). That is another way of saying there just isn't enough funding to study the things that need to be studied, let alone to verify the findings of the things we can fund.

The people holding the purse strings are still politicians, since they set the size of the award pool to start with.

So far, your arguments have been "Refusal to see the forest, for the trees" and "Insisting nothing is wrong, even with alarming evidence to the contrary in your face."

Am I saying that your course of action is incorrect, given your position? No. You are and have been doing what is necessary in the face of resource scarcity, to get as much science done as possible with the best quality you can manage with those resources.

But does it create the replication crisis? Yes. Yes, it does.

Scientists are humans, and humans are prone to certain modes of mental derailment. There is a very strong bias toward believing the current system is functioning well, even when many outstanding measures indicate it is not. (This study from the summary, and numerous others, for example.)

Why is that, I wonder?

Why do you insist that nothing is wrong, or that dedicated replication teams are so unglamorous as to be worthless to academia-- or, in your words, "the things you give undergrads"? (As if it is work "beneath actual scientists" rather than a valuable and indispensable tool in that process.)

More pointedly, you assert that things are fine as they are, since "We still catch fraud"-- even though the data suggests that fraud is INCREASING, and catching it is falling behind, which would indicate a failure in methodology...

In fact, recent studies have indicated that it's becoming so common that it has become an actual industry, increasing at a rate that very clearly indicates it is NOT being adequately controlled:

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.science.org%2Fconten...

Yet you insist that the methodology is fine-- Why is that?

Again, I would conjecture, it is because there is a startling degree of disdain for "mere replication of findings," combined with an awards system that actively prices that work out of the process, with no system in place that *ADEQUATELY* polices the problem. ("Adequately," because this rate of error is increasing at a very alarming pace.) This is abundantly clear from widespread findings in the academic field, like the study I just posted a story about-- it's just one of many.

Impact scoring (including impact factor) is very clearly not a sufficient control for this process. If it were, this result would not be appearing.

The scientific process would suggest that this is an observation, and that the next step is formulation of a hypothesis for testing.

I have provided one for you, and it can be tested. Why has this kind of thing not been proposed and examined with the appropriate process?

I can appreciate that there are precious few resources to allocate, but this kind of thing can be tested in small scales for performance quality measures.

It's what's called for by the scientific process, so why has academia resisted it so much?

Or does academia think its own policies are somehow above the very process it uses to wrest truth from bias? (Again, scientists *ARE* humans, and humans *DO HAVE BIASES.* Things like the sunk-cost fallacy and pals spring instantly to mind, given the battle to attain tenure and recognition in a field. "Appeal to authority" also comes to mind, given the rhetoric about impact factor and reputation scoring, in clear contravention of very observable trends.)

Try to be more objective about the degree and severity of this problem, and the outstanding need your vocation has to maintain its rigor and value to mankind.

Especially in the face of a very well funded, concerted effort to undermine that work.

Comment Re:Time to close the doors? (Score 1) 70

First, impact factor (not impact score) is used to compare journals, not individuals. Second, it is the average citations of every paper in a journal, so one paper that gets a high citation rate through fraud will have minimal effect on the impact factor... and if you have so many fraudulent papers that it does, then you are going to have a widespread reputation in the field as the journal of science fiction, at which point it will not matter what your impact factor is; you'll be judged on that reputation.
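
To put rough numbers on that averaging (a back-of-envelope Python sketch; the citation counts are invented for illustration, not real journal data):

    # Impact factor is essentially an average: citations to a journal's
    # recent papers divided by the number of those papers.
    # All numbers below are made up for illustration.
    citations = [12, 8, 15, 9, 11] * 40          # 200 ordinary papers
    honest_if = sum(citations) / len(citations)  # 11.0

    with_fraud = citations + [300]               # one fraudulent paper with outsized citations
    fraud_if = sum(with_fraud) / len(with_fraud) # ~12.4

    print(f"without fraud: {honest_if:.2f}, with one fraudulent outlier: {fraud_if:.2f}")

Even an absurdly over-cited fake paper only nudges the average, which is the point: it takes wholesale fraud to move the impact factor.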

The politicians I mention provide an insufficient financial resource to provide for the degree of replication needed; replication scientists don't get near the impact scores of seminal paper authors

Which is exactly how it should be, because if all you have done is exactly reproduce someone else's work, nobody has learnt anything new, so the impact is close to zero. This is the sort of project you give to an undergrad, not something you would want to publish. Simply reproducing a previous work is a wasted opportunity to improve on it. Instead of aiming for replication, aim for improvement: increase the precision of the measurement or use an entirely different method that tests different assumptions. Replication usually happens naturally when people start to build on your result and improve it. This is why, despite there being no funding specifically for replication (as you point out), we still catch scientific fraud.

Comment Re:Just like drugs (Score 1) 50

that the one person who has trained an AI in this discussion... is the person who you think doesn't know AI.

You are NOT the one person who has trained AI in this discussion; I have been using it for the last 20+ years. I'm not afraid of AI at all, but, unlike you, I am very much aware of its limitations because, also I suspect unlike you, I have actually trained and used AI systems. However, given that you failed to read that in my post, I suspect you may be the one person in the discussion who can't read and understand English, which is another reason to doubt your claims.

Comment Re: 100 KW nuclear ? (Score 1) 153

Not really needed. In the near- to medium-term future, human missions to the moon will take place during the lunar day. The only significant use of power at night will be heaters to keep sensitive equipment from freezing. The amount of energy that needs to be stored will be minor.
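
For a rough sense of scale (every figure here is my own assumption, not a mission specification): a lunar night lasts about 14 Earth days, so even a continuously running survival heater needs only a modest battery by terrestrial standards.

    # Back-of-envelope: energy stored to run survival heaters through one lunar night.
    # Assumed figures for illustration only.
    heater_power_w = 200          # assumed continuous draw to keep equipment above freezing
    lunar_night_hours = 14 * 24   # a lunar night is roughly 14 Earth days

    energy_kwh = heater_power_w * lunar_night_hours / 1000
    print(f"stored energy needed: {energy_kwh:.0f} kWh")  # ~67 kWh, a few EV packs' worth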

Eventually we'll want humans to be able to stay through the night, but that's a long way off. There's a lot of work to do first.

Comment Re:Time to close the doors? (Score 1) 70

Impact score is literally the number of times a paper is cited by other papers.

Instead of pretending it's magic, realize what happens when studies are *not* replicated.

A single study is conducted, and because it is the seminal paper, it gets lots of citations in related works.

Assuming an academic forger is smart and does not make outlandish claims that break ancillary studies, they can go undetected for decades.

Like the work behind the amyloid hypothesis.

The methodology currently employed grants awards to very skilled fraudsters, in increasing quantity and severity, as suggested by this study, and supported by the observable lack of replication being done.

The politicians I mention provide an insufficient financial resource to provide for the degree of replication needed; replication scientists don't get near the impact scores of seminal paper authors, and consequently, through the process you laid out, don't get funding approved, leading to them getting even less funding, because you can't realistically do science on a $0 budget to get the impact scores you need to be awarded that funding. You've created a singularity.

To have competitive impact score ratios, there would need to be dedicated 'refutation firms' that predatorially kill published findings and get citations for doing so. Those firms would need premises and equipment on par with the vanguard, and in many disciplines that's an equally costly outlay that may require a federal budgetary line item.

We don't have those, and we don't have them for reasons related to the insufficiency of impact score as a proxy for merit, combined with generally insufficient funding overall.

Comment Peer Review (Score 2) 70

That's called peer review, and Mark Smyth passed all peer review. Yet he still pumped out fake papers; hundreds of them, polluting scientific knowledge with fake data.

Peer review is not the same as a fraud investigation. When we review papers, we start from the assumption that the data in the paper was collected "honestly", i.e. that the researcher accurately reported, to the best of their ability, what they did and the data they collected. We then look at that data to ensure it looks consistent with what they did and that the method did not contain anything that might produce misleading data. Then we check that the conclusions in the paper are consistent with the data and analysis.

Peer review is there to prevent a researcher from fooling themselves (and others) by making claims that are not warranted by the data, or missing some subtle effect that could explain what is going on without the need for new science, etc. It cannot check that the data are real, although sometimes it can catch fraudulent data if that data are not consistent with well-known and established science. However, fakers are usually smart enough to make the data look consistent, so it can be very hard to spot that in peer review. Ultimately it will get spotted as others try to reproduce results, fail, and then start to look in much, much more detail at previously claimed results, but peer review can't spot things at that level of detail.

Comment Not Needed: Good Journals Known (Score 2) 70

Journals are already "ranked" according to their "impact factor", which is a number calculated based on how often their articles are cited by other articles; it would make sense to also calculate a "credibility factor"

Impact factor generally is a credibility factor, or at least I do not know of any journal in my field where a low-credibility journal has a high impact factor, although there are some specialist journals - e.g. instrumentation - which are highly credible but have a low impact factor. Generally speaking, though, anyone in the field worth their salt will know which the good journals are, and where a paper is published does have a large impact on how we regard its quality.

I do not see a good way for a "credibility factor" to be calculated in an objective manner that would not have significant negative repercussions. E.g. counting the number of retractions would be bad, since it would encourage journals never to retract papers. Similarly, even the best institutes can hire rogue researchers - or, more commonly, have bad grad students or postdocs - and encouraging journals to accept anything from any researcher at a "respected" institute to boost their credibility would be bad too. Also, papers in many fields cannot and do not have a single "primary" author.
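
As a toy illustration of the retraction-counting problem (the formula and numbers are entirely hypothetical):

    # Hypothetical "credibility factor" that penalizes retractions.
    # The perverse incentive: the score is maximized by never retracting anything.
    def credibility(papers, retractions):
        return 1 - retractions / papers

    diligent = credibility(papers=1000, retractions=20)  # actively hunts down and retracts fraud
    negligent = credibility(papers=1000, retractions=1)  # quietly leaves bad papers in place

    print(f"diligent journal:  {diligent:.3f}")   # 0.980 - scored worse
    print(f"negligent journal: {negligent:.3f}")  # 0.999 - scored better

Whatever objective formula you pick, it ends up rewarding the wrong behavior somewhere.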

Comment Re:Time to close the doors? (Score 2) 70

Currently, the paradigm is 'publish or perish', because science funding is only handed out to 'rockstars' by politicians

That is utterly wrong. As a scientist who has sat on several grant review boards, I can tell you there are no politicians involved at all in deciding who gets funding. The politicians set the size of the pot we have to give out, but grant applications undergo rigorous, multi-stage peer evaluation. Even in the US, where a single expert program officer has a lot of control over a grant program (or at least used to), peer evaluation was still critical to the process. The only exception to this is "mega-projects", where the cost is so significant that it merits a line item in the national budget; then yes, politicians obviously have to be involved, but this is not where the vast majority of research funding comes from, and at that point they are listening to the views of multiple experts and weighing the national and political interests, not counting papers.

When grants are peer reviewed, nobody just looks at the number of papers of the applicants and goes "oh wow, that guy published X papers, let's give him everything he asked for!". Instead, we look at the quality and impact of that work as well as what they are actually proposing. Different people weight these things differently - I tend to weigh the proposal more, others weigh past publication record higher, and both are very valid. However, in evaluating publications we use things like venue of publication (how many are in top journals for the field?) and citations (h-index) - although even then you have to be careful, since that depends a lot on the field. Rate is a consideration, but large numbers of papers in dodgy journals will count for nothing; indeed, they would be detrimental, since those reviewing would be asking what the person is up to and how they can not know that the journals they are publishing in are trash.
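
For anyone unfamiliar with the h-index mentioned above: it is the largest h such that the author has h papers with at least h citations each. A minimal sketch, with invented citation counts:

    # h-index: the largest h such that h of the author's papers
    # each have at least h citations.
    def h_index(citations):
        h = 0
        for rank, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([25, 8, 5, 4, 3, 1]))  # 4: four papers with >= 4 citations each

Note how a pile of never-cited papers contributes nothing, which is part of why padding a CV with dodgy-journal publications does not move this number.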

Comment What they don't mention... (Score 4, Insightful) 80

Designed to sound more dramatic than it may actually be.

It seems worth mentioning that they are specifically saying that, among the people they hire, they don't treat prestigious degrees differently and sometimes get better results from people without them. They don't actually say anything about whether they ignore degrees in hiring, or whether they find a correlation between degrees and hireability.

The statement is certainly constructed to sound more dramatic than that, and depending on their hiring practices it may actually be; but "if we think you are good enough to hire, we don't continue to uphold a caste system based on where you did undergrad" is not a terribly radical position to take. Not one that everyone actually does take, but not terribly uncommon.

Comment Input Bandwidth (Score 4, Insightful) 104

It will not be the chatter that kills this but the input bandwidth. Even if you assume it would allow you to set up some "verbal macros" that execute when a single word is spoken, I can still click mouse buttons faster than I can speak words. The same goes for output bandwidth, but even more so - it is much, much faster to see diagrams and buttons and to read text than it is to listen to the computer speak information.
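
The back-of-envelope numbers behind that claim (all rates are rough assumptions, just to make the comparison concrete):

    # Rough input/output bandwidth comparison; every rate here is an assumption.
    speaking_wpm = 150   # typical conversational speech rate
    reading_wpm = 250    # typical silent reading rate
    clicks_per_sec = 5   # easily sustainable mouse-click rate

    print(f"voice input: {speaking_wpm / 60:.1f} words/sec vs mouse: {clicks_per_sec} actions/sec")
    print(f"reading is ~{reading_wpm / speaking_wpm:.1f}x faster than listening")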

I can see this being useful in limited applications - such as in-car systems, where a verbal interface and low bandwidth would be a huge benefit. However, I cannot see it replacing a regular desktop/laptop OS.

Comment Just like drugs (Score 4, Insightful) 50

I've built one from scratch. ...Telling me I don't know AI is well... funny.

I've trained many machine learning models as well, from BDTs through to GNNs, but never an LLM - although arguably not entirely from scratch (except for an early BDT), since we used existing libraries to implement a lot of ML functionality, and once set up we just provided the training data. If you really have trained an LLM "from scratch" as you claim, then surely you must be aware of how inaccurate they can be? I mean, even the "professional grade" ones like Gemini and ChatGPT get things wrong, omit details, and make utterly illogical inferences from time to time.
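
For concreteness, here is roughly what "training a BDT with an existing library" looks like - a minimal scikit-learn sketch on toy data, not anyone's actual analysis:

    # Minimal boosted-decision-tree training sketch using an existing library.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3)
    bdt.fit(X_train, y_train)  # once set up, you "just provide the training data"
    print(f"test accuracy: {bdt.score(X_test, y_test):.2f}")

The library does the heavy lifting; the judgment calls are in the features, the validation, and knowing when the output can't be trusted.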

I'd agree with the OP that you do not know AI - even if you are capable of building an ML model from scratch (I presume using a suitable toolkit, so not really from scratch), you clearly do not understand the reliability of its output, or are incapable of seeing how that might be a serious problem when advising someone with mental health issues, which raises questions about exactly how much you understand of what you might be doing.

The new law seems to be well written. All it does is ensure that a medical professional has approved the use of the system. It's the same type of protection we have for drugs: we do not just let companies release any drug they like; it has to undergo testing, and a panel of medical experts has to agree that it is both safe and effective - and even then they do not always get it right! How is it stupid to have similar protections for computer software used to treat mental health problems? It does not prevent you from using software in this way; all it requires is that an expert has said that it is safe and effective.
