>Unless you have at least a masters in the related subject, ideally a PhD
I disagree. You can spend a few years studying the nomenclature, norms, methods and underlying science and be entirely capable of reading and understanding a paper in context for a field that is not the one you started in.
Sometimes it's what you bring to the party that helps. I bring some knowledge of statistical inference and experimental methods, which comes from my day job. My interest was understanding my own health. It took about a decade of reading papers and textbooks to get up to speed. It has freed me from having to take health advice in the media on trust, with no way to tell whether it's sound. I can go to the sources and see them in context.
If you want a difficult statistical environment, try education; that was my wife's PhD topic. My domain has no shortage of data. I can make all the data I need from silicon. The difficulty is in understanding it and knowing what to do about it.
You could read my most recent paper ( https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fdl.acm.org%2Fdoi%2F10.1007... ) and understand it with only a solid grasp of school-level algebra and a spot of probability. I wouldn't expect anyone not involved in my field to care one bit about this algorithm, but I like it. It's neat.
Her comments on the nature of the threat from Russia and China are well put and stand up to analysis. That she was saying these things in public suggests she wants the politicians to stop dithering, and she is correct about that.
Her comments on tech seem naive. The tech world won't take her seriously, and with good reason.
Parkinson's and Parkinsonism have a lot of causes. If a person is exposed to any chemical with defatting or nerve-harming properties, like TCE or various insecticides, they are at risk.
The way to avoid, or at least mitigate, this is to limit exposure. A co-worker ended up with Parkinsonism because he used a lot of hexane, in the contact cement he used for mounting photos, without ventilation. Avoiding all exposure is probably impossible.
Yes. There is clearly more than one "cause", but the proximate cause is chemicals tricking the immune system into attacking specific cells.
What the MAHA people don't do is read papers. I do.
E.g., here's a train of thought:
1) A long time ago, in the UK, a study showed a significant correlation between Parkinson's and exposure to insecticides.
2) More recently, a study showed lectins from wheat forming a ring around the vagus nerve and traveling up it to the Parkinson's site in the brain, where biosimilarity between the lectins and brain tissue sets up the autoimmune reaction that is part of Parkinson's. This wasn't a dodgy correlation study; they took photographs.
3) 99+% of the insecticides modern humans encounter are the "natural" insecticides in plants.
Conclusion: I'm suspicious of wheat, and not for the usual reasons.
In the case of R, it's because it's the only place where high-quality, research-level statistical algorithms are built en masse (ML libraries in Python are not a substitute; they tend to be built by people who aren't subject-matter experts and don't even know the corner cases exist).
A long time ago I helped my wife write R scripts for her PhD. Why? MANOVA. Your average stats library will do ANOVA (Analysis of Variance) but not MANOVA (Multivariate Analysis of Variance). R did the MANOVA. It could also read in the CSV and had plenty of distribution models to apply.
Of course, I read up and wrote my own MANOVA implementation in Python so I don't have to touch that horrible R language ever again. There's a little bit of R in my RNG book, but that's because it's in a section reviewing the distribution support in various languages, and R is there because that's what it does well.
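Roughly, a one-way MANOVA boils down to comparing within-group scatter with between-group scatter. Here's a minimal sketch in Python (Wilks' lambda with Bartlett's chi-square approximation), assuming numpy and scipy are available; it's illustrative only, not the implementation I actually use:

    import numpy as np
    from scipy.stats import chi2

    def manova_wilks(groups):
        # groups: list of (n_i x p) observation matrices, one per group,
        # rows = subjects, columns = dependent variables.
        X = np.vstack(groups)
        n, p = X.shape
        g = len(groups)
        grand_mean = X.mean(axis=0)

        # Within-group (W) and between-group (B) scatter matrices.
        W = np.zeros((p, p))
        B = np.zeros((p, p))
        for G in groups:
            m = G.mean(axis=0)
            C = G - m
            W += C.T @ C
            d = (m - grand_mean).reshape(-1, 1)
            B += len(G) * (d @ d.T)

        # Wilks' lambda: small values mean the group mean vectors differ.
        wilks = np.linalg.det(W) / np.linalg.det(W + B)

        # Bartlett's chi-square approximation for a p-value.
        stat = -(n - 1 - (p + g) / 2) * np.log(wilks)
        df = p * (g - 1)
        return wilks, stat, chi2.sf(stat, df)

    # e.g. three groups, 20 subjects each, 3 dependent variables
    rng = np.random.default_rng(0)
    groups = [rng.normal(loc=i, size=(20, 3)) for i in range(3)]
    print(manova_wilks(groups))

A real library would also report Pillai's trace and the other test statistics and handle the degenerate cases, which is exactly the sort of coverage that makes R hard to replace.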
When I'm in charge, that's the font they'll be required to use.
There's an interesting new idea where you can get some journals to pre-approve publication of your study by first submitting your plan: you outline exactly how you are going to perform the experiment and analyze it, the journal pre-approves it, you perform the experiment/study, and they guarantee to publish your results (if you follow your plan) no matter the outcome. The idea is to fix the problem where journals only want to publish surprising results because they're more exciting; the trouble is that surprising results are also more likely to be wrong, and more likely to get cited.
The scientific community generally knows they have a serious problem, and they want to fix it, but in my opinion they're moving pretty slowly. I don't know if they understand how much trust they're losing every time a story like this comes out. Ultimately it's good that these studies are being retracted, but the slow and painful way it's happening is crushing trust in science as an institution. I'd like to see the scientific community take a stronger and faster approach to solving these problems.
Do you guys know what you're doing, or are you just hacking?