Let them make what they can of the data. If they can't come up with anything substantial, then it validates the data and your theory. If they come up with something you can't counter, then they've found a serious problem with your theory or your methodology which needs to be addressed.
Definitely, let them keep at it. But it's already been a few decades and very little evidence seems to be falling on their side. We can't and shouldn't wait until every single one of them is convinced. 97% of people actively researching in this area are convinced humans are causing global warming*. How certain do we have to be before we start to act?
* From wikipedia:
A 2010 paper in the Proceedings of the National Academy of Sciences of the United States (PNAS) (http://en.wikipedia.org/wiki/Proceedings_of_the_National_Academy_of_Sciences) reviewed publication and citation data for 1,372 climate researchers and drew the following two conclusions:
(i) 97–98% of the climate researchers most actively publishing in the field support the tenets of ACC (Anthropogenic Climate Change) outlined by the Intergovernmental Panel on Climate Change, and (ii) the relative climate expertise and scientific prominence of the researchers unconvinced of ACC are substantially below that of the convinced researchers.
There's a big difference between the two: a PNG or WebM library may contain exploitable bugs, but they are difficult to exploit because these formats are fundamentally data. GLSL is not; it is executable code which not only has to be run, it has to be run as fast as possible. This means that it's compiled to native code (if you're on an open source OS and not using blob drivers, odds are that it's compiled using code that I worked on). It takes very little in terms of bugs for this to be exploitable, and that's not helped by the fact that the target - the GPU - is typically a horrible design from a security standpoint. This is why 3D was one of the last things for VMs to support, and why it's still recommended that you don't enable it if you care about security.
News flash for you -- modern javascript engines also go to great pains to make javascript code run fast. Including things like compiling it down to native code. I could see exploiting bugs to crash people's systems, but beyond that I don't see how javascript code issuing WebGL commands is going to be able to do much.
Just as with any native code (like a DirectX game, for instance) there is no way to ensure "safety"...although I'd think almost any other attack vector would be easier than WebGL.
I do wonder. Of course it would mean targeting specific GPU vendors, and perhaps specific driver versions as well. But imagine what you could do if you were able to play with DMA... bye bye to any OS security.
This is NOT native code we're talking about here! This is a javascript API that lets you send shader programs written in a high level language to the GPU! Both the javascript code and the shaders are JIT compiled (in a modern browser) before being run. The javascript WebGL API has no way for you to get anywhere near a DMA handle. The GPU may use DMA under the hood, but big whoop; GPU accelerated 2D canvases like IE9 has now do the same thing. You can't get any closer to getting your hands on a DMA handle with WebGL than you can with the 2d canvas context API.
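To make that concrete, here's a minimal sketch of what the WebGL shader path actually looks like from the page's side: the shader is just a string of GLSL source handed to `gl.shaderSource()` / `gl.compileShader()`, the driver does the native compilation, and the page only ever holds opaque handles. (The setup code and the trivial shader are illustrative; the `typeof document` guard just lets the snippet load outside a browser.)

```javascript
// GLSL fragment shader source: this string is the only thing the page
// controls. The driver's compiler is what turns it into native GPU code.
var fragmentShaderSource =
    "precision mediump float;\n" +
    "void main() {\n" +
    "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n" +
    "}\n";

// Compile a shader via the standard WebGL API. Everything the page
// touches is an opaque WebGLShader handle; no pointers, buffers, or
// DMA descriptors are ever visible to script.
function compileShader(gl, type, source) {
    var shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
        throw new Error(gl.getShaderInfoLog(shader));
    }
    return shader;
}

// Illustrative browser-side usage, guarded so the snippet is inert
// outside a browser environment.
if (typeof document !== "undefined") {
    var canvas = document.createElement("canvas");
    var gl = canvas.getContext("webgl");
    if (gl) {
        compileShader(gl, gl.FRAGMENT_SHADER, fragmentShaderSource);
    }
}
```

Which is the whole point of the argument above: the attack surface is the driver's GLSL compiler consuming that source string, not any raw hardware access handed to the page.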
Prediction is very difficult, especially of the future. - Niels Bohr