I have several engineering degrees and have retired after a 35-year career.
If you take 100 readings with a device that has a +- 5 percent accuracy spec and average them together, you get an average value that is still only good to +- 5 percent. Easy as that. Averaging repeated readings of the same quantity smooths out the random scatter from one reading to the next, but it cannot remove the instrument's systematic error, which is what that accuracy spec describes. Reading the same value as fast as you can and averaging does not buy you accuracy beyond the device's spec.
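To make that concrete, here is a quick back-of-the-envelope simulation (Python, with made-up numbers: a true value of 100 and a fixed +4 percent bias, both purely for illustration). The random noise averages away; the bias does not.

import random

# A "sensor" with a fixed +4% bias (systematic error, inside a +-5% spec)
# plus a little random noise on every individual reading.
random.seed(0)
true_value = 100.0
bias = 0.04 * true_value      # same offset on every reading
noise_sd = 0.5                # random scatter, different on every reading

def read_sensor():
    return true_value + bias + random.gauss(0, noise_sd)

readings = [read_sensor() for _ in range(100)]
average = sum(readings) / len(readings)

print("true value:", true_value)
print("average of 100 readings: %.2f" % average)
# Prints something near 104, not 100: averaging shrank the random scatter,
# but the 4% bias is still there in full.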
What you are talking about is more like estimating an error rate. If you test 6 percent of the light bulbs coming off an assembly line, you can estimate the failure rate to some degree of precision. Test 20 percent and you get a noticeably tighter estimate, because the uncertainty shrinks as the sample grows. But with only 6 percent tested, you cannot honestly state the failure rate to 5 significant digits.
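Same idea as a rough sketch (Python again, with invented numbers: a run of 10,000 bulbs and a true failure rate of 2 percent, just for illustration). The margin of error narrows as the tested fraction grows, but even at 20 percent it is nowhere near 5 significant digits.

import math
import random

random.seed(1)
run_size = 10_000
true_failure_rate = 0.02
# True/False for each bulb: did it fail?
bulbs = [random.random() < true_failure_rate for _ in range(run_size)]

for fraction in (0.06, 0.20):
    n = int(run_size * fraction)
    tested = random.sample(bulbs, n)
    p_hat = sum(tested) / n
    # Rough 95% margin of error for an estimated proportion
    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    print("test %.0f%% of the run (n=%d): failure rate %.3f +- %.3f"
          % (fraction * 100, n, p_hat, margin))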
I remember taking physics in high school. We started off with slide rules (yes, that long ago). It was nice because you could see where the calculation was getting blurry: "Well, somewhere between 4.5 and 4.6..." Then calculators came in and people started writing down answers to 8 digits as if every one of them meant something. That would get you zero credit. You had to know how many digits were actually significant.
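The same lesson in a few lines (Python; the 4.5 and 2.3 are just stand-in measurements known to two significant digits):

length = 4.5   # measured, good to two significant digits
width = 2.3    # same
area = length * width
print(area)           # 10.349999999999998 - a pile of meaningless digits
print("%.2g" % area)  # 10 - two significant digits, all the data supports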
So when I see maps showing the temperature where I live to better than a tenth of a degree - in 1860 - I react with skepticism. That kind of precision requires a very large amount of very accurate data. Oh wait, that was two tree rings from 40 miles away... and some layers of mud in a pond 100 miles away.