It is very important for accuracy. Otherwise there wouldn't be a consistent bias between the two. Uniques vs. page views doesn't account for the large differences. As you said, independent random sampling would certainly result in no significant difference, but neither Net Applications' nor Stat Counter's samples are random. Also, their samples are mostly consistent. When there are vast differences between countries, site topics, and markets, sample size is very important.
To your other point, election polling is much more delicate than you make it out to be. Polls are also not truly random samples, but the samples differ from poll to poll, which helps. Pollsters also weight the results by location and expected turnout, which is a whole different set of assumptions. If surveying the first 1000 people to answer their phone and consent, then using the raw numbers, were enough, there wouldn't be any significant differences between polls.
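To illustrate the weighting point, here is a minimal sketch of post-stratification: raw respondent counts are reweighted so each region's influence matches its known population share rather than its share of the sample. All numbers (regions, counts, population shares) are made up for the example.

```python
# Hypothetical poll: respondents by (region, candidate).
raw_votes = {
    ("urban", "A"): 420, ("urban", "B"): 180,
    ("rural", "A"): 120, ("rural", "B"): 280,
}
# Assumed census figures: each region is actually half the electorate,
# even though urban respondents dominate the sample.
population_share = {"urban": 0.5, "rural": 0.5}

# Unweighted estimate: raw fraction of all respondents backing A.
total = sum(raw_votes.values())
raw_a = sum(v for (r, c), v in raw_votes.items() if c == "A") / total

# Weighted estimate: A's share within each region, combined using the
# known population shares instead of the sample sizes.
weighted_a = 0.0
for region, share in population_share.items():
    region_total = sum(v for (r, c), v in raw_votes.items() if r == region)
    weighted_a += share * raw_votes[(region, "A")] / region_total

print(raw_a)       # skewed toward the over-sampled urban respondents
print(weighted_a)  # corrected for the known region shares
```

With these made-up numbers, the raw estimate (0.54) and the weighted one (0.50) differ noticeably, which is exactly why "use the raw numbers" is not enough.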
A population is a population, and a 1% sample has much less meaning than a 99% sample, whether or not it is random. A larger sample always has more statistical relevance/confidence.
GeneralMayhem pointed out that this is flat out wrong. Here's an example to demonstrate it:
Consider a population where 99% of users use IE, and 1% of users use Chrome.
From that population you can draw a biased sample consisting of the 99% IE users and conclude that there are no Chrome users at all.
Or you could draw a random sample of 1% of the population; assuming that 1% adds up to enough people, the chance of drawing a sample that did not include a reasonable number of Chrome users would be extremely low.
So depending on methods used, even a 99% sample may be entirely meaningless when compared to a 1% sample.
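The example above is easy to check by simulation. This sketch uses a hypothetical population of 100,000 users, 1% of them on Chrome, and compares the biased 99% "sample" (IE users only) with a uniform random 1% sample:

```python
import random

random.seed(0)
POP = 100_000
CHROME = POP // 100  # 1% Chrome, 99% IE
population = ["chrome"] * CHROME + ["ie"] * (POP - CHROME)

# Biased 99% sample: only IE users are observed, so Chrome's
# measured share is exactly zero despite the huge sample.
biased = [u for u in population if u == "ie"]
print(biased.count("chrome") / len(biased))  # 0.0

# Random 1% sample: ~1,000 users drawn uniformly; Chrome's
# measured share lands close to the true 1%.
sample = random.sample(population, POP // 100)
print(sample.count("chrome") / len(sample))  # close to 0.01
```

The probability that a uniform 1,000-user sample contains zero Chrome users is roughly e^(-10), so the small random sample is far more informative than the enormous biased one.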
If you extrapolate the raw numbers, then yes, you can conclude there are no Chrome users at all. But no one would accept this as fact knowing the sampling methodology.
You can, however, still build confidence intervals from biased samples. In your example, with 100% certainty, between 0% and 1% of the population uses Chrome. Of course, any narrower interval, even a 99.999% CI computed as if the sample were random, will be wrong due to the severe bias. Now, if in your example only a 1% biased sample were available, the 100% CI would be 0-99%: much less information. Note that you may also still see trends in biased samples, as long as the bias stays consistent.
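The worst-case bound arithmetic behind those intervals is simple: if a sample covers a fraction of the population and shows some observed share, the unsampled remainder could be anywhere from 0% to 100% Chrome. This is a sketch of that calculation; the function name is mine, not from any library:

```python
def worst_case_bounds(observed_share: float, coverage: float):
    """Hard bounds on the true population share, given a sample covering
    `coverage` of the population in which `observed_share` use Chrome.
    The unsampled remainder may contain no Chrome users (lower bound)
    or only Chrome users (upper bound)."""
    low = observed_share * coverage
    high = observed_share * coverage + (1 - coverage)
    return low, high

# 99% biased sample with zero Chrome users: true share is 0-1%.
print(worst_case_bounds(0.0, 0.99))

# 1% biased sample with zero Chrome users: true share is 0-99%.
print(worst_case_bounds(0.0, 0.01))
```

The first interval is narrow enough to be useful; the second carries almost no information, which is the asymmetry the comment describes.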
If biased samples were meaningless, then how are Stat Counter's or Net Applications' results valuable at all, given that they are not random samples?
That's true, but if the size of your sample is 99% of the population, that sample is always going to be close to random. For all practical purposes it's not actually a sample any longer.
> It is very important for accuracy. Otherwise there wouldn't be a consistent bias between the two.
As mentioned above, they measure different things - page views vs. unique users. That is, usage vs users. There is no reason to expect them to be identical.