Why askpolly uses margin of error, confidence interval and sample size
askpolly's key differentiator from traditional social listening is that Polly™ starts her studies on people, not keywords. All reports produced on the AI platform include three key statistics that allow users to have confidence in the public opinion research presented. Each of these statistics matters for a distinct reason when reporting a study's results:
Sample Size: This is the number of individuals included in the study. The larger the sample size, the smaller the impact of random variation on the results, and the more accurate those results become. In askpolly's case, the sample is not only large (national samples of over 700k individuals) but also balanced to census statistics, one of the key factors in Polly's success in producing public opinion studies.
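To make the sample-size effect concrete, here is a minimal sketch (not askpolly's internal code, and the sample sizes are illustrative) showing how the worst-case margin of error for a proportion shrinks as the sample grows:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case (p = 0.5) margin of error for a simple random
    sample of size n, at 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

small = margin_of_error(1_000)    # a typical phone poll
large = margin_of_error(700_000)  # a national-scale sample

print(f"n = 1,000:   +/- {small:.2%}")   # roughly +/- 3%
print(f"n = 700,000: +/- {large:.2%}")   # well under +/- 0.2%
```

A thousand-person poll already gets within about three points; pushing the sample into the hundreds of thousands drives random variation down to a fraction of a point.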
What this means for askpolly’s engagement
Samples that are representative of the population and statistically validated allow any insight garnered from a study to be extrapolated to the greater population, even those who aren’t active on social platforms.
“Engagement” on Polly’s reports is an extrapolation of the engagement on your question to the greater population, not the engagement Polly is seeing in the sample.
So, if Polly sees that 10% of her sample is engaging with your prompt, she assumes that 10% of the greater population is engaging with the question. This explains why the reported engagement will often be larger than the sample size.
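The extrapolation above is simple arithmetic. The figures below are hypothetical (the sample and population sizes are assumptions for illustration, not askpolly's actual numbers):

```python
sample_size = 700_000       # individuals in the study (assumed)
sample_engaged = 70_000     # 10% of the sample engaged with the prompt
population = 30_000_000     # e.g. a national adult population (assumed)

engagement_rate = sample_engaged / sample_size           # 0.10
projected_engagement = round(engagement_rate * population)

print(f"{engagement_rate:.0%} of the sample engaged")
print(f"Projected engagement: {projected_engagement:,}")
```

Here the projected engagement (3,000,000) is far larger than the sample itself, which is exactly why the engagement figure on a report can exceed the sample size.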
Margin of Error (MOE): This metric indicates how much a study's results may differ from the true population value. For example, a result of 45% with a 3% MOE means the true population value is likely between 42% and 48%. In askpolly's case, some of Polly's AI-produced conversation summaries may be off-topic, and the MOE gives users a sense of how often this is likely to happen.
Confidence Interval (CI): This is a measure of precision: the range within which the true population value is expected to fall, at a chosen confidence level. And with precision come trade-offs. Let's look at medical tests as an example:
A patient will be referred for another round of medical tests if there is even an off-chance that the first test shows an anomaly. This often creates anxiety for patients (and added cost to the system) when, the vast majority of the time, there is no danger present. But in medicine, we don't want to take chances.
In the business world, the higher the confidence level, the wider the interval and the more "noise" (i.e. non-relevant data) you will get. We want less data so that what remains is more likely to be relevant. While a 95% confidence level might seem like great marketing, because it sounds more accurate, keep in mind that accuracy is not the same thing as precision. We want to be accurate, not necessarily precise.
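The trade-off can be seen directly in the arithmetic. The sketch below (illustrative only, using a hypothetical 40% result from a 1,000-person sample and standard normal z-values) shows how raising the confidence level widens the interval around the same point estimate:

```python
import math

def interval(p, n, z):
    """Confidence interval for an observed proportion p in a
    sample of size n, using the normal approximation."""
    half = z * math.sqrt(p * (1 - p) / n)
    return (p - half, p + half)

p, n = 0.40, 1_000   # hypothetical: 40% observed in a 1,000-person sample
for level, z in [("90%", 1.645), ("95%", 1.960), ("99%", 2.576)]:
    lo, hi = interval(p, n, z)
    print(f"{level} confidence: {lo:.1%} to {hi:.1%}")
```

Demanding 99% confidence stretches the range to roughly 36% to 44%, while 90% confidence keeps it near 37.5% to 42.5%: greater confidence, but a less precise (and noisier) answer.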