The averages of almost all sufficiently large statistical samples end up distributed like a "bell curve", a normal distribution.
Central Limit Theorem for dummies
The Central Limit Theorem (CLT for short) says that for non-normal data, the distribution of the sample means is approximately normal, no matter what the distribution of the original data looks like, as long as the sample size is large enough (usually at least 30) and all samples are the same size. And it doesn't just apply to the sample mean: the CLT also holds for other sample statistics, such as the sample proportion. Because statisticians know so much about the normal distribution, these analyses become much easier.
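The claim above is easy to check by simulation. This minimal sketch (the variable names and the choice of an exponential distribution are illustrative, not from the original) draws many samples of size 30 from a strongly skewed distribution and shows that the sample means still cluster symmetrically around the true mean:

```python
import random
import statistics

random.seed(0)

# Draw many samples of size 30 from a clearly non-normal
# (exponential) distribution and record each sample's mean.
SAMPLE_SIZE = 30      # the usual rule-of-thumb minimum
NUM_SAMPLES = 10_000
sample_means = [
    statistics.mean(random.expovariate(1.0) for _ in range(SAMPLE_SIZE))
    for _ in range(NUM_SAMPLES)
]

# The exponential(1) distribution has mean 1 and is strongly
# right-skewed, yet the sample means concentrate around 1 with
# standard deviation close to 1/sqrt(30), as the CLT predicts.
print(round(statistics.mean(sample_means), 2))
print(round(statistics.stdev(sample_means), 2))
```

Plotting a histogram of `sample_means` would show the familiar bell shape, even though a histogram of the raw exponential draws would not.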
Central Limit Theorem, simple explanation 1
There are many people of average height in the world, and a smaller number of very tall and very short people. The more extreme the height, the rarer the people with that height. ...
Everything from the frequencies of photons emitted by a laser to the velocity components of a gas molecule follows the same pattern; so do height, weight, and life span. That same smooth bell curve shows up all throughout the sciences. It's inescapable. Why?
The answer is a mathematical fact called the central limit theorem. In slightly imprecise nonmathematical language, it says the following: any time a quantity is bumped around by a large number of random processes, you end up with a bell-curve distribution for that quantity. And it really doesn't matter what those random processes are; they don't themselves have to follow the Gaussian distribution. As long as there are many of them and each one is small, the overall effect is Gaussian.
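The "bumped around by many small random processes" picture can be sketched directly. In this illustrative example (the nudge size and count are arbitrary choices, not from the original), each quantity accumulates hundreds of tiny coin-flip nudges, which are about as non-Gaussian as a random process gets, yet the totals behave like a Gaussian:

```python
import random
import statistics

random.seed(1)

# Each "quantity" is bumped by 500 tiny, decidedly non-Gaussian
# nudges: +0.01 or -0.01 with equal probability (a coin flip).
def bumped_quantity(num_bumps=500):
    return sum(random.choice((-0.01, 0.01)) for _ in range(num_bumps))

totals = [bumped_quantity() for _ in range(20_000)]

mu = statistics.mean(totals)
sigma = statistics.stdev(totals)

# For a Gaussian, about 68% of values fall within one standard
# deviation of the mean; the simulated totals agree closely.
within_one_sigma = sum(abs(t - mu) <= sigma for t in totals) / len(totals)
print(round(within_one_sigma, 2))
```

Swapping the coin flips for any other small, independent nudges (uniform jitter, tiny skewed shocks) leaves the conclusion unchanged, which is exactly the point of the theorem.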
Central Limit Theorem, simple explanation 2
The central limit theorems are theorems of probability theory. They say that, given a large number of independent random variables, their (suitably normalized) sum follows a stable distribution. If the variance of the random variables is finite, that distribution is Gaussian, which is one reason this distribution is also known as the normal distribution.
The best known and most important of these is the central limit theorem itself. It concerns large numbers of random variables that share the same distribution and have finite variance and expected value.
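The finite-variance case described above can be demonstrated numerically. This sketch (the choice of Uniform(0,1) summands and the sample counts are assumptions for illustration) standardizes the sum of 50 identically distributed uniform variables and checks that the result has the mean and spread of a standard normal:

```python
import random
import statistics

random.seed(2)

# Sum n i.i.d. Uniform(0,1) variables, which have finite expected
# value mu = 0.5 and finite variance var = 1/12, then standardize.
# By the classical CLT the standardized sum is approximately N(0, 1).
n = 50
mu, var = 0.5, 1.0 / 12.0

def standardized_sum():
    s = sum(random.random() for _ in range(n))
    return (s - n * mu) / (n * var) ** 0.5

z = [standardized_sum() for _ in range(20_000)]

# Mean near 0 and standard deviation near 1, as for a standard normal.
print(round(statistics.mean(z), 2), round(statistics.stdev(z), 2))
```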
[Learning by simulations]