Why N-1 in standard deviation?

Why do you compute the standard deviation s of a sample set by dividing a summation by N-1, instead of dividing it by N, as you would do in computing the mean of this very same sample set?

$$s = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2}$$

“Corrected sample standard deviation” (Wikipedia)
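For concreteness, here is a minimal Python sketch (not part of the original post) comparing the two divisors, using NumPy's `std` and its `ddof` parameter, where `ddof=1` gives the corrected N-1 divisor:

```python
import numpy as np

samples = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# Divide by N (NumPy's default, ddof=0).
s_over_n = np.std(samples, ddof=0)

# Divide by N-1: the corrected sample standard deviation (ddof=1).
s_over_n_minus_1 = np.std(samples, ddof=1)

print(s_over_n)          # 2.0
print(s_over_n_minus_1)  # ~2.138
```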

Here is why:

Because the computation of s involves an inherent comparison of this sample set of N elements

$$A = \{x_1, x_2, \ldots, x_N\}$$

with a nonexistent but presupposed sample set of a single element:

$$B = \{\bar{x}\}$$

i.e. with a set that contains only the mean point $\bar{x}$ of these N samples.

Just like we have to subtract $\bar{x}$ from each of the $x_i$s in the numerator, we need to subtract the 1 that is the count of the $\bar{x}$ from the N that is the count of these $x_i$s in the denominator.

In other words, we begin counting not from an empty set of 0 samples where standard deviation is undefined, but from a set of 1 sample* where standard deviation is zero by definition.

Therefore, a more general equation† for standard deviation would be:

$$s = \sqrt{\frac{\sum_{x \in A}(x - \bar{x})^2 - \sum_{x \in B}(x - \bar{x})^2}{|A| - |B|}}$$
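As a minimal sketch of this reading (assuming the reconstruction above; the function name `generalized_std` is hypothetical), the B term vanishes because $\bar{x} - \bar{x} = 0$ and $|B| = 1$, so the formula reduces to the usual N-1 corrected one:

```python
import numpy as np

def generalized_std(A, B):
    """Deviation of sample set A measured against the presupposed
    set B, dividing by the difference of the two counts |A| - |B|."""
    x_bar = np.mean(A)
    num = np.sum((np.asarray(A) - x_bar) ** 2) \
        - np.sum((np.asarray(B) - x_bar) ** 2)
    return np.sqrt(num / (len(A) - len(B)))

A = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
B = [np.mean(A)]  # the presupposed one-element set containing the mean

# The B sum is zero and |B| = 1, recovering the corrected formula.
assert np.isclose(generalized_std(A, B), np.std(A, ddof=1))
```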

Share this post if you are an honest person who finds this answer convincing.

Join the Facebook group “Set Theory and Philosophy”

Read about who wrote this: Işık Barış Fidaner

Read the article “Freedom in the free world: The extimate becomes the law”

And have a nice day.

*: Lacanian readers will notice the parallel with the Master-Signifier.

†: Any reasonable person should now ask about the squares and the square root.

(Turkish)

***

On this topic:

1) objet petit a = “less than zero” = “less than the empty set”

2) Qualia is uncertainty, uncertainty is conditional counting

3) Virtuality is what is left behind by conditional subtraction

4) encapsulation is relativity

5) relativity?

6) Conditional Counting of Qualia

7) Why N-1 in standard deviation?

***