The general intuition is that you can relate these moments using the Pythagorean Theorem (PT) in a suitably defined vector space: two vectors whose squared lengths give two of the moments are perpendicular, and the third moment is the squared length of the hypotenuse. The only algebra needed is to show that the two legs are indeed orthogonal.
For the sake of the following I'll assume you meant sample means and variances for computation purposes rather than moments for full distributions. That is:
$$\begin{aligned}
E[X] &= \frac{1}{n}\sum x_i, &&\text{mean, first sample moment}\\
E[X^2] &= \frac{1}{n}\sum x_i^2, &&\text{second sample moment (non-central)}\\
\operatorname{Var}(X) &= \frac{1}{n}\sum (x_i - E[X])^2, &&\text{variance, second central sample moment}
\end{aligned}$$
(where all sums are over $n$ items).
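For concreteness, here's a minimal NumPy sketch of these three quantities (the sample `x` is made up for illustration; note it uses the $1/n$ convention above, which matches NumPy's default `ddof=0`):

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0, 7.0])  # a made-up sample of n = 4 items
n = len(x)

mean = x.sum() / n                     # E[X], the sample mean
second_raw = (x**2).sum() / n          # E[X^2], second (non-central) moment
variance = ((x - mean)**2).sum() / n   # Var(X), second central moment

assert np.isclose(variance, np.var(x))  # np.var uses 1/n by default (ddof=0)
```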
For reference, the elementary proof of $\operatorname{Var}(X) = E[X^2] - E[X]^2$ is just symbol pushing:
$$\begin{aligned}
\operatorname{Var}(X) &= \frac{1}{n}\sum (x_i - E[X])^2\\
&= \frac{1}{n}\sum \left(x_i^2 - 2E[X]\,x_i + E[X]^2\right)\\
&= \frac{1}{n}\sum x_i^2 - \frac{2}{n}E[X]\sum x_i + \frac{1}{n}\sum E[X]^2\\
&= E[X^2] - 2E[X]^2 + \frac{1}{n}\,nE[X]^2\\
&= E[X^2] - E[X]^2
\end{aligned}$$
There's little meaning here, just elementary algebraic manipulation. One might notice that $E[X]$ is a constant inside the summation, but that is about it.
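The identity is easy to confirm numerically on the same made-up sample:

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0, 7.0])  # same made-up sample
n = len(x)
mean = x.sum() / n

# Var(X) = E[X^2] - E[X]^2
variance = ((x - mean)**2).sum() / n
assert np.isclose(variance, (x**2).sum() / n - mean**2)
```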
Now for the vector-space/geometric interpretation. What we'll show is the slightly rearranged equation that corresponds to the PT:
$$\operatorname{Var}(X) + E[X]^2 = E[X^2]$$
So consider $X$, the sample of $n$ items, as a vector in $\mathbb{R}^n$, and let $\mathbf{1}$ denote the all-ones vector. Let's create two vectors, $E[X]\mathbf{1}$ and $X - E[X]\mathbf{1}$.
The vector $E[X]\mathbf{1}$ has the mean of the sample as every one of its coordinates.
The vector $X - E[X]\mathbf{1}$ is $\langle x_1 - E[X], \ldots, x_n - E[X]\rangle$.
These two vectors are perpendicular because their dot product turns out to be 0:
$$\begin{aligned}
E[X]\mathbf{1} \cdot (X - E[X]\mathbf{1}) &= \sum E[X]\,(x_i - E[X])\\
&= \sum \left(E[X]\,x_i - E[X]^2\right)\\
&= E[X]\sum x_i - \sum E[X]^2\\
&= nE[X]\,E[X] - nE[X]^2\\
&= 0
\end{aligned}$$
So the two vectors are perpendicular, which means they are the two legs of a right triangle.
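A quick numerical check of that orthogonality, building the two vectors explicitly (same made-up sample as before):

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0, 7.0])     # the sample X as a vector in R^n
mean_vec = x.mean() * np.ones_like(x)  # E[X]·1: the mean in every coordinate
resid_vec = x - mean_vec               # X - E[X]·1: the deviations

# The legs are perpendicular: their dot product is (numerically) zero
assert np.isclose(mean_vec @ resid_vec, 0.0)
```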
Then by the PT (which holds in $\mathbb{R}^n$), the sum of the squares of the lengths of the two legs equals the square of the length of the hypotenuse.
By the same algebra used in the boring algebraic proof at the top, $E[X^2]$ is (up to the common factor of $n$) the squared length of the hypotenuse vector $X$:

$$(X - E[X]\mathbf{1})\cdot(X - E[X]\mathbf{1}) + (E[X]\mathbf{1})\cdot(E[X]\mathbf{1}) = X \cdot X$$

where squaring a vector means taking its dot product with itself. The first leg squared is $n\operatorname{Var}(X)$, the second is $nE[X]^2$, and the hypotenuse squared is $\sum x_i^2 = nE[X^2]$; dividing through by $n$ gives exactly $\operatorname{Var}(X) + E[X]^2 = E[X^2]$.
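And the full Pythagorean relation with $X$ as the hypotenuse; dividing the squared lengths by $n$ recovers the moment identity (same sketch conventions as above):

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0, 7.0])
n = len(x)
mean_vec = x.mean() * np.ones_like(x)
resid_vec = x - mean_vec

# PT in R^n: |X - E[X]1|^2 + |E[X]1|^2 = |X|^2
assert np.isclose(resid_vec @ resid_vec + mean_vec @ mean_vec, x @ x)

# Divide by n: Var(X) + E[X]^2 = E[X^2]
assert np.isclose((resid_vec @ resid_vec) / n + x.mean()**2, (x**2).sum() / n)
```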
The interesting part about this interpretation is the conversion from a sample of $n$ items from a univariate distribution to a vector in a space of $n$ dimensions. This is similar to $n$ bivariate samples being interpreted as two samples in $n$ variables.
In one sense that is enough: the right triangle pops out of the vectors, with $E[X^2]$ as the squared hypotenuse. We gave an interpretation (vectors) for these values and showed they correspond. That's cool enough, but it's unenlightening, either statistically or geometrically. It wouldn't really say why, and it would be a lot of extra conceptual machinery to, in the end, mostly reproduce the purely algebraic proof we already had at the beginning.
Another interesting part is that the mean and variance, though they intuitively measure center and spread in one dimension, are orthogonal in $n$ dimensions. What does that mean, that they're orthogonal? I don't know! Are there other moments that are orthogonal? Is there a larger system of relations that includes this orthogonality? Central moments vs. non-central moments? I don't know!