Yes, we can get an analogous result using the sample mean and variance, with perhaps a couple of slight surprises emerging in the process.
First, we need to refine the question statement just a little and set out a few assumptions. Importantly, it should be clear that we cannot hope to replace the population variance with the sample variance on the right-hand side, since the latter is random! So, we refocus our attention on the equivalent inequality
$$P(X - \mathbb{E}X \ge t\sigma) \le \frac{1}{1 + t^2}.$$
In case it is not clear that these are equivalent, note that we've simply replaced $t$ with $t\sigma$ in the original inequality without any loss of generality.
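Concretely, if the original inequality is taken in its standard (Cantelli) form, the substitution $t \mapsto t\sigma$ reads
$$P(X - \mathbb{E}X \ge t) \le \frac{\sigma^2}{\sigma^2 + t^2} \quad\Longrightarrow\quad P(X - \mathbb{E}X \ge t\sigma) \le \frac{\sigma^2}{\sigma^2 + t^2 \sigma^2} = \frac{1}{1 + t^2}.$$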
Second, we assume that we have a random sample $X_1, \ldots, X_n$ and we are interested in an upper bound for the analogous quantity
$$P(X_1 - \bar{X} \ge tS),$$
where $\bar{X}$ is the sample mean and $S$ is the sample standard deviation.
A half-step forward
Note that already by applying the original one-sided Chebyshev inequality to $X_1 - \bar{X}$, we get that
$$P(X_1 - \bar{X} \ge t\sigma) \le \frac{1}{1 + \frac{n}{n-1} t^2},$$
where $\sigma^2 = \mathrm{Var}(X_1)$, which is smaller than the right-hand side of the original version. This makes sense! Any particular realization of a random variable from a sample will tend to be (slightly) closer to the sample mean to which it contributes than to the population mean. As we shall see below, we'll get to replace $\sigma$ by $S$ under even more general assumptions.
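To spell out where the $\frac{n}{n-1}$ factor comes from: for an i.i.d. sample,
$$\mathrm{Var}(X_1 - \bar{X}) = \left(1 - \tfrac{1}{n}\right)^2 \sigma^2 + \frac{n-1}{n^2}\, \sigma^2 = \frac{n-1}{n}\, \sigma^2,$$
so Cantelli's inequality applied with threshold $t\sigma$ gives
$$P(X_1 - \bar{X} \ge t\sigma) \le \frac{\frac{n-1}{n}\sigma^2}{\frac{n-1}{n}\sigma^2 + t^2 \sigma^2} = \frac{1}{1 + \frac{n}{n-1} t^2}.$$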
A sample version of one-sided Chebyshev
Claim: Let $X_1, \ldots, X_n$ be a random sample such that $P(S = 0) = 0$. Then,
$$P(X_1 - \bar{X} \ge tS) \le \frac{1}{1 + \frac{n}{n-1} t^2}.$$
In particular, the sample version of the bound is tighter than the original population version.

Note: We do not assume that the $X_i$ have either finite mean or variance!
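To put numbers on the tightness claim (with arbitrary illustrative values $n = 10$ and $t = 2$): the population bound is $\frac{1}{1 + t^2} = \frac{1}{5} = 0.2$, while the sample bound is $\frac{1}{1 + \frac{10}{9} \cdot 4} = \frac{9}{49} \approx 0.184$.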
Proof. The idea is to adapt the proof of the original one-sided Chebyshev inequality and employ symmetry in the process. First, set $Y_i = X_i - \bar{X}$ for notational convenience. Then, observe that, since by symmetry the pairs $(Y_i, S)$ all have the same distribution,
$$P(Y_1 \ge tS) = \frac{1}{n} \sum_{i=1}^n P(Y_i \ge tS) = \mathbb{E}\!\left[\frac{1}{n} \sum_{i=1}^n \mathbf{1}(Y_i \ge tS)\right].$$
Now, for any $c > 0$, on $\{S > 0\}$,
$$\mathbf{1}(Y_i \ge tS) = \mathbf{1}\big(Y_i + tcS \ge tS(1+c)\big) \le \mathbf{1}\big((Y_i + tcS)^2 \ge t^2 (1+c)^2 S^2\big) \le \frac{(Y_i + tcS)^2}{t^2 (1+c)^2 S^2}.$$
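In case the chain reads too quickly: the first equality just adds $tcS$ to both sides inside the indicator; the second step is valid because the right-hand side $tS(1+c)$ is positive on $\{S > 0\}$ (for $t, c > 0$), so squaring preserves the inequality; and the last step is the Markov-type bound $\mathbf{1}(W \ge a) \le W/a$, valid for any $W \ge 0$ and $a > 0$.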
Then,
$$\frac{1}{n} \sum_i \mathbf{1}(Y_i \ge tS) \le \frac{1}{n} \sum_i \frac{(Y_i + tcS)^2}{t^2 (1+c)^2 S^2} = \frac{(n-1)S^2 + n t^2 c^2 S^2}{n t^2 (1+c)^2 S^2} = \frac{(n-1) + n t^2 c^2}{n t^2 (1+c)^2},$$
since $\bar{Y} = 0$ and $\sum_i Y_i^2 = (n-1) S^2$.
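Spelling out the middle equality, expand the square and use $\sum_i Y_i = 0$:
$$\sum_i (Y_i + tcS)^2 = \sum_i Y_i^2 + 2tcS \sum_i Y_i + n t^2 c^2 S^2 = (n-1)S^2 + n t^2 c^2 S^2.$$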
The right-hand side is a constant (!), so taking expectations on both sides yields
$$P(X_1 - \bar{X} \ge tS) \le \frac{(n-1) + n t^2 c^2}{n t^2 (1+c)^2}.$$
Finally, minimizing over $c$ yields $c = \frac{n-1}{n t^2}$, which after a little algebra establishes the result.
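For completeness, the little algebra: substituting $c = \frac{n-1}{n t^2}$ into the bound gives
$$\frac{(n-1) + n t^2 c^2}{n t^2 (1+c)^2} = \frac{(n-1)\left(1 + \frac{n-1}{n t^2}\right)}{n t^2 \left(1 + \frac{n-1}{n t^2}\right)^2} = \frac{n-1}{n t^2 + n - 1} = \frac{1}{1 + \frac{n}{n-1} t^2}.$$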
That pesky technical condition
Note that we had to assume $P(S = 0) = 0$ in order to be able to divide by $S^2$ in the analysis. This is no problem for absolutely continuous distributions, but poses an inconvenience for discrete ones. For a discrete distribution, there is some probability that all observations are equal, in which case $Y_i = tS = 0$ for all $i$ and all $t > 0$, so the nonstrict inequality $Y_1 \ge tS$ holds trivially.
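As a concrete (illustrative) instance: for an i.i.d. Bernoulli($p$) sample, $S = 0$ exactly when all $n$ observations agree, so $q = P(S = 0) = p^n + (1-p)^n > 0$.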
We can wiggle our way out by setting $q = P(S = 0)$. Then, a careful accounting of the argument shows that everything goes through virtually unchanged and we get

Corollary 1. For the case $q = P(S = 0) > 0$, we have
$$P(X_1 - \bar{X} \ge tS) \le (1-q)\, \frac{1}{1 + \frac{n}{n-1} t^2} + q.$$
Proof. Split on the events $\{S > 0\}$ and $\{S = 0\}$. The previous proof goes through for $\{S > 0\}$ and the case $\{S = 0\}$ is trivial.
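A sketch of the accounting: on $\{S = 0\}$ all the $Y_i$ vanish, so $Y_1 \ge tS$ holds automatically, while on $\{S > 0\}$ the pointwise bound from the main proof integrates to at most $(1-q)$ times the constant. Hence
$$P(Y_1 \ge tS) = P(Y_1 \ge tS,\, S > 0) + P(Y_1 \ge tS,\, S = 0) \le (1-q)\, \frac{1}{1 + \frac{n}{n-1} t^2} + q.$$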
A slightly cleaner inequality results if we replace the nonstrict inequality in the probability statement with a strict version.
Corollary 2. Let $q = P(S = 0)$ (possibly zero). Then,
$$P(X_1 - \bar{X} > tS) \le (1-q)\, \frac{1}{1 + \frac{n}{n-1} t^2}.$$
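To see why the $+q$ term disappears: on $\{S = 0\}$ we have $X_1 - \bar{X} = 0 = tS$, so the strict inequality cannot hold there, and hence
$$P(X_1 - \bar{X} > tS) = P(X_1 - \bar{X} > tS,\, S > 0) \le (1-q)\, \frac{1}{1 + \frac{n}{n-1} t^2}.$$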
Final remark: The sample version of the inequality required no assumptions on the $X_i$ (other than that they not be almost-surely constant in the nonstrict-inequality case, which the original version also tacitly assumes), in essence because the sample mean and sample variance always exist whether or not their population analogs do.
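To illustrate the no-moment-assumptions point, here is a minimal simulation sketch in Python; the Cauchy distribution, sample size, threshold, and replication count are all arbitrary illustrative choices, not part of the argument above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t, reps = 10, 2.0, 200_000

# Standard Cauchy data: no finite mean or variance, yet the sample
# bound P(X_1 - Xbar >= t*S) <= 1/(1 + (n/(n-1)) t^2) should still hold.
X = rng.standard_cauchy(size=(reps, n))
Xbar = X.mean(axis=1)
S = X.std(axis=1, ddof=1)                 # sample standard deviation

freq = np.mean(X[:, 0] - Xbar >= t * S)   # empirical left-hand side
bound = 1.0 / (1.0 + n / (n - 1) * t**2)  # claimed upper bound
print(f"empirical frequency: {freq:.4f}   bound: {bound:.4f}")
```

Since the Cauchy distribution is absolutely continuous, $P(S = 0) = 0$ and the main claim applies directly, so the printed empirical frequency should fall below the printed bound.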