The differences between bridge regression and the elastic net are a fascinating question, given their similar-looking penalties. Here's one possible approach. Suppose we solve the bridge regression problem. We can then ask how the elastic net solution differs. Looking at the gradients of the two loss functions tells us something about this.
Bridge regression
Say $X$ is a matrix containing values of the independent variable ($n$ points by $d$ dimensions), $y$ is a vector containing values of the dependent variable, and $w$ is the weight vector.
The loss function penalizes the $\ell_q$ norm of the weights with magnitude $\lambda_b$:
$$L_b(w) = \|y - Xw\|_2^2 + \lambda_b \|w\|_q^q$$
The gradient of the loss function is:
$$\nabla_w L_b(w) = -2 X^T (y - Xw) + \lambda_b q \, |w|^{\circ(q-1)} \circ \operatorname{sgn}(w)$$
$v^{\circ c}$ denotes the Hadamard (i.e. element-wise) power, which gives a vector whose $i$th element is $v_i^c$. $\operatorname{sgn}(w)$ is the sign function (applied to each element of $w$). The gradient may be undefined at zero for some values of $q$.
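For concreteness, here's a minimal NumPy sketch of the bridge loss and its gradient as written above (the function names and argument conventions are my own, purely for illustration):

```python
import numpy as np

def bridge_loss(w, X, y, lam_b, q):
    """Bridge regression loss: squared error plus lam_b * ||w||_q^q."""
    resid = y - X @ w
    return resid @ resid + lam_b * np.sum(np.abs(w) ** q)

def bridge_grad(w, X, y, lam_b, q):
    """Gradient of the bridge loss (element-wise power and sign, as above)."""
    return -2 * X.T @ (y - X @ w) + lam_b * q * np.abs(w) ** (q - 1) * np.sign(w)
```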
Elastic net
The loss function is:
$$L_e(w) = \|y - Xw\|_2^2 + \lambda_1 \|w\|_1 + \lambda_2 \|w\|_2^2$$
This penalizes the $\ell_1$ norm of the weights with magnitude $\lambda_1$ and the $\ell_2$ norm with magnitude $\lambda_2$. The elastic net paper calls minimizing this loss function the 'naive elastic net' because it doubly shrinks the weights. They describe an improved procedure where the weights are later rescaled to compensate for the double shrinkage, but I'm just going to analyze the naive version. That's a caveat to keep in mind.
The gradient of the loss function is:
$$\nabla_w L_e(w) = -2 X^T (y - Xw) + \lambda_1 \operatorname{sgn}(w) + 2 \lambda_2 w$$
The gradient is undefined at zero when $\lambda_1 > 0$ because the absolute value in the $\ell_1$ penalty isn't differentiable there.
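A matching sketch for the naive elastic net, continuing with the illustrative naming from the block above:

```python
def elastic_net_loss(w, X, y, lam1, lam2):
    """Naive elastic net loss: squared error plus l1 and l2 penalties."""
    resid = y - X @ w
    return resid @ resid + lam1 * np.sum(np.abs(w)) + lam2 * w @ w

def elastic_net_grad(w, X, y, lam1, lam2):
    """(Sub)gradient of the naive elastic net loss; undefined at w_i = 0 when lam1 > 0."""
    return -2 * X.T @ (y - X @ w) + lam1 * np.sign(w) + 2 * lam2 * w
```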
Approach
Say we select weights $w^*$ that solve the bridge regression problem. This means the bridge regression gradient is zero at this point:
$$\nabla_w L_b(w^*) = -2 X^T (y - Xw^*) + \lambda_b q \, |w^*|^{\circ(q-1)} \circ \operatorname{sgn}(w^*) = \vec{0}$$
Therefore:
$$2 X^T (y - Xw^*) = \lambda_b q \, |w^*|^{\circ(q-1)} \circ \operatorname{sgn}(w^*)$$
We can substitute this into the elastic net gradient to get an expression for the elastic net gradient at $w^*$. Fortunately, it no longer depends directly on the data:
$$\nabla_w L_e(w^*) = \lambda_1 \operatorname{sgn}(w^*) + 2 \lambda_2 w^* - \lambda_b q \, |w^*|^{\circ(q-1)} \circ \operatorname{sgn}(w^*)$$
Looking at the elastic net gradient at $w^*$ tells us: given that bridge regression has converged to weights $w^*$, how would the elastic net want to change these weights?
It gives us the local direction and magnitude of the desired change, because the gradient points in the direction of steepest ascent and the loss function will decrease as we move in the direction opposite to the gradient. The gradient might not point directly toward the elastic net solution. But, because the elastic net loss function is convex, the local direction/magnitude gives some information about how the elastic net solution will differ from the bridge regression solution.
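To make this concrete, here's a small sketch of that data-free expression, continuing with the illustrative functions above (the function name is my own):

```python
def elastic_net_grad_at_bridge_solution(w_star, lam_b, q, lam1, lam2):
    """Elastic net gradient evaluated at a bridge regression solution w*,
    using the substitution above so that the data X, y drop out."""
    w_star = np.asarray(w_star, dtype=float)
    return (lam1 * np.sign(w_star)
            + 2 * lam2 * w_star
            - lam_b * q * np.abs(w_star) ** (q - 1) * np.sign(w_star))
```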
Case 1: Sanity check
$(\lambda_b = 0, \lambda_1 = 0, \lambda_2 = 1)$. Bridge regression in this case is equivalent to ordinary least squares (OLS), because the penalty magnitude is zero. The elastic net is equivalent to ridge regression, because only the $\ell_2$ norm is penalized. The following plots show different bridge regression solutions and how the elastic net gradient behaves for each.
Left plot: Elastic net gradient vs. bridge regression weight along each dimension
The x axis represents one component of a set of weights $w^*$ selected by bridge regression. The y axis represents the corresponding component of the elastic net gradient, evaluated at $w^*$. Note that the weights are multidimensional, but we're just looking at the weights/gradient along a single dimension.
Right plot: Elastic net changes to bridge regression weights (2d)
Each point represents a set of 2d weights $w^*$ selected by bridge regression. For each choice of $w^*$, a vector is plotted pointing in the direction opposite the elastic net gradient, with magnitude proportional to that of the gradient. That is, the plotted vectors show how the elastic net wants to change the bridge regression solution.
These plots show that, compared to bridge regression (OLS in this case), elastic net (ridge regression in this case) wants to shrink weights toward zero. The desired amount of shrinkage increases with the magnitude of the weights. If the weights are zero, the solutions are the same. The interpretation is that we want to move in the direction opposite to the gradient to reduce the loss function. For example, say bridge regression converged to a positive value for one of the weights. The elastic net gradient is positive at this point, so elastic net wants to decrease this weight. If using gradient descent, we'd take steps proportional in size to the gradient (of course, we can't technically use gradient descent to solve the elastic net because of the non-differentiability at zero, but subgradient descent would give numerically similar results).
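As a quick numerical check of this case: with $\lambda_b = 0$, $\lambda_1 = 0$, $\lambda_2 = 1$, the data-free gradient above reduces to $2w^*$, i.e. pure shrinkage proportional to the weight magnitude (using the sketch function defined earlier):

```python
w_star = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])
g = elastic_net_grad_at_bridge_solution(w_star, lam_b=0.0, q=1.4, lam1=0.0, lam2=1.0)
print(g)    # [-3. -1.  0.  1.  3.], i.e. 2 * w_star
print(-g)   # direction the elastic net wants to move each weight: toward zero
```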
Case 2: Matching bridge & elastic net
$(q = 1.4, \lambda_b = 1, \lambda_1 = 0.629, \lambda_2 = 0.355)$. I chose the bridge penalty parameters to match the example from the question. I chose the elastic net parameters to give the best-matching elastic net penalty. Here, best-matching means that, given a particular distribution of weights, we find the elastic net penalty parameters that minimize the expected squared difference between the bridge and elastic net penalties:
$$\min_{\lambda_1, \lambda_2} \; E\!\left[ \left( \lambda_1 \|w\|_1 + \lambda_2 \|w\|_2^2 - \lambda_b \|w\|_q^q \right)^2 \right]$$
Here, I considered weights with all entries drawn i.i.d. from the uniform distribution on [−2,2] (i.e. within a hypercube centered at the origin). The best-matching elastic net parameters were similar for 2 to 1000 dimensions. Although they don't appear to be sensitive to the dimensionality, the best-matching parameters do depend on the scale of the distribution.
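One way to compute these best-matching parameters (my own sketch of the procedure, not the original code): sample weight vectors from the uniform distribution above and solve the minimization as a linear least-squares problem in $(\lambda_1, \lambda_2)$, since the objective is quadratic in them.

```python
def best_matching_elastic_net(lam_b, q, dim, n_samples=100_000, seed=None):
    """Monte Carlo estimate of the (lam1, lam2) minimizing
    E[(lam1*||w||_1 + lam2*||w||_2^2 - lam_b*||w||_q^q)^2]
    for w with i.i.d. Uniform(-2, 2) entries; linear least squares in (lam1, lam2)."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(-2, 2, size=(n_samples, dim))
    l1 = np.abs(w).sum(axis=1)                       # ||w||_1
    l2sq = (w ** 2).sum(axis=1)                      # ||w||_2^2
    target = lam_b * (np.abs(w) ** q).sum(axis=1)    # lam_b * ||w||_q^q
    A = np.column_stack([l1, l2sq])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coef  # [lam1, lam2]

# e.g. best_matching_elastic_net(lam_b=1.0, q=1.4, dim=2)
# should give values roughly near the (0.629, 0.355) used above
```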
Penalty surface
Here's a contour plot of the total penalty imposed by bridge regression $(q = 1.4, \lambda_b = 100)$ and best-matching elastic net $(\lambda_1 = 0.629, \lambda_2 = 0.355)$ as a function of the weights (for the 2d case):
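A contour plot along these lines could be produced with something like the following matplotlib sketch (my own plotting code, using the parameters quoted above and the NumPy import from earlier):

```python
import matplotlib.pyplot as plt

lam_b, q, lam1, lam2 = 100.0, 1.4, 0.629, 0.355   # parameters as quoted for this plot
w1, w2 = np.meshgrid(np.linspace(-2, 2, 201), np.linspace(-2, 2, 201))

bridge_pen = lam_b * (np.abs(w1) ** q + np.abs(w2) ** q)
enet_pen = lam1 * (np.abs(w1) + np.abs(w2)) + lam2 * (w1 ** 2 + w2 ** 2)

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
for ax, pen, title in [(axes[0], bridge_pen, "Bridge penalty"),
                       (axes[1], enet_pen, "Best-matching elastic net penalty")]:
    cs = ax.contour(w1, w2, pen, levels=10)
    ax.clabel(cs, inline=True, fontsize=7)
    ax.set_title(title)
    ax.set_xlabel("$w_1$")
axes[0].set_ylabel("$w_2$")
plt.tight_layout()
plt.show()
```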
Gradient behavior
We can see the following:
- Let $w^*_j$ be the chosen bridge regression weight along dimension $j$.
- If $|w^*_j| < 0.25$, elastic net wants to shrink the weight toward zero.
- If $|w^*_j| \approx 0.25$, the bridge regression and elastic net solutions are the same. But, elastic net wants to move away if the weight differs even slightly. (These crossover points are computed numerically in the sketch after this list.)
- If $0.25 < |w^*_j| < 1.31$, elastic net wants to grow the weight.
- If $|w^*_j| \approx 1.31$, the bridge regression and elastic net solutions are the same. Elastic net wants to move toward this point from nearby weights.
- If $|w^*_j| > 1.31$, elastic net wants to shrink the weight.
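As a check on those crossover values: the per-coordinate elastic net gradient at a positive bridge weight $w$ is $\lambda_1 + 2\lambda_2 w - \lambda_b q\, w^{q-1}$, and its roots can be found numerically (a small SciPy sketch, with bracketing intervals chosen by eye):

```python
from scipy.optimize import brentq

lam_b, q, lam1, lam2 = 1.0, 1.4, 0.629, 0.355

def enet_grad_1d(w):
    # Per-coordinate elastic net gradient at a positive bridge-regression weight w.
    return lam1 + 2 * lam2 * w - lam_b * q * w ** (q - 1)

low = brentq(enet_grad_1d, 0.1, 0.5)    # ~0.25
high = brentq(enet_grad_1d, 1.0, 2.0)   # ~1.31
print(low, high)
```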
The results are qualitatively similar if we change the value of $q$ and/or $\lambda_b$ and find the corresponding best $\lambda_1, \lambda_2$. The points where the bridge and elastic net solutions coincide change slightly, but the behavior of the gradients is otherwise similar.
Case 3: Mismatched bridge & elastic net
$(q = 1.8, \lambda_b = 1, \lambda_1 = 0.765, \lambda_2 = 0.225)$. In this regime, bridge regression behaves similarly to ridge regression. I found the best-matching $\lambda_1, \lambda_2$, but then swapped them so that the elastic net behaves more like the lasso ($\ell_1$ penalty greater than $\ell_2$ penalty).
Relative to bridge regression, elastic net wants to shrink small weights toward zero and increase larger weights. There's a single set of weights in each quadrant where the bridge regression and elastic net solutions coincide, but elastic net wants to move away from this point if the weights differ even slightly.
$(q = 1.2, \lambda_b = 1, \lambda_1 = 0.173, \lambda_2 = 0.816)$. In this regime, the bridge penalty is more similar to an $\ell_1$ penalty (although bridge regression may not produce sparse solutions with $q > 1$, as mentioned in the elastic net paper). I found the best-matching $\lambda_1, \lambda_2$, but then swapped them so that the elastic net behaves more like ridge regression ($\ell_2$ penalty greater than $\ell_1$ penalty).
Relative to bridge regression, elastic net wants to grow small weights and shrink larger weights. There's a point in each quadrant where the bridge regression and elastic net solutions coincide, and elastic net wants to move toward these weights from neighboring points.
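For a quick numerical illustration of this regime, using the sketch function from the Approach section and the parameters above (output values are approximate):

```python
# Second Case 3 regime: small weights get pushed up, large weights pulled down.
g = elastic_net_grad_at_bridge_solution(np.array([0.05, 2.0]),
                                        lam_b=1.0, q=1.2, lam1=0.173, lam2=0.816)
print(-g)   # roughly [+0.40, -2.06]: grow the small weight, shrink the large one
```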