Efficient Shrinkage in Parametric Models
Bruce E. Hansen
Journal of Econometrics (2016) 190, 115-132
Revised: June 2015

This paper introduces shrinkage for general parametric models. We show how to shrink maximum likelihood estimators towards parameter subspaces defined by general nonlinear restrictions. We derive the asymptotic distribution and risk of a shrinkage estimator using a local asymptotic framework. We show that if the shrinkage dimension exceeds two, the asymptotic risk of the shrinkage estimator is strictly less than that of the MLE. This reduction holds globally in the parameter space. We show that the reduction in asymptotic risk is substantial, even for moderately large values of the parameters.
We also provide a new high-dimensional large sample local minimax efficiency bound. The bound is the lowest possible asymptotic risk, uniformly in a local region of the parameter space. Local minimax bounds are a stronger efficiency characterization than global minimax bounds. We show that our shrinkage estimator asymptotically achieves this local asymptotic minimax bound, while the MLE does not. Thus the shrinkage estimator, unlike the MLE, is locally minimax efficient.
This theory is a combination and extension of standard asymptotic efficiency theory (Hájek, 1972) and local minimax efficiency theory for Gaussian models (Pinsker, 1980).
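As a concrete illustration of the kind of construction the abstract describes, a positive-part Stein-type shrinkage of an unrestricted estimate toward a restricted one can be sketched as below. This is a minimal sketch under stated assumptions, not the paper's estimator: the function name, interface, and the identity-weight example are hypothetical, and the distance statistic is a simple Wald-type quadratic form.

```python
import numpy as np

def stein_shrink(theta_mle, theta_restricted, V_inv, p):
    """Positive-part Stein-type shrinkage toward a restricted estimate.

    theta_mle        : unrestricted (ML) estimate, length-p array
    theta_restricted : estimate under the (possibly nonlinear) restriction
    V_inv            : inverse of the estimated covariance of theta_mle
    p                : shrinkage dimension (must exceed 2 for risk gains)

    All names and the interface are illustrative assumptions,
    not taken from the paper.
    """
    d = theta_mle - theta_restricted
    D = float(d @ V_inv @ d)            # Wald-type distance statistic
    if D == 0.0:
        return theta_restricted.copy()  # already on the restricted subspace
    w = max(0.0, 1.0 - (p - 2) / D)     # positive-part shrinkage weight
    return theta_restricted + w * d
```

For example, shrinking a 5-dimensional estimate of all ones toward the origin with identity weighting gives distance statistic D = 5, weight w = 1 - 3/5 = 0.4, and returned estimate 0.4 in every coordinate. The weight shrinks heavily when the data are close to the restriction (small D) and leaves the MLE nearly untouched when they are far from it.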
Some of the above material is based upon work supported by the National Science Foundation under Grant Nos. SES-9022176, SES-9120576, SBR-9412339, and SBR-9807111.
Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.