Assume that I have the following model

$$\mathbf{y}_t = f(\mathbf{x}_t, \boldsymbol{\theta}) + \boldsymbol{\varepsilon}_t, \quad \boldsymbol{\varepsilon}_t \sim N(0, \Sigma),$$

where $\mathbf{y}_t \in \mathbb{R}^k$, $\mathbf{x}_t$ is a vector of explanatory variables, $\boldsymbol{\theta}$ is the vector of parameters of the non-linear function $f$, and $\boldsymbol{\varepsilon}_t \sim N(0, \Sigma)$, where naturally $\Sigma$ is a $k \times k$ matrix.
The goal is the usual one: to estimate $\boldsymbol{\theta}$ and $\Sigma$. The obvious choice is the maximum likelihood method. The log-likelihood for this model (assuming we have a sample $(\mathbf{y}_t, \mathbf{x}_t)$, $t = 1, \dots, T$) looks like

$$\ell(\boldsymbol{\theta}, \Sigma) = -\frac{Tk}{2}\log 2\pi - \frac{T}{2}\log\det\Sigma - \frac{1}{2}\sum_{t=1}^{T}\big(\mathbf{y}_t - f(\mathbf{x}_t, \boldsymbol{\theta})\big)'\Sigma^{-1}\big(\mathbf{y}_t - f(\mathbf{x}_t, \boldsymbol{\theta})\big).$$
Now this seems simple: the log-likelihood is specified, so we plug in the data and use some algorithm for non-linear optimisation. The problem is how to ensure that $\Sigma$ is positive definite. Using for example `optim` in R (or any other non-linear optimisation algorithm) will not guarantee that $\Sigma$ is positive definite.
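To make the issue concrete, here is a minimal sketch of the naive approach, assuming a toy bivariate model ($k = 2$) with a hypothetical mean function $f(x, \theta) = (\exp(\theta_1 x), \exp(\theta_2 x))$; the names `negloglik` and the parameter layout are my own illustration, not part of the model above. The optimiser explores the entries of $\Sigma$ freely, so it can propose matrices that are not positive definite, and the best one can do without reparametrising is a crude penalty:

```r
## Naive negative log-likelihood: Sigma parametrised directly by its free elements.
negloglik <- function(par, y, x) {
  theta <- par[1:2]
  Sigma <- matrix(c(par[3], par[4], par[4], par[5]), 2, 2)
  ## optim() may propose par for which Sigma is not positive definite;
  ## chol() then fails, so we return a large value as a crude penalty
  R <- tryCatch(chol(Sigma), error = function(e) NULL)
  if (is.null(R)) return(1e10)
  resid <- y - cbind(exp(theta[1] * x), exp(theta[2] * x))   # T x 2 residuals
  Tn <- nrow(y)
  ## (Tk/2) log 2pi + (T/2) log det Sigma + (1/2) sum_t r_t' Sigma^{-1} r_t
  Tn * log(2 * pi) + Tn * sum(log(diag(R))) +
    0.5 * sum((resid %*% chol2inv(R)) * resid)
}

## fit <- optim(c(0.1, 0.1, 1, 0, 1), negloglik, y = y, x = x)
```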
So the question is: how do I ensure that $\Sigma$ stays positive definite? I see two possible solutions:
1. Reparametrise $\Sigma$ as $\Sigma = RR'$, where $R$ is an upper-triangular or symmetric matrix. Then $\Sigma$ will always be positive definite and $R$ can be unconstrained (a sketch of this approach is given below the list).
2. Use profile likelihood. Derive the formulas for $\hat\Sigma(\boldsymbol{\theta})$ and $\hat{\boldsymbol{\theta}}(\Sigma)$. Start with some $\boldsymbol{\theta}_0$ and iterate $\hat\Sigma_i = \hat\Sigma(\hat{\boldsymbol{\theta}}_{i-1})$, $\hat{\boldsymbol{\theta}}_i = \hat{\boldsymbol{\theta}}(\hat\Sigma_i)$ until convergence (also sketched below).
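Here is what I have in mind for the first approach, again for the toy bivariate model from the sketch above (the mean function and names remain my own assumptions). The optimiser works on the unconstrained entries of the triangular factor $R$, and $RR'$ is positive (semi-)definite by construction:

```r
## Negative log-likelihood with Sigma = R %*% t(R), R upper-triangular.
negloglik_chol <- function(par, y, x) {
  theta <- par[1:2]
  R <- matrix(0, 2, 2)
  R[upper.tri(R, diag = TRUE)] <- par[3:5]     # unconstrained factor entries
  ## Sigma = R %*% t(R) is positive definite whenever diag(R) has no zeros
  resid <- y - cbind(exp(theta[1] * x), exp(theta[2] * x))
  Tn <- nrow(y)
  Z <- backsolve(R, t(resid))                  # columns are R^{-1} r_t
  ## log det Sigma = 2 * sum(log(abs(diag(R))))
  Tn * log(2 * pi) + Tn * sum(log(abs(diag(R)))) + 0.5 * sum(Z^2)
}

## fit <- optim(c(0.1, 0.1, 1, 0, 1), negloglik_chol, y = y, x = x, method = "BFGS")
## R_hat <- matrix(0, 2, 2); R_hat[upper.tri(R_hat, diag = TRUE)] <- fit$par[3:5]
## Sigma_hat <- R_hat %*% t(R_hat)
```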
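And a sketch of the second approach for the same toy model: for fixed $\boldsymbol{\theta}$ the maximiser over $\Sigma$ is the sample covariance of the residuals, $\hat\Sigma(\boldsymbol{\theta}) = T^{-1}\sum_t r_t(\boldsymbol{\theta}) r_t(\boldsymbol{\theta})'$, which is positive semi-definite automatically, while for fixed $\Sigma$, $\hat{\boldsymbol{\theta}}(\Sigma)$ solves a generalised non-linear least-squares problem. The helper names and stopping rule are again my own:

```r
## Alternate between Sigma_hat(theta) and theta_hat(Sigma) until convergence.
profile_fit <- function(y, x, theta0, maxit = 100, tol = 1e-8) {
  resid_fun <- function(th) y - cbind(exp(th[1] * x), exp(th[2] * x))
  theta <- theta0
  Tn <- nrow(y)
  for (i in seq_len(maxit)) {
    Sigma <- crossprod(resid_fun(theta)) / Tn   # Sigma_hat(theta), PSD by construction
    gls_obj <- function(th) {                   # theta_hat(Sigma): weighted NLS
      rt <- resid_fun(th)
      sum((rt %*% solve(Sigma)) * rt)
    }
    theta_new <- optim(theta, gls_obj)$par
    converged <- max(abs(theta_new - theta)) < tol
    theta <- theta_new
    if (converged) break
  }
  list(theta = theta, Sigma = crossprod(resid_fun(theta)) / Tn)
}

## fit <- profile_fit(y, x, theta0 = c(0.1, 0.1))
```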
Is there some other way to do this, and what about these two approaches: will they work, and are they standard? This seems like a pretty standard problem, but a quick search did not give me any pointers. I know that Bayesian estimation would also be possible, but for the moment I would not want to engage in it.