Errors in variables? #17

I need to fit data points with errors in both variables. Does a "weighted" version of this algorithm exist? Would it be viable to implement it? I've spent hours searching for an implementation of something like that (in any understandable language) but had little success.
Comments
Hello @m93a, what do you mean by "weighted"?
I mean weighted as in "Weighted Total Least Squares", which apart from the values of $x$ and $y$ also takes their measurement errors into account.
Okay, maybe it's not called Weighted Total Least Squares, my bad... 😀
The idea is to fit a model

$$y = f(x;\,\beta)$$

where $\beta$ is the vector of parameters, by minimizing

$$\chi^2 = \sum_i \frac{\bigl(y_i - f(x_i)\bigr)^2}{\sigma_{y,i}^2 + \left(\frac{\partial f}{\partial x}(x_i)\right)^2 \sigma_{x,i}^2}$$

where $\sigma_{x,i}$ and $\sigma_{y,i}$ are the measurement errors of $x_i$ and $y_i$. However, modifying the algorithm to account for errors isn't as simple as changing the merit function, is it? (The merit function is the part which calculates the sum of residuals.) Could you please point me to the place where the merit function is defined in your code? And would you mind sharing some of your insight into the problem? You probably understand fitting algorithms way better than me. Thanks!
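Just to make sure we're talking about the same thing, here is a minimal sketch of what I mean by a merit function (the names are mine, not from your code):

```js
// Minimal sketch of a merit function: the sum of squared residuals.
// `x` and `y` are the data arrays and `f` is the fitted model y = f(x).
function sumOfSquaredResiduals(x, y, f) {
  let sum = 0;
  for (let i = 0; i < x.length; i++) {
    const r = y[i] - f(x[i]);
    sum += r * r;
  }
  return sum;
}
```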
Hello @m93a, I'm interested in your approach and I'm trying to understand your explanation. When you say "merit function", this is embedded in the step function. There is a function called gradient function, which creates a gradient matrix used to compute the new candidates, and the matrixFunction returns the difference matrix between the toFit data (the 'real' data) and the Func(xi, B) result, where B is the vector of current parameters. Between those two functions you could find the place to implement your idea.
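For instance, per-point weights could be applied right where the difference matrix is built. A rough sketch (the names and signature below are assumptions for illustration, not this repo's actual identifiers):

```js
// Rough sketch: weighting the difference (residual) matrix per point.
// `data` is { x, y }, `evaluatedData[i]` stands for Func(x[i], B), and
// `sigmaY[i]` is the standard deviation of y[i].
function weightedDifferenceMatrix(data, evaluatedData, sigmaY) {
  const residuals = new Array(data.x.length);
  for (let i = 0; i < data.x.length; i++) {
    // divide each residual by its error instead of using it raw
    residuals[i] = (data.y[i] - evaluatedData[i]) / sigmaY[i];
  }
  return residuals;
}
```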
Hello there, indeed, the LM algorithm should minimize the sum of weighted squared residuals. I remember the original implementation had a parameter called weight, a weighting vector for the least squares fit (weight >= 0): [https://github.com/mljs/curve-fitting](https://github.com/mljs/curve-fitting)
Hey again, I've looked at the old code of mljs/curve-fitting and it is rather hard to follow. Code in this repo, on the other hand, is rather delightful and bears little resemblance to the old implementation. If I understand it correctly, the gradient function returns the gradient of residuals:

$$J_{ij} = \frac{\partial r_i}{\partial \beta_j}$$
where $r_i = y_i - f(x_i;\,\beta)$ is the $i$-th residual and $\beta$ is the vector of parameters. Now there's the problem with determining the error… In the book Numerical Recipes in FORTRAN 77 (page 660 in the book, page 690 in the pdf) they say that the statistically correct way to fit a straight line $y = a + bx$ to data with errors in both variables is to minimize this value:

$$\chi^2(a, b) = \sum_i \frac{\bigl(y_i - a - b\,x_i\bigr)^2}{\sigma_{y,i}^2 + b^2\,\sigma_{x,i}^2}$$
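In code, that straight-line merit function would look something like this (just a sketch; sigmaX and sigmaY hold the per-point standard deviations):

```js
// Sketch of the Numerical Recipes merit function for a straight line
// y = a + b*x with errors in both coordinates ("effective variance").
// sigmaX[i] and sigmaY[i] are the standard deviations of x[i] and y[i].
function chi2StraightLine(x, y, sigmaX, sigmaY, a, b) {
  let chi2 = 0;
  for (let i = 0; i < x.length; i++) {
    const r = y[i] - a - b * x[i];
    const effectiveVariance = sigmaY[i] ** 2 + b ** 2 * sigmaX[i] ** 2;
    chi2 += (r * r) / effectiveVariance;
  }
  return chi2;
}
```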
This exactly corresponds to the equation I've given in my previous comment if you substitute $f(x_i) = a + b\,x_i$, so that $\frac{\partial f}{\partial x} = b$. Since the general equation needs the slope of $f$ near each data point, we assume that the actual value (let's call it $\hat{x}_i$) lies somewhere within the error interval around the measured $x_i$ and estimate the slope there. (Edit: In the end I averaged the "average slopes" of the equiprobable intervals and used the first equation (in a loose sense), because the formula for standard deviation converged too slowly.) Wow, that was a lot of maths… If you either don't understand something, think I got something wrong, or want to approve my conclusions, feel encouraged to comment!
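To make the "effective variance" idea concrete for a general $f$, here is my own sketch (not code from this repo), with the slope estimated by a central difference:

```js
// Sketch: "effective variance" chi-squared for an arbitrary model f.
// The straight-line slope b is replaced by the local derivative f'(x_i),
// estimated numerically. All names here are illustrative assumptions.
function chi2EffectiveVariance(x, y, sigmaX, sigmaY, f, h = 1e-6) {
  let chi2 = 0;
  for (let i = 0; i < x.length; i++) {
    const slope = (f(x[i] + h) - f(x[i] - h)) / (2 * h); // central difference
    const r = y[i] - f(x[i]);
    const effectiveVariance = sigmaY[i] ** 2 + slope ** 2 * sigmaX[i] ** 2;
    chi2 += (r * r) / effectiveVariance;
  }
  return chi2;
}
```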