
Is it possible to trace the optimization history? #3

Open
GiggleLiu opened this issue Nov 19, 2018 · 3 comments

Comments

@GiggleLiu

Is it possible to know the loss/parameters at each step (rather than at each function call)?

@GiggleLiu GiggleLiu reopened this Nov 19, 2018
@Gnimuc
Owner

Gnimuc commented Nov 19, 2018

I'm not sure. If setulb doesn't expose that info in dsave, we may need to hack into the Fortran code.

@Gnimuc
Owner

Gnimuc commented Nov 19, 2018

Just to make sure you didn't miss this example:

LBFGSB.jl/test/driver3.jl, lines 151 to 154 at 5148b3c:

# "3) the value of the objective function f,"
# "4) the norm of the projected gradient, dsave(13)"
println("Iterate ", isave[30], " nfg = ", isave[34], " f = ", f, " |proj g| = ", dsave[13])

It looks like LBFGSB only exposes its APIs at the iteration level.
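Building on the driver3.jl excerpt above, here is a minimal sketch of how one could record the full history by checking the task string in the reverse-communication loop: whenever it starts with `NEW_X`, a new iterate has been accepted and `isave`/`dsave` hold the per-iteration statistics. The meanings of `isave[30]` (iteration count), `isave[34]` (number of f/g evaluations), and `dsave[13]` (norm of the projected gradient) are taken from the snippet and the Fortran L-BFGS-B documentation; the workspace sizes and the exact `setulb` wrapper signature below are assumptions modeled on the package's driver examples, not a guaranteed API.

```julia
using LBFGSB

n, m = 25, 5                         # problem size, number of corrections
x = fill(3.0, n)                     # starting point
l = zeros(n); u = fill(100.0, n)     # bounds
nbd = fill(Cint(2), n)               # 2 => both lower and upper bounds
f = Ref{Cdouble}(0.0)
g = zeros(n)
factr, pgtol = 1e7, 1e-5
wa  = zeros(2m*n + 5n + 11m*m + 8m)  # workspace sizes per the Fortran docs
iwa = zeros(Cint, 3n)
task  = fill(Cuchar(' '), 60); task[1:5] .= b"START"
csave = fill(Cuchar(' '), 60)
lsave = zeros(Cint, 4)
isave = zeros(Cint, 44)
dsave = zeros(Cdouble, 29)

history = NamedTuple[]               # one entry per accepted iterate

while true
    setulb(n, m, x, l, u, nbd, f, g, factr, pgtol, wa, iwa,
           task, 0, csave, lsave, isave, dsave)
    t = String(copy(task))
    if startswith(t, "FG")
        # solver requests f(x) and g(x): a toy quadratic for illustration
        f[] = sum(abs2, x .- 1)
        g .= 2 .* (x .- 1)
    elseif startswith(t, "NEW_X")
        # a new iterate was accepted: record loss and parameters
        push!(history, (iter = isave[30], nfg = isave[34],
                        f = f[], pg = dsave[13], x = copy(x)))
    else
        break  # convergence, error, or abnormal termination
    end
end
```

After the loop, `history` would contain the loss and parameter vector at every accepted step, which is exactly the per-iteration (rather than per-function-call) granularity asked about.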

@GiggleLiu
Author

Thanks for the example, it really helps.
