GP Kernel #4
Comments
Hey, thanks! I guess I could do it by including an extra parameter in the __init__ method of the class, in which case you could give the optimizer a kernel object with the parameters of your choice. This way the interface wouldn't change and the user would have more flexibility in the implementation.
Hi! Yeah, you could do something like the sketch below, where x_dim is the number of features and C is the constant kernel. You may have to import it.
Thanks!
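A minimal sketch of the kernel being described, assuming scikit-learn's kernel classes; `x_dim = 3` and `nu=2.5` are placeholder values:

```python
import numpy as np
from sklearn.gaussian_process.kernels import Matern, ConstantKernel as C

x_dim = 3  # number of input features (placeholder value)
# One length scale per feature, so each dimension is scaled independently.
kernel = C(1.0) * Matern(length_scale=np.ones(x_dim), nu=2.5)
```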
I had a mistake: "length_scale=np.ones(self.x_dim)" indicates that the kernel function has an independent length-scale parameter for each individual feature.
I was thinking more of something like:

```python
def __init__(self, ..., kernel=None):
    if kernel is None:
        self._kernel = Matern(nu=nu)  # current default
    else:
        self._kernel = kernel
    # then, wherever the GPs are built:
    self.GP[i] = GPR(kernel=self._kernel)
```

so that the user can pass the kernel they want or use the default; in this case, I think the …
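For illustration, a self-contained sketch of the proposed interface; the Optimizer class here is a hypothetical stand-in for the project's own optimizer, and `nu=2.5` is an assumed default:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor as GPR
from sklearn.gaussian_process.kernels import Matern, ConstantKernel as C

class Optimizer:  # hypothetical stand-in for the project's optimizer class
    def __init__(self, kernel=None, nu=2.5):
        # Fall back to the current default if no kernel is supplied.
        self._kernel = Matern(nu=nu) if kernel is None else kernel
        self.GP = [GPR(kernel=self._kernel)]  # e.g., one GP per objective

# User-supplied ARD kernel, one length scale per feature:
opt = Optimizer(kernel=C(1.0) * Matern(length_scale=np.ones(3), nu=2.5))
# Or keep the default isotropic Matern:
opt_default = Optimizer()
```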
Yeah, that could work too!
Hi, again, thanks :)
Hi, yeah, I just haven't had time to test it yet, but it should work without a problem.
Hi,
I like your code :)
I was wondering whether you could implement support for kernels with individual parameters for each feature; this tends to make the GPs more accurate.
Thanks :)