Regression models#
Classes to build models from a metatensor.TensorMap.
The model classes listed here are based on NumPy. Classes based on Torch will be added in the future.
- equisolve.numpy.models.Ridge#
alias of NumpyRidge
- class equisolve.numpy.models.SorKernelRidge[source]#
Uses the subset of regressors (SoR) approximation of the kernel
\[w = (K_{mn} K_{nm} + K_{mm})^{-1} K_{mn} y\]plus regularization
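To make the formula concrete, here is a minimal NumPy sketch of the SoR solve (illustrative only, not the class implementation; the helper name is made up and the regularization is written as a noise term \(\sigma^2 K_{mm}\)):

```python
import numpy as np

def sor_weights(K_nm, K_mm, y, sigma=1.0):
    # subset-of-regressors weights: w = (K_mn K_nm + sigma^2 K_mm)^{-1} K_mn y
    K_mn = K_nm.T
    A = K_mn @ K_nm + sigma**2 * K_mm
    return np.linalg.solve(A, K_mn @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))       # n = 50 samples, 3 features
X_pseudo = X[:5]                   # m = 5 pseudo (inducing) points
y = rng.normal(size=50)

# linear kernel, as with kernel_type="linear"
K_nm = X @ X_pseudo.T              # (n, m)
K_mm = X_pseudo @ X_pseudo.T       # (m, m)

w = sor_weights(K_nm, K_mm, y, sigma=1.0)
y_pred = K_nm @ w                  # predictions on the training points
```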
Reference#
Quinonero-Candela, J., & Rasmussen, C. E. (2005). A unifying view of sparse approximate Gaussian process regression. The Journal of Machine Learning Research, 6, 1939-1959.
- fit(X: TensorMap, X_pseudo: TensorMap, y: TensorMap, kernel_type: str | AggregateKernel = 'linear', kernel_kwargs: dict | None = None, accumulate_key_names: str | List[str] | None = None, alpha: float | TensorMap = 1.0, solver: str = 'RKHS-QR', rcond: float | None = None)[source]#
- Parameters:
X – features; if kernel type “precomputed” is used, X is assumed to be the kernel k_nm
X_pseudo – pseudo points; if kernel type “precomputed” is used, X_pseudo is assumed to be the kernel k_mm
y – targets
kernel_type – type of kernel used
kernel_kwargs – additional keyword arguments for the specific kernel: “linear” takes none, “polynomial” takes a degree
accumulate_key_names – a string or list of strings specifying which key names should be accumulated into one kernel. This is intended for key columns inducing sparsity in the properties (e.g. neighbour species)
alpha – regularization
solver – determines which solver to use … TODO doc …
rcond – argument passed to the lstsq solver
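A minimal, hypothetical usage sketch of fit (the no-argument constructor, the single-block TensorMap wrapper, and the metatensor helpers Labels.range/Labels.single are assumptions, not taken from this page):

```python
import numpy as np
from metatensor import Labels, TensorBlock, TensorMap
from equisolve.numpy.models import SorKernelRidge

def to_tensormap(values, property_name="property"):
    # wrap a plain 2D array into a single-block TensorMap (illustrative helper)
    block = TensorBlock(
        values=values,
        samples=Labels.range("sample", values.shape[0]),
        components=[],
        properties=Labels.range(property_name, values.shape[1]),
    )
    return TensorMap(keys=Labels.single(), blocks=[block])

rng = np.random.default_rng(0)
descriptor = rng.normal(size=(50, 8))   # n = 50 samples, 8 features
pseudo = descriptor[:6]                 # m = 6 pseudo points
targets = rng.normal(size=(50, 1))

model = SorKernelRidge()
model.fit(
    to_tensormap(descriptor),           # X
    to_tensormap(pseudo),               # X_pseudo
    to_tensormap(targets, "energy"),    # y
    kernel_type="linear",
    alpha=1e-3,
)
```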
TODO move to developer doc
Derivation#
We take equation (16b) from the reference above (the mean expression)
\[\sigma^{-2} K_{tm} \Sigma K_{mn} y\]and substitute \(\Sigma = (\sigma^{-2} K_{mn} K_{nm} + K_{mm})^{-1}\):
\[\sigma^{-2} K_{tm} (\sigma^{-2} K_{mn} K_{nm} + K_{mm})^{-1} K_{mn} y\]We can move the \(\sigma\) factors around:
\[K_{tm} \left((K_{mn}\sigma^{-1})(\sigma^{-1}K_{nm}) + K_{mm}\right)^{-1} (K_{mn}\sigma^{-1}) (\sigma^{-1} y)\]The building blocks in the code are \(K_{mn}\sigma^{-1}\) and \(\sigma^{-1} y\).
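A minimal sketch (not the actual implementation) of how these building blocks can be assembled into an augmented least-squares problem and handed to a QR/lstsq-type solver; the function name, the jitter, and the Cholesky-based stacking are illustrative assumptions:

```python
import numpy as np

def sor_solve_qr(K_nm, K_mm, y, sigma=1.0):
    """Solve for w from the building blocks K_mn sigma^{-1} and sigma^{-1} y.

    Stacks sigma^{-1} K_nm on top of a Cholesky factor of K_mm and solves the
    augmented least-squares problem, which is algebraically equivalent to
        w = (sigma^{-2} K_mn K_nm + K_mm)^{-1} sigma^{-2} K_mn y
    """
    # small jitter so the Cholesky factorization of K_mm succeeds
    L_mm = np.linalg.cholesky(K_mm + 1e-12 * np.eye(K_mm.shape[0]))

    A = np.vstack([K_nm / sigma, L_mm.T])                   # (n + m, m)
    b = np.concatenate([y / sigma, np.zeros(K_mm.shape[0])])

    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w
```

The normal equations of this stacked system are \((\sigma^{-2} K_{mn} K_{nm} + K_{mm}) w = \sigma^{-2} K_{mn} y\), matching the mean expression above, while the QR-based solve avoids forming the ill-conditioned product \(K_{mn} K_{nm}\) explicitly.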