36-465/665, Spring 2021
13 April 2021 (Lecture 20)
\[ \newcommand{\myexp}[1]{\exp{\left( #1 \right)}} \]
Use the eigenfunctions to rewrite the kernel machine: \[\begin{eqnarray} s(x) & = & \sum_{i=1}^{n}{\alpha_i K(x, x_i)}\\ & = & \sum_{i=1}^{n}{\alpha_i \sum_{j=1}^{\infty}{\lambda_j \phi_j(x) \phi_j(x_i)}}\\ & = & \sum_{j=1}^{\infty}{\lambda_j \left(\sum_{i=1}^{n}{\alpha_i \phi_j(x_i)}\right) \phi_j(x)}\\ & = & \sum_{j=1}^{\infty}{\beta_j \phi_j(x)} \end{eqnarray}\] where \(\beta_j \equiv \lambda_j \sum_{i=1}^{n}{\alpha_i \phi_j(x_i)}\)
This lets us use many more than \(n\) basis functions while only having \(n\) weights \(\alpha_i\)
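A minimal numerical check of this rewrite, as a sketch: the kernel below is defined *through* a truncated Mercer expansion, with made-up eigenvalues \(\lambda_j = (j+1)^{-2}\) and a cosine basis orthonormal on \([0,1]\); neither comes from the lecture, they're just chosen so the expansion is available in closed form.

```python
# Hedged sketch: verify that sum_i alpha_i K(x, x_i) equals sum_j beta_j phi_j(x)
# for a toy kernel built from a truncated Mercer expansion. The basis (cosines,
# orthonormal for the uniform measure on [0,1]) and the eigenvalues lambda_j
# are assumptions made for illustration.
import numpy as np

J = 50                                  # eigenpairs kept in the truncation
lam = 1.0 / (1.0 + np.arange(J)) ** 2   # assumed eigenvalues lambda_j

def phi(j, x):
    """Orthonormal cosine basis on [0,1]: phi_0 = 1, phi_j = sqrt(2) cos(pi j x)."""
    return np.ones_like(x) if j == 0 else np.sqrt(2) * np.cos(np.pi * j * x)

def K(x, y):
    """Kernel defined directly by its (truncated) eigen-expansion."""
    return sum(lam[j] * phi(j, x) * phi(j, y) for j in range(J))

rng = np.random.default_rng(0)
n = 10
x_train = rng.uniform(size=n)
alpha = rng.normal(size=n)              # the n weights alpha_i

# beta_j = lambda_j * sum_i alpha_i phi_j(x_i): J coefficients from n weights
beta = np.array([lam[j] * np.sum(alpha * phi(j, x_train)) for j in range(J)])

x = np.linspace(0.0, 1.0, 7)
s_kernel = sum(alpha[i] * K(x, x_train[i]) for i in range(n))
s_basis = sum(beta[j] * phi(j, x) for j in range(J))
print(np.allclose(s_kernel, s_basis))   # True: J = 50 basis functions, n = 10 weights
```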
Figure: Priors vs. age for the training set, color-coded for recidivism. Points are “jittered” so that multiple individuals with equal features aren’t superposed. The “rugs” along the axes give a sense of the marginal distributions of the two attributes for the two classes.
One reason the Gaussian kernel is a good default choice is that, writing \(u = x - x'\), it implicitly uses polynomials of all (even) orders \[\begin{eqnarray} \myexp{u} & = & \sum_{j=0}^{\infty}{\frac{u^j}{j!}}\\ \myexp{-u^2/2h^2} & = & \sum_{j=0}^{\infty}{\left(\frac{-1}{2h^2}\right)^j \frac{u^{2j}}{j!}} \end{eqnarray}\]
This is also called the radial basis function kernel
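As a quick illustration that the partial sums of the series above really do recover the kernel profile, here is a short numerical check; the bandwidth \(h\), the grid of \(u\) values, and the truncation orders are arbitrary choices.

```python
# Hedged sketch: truncate the series for exp(-u^2 / 2h^2) at J terms and watch
# the worst-case error over a grid of u values shrink. h and J are arbitrary.
import math
import numpy as np

h = 0.5
u = np.linspace(-1.0, 1.0, 101)
target = np.exp(-u**2 / (2 * h**2))
for J in (2, 5, 10, 20):
    partial = sum((-1 / (2 * h**2)) ** j * u ** (2 * j) / math.factorial(j)
                  for j in range(J))
    print(J, np.max(np.abs(partial - target)))  # error drops rapidly with J
```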
Figure: The first four eigenfunctions
Notice that lots of the eigenfunctions don’t even show up any more, because they’re being multiplied by very small (square roots of) eigenvalues
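One way to see this numerically (a sketch, not the lecture's code): eigendecompose a Gaussian kernel matrix on a grid of points and compare each eigenvector \(v_j\) with its scaled version \(\sqrt{\lambda_j}\, v_j\). The grid, the bandwidth, and the number of components printed are all arbitrary illustrative choices.

```python
# Hedged sketch: eigenvalues of a Gaussian kernel matrix decay quickly, so the
# scaled eigenvectors sqrt(lambda_j) v_j shrink toward zero as j grows.
# Grid size, bandwidth h, and the 8 components shown are illustrative.
import numpy as np

x = np.linspace(0.0, 1.0, 200)
h = 0.1
Kmat = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * h**2))

lam, V = np.linalg.eigh(Kmat)           # eigh returns eigenvalues in ascending order
lam, V = lam[::-1], V[:, ::-1]          # reorder: largest eigenvalues first

for j in range(8):
    scaled = np.sqrt(lam[j]) * V[:, j]  # the curve that would actually get plotted
    print(j, round(lam[j], 3), round(np.max(np.abs(scaled)), 3))
# The max amplitude of sqrt(lambda_j) v_j collapses with j: high-order
# components effectively vanish from the plot after scaling.
```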
Figure: Fits on the training set, with color indicating fitted value (darker being lower predictions of recidivism, lighter higher predictions), and shape indicating actual outcome
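For concreteness, a hedged sketch of how fits like these can be produced: a kernel machine \(s(x) = \sum_i \alpha_i K(x, x_i)\) with a Gaussian kernel and a ridge penalty, trained here on synthetic two-feature data standing in for the (priors, age) training set, which we don't have. The bandwidth and penalty are arbitrary, and this is not necessarily the estimator used in the lecture.

```python
# Hedged sketch: kernel ridge fit s(x) = sum_i alpha_i K(x, x_i) on synthetic
# stand-in data for (priors, age) with a binary outcome. Bandwidth h and the
# ridge penalty are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.uniform(size=(n, 2))                          # stand-ins for (priors, age)
y = (X[:, 0] - 0.5 * X[:, 1] + 0.2 * rng.normal(size=n) > 0.2).astype(float)

def gauss_K(A, B, h=0.2):
    """Gaussian kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * h**2))

ridge = 1e-3
Kmat = gauss_K(X, X)
alpha = np.linalg.solve(Kmat + ridge * np.eye(n), y)  # the n weights alpha_i

fits = Kmat @ alpha                                   # fitted values on the training set
print(fits[:5])  # these are what the colors would encode (darker = lower)
```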