We are interested in solving the following regularized nonparametric regression problem under the squared loss:

$$\hat{f} = \arg\min_{f \in \mathcal{H}_K} \sum_{i=1}^n (y_i - f(x_i))^2 + \lambda \|f\|_{\mathcal{H}_K}^2,$$
where $K$ is a Mercer kernel and $\mathcal{H}_K$ is its associated reproducing kernel Hilbert space (RKHS), so this is an optimization over an RKHS. The representer theorem tells us that the minimizer has the form

$$\hat{f}(x) = \sum_{i=1}^n \alpha_i K(x, x_i),$$
so the problem reduces to finding the optimal coefficient vector $\alpha = (\alpha_1, \dots, \alpha_n)^T$. Thus we can write the objective as

$$J(\alpha) = \|y - \mathbf{K}\alpha\|_2^2 + \lambda \alpha^T \mathbf{K} \alpha,$$

where $\mathbf{K}_{ij} = K(x_i, x_j)$ is the kernel (Gram) matrix. Setting the gradient with respect to $\alpha$ to zero gives $-2\mathbf{K}(y - \mathbf{K}\alpha) + 2\lambda\mathbf{K}\alpha = 0$, and keeping in mind that $\mathbf{K}$ is invertible and symmetric, the minimizer is

$$\hat{\alpha} = (\mathbf{K} + \lambda I)^{-1} y.$$
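The closed-form solution above can be checked numerically. The sketch below, with an illustrative Gaussian kernel, synthetic data, and an assumed value of $\lambda$, solves for $\hat{\alpha}$ and verifies that it satisfies the first-order optimality condition:

```python
import numpy as np

# Minimal sketch of the closed-form RKHS regression solution.
# The Gaussian kernel, data, and lambda value are illustrative assumptions.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=20)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(20)

def gauss_kernel(a, b, gamma=10.0):
    # Gram matrix K_ij = exp(-gamma * (a_i - b_j)^2)
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

lam = 0.1
K = gauss_kernel(x, x)
# alpha_hat = (K + lambda I)^{-1} y, via a linear solve rather than an
# explicit inverse for numerical stability
alpha_hat = np.linalg.solve(K + lam * np.eye(len(x)), y)

# First-order condition: the gradient -2K(y - K alpha) + 2 lambda K alpha
# should vanish at the minimizer
grad = -2 * K @ (y - K @ alpha_hat) + 2 * lam * K @ alpha_hat
assert np.allclose(grad, 0, atol=1e-8)
```

Using `np.linalg.solve` instead of forming $(\mathbf{K} + \lambda I)^{-1}$ explicitly is the standard numerically stable choice.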
So the vector of fitted values is $\hat{f} = \mathbf{K}\hat{\alpha}$, and therefore

$$\hat{f} = \mathbf{K}(\mathbf{K} + \lambda I)^{-1} y = S y,$$
where $S = \mathbf{K}(\mathbf{K} + \lambda I)^{-1}$. This demonstrates that RKHS regression is a member of the class of linear smoothers.
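The linear-smoother property can be made concrete: the smoother matrix $S$ depends only on the inputs and $\lambda$, not on $y$, so the map from responses to fitted values is linear. A small check, again with an assumed Gaussian kernel and illustrative data:

```python
import numpy as np

# Check that RKHS regression is a linear smoother: fitted values are S y with
# S = K (K + lambda I)^{-1}, a matrix that does not depend on y.
# Kernel, data, and lambda are illustrative assumptions.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=15)

def gauss_kernel(a, b, gamma=10.0):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

lam = 0.5
K = gauss_kernel(x, x)
S = K @ np.linalg.inv(K + lam * np.eye(len(x)))  # smoother matrix

y1 = rng.standard_normal(15)
y2 = rng.standard_normal(15)
# Linearity in the responses: smoothing a sum equals the sum of the smooths
assert np.allclose(S @ (y1 + y2), S @ y1 + S @ y2)
assert np.allclose(S @ (3.0 * y1), 3.0 * (S @ y1))
```

Because the fit is linear in $y$, standard linear-smoother tools apply, e.g. using the trace of $S$ as the effective degrees of freedom.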