[1] 
S. Börm and J. Garcke.
Approximating Gaussian processes with H^{2}-matrices.
In J. N. Kok, J. Koronacki, R. López de Mántaras, S. Matwin, D. Mladenić,
and A. Skowron, editors, Proceedings of the 18th European Conference on
Machine Learning (ECML 2007), Warsaw, Poland, September 17-21, 2007, volume
4701, pages 42-53, 2007.

Computing the exact solution of Gaussian process regression requires O(N^{3}) operations for direct methods and O(N^{2}) for iterative methods, since it involves a densely populated kernel matrix of size N × N, where N denotes the number of data points. This makes large-scale learning problems intractable by standard techniques. We propose an alternative approach: the kernel matrix is replaced by a data-sparse approximation, called an H^{2}-matrix. This matrix can be represented by only O(N m) units of storage, where m is a parameter controlling the accuracy of the approximation, and the computation of the H^{2}-matrix scales with O(N m log N). Practical experiments demonstrate that our scheme leads to significant reductions in storage requirements and computing time for large data sets in lower-dimensional spaces.
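To illustrate the kind of storage saving the abstract describes, the sketch below replaces a dense N × N kernel matrix with a rank-m factorization. Note this uses a simple Nyström-style inducing-point approximation for illustration, not the paper's H^{2}-matrix construction; all names, sizes, and the jitter value are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, length_scale=1.0):
    # Squared-exponential kernel between point sets X (N x d) and Y (M x d).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

rng = np.random.default_rng(0)
N, m = 2000, 50                          # N data points, rank parameter m
X = rng.uniform(size=(N, 2))             # low-dimensional inputs, as in the paper
Z = X[rng.choice(N, m, replace=False)]   # hypothetical inducing subset

K_nm = rbf_kernel(X, Z)                  # N x m cross-kernel: O(N m) storage
K_mm = rbf_kernel(Z, Z) + 1e-8 * np.eye(m)  # m x m, with jitter for stability

# A dense kernel matrix needs N*N entries; the factored form needs N*m + m*m.
dense_entries = N * N                    # 4,000,000
factored_entries = N * m + m * m         # 102,500

# Reconstruct a single entry, e.g. K[0, 1], from the low-rank factors.
K01_approx = K_nm[0] @ np.linalg.solve(K_mm, K_nm[1])
K01_exact = rbf_kernel(X[:1], X[1:2])[0, 0]
```

The O(N m) versus O(N^{2}) gap in entry counts mirrors the storage reduction claimed in the abstract, though the H^{2} scheme achieves it with hierarchical block structure rather than one global low-rank factor.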
