Abstract: Function approximation is one of the core tasks solved with neural networks in many engineering problems. However, good approximation results require good sampling of the data space, which usually demands an exponentially growing volume of data as the dimensionality of the data increases. At the same time, high-dimensional data often lies on or near a much lower-dimensional manifold. Here we propose breaking the function approximation task for high-dimensional data into two steps: first, mapping the high-dimensional data onto a lower-dimensional space corresponding to the manifold on which the data resides; second, approximating the function using the mapped lower-dimensional data. We use over-complete self-organizing maps for the mapping through unsupervised learning, and single-hidden-layer neural networks for the function approximation through supervised learning. We also extend the two-step procedure by using support vector machines and Bayesian self-organizing maps to determine the best parameters for the nonlinear neurons in the hidden layer of the approximating neural networks. We compare the approximation performance of the proposed networks on a set of functions and show that the networks combining unsupervised and supervised learning indeed outperform, in most cases, networks that learn the approximation from the original high-dimensional data.
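As a rough illustration of the two-step procedure described in the abstract, the sketch below pairs a plain one-dimensional self-organizing map (the unsupervised mapping step) with a single-hidden-layer network (the supervised approximation step), plus a same-sized network trained on the raw high-dimensional inputs as the comparison baseline. The data set, SOM size, learning schedule, and network settings are all illustrative assumptions; the paper's over-complete SOMs and its SVM/Bayesian-SOM extensions for choosing the hidden-neuron parameters are not reproduced here.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Illustrative data (an assumption, not the paper's benchmark set): points on a
# 1-D manifold embedded in 10-D space, with a target function defined on the
# manifold parameter t.
t = rng.uniform(0.0, 1.0, size=500)
A = rng.normal(size=(2, 10))
X = np.column_stack([t, np.sin(2.0 * np.pi * t)]) @ A   # 500 x 10 inputs
y = np.sin(4.0 * np.pi * t)                             # function to approximate

# Step 1 (unsupervised): fit a 1-D SOM chain so its units line up along the
# manifold; grid size, epochs, and rates below are assumed, not from the paper.
n_units = 50
W = X[rng.choice(len(X), n_units)]          # prototypes initialized from data
n_epochs = 30
for epoch in range(n_epochs):
    lr = 0.5 * (1.0 - epoch / n_epochs)                          # decaying rate
    sigma = max(1.0, (n_units / 4) * (1.0 - epoch / n_epochs))   # shrinking kernel
    for x in X[rng.permutation(len(X))]:
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # best-matching unit
        d = np.abs(np.arange(n_units) - bmu)             # distance on the grid
        W += lr * np.exp(-d**2 / (2 * sigma**2))[:, None] * (x - W)

# Map each high-dimensional point to its (scalar) SOM grid coordinate.
Z = np.array([np.argmin(np.linalg.norm(W - x, axis=1)) for x in X],
             dtype=float).reshape(-1, 1) / (n_units - 1)

# Step 2 (supervised): single-hidden-layer network on the mapped 1-D data,
# with the baseline network trained directly on the raw 10-D data.
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0).fit(Z, y)
raw = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0).fit(X, y)
print("MSE, mapped 1-D inputs:", np.mean((net.predict(Z) - y) ** 2))
print("MSE, raw 10-D inputs:  ", np.mean((raw.predict(X) - y) ** 2))

On data of this kind the network trained on the mapped one-dimensional coordinates typically reaches a given accuracy with far fewer samples than the baseline trained on the raw ten-dimensional inputs, which is the effect the paper quantifies.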
Author(s): Andras P
Publication type: Article
Publication status: Published
Journal: IEEE Transactions on Neural Networks and Learning Systems
Year: 2014
Volume: 25
Issue: 3
Pages: 495-505
Print publication date: 04/09/2013
Acceptance date: 26/07/2013
Date deposited: 09/09/2013
ISSN (print): 2162-237X
ISSN (electronic): 2162-2388
Publisher: IEEE
URL: http://dx.doi.org/10.1109/TNNLS.2013.2276044
DOI: 10.1109/TNNLS.2013.2276044