Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
A Bayesian selective combination method is proposed for combining multiple neural networks in nonlinear dynamic process modelling. Instead of fixed combination weights, each network is weighted by the probability that it is the true model. The prior probability is calculated from the sum of squared errors of the individual networks over a sliding window covering the most recent sampling times. A nearest neighbour method estimates the network error for a given input data point, and this estimate is then used to calculate the combination weights of the individual networks. Forward selection and backward elimination are used to select the networks to be combined: in forward selection, individual networks are added to the aggregated network one at a time until the aggregated network error on the original training and testing data sets cannot be reduced further; in backward elimination, all individual networks are initially aggregated and are then removed one at a time under the same stopping criterion. Application results demonstrate that the proposed techniques significantly improve model generalisation and outperform aggregating all the individual networks. © Springer-Verlag London Limited 2004.
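The abstract does not give the paper's exact probability formula, but the weighting idea it describes can be sketched as follows: each network's recent sum of squared errors over a sliding window is turned into an (assumed Gaussian-likelihood) model probability, which then serves as its combination weight. The function names and the likelihood form below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bayesian_combination_weights(recent_errors, window=10):
    """Estimate each network's probability of being the true model from its
    sum of squared errors over a sliding window of recent sampling times.

    recent_errors: array of shape (n_networks, n_times) of prediction errors.
    The Gaussian-likelihood form exp(-SSE/2) is an assumption for
    illustration; the paper's exact prior formula is not in the abstract.
    """
    errors = np.asarray(recent_errors, dtype=float)
    sse = np.sum(errors[:, -window:] ** 2, axis=1)
    # Subtract the minimum SSE before exponentiating for numerical stability.
    likelihood = np.exp(-0.5 * (sse - sse.min()))
    return likelihood / likelihood.sum()  # weights sum to 1

def combine_predictions(predictions, weights):
    """Aggregated output: probability-weighted sum of the individual
    networks' predictions for the same input point."""
    return np.dot(weights, np.asarray(predictions, dtype=float))
```

A network with consistently small recent errors thus dominates the aggregated prediction, while poorly performing networks are down-weighted rather than discarded outright; the forward-selection and backward-elimination steps described above then decide which networks enter this weighted pool at all.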
Author(s): Ahmad Z, Zhang J
Publication type: Article
Publication status: Published
Journal: Neural Computing and Applications
Year: 2005
Volume: 14
Issue: 1
Pages: 78-87
Print publication date: 01/03/2005
ISSN (print): 0941-0643
ISSN (electronic): 1433-3058
Publisher: Springer
URL: http://dx.doi.org/10.1007/s00521-004-0451-y
DOI: 10.1007/s00521-004-0451-y