Show simple item record

dc.contributor.author: Fieldsend, Jonathan E.
dc.contributor.author: Singh, Sameer
dc.date.accessioned: 2013-07-11T14:07:10Z
dc.date.issued: 2005-03-07
dc.description.abstract: For the purposes of forecasting (or classification) tasks, neural networks (NNs) are typically trained with respect to Euclidean distance minimization. This is commonly the case irrespective of any other end-user preferences. In a number of situations, most notably time series forecasting, users may have other objectives in addition to Euclidean distance minimization. Recent studies in the NN domain have confronted this problem by propagating a linear sum of errors. However, this approach implicitly assumes a priori knowledge of the error surface defined by the problem, which, typically, is not the case. This study constructs a novel methodology for implementing multiobjective optimization within the evolutionary neural network (ENN) domain. This methodology enables the parallel evolution of a population of ENN models which exhibit estimated Pareto optimality with respect to multiple error measures. A new method is derived from this framework, the Pareto evolutionary neural network (Pareto-ENN). The Pareto-ENN evolves a population of models that may be heterogeneous in their topologies, inputs, and degree of connectivity, and maintains a set of the Pareto-optimal ENNs that it discovers. New generalization methods to deal with the unique properties of multiobjective error minimization that are not apparent in the uni-objective case are presented and compared on synthetic data, with a novel method based on bootstrapping of the training data shown to significantly improve generalization ability. Finally, experimental evidence is presented demonstrating the general application potential of the framework by generating populations of ENNs for forecasting 37 different international stock indexes.
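The abstract describes maintaining a set of estimated Pareto-optimal networks, each scored on several error measures at once. The following is a minimal illustrative sketch of that Pareto-archive idea, not the authors' implementation; the function names and data layout here are assumptions.

```python
# Each candidate network is reduced to its vector of error measures
# (all objectives are minimized). An archive keeps only candidates
# that no other kept candidate dominates.

def dominates(a, b):
    """True if error vector `a` Pareto-dominates `b`: no worse on every
    objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Return the archive after offering `candidate` (an error vector)."""
    if any(dominates(kept, candidate) for kept in archive):
        return archive  # candidate is dominated: archive unchanged
    # drop any archive members the candidate dominates, then add it
    return [kept for kept in archive if not dominates(candidate, kept)] + [candidate]

# Example with two objectives, e.g. (Euclidean error, directional error):
archive = []
for errors in [(0.9, 0.2), (0.5, 0.5), (0.4, 0.6), (0.3, 0.3)]:
    archive = update_archive(archive, errors)
# (0.3, 0.3) displaces (0.5, 0.5) and (0.4, 0.6); (0.9, 0.2) survives
# because it is strictly best on the second objective.
```

In the paper's setting the candidates would be evolved network models rather than bare vectors, but the dominance test over multiple error measures is the same.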
dc.identifier.citation: Vol. 16 (2), pp. 338-354
dc.identifier.doi: 10.1109/TNN.2004.841794
dc.identifier.uri: http://hdl.handle.net/10871/11712
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.subject: Pareto optimisation
dc.subject: Evolutionary computation
dc.subject: neural nets
dc.subject: time series
dc.subject: Euclidean distance minimization
dc.subject: Pareto evolutionary neural networks
dc.subject: multiobjective optimization
dc.subject: time series forecasting
dc.subject: Econometrics
dc.subject: Euclidean distance
dc.subject: Minimization methods
dc.subject: Network topology
dc.subject: Neural networks
dc.subject: Optimization methods
dc.subject: Predictive models
dc.subject: Time measurement
dc.subject: Training data
dc.title: Pareto Evolutionary Neural Networks
dc.type: Article
dc.date.available: 2013-07-11T14:07:10Z
dc.identifier.issn: 1045-9227
dc.description: Copyright © 2005 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
dc.description: Notes: This paper introduces a novel method for the effective training and evaluation of artificial neural networks with many competing objectives, and demonstrates the superiority of this approach over previous work making use of only a single composite objective. Its publication led to invitations to write a number of book chapters in the area of multi-objective machine learning, as well as invitations to join technical committees of conference sessions in the emergent area. Theory from it also fed into three DTI-funded KTP projects (one with NATS and two with AI Corporation Ltd.) for which I am a ‘University Supervisor’.
dc.identifier.journal: IEEE Transactions on Neural Networks

