Show simple item record

dc.contributor.author: Christmas, JT
dc.date.accessioned: 2016-04-08T15:10:43Z
dc.date.issued: 2016-11-03
dc.description.abstract: We introduce a method for estimating the motion of an image field between two images, in which the displacement of pixels between the images is specified by some theoretical motion function of the spatial coordinates based on a small number of parameters. The form of the function is selected to represent the expected features of the class of problem, and the values of the parameters are estimated by considering the images as a whole. The probability distributions of the parameters are estimated through a Bayesian model that makes use of variational approximation and importance sampling. The method is demonstrated on a passive navigation problem, with the theoretical motion based on the Focus of Expansion model. The example video is taken from a car driving down a country lane, so there are few, if any, distinctive features that can be tracked. We show that even theoretical motion functions that are gross simplifications of the true underlying motion are able to give useful results.
dc.identifier.citation: 2016 International Joint Conference on Neural Networks, 24-29 July, pp. 4001-4008
dc.identifier.doi: 10.1109/IJCNN.2016.7727720
dc.identifier.uri: http://hdl.handle.net/10871/21023
dc.language.iso: en
dc.publisher: IEEE
dc.rights: This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record.
dc.title: Theoretical Motion Functions for Video Analysis, with a Passive Navigation Example
dc.type: Conference paper
dc.description: 2016 International Joint Conference on Neural Networks (IJCNN 2016), part of the IEEE World Congress on Computational Intelligence (IEEE WCCI), Vancouver, Canada, 24-29 July 2016
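The abstract describes pixel displacements generated by a parametric "theoretical motion function" of the spatial coordinates, demonstrated with the Focus of Expansion model. A minimal sketch of such a function, assuming the standard pure-translation FoE form (pixels move radially away from the FoE, with magnitude proportional to distance from it); the function name, parameter names, and values here are illustrative, not taken from the paper:

```python
import numpy as np

def foe_displacement(coords, foe, scale):
    """Hypothetical Focus of Expansion motion function.

    coords : (N, 2) array of pixel coordinates (x, y)
    foe    : (2,) focus-of-expansion location (a model parameter)
    scale  : expansion-rate parameter

    Each pixel is displaced radially away from the FoE, with
    displacement proportional to its distance from the FoE.
    """
    return scale * (coords - foe)

# Evaluate on a small grid, with an assumed FoE at the grid centre.
ys, xs = np.mgrid[0:4, 0:4]
coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
disp = foe_displacement(coords, foe=np.array([1.5, 1.5]), scale=0.1)
```

In the paper's framework the small parameter set (here `foe` and `scale`) is what the Bayesian model infers, using variational approximation and importance sampling, from the two images considered as a whole rather than from tracked features.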

