Theoretical Motion Functions for Video Analysis, with a Passive Navigation Example
This is the author-accepted manuscript. The final version is available from IEEE via the DOI in this record.
We introduce a method for estimating the motion of an image field between two images, in which the displacement of pixels between the images is specified by a theoretical motion function of the spatial coordinates based on a small number of parameters. The form of the function is selected to represent the expected features of the class of problems, and the values of the parameters are estimated by considering the images as a whole. The probability distributions of the parameters are estimated through a Bayesian model that makes use of variational approximation and importance sampling. The method is demonstrated on a passive navigation problem, with the theoretical motion based on the Focus of Expansion model. The example video is taken from a car driving down a country lane, so there are few, if any, distinctive features that can be tracked. We show that even theoretical motion functions that are gross simplifications of the true underlying motion can give useful results.
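To illustrate the kind of parametric motion function the abstract describes, the sketch below implements the simplest Focus of Expansion (FoE) field: under pure forward translation, each pixel's displacement points radially away from the FoE and grows linearly with distance from it. This is a minimal sketch under that assumption; the function name, the three-parameter form (x0, y0, k), and the grid evaluation are illustrative, not the paper's exact formulation.

```python
import numpy as np

def foe_motion(x, y, x0, y0, k):
    """Radial displacement field diverging from the focus of expansion.

    (x0, y0) is the FoE in image coordinates; k is a scalar rate
    (illustrative parameterization). Returns the (u, v) displacement
    components at each (x, y).
    """
    return k * (x - x0), k * (y - y0)

# Evaluate the field on a small pixel grid with the FoE at the centre.
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
u, v = foe_motion(xs, ys, x0=2.0, y0=2.0, k=0.1)
```

In a Bayesian treatment along the lines of the abstract, (x0, y0, k) would be the small set of parameters whose posterior is estimated from the image pair as a whole, rather than from tracked features.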
2016 International Joint Conference on Neural Networks (IJCNN 2016), part of the IEEE World Congress on Computational Intelligence (IEEE WCCI), Vancouver, Canada, 24-29 July 2016, pp. 4001-4008.