Convolutional Adaptive Particle Filter with Multiple Models for Visual Tracking

Document Type

Conference Proceeding

Publication Date

11-10-2018

Publisher

Springer

Source Publication

Lecture Notes in Computer Science

Source ISSN

0302-9743

Abstract

Although particle filters improve the performance of convolutional-correlation trackers, especially in challenging scenarios such as occlusion and deformation, they considerably increase the computational cost. We present an adaptive particle filter that decreases the number of particles in simple frames, where there is no challenging scenario and the target model closely reflects the current appearance of the target. In this method, we treat the estimated position of each particle in the current frame as a particle in the next frame. These refined particles are more reliable than particles sampled anew in every frame. In simple frames, target estimation is easier; consequently, many particles converge together and the number of particles decreases. We perform resampling when the number of particles or the weight of the selected particle becomes too small. We use the weight computed in the first frame as the resampling threshold because that weight is calculated with the ground-truth model. Another contribution of this article is the generation of several target models by applying different adjusting rates to each high-likelihood particle. This yields multiple models: some are useful in challenging frames because they are more influenced by the previous model, while others suit simple frames because they are less affected by it. Experimental results on the Visual Tracker Benchmark v1.1 beta (OTB100) demonstrate that our proposed framework significantly outperforms state-of-the-art methods.
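The adaptive loop and the multi-model update described in the abstract can be condensed into a short sketch. The code below is illustrative only: it substitutes a synthetic 2-D Gaussian likelihood for the convolutional-correlation response map, mimics per-particle refinement with a damped step toward the target, and uses made-up parameter values throughout. Names such as likelihood, merge_converged, update_models, and track are hypothetical and are not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def likelihood(pos, target):
        # Hypothetical stand-in for the peak of the convolutional-correlation
        # response map at `pos`; the paper scores deep features instead.
        return float(np.exp(-0.5 * (np.linalg.norm(pos - target) / 8.0) ** 2))

    def merge_converged(particles, radius=3.0):
        # Collapse particles whose refined estimates coincide; this merging
        # is what shrinks the particle set on simple frames.
        kept = []
        for p in particles:
            if all(np.linalg.norm(p - q) > radius for q in kept):
                kept.append(p)
        return np.asarray(kept)

    def update_models(prev_model, patches, rates=(0.005, 0.02, 0.08)):
        # Blend each high-likelihood patch into the previous model at several
        # adjusting rates: low rates stay close to the previous model
        # (challenging frames), high rates follow the current appearance
        # (simple frames). Rates here are arbitrary illustrative values.
        return [(1.0 - r) * prev_model + r * patch
                for patch in patches for r in rates]

    def track(targets, n_init=100, min_particles=8, spread=10.0):
        # Adaptive particle-filter loop over synthetic 2-D target positions.
        particles = targets[0] + rng.normal(0.0, spread, (n_init, 2))
        # The first-frame weight comes from the ground-truth model, so it
        # serves as the resampling threshold in later frames.
        w_thresh = max(likelihood(p, targets[0]) for p in particles)
        estimates = []
        for target in targets[1:]:
            # Each particle is refined to the peak of its local response;
            # mimicked here by a damped step toward the synthetic target.
            particles = particles + 0.5 * (target - particles)
            weights = np.array([likelihood(p, target) for p in particles])
            best = particles[np.argmax(weights)]
            estimates.append(best)
            # Refined particles carry over to the next frame; merging the
            # converged ones reduces the particle count on easy frames.
            particles = merge_converged(particles)
            # Resample only when too few particles remain or the selected
            # particle's weight drops below the first-frame threshold.
            if len(particles) < min_particles or weights.max() < w_thresh:
                particles = best + rng.normal(0.0, spread, (n_init, 2))
        return np.asarray(estimates)

    # Toy run: target drifts 2 px right per frame; final estimate vs. truth.
    path = np.cumsum(np.tile([2.0, 0.0], (50, 1)), axis=0)
    print(track(path)[-1], path[-1])

    # Toy model update: 2 patches x 3 rates = 6 candidate models.
    m = np.ones((4, 4))
    print(len(update_models(m, [0.5 * m, 1.5 * m])))

The merge step is what makes the filter adaptive in this sketch: on simple frames the refined particles collapse onto the same estimate, and resampling, gated by the first-frame weight, restores diversity only when it is actually needed.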

Comments

Lecture Notes in Computer Science, Vol. 11241 (November 10, 2018).
