
Nils T Siebel
People Tracking for Visual Surveillance

This page describes details of my system to track people in camera images, the Reading People Tracker.

1 Introduction

In a research project at the University of Reading I started doing research on People Tracking in camera images for automatic visual surveillance systems. The research was carried out within the European Framework V project ADVISOR, in which we developed an integrated visual surveillance and behaviour analysis system. The People Tracker developed within the project is based on the Leeds People Tracker, which was developed by Adam Baumberg.

Contents of this page

1 Introduction
2 Main Results
3 Tracking Algorithm
3.1 Overview of the 4 Detection / Tracking Modules
3.2 Details of the Tracking Algorithm
3.3 Example Image of Tracked People
4 Software Engineering Aspects
5 Special Features of our System
6 Relevant Publications
7 Acknowledgements
8 Source Code of the Reading People Tracker

2 Main Results

Tracked people

3 Tracking Algorithm


3.1 Overview of the 4 Detection / Tracking Modules

Module 1 - Motion Detector

A Motion Detector detects moving pixels in the image. It models the background as an image of the scene with no people in it. Subtracting this background pixelwise from the current video image and thresholding the result yields the binary motion image. Regions (bounding boxes) containing detected moving blobs are then extracted and form the output of this module.

Main features:

Example:

Original image − Reference (background) image = Difference image → Thresholded motion image
This example shows how pixelwise differencing of video and background images leads to the difference image which is then thresholded to give the binary motion image. The example also shows how a low contrast of an area against the background (in this case the white coat against the light yellow background) results in non-detection of that area in the motion image.
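The differencing-and-thresholding step described above can be sketched as follows. This is a minimal illustration on greyscale pixel arrays; the function name and array representation are assumptions, not the tracker's actual code, and the region (bounding box) extraction step is omitted:

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Compute the binary motion image: a pixel is "moving" when
// |frame - background| exceeds the threshold. Pixels are 8-bit
// greyscale values stored row-major in a flat vector.
std::vector<uint8_t> motionImage(const std::vector<uint8_t>& frame,
                                 const std::vector<uint8_t>& background,
                                 int threshold)
{
    std::vector<uint8_t> motion(frame.size(), 0);
    for (std::size_t i = 0; i < frame.size(); ++i) {
        int diff = std::abs(int(frame[i]) - int(background[i]));
        motion[i] = (diff > threshold) ? 255 : 0;
    }
    return motion;
}
```

A threshold set too high reproduces the failure mode shown in the example: areas with little contrast against the background fall below it and are not detected.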

Module 2 - Region Tracker

A Region Tracker tracks these moving regions (i.e. bounding boxes) over time. This includes region splitting and merging, guided by predictions from the previous frame.

Main features:

Example:

Region splitting and merging
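Region merging operates on bounding boxes like those shown above: two overlapping regions can be replaced by their common bounding box. A minimal sketch of the overlap test and the merge step, using a hypothetical `Box` type (the prediction logic that decides when to split or merge is not shown):

```cpp
#include <algorithm>

// Axis-aligned bounding box of a moving region,
// with (x0, y0) the top-left and (x1, y1) the bottom-right corner.
struct Box { int x0, y0, x1, y1; };

// Two regions are merge candidates when their boxes overlap.
bool overlaps(const Box& a, const Box& b)
{
    return a.x0 <= b.x1 && b.x0 <= a.x1 &&
           a.y0 <= b.y1 && b.y0 <= a.y1;
}

// Merging replaces two regions by their common bounding box.
Box merge(const Box& a, const Box& b)
{
    return { std::min(a.x0, b.x0), std::min(a.y0, b.y0),
             std::max(a.x1, b.x1), std::max(a.y1, b.y1) };
}
```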

Module 3 - Head Detector

A Head Detector makes rapid guesses at head positions in all detected moving regions.

Main features:

Example:

Motion image → Detected heads in video image
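One cheap way to guess a head position, in the spirit of this module, is to take the topmost moving pixel inside a region of the binary motion image. The function below is an illustrative simplification, not the detector's actual algorithm, which can return several candidates per region:

```cpp
#include <cstdint>
#include <vector>

// Rapid head-position guess inside one detected region of the binary
// motion image (rows of 0/255 values, top row first): the head is
// assumed near the highest moving pixel. Returns the column of the
// first moving pixel found scanning top-to-bottom, or -1 if the
// region contains no moving pixels.
int guessHeadColumn(const std::vector<std::vector<uint8_t>>& motion)
{
    for (std::size_t row = 0; row < motion.size(); ++row)
        for (std::size_t col = 0; col < motion[row].size(); ++col)
            if (motion[row][col])
                return int(col);
    return -1;
}
```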

Module 4 - Active Shape Tracker

An Active Shape Tracker uses a deformable model of the 2D outline shape of a walking pedestrian to detect and track people. Contour shapes (called profiles) are initialised from the output of the Region Tracker and the Head Detector.

Main features:

Example:

Profile initialisation in the video image → Local edge search for shape fitting
The left image shows how the output from the Region Tracker (purple, green and white boxes) and the head positions estimated in those regions by the Head Detector (drawn in red/pink) provide initial hypotheses for profiles (outline shapes, drawn in white) to be tracked by the Active Shape Tracker. Each such hypothesis, as well as the Active Shape Tracker's own predictions from the previous frame, is then examined by the Active Shape Tracker using image measurements. The right image details how a local search for edges around the contour is used in an iterative optimisation loop for shape fitting.
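The local edge search can be illustrated in one dimension: at each contour point, image intensities are sampled along the normal, and the point is moved towards the strongest intensity change. A sketch of that one step (the sampling along the normal is assumed done elsewhere; the real tracker iterates this over all contour points and regularises the result with the shape model):

```cpp
#include <cstdlib>
#include <vector>

// One step of the local edge search: given intensities sampled along
// the normal at a contour point, return the sample index just after
// the strongest intensity change (the most likely edge position).
int strongestEdgeOffset(const std::vector<int>& samples)
{
    int best = 0, bestGrad = -1;
    for (std::size_t i = 1; i < samples.size(); ++i) {
        int grad = std::abs(samples[i] - samples[i - 1]);
        if (grad > bestGrad) { bestGrad = grad; best = int(i); }
    }
    return best;
}
```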

Combining the Modules

The main goal of using more than one tracking module is to compensate for deficiencies in the individual modules, thus achieving better overall tracking performance than any single module could provide. When combining the information from the different modules it is important to be aware of each module's main sources of error: if two modules are subject to the same type of error, there is little benefit in combining their outputs. The new People Tracker has been designed with this aspect in mind, using the redundancy introduced by the multiple modules to best effect.
These are the main features of the system:

3.2 Details of the Tracking Algorithm

The complete people tracking algorithm is given below.
Complete Tracking Algorithm
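Under the four-module decomposition described above, one frame step of the combined algorithm can be sketched as follows. The classes, their interfaces and the dummy return values are hypothetical stand-ins for illustration only, not the Reading People Tracker's actual API; what matters is the data flow between the modules:

```cpp
#include <vector>

struct Region  { int x0, y0, x1, y1; };   // bounding box of a moving region
struct Head    { int x, y; };             // estimated head position
struct Profile { std::vector<int> points; }; // contour shape (profile)
struct Frame   { /* image data omitted in this sketch */ };

// Module 1: detects moving regions in the frame (stubbed here).
struct MotionDetector {
    std::vector<Region> detect(const Frame&) { return { {0, 0, 10, 20} }; }
};
// Module 2: tracks regions over time, splitting/merging as needed (stubbed).
struct RegionTracker {
    std::vector<Region> track(std::vector<Region> r) { return r; }
};
// Module 3: guesses one head position per region (top-centre here).
struct HeadDetector {
    std::vector<Head> detect(const std::vector<Region>& regions) {
        std::vector<Head> heads;
        for (const Region& b : regions)
            heads.push_back({ (b.x0 + b.x1) / 2, b.y0 });
        return heads;
    }
};
// Module 4: fits one outline shape per head hypothesis (stubbed).
struct ActiveShapeTracker {
    std::vector<Profile> track(const std::vector<Region>&,
                               const std::vector<Head>& heads) {
        return std::vector<Profile>(heads.size());
    }
};

// One iteration of the pipeline: each module consumes the previous
// module's output, and the Active Shape Tracker initialises new
// hypotheses from the tracked regions and head positions.
std::vector<Profile> processFrame(const Frame& frame)
{
    MotionDetector md; RegionTracker rt;
    HeadDetector hd;   ActiveShapeTracker ast;
    std::vector<Region> regions = rt.track(md.detect(frame));
    std::vector<Head>   heads   = hd.detect(regions);
    return ast.track(regions, heads);
}
```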

3.3 Example Image of Tracked People

The following image shows the output from the People Tracker after hypothesis refinement. Tracked regions (bounding boxes) and profiles (outline shapes) are shown in the image.
Tracked people

4 Software Engineering Aspects

The People Tracker was completely re-engineered, yielding a new design. The software is now highly maintainable and portable, and a software process for all maintenance work is well defined and documented. Extensibility and scalability were kept in mind while designing the new tracker.
The source code adheres closely to the ISO/IEC 14882-1998 C++ standard, as well as IEEE POSIX 1003.1c-1995 extensions for multi-threading, making it easily portable. While the code is being maintained under GNU/Linux, it also compiles under Windows 2000.
Members of the University of Reading's Applied Software Engineering group and I have examined the software processes in our own work on the People Tracker and the way these have influenced the maintainability of the code (see JSME/SMR paper). The re-engineering phase and its effect on the maintainability of the code was also examined more closely in a case study (see ICSM 2002 paper).

5 Special Features of our System


6 Relevant Publications

Please have a look at my visual surveillance publications for more details.

7 Acknowledgements


8 Source Code of the Reading People Tracker

The Reading People Tracker, which was developed within this project, is maintained by Nils T Siebel, who is now at the Christian-Albrechts-University of Kiel, Germany.
The source code of the Reading People Tracker is available for download from its dedicated Reading People Tracker download page.

Author of these pages: Nils T Siebel.
Last modified on Wed May 26 2010.
This page and all files in these subdirectories are Copyright © 2004-2010 Nils T Siebel, Berlin, Germany.