# 23.5. Motion Segmentation#

The theory of structure from motion and motion segmentation has evolved over a set of papers [14, 24, 39, 53, 66, 73, 74]. In this section, we review the essential ideas from this series of work.

A typical image sequence (from a single camera shot) may contain multiple objects moving independently of each other. In the simplest model, we can assume that images in a sequence are views of a single moving object observed by a stationary camera or a stationary object observed by a moving camera. Only rigid motions are considered. In either case, the object is moving with respect to the camera.

The *structure from motion* problem
focuses on recovering the (3D) shape and motion information
of the moving object.
In the general case, there are multiple objects moving
independently.
Thus, we also need to perform a
*motion segmentation* such that motions of
different objects can be separated and (either
after or simultaneously) shape and motion of each object
can be inferred.

This problem is typically solved in two stages. In the first stage, a frame-to-frame correspondence problem is solved to identify a set of feature points whose image coordinates can be tracked over the sequence as the points move from one position to another. This gives a set of trajectories for these points over the frames of the video. If there is a single moving object, or the scene is static and the observer is moving, then all the feature points belong to the same object. Otherwise, the feature points must be clustered according to the objects they belong to. In the second stage, the trajectories are analyzed to group the feature points into separate objects and to recover the shape and motion of each object. In this section, we assume that the feature trajectories have been obtained by an appropriate method. Our focus is to identify the moving objects and to obtain the shape and motion information of each object from its trajectories.

## 23.5.1. Modeling Structure from Motion for Single Object#

We start with the simple model of a static camera and a moving object. All feature point trajectories belong to the moving object. Our objective is to demonstrate that the feature trajectories of a single moving object span a low-dimensional subspace.

Let the image sequence consist of \(F\) frames denoted by \(1 \leq f \leq F\). Let us assume that \(S\) feature points of the moving object have been tracked over this image sequence. Let \((u_{f s}, v_{f s})\) be the image coordinates of the \(s\)-th point in \(f\)-th frame. We form the feature trajectory vector for the \(s\)-th point by stacking its coordinates for the \(F\) frames vertically as
$$\by_s = \begin{bmatrix} u_{1 s} \\ v_{1 s} \\ u_{2 s} \\ v_{2 s} \\ \vdots \\ u_{F s} \\ v_{F s} \end{bmatrix} \in \RR^{2 F}.$$

Putting together the feature trajectory vectors of \(S\) points in a single feature trajectory matrix, we obtain
$$\bY = \begin{bmatrix} \by_1 & \by_2 & \dots & \by_S \end{bmatrix} \in \RR^{2 F \times S}.$$

This is the data matrix under consideration from which the shape and motion of the object need to be inferred.

We need two coordinate systems. We use the camera coordinate system as the world coordinate system with the \(Z\)-axis along the optical axis. The coordinates of different points in the object are changing from frame to frame in the world coordinate system as the object is moving. We also establish a coordinate system within the object with origin at the centroid of the feature points such that the coordinates of individual points do not change from frame to frame in the object coordinate system. The (rigid) motion of the object is then modeled by the translation (of the centroid) and rotation of its coordinate system with respect to the world coordinate system. Let \((a_s, b_s, c_s)\) be the coordinates of the \(s\)-th point in the object coordinate system. Then, the matrix
$$\begin{bmatrix} a_1 & a_2 & \dots & a_S \\ b_1 & b_2 & \dots & b_S \\ c_1 & c_2 & \dots & c_S \end{bmatrix}$$

represents the shape of the object (w.r.t. its centroid).

Let us choose an orthonormal basis in the object coordinate system. Let \(\bd_f\) be the position of the centroid and \((\bi_f, \bj_f, \bk_f)\) be the (orthonormal) basis vectors of the object coordinate system in the \(f\)-th frame. Then, the position of the \(s\)-th point in the world coordinate system in \(f\)-th frame is given by
$$\bh_{f s} = \bd_f + a_s \bi_f + b_s \bj_f + c_s \bk_f.$$

Assuming orthographic projection and letting \(\bh_{f s} = (u_{f s}, v_{f s}, w_{f s})\), the image coordinates are obtained by dropping the third component \(w_{f s}\). We define the rotation matrix for the \(f\)-th frame as
$$\bR_f = \begin{bmatrix} \bi_f & \bj_f & \bk_f \end{bmatrix}$$

where \(\underline{\bi}_f\), \(\underline{\bj}_f\), \(\underline{\bk}_f\) are the row vectors of \(\bR_f\). Let \(\bx_s = (a_s, b_s, c_s, 1)\) be the homogeneous coordinates of the \(s\)-th point in the object coordinate system. We can write the homogeneous coordinates in the camera coordinate system as
$$\begin{bmatrix} \bh_{f s} \\ 1 \end{bmatrix} = \begin{bmatrix} \bR_f & \bd_f \\ \mathbf{0}^T & 1 \end{bmatrix} \bx_s.$$

If we write \(\bd_f = (d_{f i}, d_{f j}, d_{f k})\), then, the data matrix \(\bY\) can be factorized as
$$\bY = \begin{bmatrix}
\underline{\bi}_1 & d_{1 i} \\
\underline{\bj}_1 & d_{1 j} \\
\vdots & \vdots \\
\underline{\bi}_F & d_{F i} \\
\underline{\bj}_F & d_{F j}
\end{bmatrix}
\begin{bmatrix}
a_1 & a_2 & \dots & a_S \\
b_1 & b_2 & \dots & b_S \\
c_1 & c_2 & \dots & c_S \\
1 & 1 & \dots & 1
\end{bmatrix}.$$

We rewrite this as
$$\bY = \MM \SS$$

where \(\MM\) represents the motion
information of the object and
\(\SS\)
represents the shape information of the object.
This factorization is known as
the *Tomasi-Kanade factorization* of shape and motion
information of a moving object.
Note that \(\MM \in \RR^{2F \times 4}\)
and \(\SS \in \RR^{4 \times S}\). Thus
the rank of \(\bY\) is at most 4, and
the feature trajectories of a rigidly
moving object span a subspace of dimension
at most 4 in the trajectory space \(\RR^{2F}\).
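To make the rank claim concrete, the following is a minimal numpy sketch (my own illustration, not from the referenced papers; the random shape, rotations, and variable names are assumptions). It builds a \(2F \times S\) trajectory matrix for a synthetic rigid motion under orthographic projection and checks that its rank does not exceed 4:

```python
import numpy as np

rng = np.random.default_rng(0)
F, S = 30, 50  # frames, feature points

# Shape: S points in the object coordinate system, centered on the centroid.
X = rng.standard_normal((3, S))
X -= X.mean(axis=1, keepdims=True)

def random_rotation(rng):
    # QR decomposition of a random matrix yields a random orthogonal matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return Q

# Build the 2F x S trajectory matrix of orthographic projections.
rows = []
for f in range(F):
    R = random_rotation(rng)        # object orientation in frame f
    d = rng.standard_normal(3)      # centroid position in frame f
    P = R @ X + d[:, None]          # world coordinates in frame f
    rows.append(P[0])               # u coordinates (first component)
    rows.append(P[1])               # v coordinates (second component)
Y = np.array(rows)

print(Y.shape)                           # (60, 50)
print(np.linalg.matrix_rank(Y) <= 4)     # True
```

Each pair of rows is an affine function of the same centered \(3 \times S\) shape matrix plus a constant, which is exactly why the rank cannot exceed 4.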

## 23.5.2. Solving the Structure From Motion Problem#

We digress a bit to understand how to perform the factorization of \(\bY\) into \(\MM\) and \(\SS\). Using SVD, \(\bY\) can be decomposed as
$$\bY = \bU \Sigma \bV^T.$$

Since \(\bY\) has rank at most \(4\), we keep only the first 4 singular values as
$$\bY = \bU \Sigma \bV^T, \quad \Sigma \in \RR^{4 \times 4}.$$

Matrices \(\bU \in \RR^{2F \times 4}\) and \(\bV \in \RR^{S \times 4}\) are the left and right singular matrices respectively.
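The truncated SVD is straightforward to compute with numpy. The sketch below (my own illustration; the choice \(\widehat{\MM} = \bU \Sigma\), \(\widehat{\SS} = \bV^T\) is just one of many valid factorizations, as the text discusses next) factorizes a synthetic rank-4 stand-in for the trajectory matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
F, S = 20, 30

# A synthetic rank-4 stand-in for the trajectory matrix Y (2F x S).
Y = rng.standard_normal((2 * F, 4)) @ rng.standard_normal((4, S))

U, sigma, Vt = np.linalg.svd(Y, full_matrices=False)

# Keep only the first 4 singular values (Y has rank at most 4).
U4, sigma4, Vt4 = U[:, :4], sigma[:4], Vt[:4, :]

# One simple (non-unique) factorization: M_hat = U Sigma, S_hat = V^T.
M_hat = U4 * sigma4      # scales column j of U4 by sigma4[j]
S_hat = Vt4

print(np.allclose(M_hat @ S_hat, Y))     # True
```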

There is no unique factorization of \(\bY\) in general. One simple factorization can be obtained by defining:
$$\widehat{\MM} = \bU \Sigma, \quad \widehat{\SS} = \bV^T.$$

But for any \(4 \times 4\) invertible matrix \(\bA\),
$$\MM = \widehat{\MM} \bA, \quad \SS = \bA^{-1} \widehat{\SS}$$

is also a possible solution since \(\MM \SS = \widehat{\MM} \widehat{\SS} = \bY\). Remember that \(\MM\) is not an arbitrary matrix but represents the rigid motion of an object. There is considerable structure inside the motion matrix. These structural constraints can be used to compute an appropriate \(\bA\) and thus obtain \(\MM\) from \(\widehat{\MM}\). To proceed further, let us break \(\bA\) into two parts
$$\bA = \begin{bmatrix} \bA_R & \ba_t \end{bmatrix}$$

where \(\bA_R \in \RR^{4 \times 3}\) is the rotational component and \(\ba_t \in \RR^4\) is related to translation. We can now write:
$$\MM = \widehat{\MM} \begin{bmatrix} \bA_R & \ba_t \end{bmatrix} = \begin{bmatrix} \widehat{\MM} \bA_R & \widehat{\MM} \ba_t \end{bmatrix}.$$

**Rotational constraints**

Recall that \(\bR_f\) is a rotation matrix; hence its rows have unit norm and are orthogonal to each other. Thus every row of \(\widehat{\MM} \bA_R\) has unit norm, and every pair of rows corresponding to the same frame is orthogonal. This yields the following constraints
$$\begin{aligned}
\widehat{\bm}_{2 f - 1} \bA_R \bA_R^T \widehat{\bm}_{2 f - 1}^T &= 1, \\
\widehat{\bm}_{2 f} \bA_R \bA_R^T \widehat{\bm}_{2 f}^T &= 1, \\
\widehat{\bm}_{2 f - 1} \bA_R \bA_R^T \widehat{\bm}_{2 f}^T &= 0
\end{aligned}$$

where \(\widehat{\bm}_k\) denotes the \(k\)-th row of \(\widehat{\MM}\) and the constraints hold for \(1 \leq f \leq F\). Although the constraints are quadratic in \(\bA_R\), they are linear in the entries of the symmetric matrix \(\bA_R \bA_R^T\). This over-constrained system can be solved for the entries of \(\bA_R\) using least squares techniques.
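As an illustration of how the rotational constraints can be solved, here is a numpy sketch (my own construction, not a reference implementation from the cited papers). It estimates the symmetric matrix \(Q = \bA_R \bA_R^T\), whose 10 free entries enter the constraints linearly, by least squares, and then factors \(Q\) by eigendecomposition:

```python
import numpy as np

def recover_A_R(M_hat):
    """Least squares estimate of A_R from the rotational constraints.

    Solves for the symmetric 4x4 matrix Q = A_R A_R^T (10 unknowns)
    from the 3F linear constraints, then factors Q via its
    eigendecomposition (Q should be PSD with rank 3)."""
    idx = [(i, j) for i in range(4) for j in range(i, 4)]

    def coeffs(m, n):
        # coefficients of m Q n^T with respect to the 10 unknowns q_ij
        return np.array([m[i] * n[j] + (m[j] * n[i] if i != j else 0.0)
                         for (i, j) in idx])

    rows, rhs = [], []
    for f in range(M_hat.shape[0] // 2):
        m1, m2 = M_hat[2 * f], M_hat[2 * f + 1]
        rows += [coeffs(m1, m1), coeffs(m2, m2), coeffs(m1, m2)]
        rhs += [1.0, 1.0, 0.0]
    q, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)

    Q = np.zeros((4, 4))
    for v, (i, j) in zip(q, idx):
        Q[i, j] = Q[j, i] = v
    w, V = np.linalg.eigh(Q)                  # ascending eigenvalues
    return V[:, -3:] * np.sqrt(np.clip(w[-3:], 0.0, None))

# Synthetic check: build a true motion matrix M, mix it with a random
# invertible matrix B, and verify that M_hat @ A_R has unit-norm rows.
rng = np.random.default_rng(0)
F = 12
blocks = []
for _ in range(F):
    R, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random rotation
    d = rng.standard_normal(2)                        # translations d_fi, d_fj
    blocks.append(np.column_stack([R[:2], d]))        # rows (i_f d_fi), (j_f d_fj)
M = np.vstack(blocks)                                 # true motion matrix, 2F x 4
B = rng.standard_normal((4, 4))
M_hat = M @ np.linalg.inv(B)                          # observed ambiguous factor

A_R = recover_A_R(M_hat)
rot = M_hat @ A_R                                     # recovered rotational part
print(np.allclose(np.linalg.norm(rot, axis=1), 1.0))  # True
```

The recovered rows match the true rotations only up to a global \(3 \times 3\) rotation, which is the usual unresolvable ambiguity of structure from motion.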

**Translational constraints**

Recall that the image of the centroid of a set of points under an isometry (rigid motion) is the centroid of the images of the points under the same isometry. The homogeneous coordinates of the centroid in the object coordinate system are \((0, 0, 0, 1)\). The image coordinates of the centroid in the \(f\)-th frame are
$$\bar{u}_f = \frac{1}{S} \sum_{s=1}^{S} u_{f s}, \quad \bar{v}_f = \frac{1}{S} \sum_{s=1}^{S} v_{f s}.$$

Putting back, we obtain
$$\widehat{\MM} \ba_t = \MM \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} \bar{u}_1 \\ \bar{v}_1 \\ \vdots \\ \bar{u}_F \\ \bar{v}_F \end{bmatrix} = \frac{1}{S} \bY \mathbf{1}$$

where \(\mathbf{1} \in \RR^S\) is the vector of all ones.

A least squares solution for \(\ba_t\) is then straightforward.
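The translational least squares step reduces to solving \(\widehat{\MM} \ba_t = \frac{1}{S} \bY \mathbf{1}\). A small numpy sketch of the mechanics (my own stand-in data, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
F, S = 15, 40

# Stand-ins: a rank-4 trajectory matrix Y = M_hat @ S_hat (2F x S).
M_hat = rng.standard_normal((2 * F, 4))
S_hat = rng.standard_normal((4, S))
Y = M_hat @ S_hat

# The centroid's image trajectory is the mean over the columns of Y;
# solve M_hat @ a_t = mean(Y) in the least squares sense.
y_bar = Y.mean(axis=1)
a_t, *_ = np.linalg.lstsq(M_hat, y_bar, rcond=None)

print(np.allclose(M_hat @ a_t, y_bar))   # True
```

Here the fit is exact because \(\bar{\by}\) lies in the column space of \(\widehat{\MM}\); with noisy trajectories the residual would be nonzero and `lstsq` returns the minimizer.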

## 23.5.3. Modeling Motion for Multiple Objects#

The generalization from the motion of one object to multiple objects is straightforward. Let there be \(K\) objects in the scene moving independently[^1]. Let \(S_1, S_2, \dots, S_K\) feature points be tracked for objects \(1, 2, \dots, K\) respectively over \(F\) frames, with \(S = \sum_k S_k\). Let these feature trajectories be collected in a data matrix \(\bY \in \RR^{2F \times S}\). In general, we don’t know which feature point belongs to which object or how many feature points there are for each object. There is at least one feature point for each object (otherwise the object isn’t being tracked at all). We could permute the columns of \(\bY\) via an (unknown) permutation \(\Gamma\) so that the feature points of each object are placed contiguously, giving us
$$\bY^* = \bY \Gamma = \begin{bmatrix} \bY_1 & \bY_2 & \dots & \bY_K \end{bmatrix}.$$

Clearly, each submatrix \(\bY_k\) (\(1 \leq k \leq K\))
which consists of feature trajectories of one object
spans an (up to) 4 dimensional subspace.
Now, the problem
of *motion segmentation* is essentially separating
\(\bY\) into \(\bY_k\) which reduces to a standard
subspace clustering problem.
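One classical way to separate the trajectories (not developed in the text itself) is the shape interaction matrix of Costeira and Kanade: with independent motions and noise-free data, the matrix \(Q = \bV_r \bV_r^T\) built from the first \(r\) right singular vectors of \(\bY\) has zero entries between points belonging to different objects. A numpy sketch with my own synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
F, S1, S2 = 20, 25, 25

def trajectories(rng, F, S):
    # random rank-4 trajectory matrix standing in for one rigid object
    return rng.standard_normal((2 * F, 4)) @ rng.standard_normal((4, S))

# Two independently moving objects, trajectories side by side.
Y = np.hstack([trajectories(rng, F, S1), trajectories(rng, F, S2)])

# Shape interaction matrix: Q[i, j] vanishes when points i and j
# belong to different objects (independent subspaces, no noise).
r = 8  # up to 4 dimensions per object
_, _, Vt = np.linalg.svd(Y, full_matrices=False)
V = Vt[:r].T
Q = np.abs(V @ V.T)

cross = Q[:S1, S1:]                  # interactions across the two objects
print(cross.max() < 1e-10)           # True: near-zero cross-object entries
```

Thresholding \(Q\) and grouping its nonzero blocks recovers the segmentation in this idealized setting; with noise, the modern subspace clustering algorithms discussed elsewhere in this book are preferable.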

Let us dig a bit deeper to see how the motion shape factorization identity changes for the multi-object formulation. Each data submatrix \(\bY_k\) can be factorized as
$$\bY_k = \MM_k \SS_k, \quad \MM_k \in \RR^{2 F \times 4}, \; \SS_k \in \RR^{4 \times S_k}.$$

\(\bY^*\) now has the canonical factorization:
$$\bY^* = \begin{bmatrix} \MM_1 \SS_1 & \dots & \MM_K \SS_K \end{bmatrix} = \begin{bmatrix} \MM_1 & \dots & \MM_K \end{bmatrix} \begin{bmatrix} \SS_1 & & \\ & \ddots & \\ & & \SS_K \end{bmatrix}.$$

If we further denote
$$\MM^* = \begin{bmatrix} \MM_1 & \dots & \MM_K \end{bmatrix}, \quad \SS^* = \begin{bmatrix} \SS_1 & & \\ & \ddots & \\ & & \SS_K \end{bmatrix},$$

then we obtain a factorization similar to the single object case given by
$$\bY^* = \MM^* \SS^* \quad \text{with} \quad \MM^* \in \RR^{2 F \times 4 K}, \; \SS^* \in \RR^{4 K \times S}.$$

Thus, when the segmentation of \(\bY\), i.e. the unknown permutation \(\Gamma\), has been obtained, the sorted data matrix \(\bY^*\) can be factorized into its shape and motion components as above.
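The block structure of the multi-object factorization is easy to verify numerically. A short sketch with my own random stand-ins for the per-object motion and shape matrices:

```python
import numpy as np

rng = np.random.default_rng(4)
F, sizes = 10, [8, 12, 5]          # K = 3 objects with S_k feature points

Ms = [rng.standard_normal((2 * F, 4)) for _ in sizes]   # motion matrices
Ss = [rng.standard_normal((4, s)) for s in sizes]       # shape matrices

# Sorted data matrix: objects' trajectories placed side by side.
Y_star = np.hstack([M @ S for M, S in zip(Ms, Ss)])

# M* stacks the motion matrices; S* is block diagonal in the shapes.
M_star = np.hstack(Ms)                                  # 2F x 4K
S_star = np.zeros((4 * len(sizes), sum(sizes)))
r = c = 0
for S in Ss:
    S_star[r:r + 4, c:c + S.shape[1]] = S
    r += 4
    c += S.shape[1]

print(np.allclose(M_star @ S_star, Y_star))             # True
```

The manual loop could be replaced by `scipy.linalg.block_diag`; it is spelled out here to mirror the block-diagonal structure of \(\SS^*\).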

## 23.5.4. Limitations#

Our discussion so far has established that
the feature trajectories of each moving object span an
(up to) 4-dimensional subspace. There are a number of reasons
why this holds only *approximately* in practice:
perspective distortion of the camera, tracking errors, and
pixel quantization. Thus, a subspace clustering algorithm
should allow for noise and corruption in the data
in real-life applications.

[^1]: Our realization of an object is a set of feature points undergoing the same rotation and translation over a sequence of images. Notions of locality, color, connectivity, etc. play no role in this definition. It is possible that two visually distinct objects undergo the same rotation and translation within a given image sequence. For the purposes of inferring an object from its motion, these two visually distinct objects are treated as one.