
A Tutorial on Rigid Registration

Iterative Closest Point (ICP)

By
Shireen Elhabian, Amal Farag, Aly Farag

University of Louisville, CVIP Lab


March 2009
Outline
• The problem
• Motivation
• Data types
• Mathematical preliminaries
– Some statistics: centroid, variance and covariance
– Inner product
– Transformation matrices
– Eigenvalue problem
– Mean square error
– Rigid transformations (scaling, rotation and translation in 2D and 3D)
– Quaternions
Outline
• Iterative closest point algorithm
– Problem statement
– Main idea of ICP
– Algorithm outline
– Nomenclature
– Finding correspondences
– Alignment calculation
• Finding the translational offset
• Finding the scaling factor
• Finding the best rotation
– Let’s do it …
– Experiments
The Problem
Align two partially-overlapping meshes given an initial guess for the relative transform
The Problem
Motivation
• Shape inspection
• Motion estimation
• Appearance analysis
• Texture Mapping
• Tracking (image: http://blog.wired.com/defense/2007/08/danger-room-inb.html)
Motivation
• Range images registration
Motivation
• Range images registration
Range Scanners
Data Types
• Point sets
• Line segment sets (polylines)
• Implicit curves: f(x,y,z) = 0
• Parametric curves: (x(u), y(u), z(u))
• Triangle sets (meshes)
• Implicit surfaces: s(x,y,z) = 0
• Parametric surfaces: (x(u,v), y(u,v), z(u,v))
Mathematical Preliminaries
Centroid
• The centroid of a data (point) set is the mean of all the data points in the set.
• For a data set A having n points, each denoted by $a_i$, the centroid is given by:

$$\mu_A = \frac{1}{n}\sum_{i=1}^{n} a_i$$
Variance

• A measure of the spread of the data in a data set A with mean $\mu_A$:

$$\sigma_A^2 = \frac{\sum_{i=1}^{n}(a_i - \mu_A)^2}{n-1}$$

• Variance is the original statistical measure of the spread of data.
Covariance
• The covariance matrix gives a measure of similarity between two data sets to be matched.
• If $\mu_A$ and $\mu_B$ are the centroids of the data sets A and B respectively, then the covariance between the two sets is given by:

$$\Sigma_{AB} = \frac{\sum_{i=1}^{n}(a_i - \mu_A)(b_i - \mu_B)}{n-1}$$
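As a quick illustration, a minimal NumPy sketch of the three quantities above for hypothetical paired 3D point sets A and B (the covariance here is taken as the 3×3 matrix of sums of outer products; variance uses squared deviations from the centroid):

```python
import numpy as np

def centroid(points):
    # mu = (1/n) * sum_i a_i
    return points.mean(axis=0)

def variance(points):
    # sum of squared deviations from the centroid, divided by (n - 1)
    deviations = points - centroid(points)
    return np.sum(deviations ** 2) / (len(points) - 1)

def covariance(A, B):
    # sum_i (a_i - mu_A)(b_i - mu_B)^T / (n - 1), for paired point sets
    A0, B0 = A - centroid(A), B - centroid(B)
    return A0.T @ B0 / (len(A) - 1)

# hypothetical paired 3D point sets
A = np.random.rand(100, 3)
B = A + 0.01 * np.random.randn(100, 3)
print(centroid(A), variance(A))
print(covariance(A, B))
```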
Inner Product
• Let a and b be two vectors defined as $a = [a_1, \dots, a_n]^T$ and $b = [b_1, \dots, b_n]^T$.
• Inner (dot) product: $a \cdot b = a^T b = \sum_{i=1}^{n} a_i b_i$
• Length (Euclidean norm) of a vector: $\|a\| = \sqrt{a \cdot a} = \sqrt{\sum_{i=1}^{n} a_i^2}$; a vector a is normalized iff $\|a\| = 1$.
• The angle between two n-dimensional vectors: $\cos\theta = \dfrac{a \cdot b}{\|a\|\,\|b\|}$
• An inner product is a measure of collinearity:
– a and b are orthogonal iff $a \cdot b = 0$
– a and b are collinear iff $a \cdot b = \|a\|\,\|b\|$
• A set of vectors is linearly independent if no vector is a linear combination of the other vectors.
Transformation Matrices
• Consider the following:

$$\begin{bmatrix}2 & 3\\ 2 & 1\end{bmatrix} \begin{bmatrix}3\\2\end{bmatrix} = \begin{bmatrix}12\\8\end{bmatrix} = 4 \begin{bmatrix}3\\2\end{bmatrix}$$

• The square (transformation) matrix scales (3,2) by a factor of 4.
• Now assume we take a multiple of (3,2):

$$2 \times \begin{bmatrix}3\\2\end{bmatrix} = \begin{bmatrix}6\\4\end{bmatrix}, \qquad \begin{bmatrix}2 & 3\\ 2 & 1\end{bmatrix} \begin{bmatrix}6\\4\end{bmatrix} = \begin{bmatrix}24\\16\end{bmatrix} = 4 \begin{bmatrix}6\\4\end{bmatrix}$$
Transformation Matrices
• Scale vector (3,2) by a value 2 to get (6,4)
• Multiply by the square transformation matrix
• And we see that the result is still scaled by 4.
WHY?
A vector consists of both length and direction. Scaling a vector changes only its length, not its direction. This observation about how a matrix transforms certain vectors leads to the notions of eigenvectors and eigenvalues: irrespective of how much we scale (3,2) by, the result (under the given transformation matrix) is always that scaled vector multiplied by 4.
Eigenvalue Problem
• The eigenvalue problem is any problem having the
following form:
$$A\,v = \lambda\,v$$
where A is an m × m matrix, v is an m × 1 non-zero vector and λ is a scalar.
• Any value of λ for which this equation has a solution is called an eigenvalue of A, and the vector v which corresponds to it is called an eigenvector of A.
Eigenvalue Problem
• Going back to our example:

$$A\,v = \begin{bmatrix}2 & 3\\ 2 & 1\end{bmatrix} \begin{bmatrix}3\\2\end{bmatrix} = 4 \begin{bmatrix}3\\2\end{bmatrix} = \lambda\,v$$

• Therefore, (3,2) is an eigenvector of the square matrix A and 4 is an eigenvalue of A.
• The question is: given a matrix A, how can we calculate the eigenvectors and eigenvalues of A?
Calculating Eigenvectors & Eigenvalues

• Simple matrix algebra shows that:

$$A\,v = \lambda\,v \;\Leftrightarrow\; A\,v - \lambda I\,v = 0 \;\Leftrightarrow\; (A - \lambda I)\,v = 0$$

• Finding the roots of $|A - \lambda I| = 0$ gives the eigenvalues, and for each of these eigenvalues there is a corresponding eigenvector.
Example …
Calculating Eigenvectors & Eigenvalues

• Let
$$A = \begin{bmatrix}0 & 1\\ -2 & -3\end{bmatrix}$$
• Then:
$$|A - \lambda I| = \left|\begin{bmatrix}0 & 1\\ -2 & -3\end{bmatrix} - \lambda\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}\right| = \begin{vmatrix}-\lambda & 1\\ -2 & -3-\lambda\end{vmatrix} = (-\lambda)(-3-\lambda) - (-2)(1) = \lambda^2 + 3\lambda + 2$$
• Setting the determinant to 0, we obtain two eigenvalues: λ₁ = −1 and λ₂ = −2.
Calculating Eigenvectors & Eigenvalues

• For λ₁ = −1 the eigenvector satisfies:
$$(A - \lambda_1 I)\,v_1 = 0$$
$$\begin{bmatrix}1 & 1\\ -2 & -2\end{bmatrix} \begin{bmatrix}v_{1:1}\\ v_{1:2}\end{bmatrix} = 0$$
$$v_{1:1} + v_{1:2} = 0 \quad\text{and}\quad -2v_{1:1} - 2v_{1:2} = 0 \;\Rightarrow\; v_{1:1} = -v_{1:2}$$
• Therefore the first eigenvector is any column vector in which the two elements have equal magnitude and opposite sign.
Calculating Eigenvectors & Eigenvalues

• Therefore eigenvector v₁ is
$$v_1 = k_1 \begin{bmatrix}+1\\ -1\end{bmatrix}$$
where k₁ is some constant.
• Similarly, for λ₂ = −2 we find eigenvector v₂:
$$v_2 = k_2 \begin{bmatrix}+1\\ -2\end{bmatrix}$$
where k₂ is some constant.
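A quick check of this worked example with NumPy (a minimal sketch; numpy.linalg.eig returns unit-length eigenvectors, so they appear as normalized versions of k₁[1, −1]ᵀ and k₂[1, −2]ᵀ):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# eigenvalues and eigenvectors (columns) of A
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)     # approximately [-1., -2.]
print(eigenvectors)    # columns proportional to [1, -1] and [1, -2]

# verify A v = lambda v for each eigenpair
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```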
Properties of Eigenvectors and
Eigenvalues
• Eigenvectors can only be found for square matrices and
not every square matrix has eigenvectors.
• Given an m x m matrix (with eigenvectors), we can find
m eigenvectors.
• All eigenvectors of a symmetric* matrix are
perpendicular to each other, no matter how many
dimensions we have.
• In practice eigenvectors are normalized to have unit
length.
*Note: covariance matrices are symmetric!
Mean Square Error
• In statistics, the mean squared error or MSE
of an estimator is one of many ways to quantify
the amount by which an estimator differs from
the true value of the quantity being estimated.
• The mean square error (MSE) between two data sets A and B, each having n points, is given by:

$$\text{MSE} = \frac{1}{n}\sum_{i=1}^{n} \|a_i - b_i\|^2$$

where ‖·‖ denotes the L2 (Euclidean) norm between two data points.
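A minimal sketch of this error measure for two paired NumPy point sets (hypothetical arrays of shape n×3):

```python
import numpy as np

def mean_square_error(A, B):
    # (1/n) * sum_i ||a_i - b_i||^2 over paired points
    return np.mean(np.sum((A - B) ** 2, axis=1))
```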
Rigid Transformations
• Rigid transformations can be classified into:
– Scaling
– Rotation
– Translation
Transformations – Scaling in 2D
• Scaling a coordinate means multiplying each of
its components by a scalar
• Uniform scaling means this scalar is the same
for all components:

× 2
Transformations - Scaling in 2D
• Non-uniform scaling: different scalars per
component:

X × 2, Y × 0.5

• How can we represent this in matrix form?


Transformations - Scaling in 2D
• Scaling operation:
$$\begin{bmatrix}x'\\ y'\end{bmatrix} = \begin{bmatrix}a x\\ b y\end{bmatrix}$$
• Or, in matrix form:
$$\begin{bmatrix}x'\\ y'\end{bmatrix} = \underbrace{\begin{bmatrix}a & 0\\ 0 & b\end{bmatrix}}_{\text{scaling matrix}} \begin{bmatrix}x\\ y\end{bmatrix}$$
Transformations – Rotation in 2D
$$x' = x\cos\theta - y\sin\theta$$
$$y' = x\sin\theta + y\cos\theta$$
Or, in matrix form:
$$\begin{bmatrix}x'\\ y'\end{bmatrix} = \begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix} \begin{bmatrix}x\\ y\end{bmatrix}$$
[figure: point (x, y) rotated by angle θ to (x', y')]
Transformations – Translation in 2D
$$x' = x + t_x, \qquad y' = y + t_y$$
[figure: translation example with $t_x = 2$, $t_y = 1$]
Transformations – Scaling in 3D
$$\begin{bmatrix}x'\\ y'\\ z'\\ 1\end{bmatrix} = \begin{bmatrix}s_x & 0 & 0 & 0\\ 0 & s_y & 0 & 0\\ 0 & 0 & s_z & 0\\ 0 & 0 & 0 & 1\end{bmatrix} \cdot \begin{bmatrix}x\\ y\\ z\\ 1\end{bmatrix}$$
Transformations – Translation in 3D

$$\begin{bmatrix}x'\\ y'\\ z'\\ 1\end{bmatrix} = \begin{bmatrix}1 & 0 & 0 & t_x\\ 0 & 1 & 0 & t_y\\ 0 & 0 & 1 & t_z\\ 0 & 0 & 0 & 1\end{bmatrix} \cdot \begin{bmatrix}x\\ y\\ z\\ 1\end{bmatrix}$$
[figure: point (x, y, z) translated to (x', y', z') by T = (t_x, t_y, t_z)]
Transformations – Rotation in 3D
• How should we specify rotations?
• In 2D, it was always counterclockwise in the xy-
plane.
• In 3D, we have more choices
– xz-plane, yz-plane, an arbitrary plane.

• We could specify these in terms of the vector


perpendicular to the plane of rotation.
– z axis, y-axis, x-axis, arbitrary axis
Rotation in 3D – Euler Angles
Rotation about the z-axis (angle γ):
$$\begin{bmatrix}x'\\ y'\\ z'\\ 1\end{bmatrix} = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0 & 0\\ \sin\gamma & \cos\gamma & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix} \cdot \begin{bmatrix}x\\ y\\ z\\ 1\end{bmatrix}$$
Rotation about the y-axis (angle β):
$$\begin{bmatrix}x'\\ y'\\ z'\\ 1\end{bmatrix} = \begin{bmatrix}\cos\beta & 0 & \sin\beta & 0\\ 0 & 1 & 0 & 0\\ -\sin\beta & 0 & \cos\beta & 0\\ 0 & 0 & 0 & 1\end{bmatrix} \cdot \begin{bmatrix}x\\ y\\ z\\ 1\end{bmatrix}$$
Rotation about the x-axis (angle α):
$$\begin{bmatrix}x'\\ y'\\ z'\\ 1\end{bmatrix} = \begin{bmatrix}1 & 0 & 0 & 0\\ 0 & \cos\alpha & -\sin\alpha & 0\\ 0 & \sin\alpha & \cos\alpha & 0\\ 0 & 0 & 0 & 1\end{bmatrix} \cdot \begin{bmatrix}x\\ y\\ z\\ 1\end{bmatrix}$$
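A minimal NumPy sketch of these three elementary rotations in 3×3 form (without the homogeneous row and column), composed here in one possible z·y·x Euler convention:

```python
import numpy as np

def rot_z(gamma):
    c, s = np.cos(gamma), np.sin(gamma)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(beta):
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_x(alpha):
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# rotate about x, then y, then z
R = rot_z(np.radians(30)) @ rot_y(np.radians(10)) @ rot_x(np.radians(45))
assert np.allclose(R.T @ R, np.eye(3))   # rotation matrices are orthonormal
```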
Quaternions
• A quaternion q can be thought of as:
– A vector with four components: q = [q0,qx,qy,qz]T.
– A composite of a scalar and an ordinary vector: q =
[q0, q].
– A complex number with three different imaginary
parts: q = q0 + iqx+jqy+kqz.
Products of Quaternions
• Multiplication of quaternions can be defined in terms of products of their components.
• Suppose that we let $i^2 = j^2 = k^2 = -1$, $ij = k$, $jk = i$, $ki = j$ and $ji = -k$, $kj = -i$, $ik = -j$.
• Then if $r = r_0 + i r_x + j r_y + k r_z$, we get:
$$rq = (r_0 q_0 - r_x q_x - r_y q_y - r_z q_z) + i(r_0 q_x + r_x q_0 + r_y q_z - r_z q_y) + j(r_0 q_y - r_x q_z + r_y q_0 + r_z q_x) + k(r_0 q_z + r_x q_y - r_y q_x + r_z q_0)$$
• The product qr has a similar form, with six of the signs changed.
Products of Quaternions
• The product of two quaternions can also be expressed as the product of an orthogonal 4×4 matrix and a vector with four components.
• One may choose to expand either the first or the second quaternion in a product into an orthogonal 4×4 matrix as follows:

$$rq = R\,q = \begin{bmatrix} r_0 & -r_x & -r_y & -r_z\\ r_x & r_0 & -r_z & r_y\\ r_y & r_z & r_0 & -r_x\\ r_z & -r_y & r_x & r_0 \end{bmatrix} q \qquad\text{and}\qquad qr = \bar{R}\,q = \begin{bmatrix} r_0 & -r_x & -r_y & -r_z\\ r_x & r_0 & r_z & -r_y\\ r_y & -r_z & r_0 & r_x\\ r_z & r_y & -r_x & r_0 \end{bmatrix} q$$

• Note that $\bar{R}$ differs from $R$ in that the lower-right-hand 3×3 sub-matrix is transposed.
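A minimal sketch of this component-wise product in NumPy, with quaternions stored in the [q0, qx, qy, qz] ordering used above:

```python
import numpy as np

def quat_mult(r, q):
    # Hamilton product rq for quaternions stored as [w, x, y, z]
    r0, rx, ry, rz = r
    q0, qx, qy, qz = q
    return np.array([
        r0*q0 - rx*qx - ry*qy - rz*qz,
        r0*qx + rx*q0 + ry*qz - rz*qy,
        r0*qy - rx*qz + ry*q0 + rz*qx,
        r0*qz + rx*qy - ry*qx + rz*q0,
    ])

# quaternion multiplication is not commutative: rq != qr in general
r = np.array([1.0, 2.0, 3.0, 4.0])
q = np.array([0.5, -1.0, 0.25, 2.0])
print(quat_mult(r, q), quat_mult(q, r))
```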
Dot Product of Quaternions
• Considering a quaternion as a vector of four components, the dot product of two quaternions is the sum of products of corresponding components:
$$r \cdot q = r_0 q_0 + r_x q_x + r_y q_y + r_z q_z$$
• The square of the magnitude of a quaternion is the dot product of the quaternion with itself:
$$\|q\|^2 = q \cdot q = q_0^2 + q_x^2 + q_y^2 + q_z^2$$
• A unit quaternion is a quaternion whose magnitude equals 1, i.e. $q \cdot q = 1$.
Conjugate of Quaternions
• Taking the conjugate of a quaternion negates its imaginary part, thus: $q^* = q_0 - i q_x - j q_y - k q_z$.
• The 4×4 matrix associated with the conjugate of a quaternion is just the transpose of the matrix associated with the quaternion itself, i.e. $Q^T$, where
$$Q = \begin{bmatrix} q_0 & -q_x & -q_y & -q_z\\ q_x & q_0 & -q_z & q_y\\ q_y & q_z & q_0 & -q_x\\ q_z & -q_y & q_x & q_0 \end{bmatrix}$$
Conjugate of Quaternions
• Since Q is an orthogonal matrix, the product with its transpose is a diagonal matrix, that is: $QQ^T = (q \cdot q)I$, where I is the 4×4 identity matrix.
• Correspondingly, the product of q and q* is real, that is: $qq^* = q_0^2 + q_x^2 + q_y^2 + q_z^2 = q \cdot q$
• We immediately conclude that a non-zero quaternion has an inverse:
$$q^{-1} = \frac{q^*}{q \cdot q}$$
• In the case of a unit quaternion (q·q = 1), the inverse is just the conjugate.
Useful Properties of Products
• Since the matrices associated with quaternions are orthogonal, dot products are preserved, that is: $(qv)\cdot(qr) = (q \cdot q)(v \cdot r)$
• Proof:
$$(qv)\cdot(qr) = (Qv)\cdot(Qr) = (Qv)^T(Qr) = v^T Q^T Q\,r = v^T (q\cdot q) I\,r = (q\cdot q)(v\cdot r)$$
• In the case of a unit quaternion: $(qv)\cdot(qr) = v \cdot r$
• A special case follows immediately: $(qr)\cdot(qr) = (q\cdot q)(r\cdot r)$, that is, the magnitude of a product is the product of the magnitudes.
Vectors
• Vectors can be represented by purely imaginary quaternions. If $r = (x, y, z)^T$, we can use the quaternion $r = 0 + i x + j y + k z$.
• Similarly, scalars can be represented by real quaternions.
• Note that the matrices $R$ and $\bar{R}$ associated with a purely imaginary quaternion and its conjugate are skew symmetric, that is:
$$R^T = -R \quad\text{and}\quad \bar{R}^T = -\bar{R}$$
Unit Quaternions and Rotation
• Since rotation changes neither the length of a vector nor the angle between vectors, rotation preserves the dot product (note that a·b = |a||b|cosθ).
• Now, we have already established that multiplication by
a unit quaternion preserves dot products between two
quaternions, that is (qv).(qr) = (q.q) (v.r) = v.r
• And since a vector can be represented as a purely imaginary quaternion, we can represent rotation with unit quaternions if we can find a way of mapping purely imaginary quaternions to purely imaginary quaternions that preserves dot products.
Unit Quaternions and Rotation
• However, we can not use simple multiplication to
represent rotation, since the product of a unit
quaternion and a purely imaginary quaternion is
generally not a purely imaginary quaternion.
• What we can use instead is the composite product
defined as follows:
– Let r be a purely imaginary quaternion representing a 3D
point (vector) such that : r = 0 + i x + j y + k z.
– Let q be a unit quaternion such that q = q0 + iqx+jqy+kqz
– Rotating the vector (point) r by a unit quaternion q can be
defined as r’ = qrq*, where r’ is a purely imaginary
quaternion representing the vector r after rotation by q.
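A minimal sketch of this composite product r' = qrq*, reusing the quat_mult helper sketched after the products slide; the scalar part of the result stays (numerically) zero, so the vector part is the rotated point:

```python
import numpy as np

def quat_conj(q):
    # q* = [q0, -qx, -qy, -qz]
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def rotate_by_quaternion(q, v):
    # embed the 3D point as a purely imaginary quaternion and compute q r q*
    r = np.concatenate(([0.0], v))
    r_rot = quat_mult(quat_mult(q, r), quat_conj(q))
    return r_rot[1:]                      # drop the (zero) scalar part

# unit quaternion for a 90-degree rotation about the z-axis
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
print(rotate_by_quaternion(q, np.array([1.0, 0.0, 0.0])))   # approx. [0, 1, 0]
```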
Unit Quaternions and Rotation
Prove that the composite product leads to a purely imaginary quaternion, hence it can be used to represent rotation.
Proof:
• The objective is to represent the composite product as a matrix multiplied by the vector r. Let's find this matrix in terms of the matrices associated with the unit quaternion q and its conjugate:
$$q r q^* = (Qr)\,q^* = \bar{Q}^T(Qr) = (\bar{Q}^T Q)\,r$$
where
$$Q = \begin{bmatrix} q_0 & -q_x & -q_y & -q_z\\ q_x & q_0 & -q_z & q_y\\ q_y & q_z & q_0 & -q_x\\ q_z & -q_y & q_x & q_0 \end{bmatrix} \quad\text{and}\quad \bar{Q} = \begin{bmatrix} q_0 & -q_x & -q_y & -q_z\\ q_x & q_0 & q_z & -q_y\\ q_y & -q_z & q_0 & q_x\\ q_z & q_y & -q_x & q_0 \end{bmatrix}$$
Here $Q$ and $\bar{Q}$ are the 4×4 matrices corresponding to the unit quaternion q.
Unit Quaternions and Rotation
Proof: cont.

$$\bar{Q}^T Q = \begin{bmatrix} q_0 & q_x & q_y & q_z\\ -q_x & q_0 & -q_z & q_y\\ -q_y & q_z & q_0 & -q_x\\ -q_z & -q_y & q_x & q_0 \end{bmatrix} \begin{bmatrix} q_0 & -q_x & -q_y & -q_z\\ q_x & q_0 & -q_z & q_y\\ q_y & q_z & q_0 & -q_x\\ q_z & -q_y & q_x & q_0 \end{bmatrix}$$

$$= \begin{bmatrix} q\cdot q & 0 & 0 & 0\\ 0 & q_0^2+q_x^2-q_y^2-q_z^2 & 2(q_x q_y - q_0 q_z) & 2(q_x q_z + q_0 q_y)\\ 0 & 2(q_y q_x + q_0 q_z) & q_0^2-q_x^2+q_y^2-q_z^2 & 2(q_y q_z - q_0 q_x)\\ 0 & 2(q_z q_x - q_0 q_y) & 2(q_z q_y + q_0 q_x) & q_0^2-q_x^2-q_y^2+q_z^2 \end{bmatrix}$$

where $q \cdot q = q_0^2 + q_x^2 + q_y^2 + q_z^2$.

∴ $qrq^* = (\bar{Q}^T Q)\,r$ is a purely imaginary quaternion if r is a purely imaginary quaternion: the first row shows that the scalar part of the result is $(q\cdot q)\,r_0 = 0$ whenever $r_0 = 0$.
Unit Quaternions and Rotation
Proof: cont.
• Since q is a quaternion, Q and $\bar{Q}$ are orthogonal matrices by definition.
• Since q is a unit quaternion, Q and $\bar{Q}$ are orthonormal matrices.
• Hence the lower-right-hand 3×3 sub-matrix of $\bar{Q}^T Q$ must also be orthonormal; it is the rotation matrix R that takes r to r', such that r' = Rr.
• The expansion of $\bar{Q}^T Q$ provides an explicit method for computing the orthonormal rotation matrix R from the components of the unit quaternion q.

Iterative Closest Point (ICP) Algorithm
Problem Statement
• Given a model shape which may be represented as:
– point sets, line segment sets, implicit curves, parametric curves, triangle sets, implicit surfaces, parametric surfaces
• Given a scene shape which is represented as a point set; the scene shape may correspond to the model shape.
• It is required to estimate the optimal rotation, translation and scaling that aligns (registers) the scene shape to the model shape.
• The main application is to register digitized (sensed) data from un-fixtured rigid objects with an idealized geometric model prior to shape inspection.
Main Idea of ICP
1. Begin with initial rotation, translation and scaling (initial value
for registration parameters).
2. Fix the model shape and start moving the scene shape by
applying the initial registration parameters. i.e. scale, rotate and
then translate.
3. Compute an error metric that reflects the dissimilarity of the scene shape from the model shape.
4. If the error is small enough, we have correctly aligned the scene shape to the model shape; return the aligned scene shape.
5. Else, calculate new values for the registration parameters and go back to step 2 with the new parameter values.
Algorithm Outline
1. Initialize the registration parameters (R, t, s) and the registration error: Error = ∞.
2. For each point in the scene shape, find the corresponding closest point in the model shape.
3. Calculate the registration parameters given the point correspondences obtained in step 2.
4. Apply the alignment to the scene shape.
5. Calculate the registration error between the currently aligned scene shape and the model shape.
6. If error > threshold, return to step 2; else return the new scene shape (a code sketch of this loop follows the list).
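A minimal Python sketch of this outer loop; find_correspondences and find_alignment are hypothetical helpers sketched after the correspondence and alignment-summary slides below:

```python
import numpy as np

def icp(scene, model, max_iters=50, tol=1e-6):
    # Align the scene point set (Np x 3) to the model point set (Nm x 3).
    P = scene.copy()
    prev_error = np.inf
    for _ in range(max_iters):
        Y = find_correspondences(P, model)              # step 2: closest model points
        s, R, t = find_alignment(P, Y)                  # step 3: registration parameters
        P = s * (R @ P.T).T + t                         # step 4: apply the alignment
        error = np.mean(np.sum((Y - P) ** 2, axis=1))   # step 5: update the error
        if abs(prev_error - error) < tol:               # step 6: stop when converged
            break
        prev_error = error
    return P, error
```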
Nomenclature
• Let the model shape be represented as a set of points $m_i$, where i = 1, 2, …, N_M.
– N_M is the number of points in the model shape and M = {m_i} denotes the model point set.
– Note: $m_i = [m_{xi}, m_{yi}, m_{zi}]^T$ in the 3D case.
• Let the scene shape be represented as a set of points $p_i$, where i = 1, 2, …, N_P.
– N_P is the number of points in the scene shape and P = {p_i} denotes the scene point set.
– Note: $p_i = [p_{xi}, p_{yi}, p_{zi}]^T$ in the 3D case.
Nomenclature
• Registration parameters:
– s: a scalar value which represents the scaling parameter.
– t: a vector representing the translation parameters; in the 3D case $t = [t_x, t_y, t_z]^T$.
– R(·): an operator which applies a rotation to its argument (a point). Note: R(·) will have a different definition according to the way it is used to represent rotation, e.g. Euler angles (θ_x, θ_y, θ_z), a rotation matrix R (a 3×3 orthogonal matrix) or a quaternion q (rotation angle and axis of rotation).
Finding Correspondences
• If correct correspondences are known, we can find
correct relative rotation/translation
Finding Correspondences
• How to find correspondences: User input? Feature
detection? Signatures?
• Alternative: assume closest points correspond
Finding Correspondences
• How to find correspondences: User input? Feature
detection? Signatures?
• Alternative: assume closest points correspond
Finding Correspondences
• Converges if the starting position is "close enough"
Finding Correspondences
• For every point $p_i$ in the scene shape P = {p_i}, i = 1, …, N_P, we search for the closest point $m_j$ in the model shape M to the scene point $p_i$ using the Euclidean distance.
• Given two points $p_i$ and $m_k$, the Euclidean distance can be computed as follows:
$$d(p_i, m_k) = \|p_i - m_k\| = \sqrt{(p_{xi} - m_{xk})^2 + (p_{yi} - m_{yk})^2 + (p_{zi} - m_{zk})^2}$$
• Given a scene point $p_i$ and the model point set M, the Euclidean distance between $p_i$ and M can be found as:
$$d(p_i, M) = \min_{k=1,\dots,N_M} d(p_i, m_k) = \min_{k=1,\dots,N_M} \|p_i - m_k\|$$
Finding Correspondences
• The closest point $m_j \in M$ (model point set) satisfies the equality:
$$d(p_i, m_j) = d(p_i, M)$$
• In other words, j is the index of the model point closest to $p_i$:
$$j = \arg\min_{k=1,\dots,N_M} d(p_i, m_k)$$
• The closest point in the model set M that yields the minimum distance will be denoted by $y_i$; that is, $y_i = m_j$ such that $d(p_i, y_i) = d(p_i, M)$. Now $p_i$ corresponds to $y_i$.
• Let Y denote the resulting set of closest points and C be the closest-point operator, such that Y = C(P, M).
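A minimal sketch of this closest-point search, assuming SciPy's cKDTree for the nearest-neighbour queries (a brute-force double loop over d(p_i, m_k) would give the same result more slowly):

```python
import numpy as np
from scipy.spatial import cKDTree

def find_correspondences(P, M):
    # for each scene point p_i, return the closest model point y_i = m_j
    tree = cKDTree(M)        # build once over the model point set
    _, j = tree.query(P)     # j[i] = argmin_k ||p_i - m_k||
    return M[j]              # Y = C(P, M)
```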
Alignment Calculation
• Given: a set of point correspondences between the scene shape P and the model shape M, where Y ⊆ M denotes the set of closest points to P, such that $y_i$ is the corresponding closest point to $p_i$, where i = 1, 2, …, N_P.
• Required: find the optimal registration parameters (scaling, rotation and translation) which bring the scene points P to the closest model points Y.
• Approach: a quaternion-based method.
Alignment Calculation
• Let N_P be the number of point correspondences.
• The measurement coordinates in the scene and model coordinate systems will be denoted by P = {p_i} and Y = {y_i}, respectively, where i ranges from 1 to N_P.
• We are looking for a transformation of the form:
$$Y = sR(P) + t \tag{1}$$
which registers (aligns) the scene points P to the corresponding model points Y, where s is a scale factor, t is the translational offset and R(P) denotes the rotated version of the points P.
Note: at this point we do not use any particular notation for rotation.
Alignment Calculation
• Rotation is a linear operation and it preserves lengths, such that:
$$\|R(p)\|^2 = \|p\|^2$$
where $\|p\|^2 = p \cdot p$ is the squared length of the vector point p.
• Since correspondences are not perfect, we will not be able to find a scale factor, a translation and a rotation such that the transformation equation (1) is satisfied for each point.
• Instead there will be a residual error for each point pair (correspondence), defined as follows:
$$e_i = y_i - \underbrace{(sR(p_i) + t)}_{\text{transformed version of the scene point } p_i} \tag{2}$$
Alignment Calculation
• We will minimize the total error, defined as the sum of squares of these errors:
$$E = \sum_{i=1}^{N_P} \|e_i\|^2$$
• We will consider the variation of the total error first with translation, then with scale and finally with respect to rotation.
Finding the Translation
• It is useful to refer all points to their centroids, which are defined as follows:
$$\mu_P = \frac{1}{N_P}\sum_{i=1}^{N_P} p_i \quad\text{and}\quad \mu_Y = \frac{1}{N_P}\sum_{i=1}^{N_P} y_i$$
• Let's denote the new points by:
$$p_i' = p_i - \mu_P \quad\text{and}\quad y_i' = y_i - \mu_Y$$
• After this transformation, the point sets become zero mean, i.e.
$$\frac{1}{N_P}\sum_{i=1}^{N_P} p_i' = 0 \quad\text{and}\quad \frac{1}{N_P}\sum_{i=1}^{N_P} y_i' = 0$$
Finding the Translation
• Now the error term can be re-written as follows:
$$e_i = y_i' - (sR(p_i') + t') \tag{3}$$
where
$$t' = t - \mu_Y + sR(\mu_P) \tag{4}$$
and t' is the new translational offset after bringing the points to the origin (i.e. zero mean).

Q: Prove equation (4).
Finding the Translation
Proof:
Set equation (2) equal to equation (3), since the residual errors of the points before and after bringing the points to the origin are equal. Thus:
$$y_i - (sR(p_i) + t) = y_i' - (sR(p_i') + t')$$
Since $y_i' = y_i - \mu_Y$ and $p_i' = p_i - \mu_P$:
$$y_i - sR(p_i) - t = y_i - \mu_Y - sR(p_i - \mu_P) - t'$$
Since rotation is a linear operator:
$$y_i - sR(p_i) - t = y_i - \mu_Y - sR(p_i) + sR(\mu_P) - t'$$
$$\Rightarrow\quad t' = t - \mu_Y + sR(\mu_P)$$
Finding the Translation
Hence the sum of squares of errors becomes:
$$E = \sum_{i=1}^{N_P} \|e_i\|^2 = \sum_{i=1}^{N_P} \|y_i' - sR(p_i') - t'\|^2$$
$$= \sum_{i=1}^{N_P} \Big[\|y_i' - sR(p_i')\|^2 - 2\,t' \cdot \big(y_i' - sR(p_i')\big) + \|t'\|^2\Big]$$
$$= \sum_{i=1}^{N_P} \|y_i' - sR(p_i')\|^2 - 2\,t' \cdot \sum_{i=1}^{N_P} \big(y_i' - sR(p_i')\big) + N_P \|t'\|^2 \tag{5}$$
Now the sum in the middle of the expression in (5) is zero, since the points are referred to the centroid (i.e. they are zero mean, and rotation and scaling do not affect the mean).
Hence only the first and third terms are left:
$$E = \sum_{i=1}^{N_P} \|y_i' - sR(p_i')\|^2 + N_P \|t'\|^2 \tag{6}$$
Finding the Translation
• Remember that we are looking for the optimal translational offset t' which minimizes the total error E.
• The first term in (6) does not depend on the translation, and the second term in (6) cannot be negative, since $\|t'\|^2 \ge 0$ and $N_P > 0$.
• Thus the total error is obviously minimized with t' = 0.
• Therefore the translational offset can be found as follows:
$$t' = t - \mu_Y + sR(\mu_P) = 0 \quad\Rightarrow\quad t = \mu_Y - sR(\mu_P) \tag{7}$$
That is, the translation is just the difference between the model points' centroid and the scaled and rotated scene points' centroid.
We return to this equation to find the translational offset once we have found the scale and rotation.
Finding the Translation
• At this point, we note that the error term can be written as:
$$e_i = y_i' - sR(p_i')$$
• Since t' = 0, the total error to be minimized can be re-written as follows:
$$E = \sum_{i=1}^{N_P} \|y_i' - sR(p_i')\|^2 \tag{8}$$
Finding the Scale
• Expanding the total error defined in (8), we get:
$$E = \sum_{i=1}^{N_P}\Big[\|y_i'\|^2 - 2s\, y_i' \cdot R(p_i') + \|sR(p_i')\|^2\Big] = \sum_{i=1}^{N_P}\|y_i'\|^2 - 2s\sum_{i=1}^{N_P} y_i' \cdot R(p_i') + s^2\sum_{i=1}^{N_P}\|R(p_i')\|^2$$
• Since rotation preserves length, i.e. $\|R(p_i')\|^2 = \|p_i'\|^2$, we obtain:
$$E = \sum_{i=1}^{N_P}\|y_i'\|^2 - 2s\sum_{i=1}^{N_P} y_i' \cdot R(p_i') + s^2\sum_{i=1}^{N_P}\|p_i'\|^2$$
• Let
$$S_Y = \sum_{i=1}^{N_P}\|y_i'\|^2, \qquad S_P = \sum_{i=1}^{N_P}\|p_i'\|^2 \qquad\text{and}\qquad D = \sum_{i=1}^{N_P} y_i' \cdot R(p_i')$$
Finding the Scale
• Hence:
$$E = S_Y - 2sD + s^2 S_P \tag{9}$$
where $S_Y$ and $S_P$ are the sums of the squares of the point lengths relative to their centroids, while D is the sum of the dot products of the corresponding points in the model with the rotated points in the scene.
• Since we are looking for the scaling factor s, completing the square in (9) with respect to s gives:
$$E = \left(s\sqrt{S_P} - \frac{D}{\sqrt{S_P}}\right)^2 + \frac{S_Y S_P - D^2}{S_P} \tag{10}$$
Let's prove (10) …
Finding the Scale
Proof:
Recall that $(s - A)^2 = s^2 - 2As + A^2$.
Start from (9) and group the terms in s:
$$E = S_P s^2 - 2Ds + S_Y$$
Factor out the coefficient of $s^2$:
$$E = S_P\left(s^2 - \frac{2D}{S_P}\,s\right) + S_Y$$
Take half of the coefficient of the s-term and square it, $\left(\dfrac{D}{S_P}\right)^2$, then add and subtract it inside the bracket to complete the square:
$$E = S_P\left(\left(s - \frac{D}{S_P}\right)^2 - \frac{D^2}{S_P^2}\right) + S_Y$$
Finding the Scale
Proof: cont.
Expand and simplify:
$$E = S_P\left(s - \frac{D}{S_P}\right)^2 - \frac{D^2}{S_P} + S_Y$$
Since $S_P = \sum_{i=1}^{N_P}\|p_i'\|^2 \ge 0$, we can take its square root and move it inside the squared term:
$$S_P\left(s - \frac{D}{S_P}\right)^2 = \left(s\sqrt{S_P} - \frac{D}{\sqrt{S_P}}\right)^2$$
Therefore:
$$E = \left(s\sqrt{S_P} - \frac{D}{\sqrt{S_P}}\right)^2 + \frac{S_Y S_P - D^2}{S_P}$$
which is (10).
Finding the Scale
• The total error is minimized with respect to the scaling factor s when the first term in (10) is set to zero, since the second term does not depend on s:
$$\left(s\sqrt{S_P} - \frac{D}{\sqrt{S_P}}\right)^2 = 0 \quad\Rightarrow\quad s\sqrt{S_P} = \frac{D}{\sqrt{S_P}} \quad\Rightarrow\quad s = \frac{D}{S_P}$$
Hence:
$$s = \frac{\sum_{i=1}^{N_P} y_i' \cdot R(p_i')}{\sum_{i=1}^{N_P} \|p_i'\|^2} \tag{11}$$
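A minimal sketch of equation (11), assuming the rotation matrix R has already been found and the point sets are already centred (zero mean):

```python
import numpy as np

def scale_factor(P_centered, Y_centered, R):
    # s = sum_i y_i' . R(p_i') / sum_i ||p_i'||^2   (equation 11)
    rotated = (R @ P_centered.T).T
    D = np.sum(Y_centered * rotated)    # sum of dot products
    S_P = np.sum(P_centered ** 2)       # sum of squared lengths
    return D / S_P
```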
Symmetry in Scale
• If we exchange the roles of the scene and the model points, as recommended by Horn [2], then instead of finding the best fit to the transformation
$$y_i = sR(p_i) + t$$
we find the best fit to the inverse transformation
$$p_i = \bar{s}\,R^T(y_i) + \bar{t}$$
• Treating the two point sets symmetrically in this way, the scale factor becomes:
$$s = \left(\frac{\sum_{i=1}^{N_P}\|y_i'\|^2}{\sum_{i=1}^{N_P}\|p_i'\|^2}\right)^{1/2}$$
Finding the Rotation
• Recall (10):
$$E = \left(s\sqrt{S_P} - \frac{D}{\sqrt{S_P}}\right)^2 + \frac{S_Y S_P - D^2}{S_P}$$
• To minimize the error with respect to scaling, we set the first term to zero, since the second term does not depend on scaling:
$$\left(s\sqrt{S_P} - \frac{D}{\sqrt{S_P}}\right)^2 = 0$$
• Hence, the error term can now be re-written as:
$$E = \frac{S_Y S_P - D^2}{S_P} \tag{12}$$
where
$$S_Y = \sum_{i=1}^{N_P}\|y_i'\|^2, \qquad S_P = \sum_{i=1}^{N_P}\|p_i'\|^2 \qquad\text{and}\qquad D = \sum_{i=1}^{N_P} y_i' \cdot R(p_i')$$
• Since $S_Y \ge 0$ and $S_P \ge 0$, the product $S_Y S_P \ge 0$ and it does not depend on the rotation, i.e. it is constant with respect to R(·); $D^2 \ge 0$ is the only part of the error expression that depends on R(·).
Finding the Rotation
• With the scale term set to zero, the total error is:
$$E = \frac{S_Y S_P - D^2}{S_P}$$
• Thus, to minimize E we need to maximize $D^2$, i.e. maximize
$$D = \sum_{i=1}^{N_P} y_i' \cdot R(p_i')$$
which is the sum of the dot products of the model points and the rotated scene points.
Finding the Rotation
• This maximization can be interpreted geometrically as follows:
$$y_i' \cdot R(p_i') = \|y_i'\|\,\|R(p_i')\|\cos\theta$$
where θ is the angle between a model point and the corresponding rotated scene point.
• To obtain the optimal rotation, θ should be zero, so that cos θ = 1, the maximum value attained by the cosine function.
• Since $\|y_i'\| \ge 0$ and $\|R(p_i')\| \ge 0$, having θ = 0 leads to the maximum value of $y_i' \cdot R(p_i')$.
• Therefore, maximizing D implicitly means minimizing the angles between the model points and the rotated scene points.
Finding the Rotation
Representation of rotation:
• There are many ways to represent rotation, including Euler angles, axis and angle, orthonormal matrices and Hamilton's quaternions.
• Orthonormal matrices have been used most often in photogrammetry and robotics. However, there are a number of advantages to the unit-quaternion notation.
• Also, unit quaternions are closely allied to the geometrically intuitive axis-and-angle notation.
• Here we solve the problem of finding the rotation that maximizes
$$D = \sum_{i=1}^{N_P} y_i' \cdot R(p_i')$$
by using unit quaternions.
Finding the Rotation
• We have to find the unit quaternion q that maximizes:
$$D = \sum_{i=1}^{N_P} y_i' \cdot R(p_i') = \sum_{i=1}^{N_P} y_i' \cdot (q p_i' q^*) = \sum_{i=1}^{N_P} (q p_i' q^*) \cdot y_i' = \sum_{i=1}^{N_P} (q p_i') \cdot (y_i' q)$$
• Suppose that $p_i' = [p_{xi}', p_{yi}', p_{zi}']^T$ and $y_i' = [y_{xi}', y_{yi}', y_{zi}']^T$, treated as purely imaginary quaternions. Then
$$q p_i' = \bar{P}_i\,q = \begin{bmatrix} 0 & -p_{xi}' & -p_{yi}' & -p_{zi}'\\ p_{xi}' & 0 & p_{zi}' & -p_{yi}'\\ p_{yi}' & -p_{zi}' & 0 & p_{xi}'\\ p_{zi}' & p_{yi}' & -p_{xi}' & 0 \end{bmatrix} q \qquad\text{while}\qquad y_i'\,q = Y_i\,q = \begin{bmatrix} 0 & -y_{xi}' & -y_{yi}' & -y_{zi}'\\ y_{xi}' & 0 & -y_{zi}' & y_{yi}'\\ y_{yi}' & y_{zi}' & 0 & -y_{xi}'\\ y_{zi}' & -y_{yi}' & y_{xi}' & 0 \end{bmatrix} q$$
Finding the Rotation
• Note that $\bar{P}_i$ and $Y_i$ are skew symmetric as well as orthogonal, since they are associated with purely imaginary quaternions.
• The sum that we have to maximize can now be written as:
$$D = \sum_{i=1}^{N_P} (q p_i') \cdot (y_i' q) = \sum_{i=1}^{N_P} (\bar{P}_i q) \cdot (Y_i q) = \sum_{i=1}^{N_P} (\bar{P}_i q)^T (Y_i q) = \sum_{i=1}^{N_P} q^T \bar{P}_i^T Y_i\,q = q^T\left(\sum_{i=1}^{N_P} \bar{P}_i^T Y_i\right) q = q^T\left(\sum_{i=1}^{N_P} N_i\right) q = q^T N q$$
where $N_i = \bar{P}_i^T Y_i$ and $N = \sum_{i=1}^{N_P} N_i$.
Finding the Rotation
$$N_i = \bar{P}_i^T Y_i = \begin{bmatrix} 0 & p_{xi}' & p_{yi}' & p_{zi}'\\ -p_{xi}' & 0 & -p_{zi}' & p_{yi}'\\ -p_{yi}' & p_{zi}' & 0 & -p_{xi}'\\ -p_{zi}' & -p_{yi}' & p_{xi}' & 0 \end{bmatrix} \begin{bmatrix} 0 & -y_{xi}' & -y_{yi}' & -y_{zi}'\\ y_{xi}' & 0 & -y_{zi}' & y_{yi}'\\ y_{yi}' & y_{zi}' & 0 & -y_{xi}'\\ y_{zi}' & -y_{yi}' & y_{xi}' & 0 \end{bmatrix}$$

$$= \begin{bmatrix} p_{xi}'y_{xi}' + p_{yi}'y_{yi}' + p_{zi}'y_{zi}' & p_{yi}'y_{zi}' - p_{zi}'y_{yi}' & -p_{xi}'y_{zi}' + p_{zi}'y_{xi}' & p_{xi}'y_{yi}' - p_{yi}'y_{xi}'\\ p_{yi}'y_{zi}' - p_{zi}'y_{yi}' & p_{xi}'y_{xi}' - p_{zi}'y_{zi}' - p_{yi}'y_{yi}' & p_{xi}'y_{yi}' + p_{yi}'y_{xi}' & p_{xi}'y_{zi}' + p_{zi}'y_{xi}'\\ p_{zi}'y_{xi}' - p_{xi}'y_{zi}' & p_{yi}'y_{xi}' + p_{xi}'y_{yi}' & p_{yi}'y_{yi}' - p_{zi}'y_{zi}' - p_{xi}'y_{xi}' & p_{yi}'y_{zi}' + p_{zi}'y_{yi}'\\ p_{xi}'y_{yi}' - p_{yi}'y_{xi}' & p_{zi}'y_{xi}' + p_{xi}'y_{zi}' & p_{zi}'y_{yi}' + p_{yi}'y_{zi}' & p_{zi}'y_{zi}' - p_{yi}'y_{yi}' - p_{xi}'y_{xi}' \end{bmatrix}$$
Finding the Rotation
$$N = \sum_{i=1}^{N_P} N_i = \begin{bmatrix} S_{xx}+S_{yy}+S_{zz} & S_{yz}-S_{zy} & -S_{xz}+S_{zx} & S_{xy}-S_{yx}\\ -S_{zy}+S_{yz} & S_{xx}-S_{zz}-S_{yy} & S_{xy}+S_{yx} & S_{xz}+S_{zx}\\ S_{zx}-S_{xz} & S_{yx}+S_{xy} & S_{yy}-S_{zz}-S_{xx} & S_{yz}+S_{zy}\\ -S_{yx}+S_{xy} & S_{zx}+S_{xz} & S_{zy}+S_{yz} & S_{zz}-S_{yy}-S_{xx} \end{bmatrix}$$

where
$$S_{xx} = \sum_{i=1}^{N_P} p_{xi}'y_{xi}', \qquad S_{xy} = \sum_{i=1}^{N_P} p_{xi}'y_{yi}', \qquad S_{xz} = \sum_{i=1}^{N_P} p_{xi}'y_{zi}', \;\dots\; \text{etc.}$$
Finding the Rotation
• Hence we can define the S-matrix whose elements are sums of products of coordinates measured in the scene shape with coordinates measured in the model shape, such that:
$$S = \begin{bmatrix} S_{xx} & S_{xy} & S_{xz}\\ S_{yx} & S_{yy} & S_{yz}\\ S_{zx} & S_{zy} & S_{zz} \end{bmatrix}$$
• Note that the scene and model points were brought to the origin by subtracting their centroids.
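A minimal NumPy sketch of these two matrices, assuming P_centered and Y_centered are the zero-mean scene and model correspondences (N_P × 3 arrays):

```python
import numpy as np

def build_N(P_centered, Y_centered):
    # S[a, b] = sum_i p'_{a,i} * y'_{b,i}  for a, b in {x, y, z}
    S = P_centered.T @ Y_centered
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    # the 4x4 symmetric matrix N built from the nine sums
    return np.array([
        [Sxx + Syy + Szz, Syz - Szy,       Szx - Sxz,       Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz, Sxy + Syx,       Sxz + Szx],
        [Szx - Sxz,       Sxy + Syx,       Syy - Sxx - Szz, Syz + Szy],
        [Sxy - Syx,       Sxz + Szx,       Syz + Szy,       Szz - Sxx - Syy],
    ])
```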
Finding the Rotation
Eigenvector Maximizes Matrix Product:
• It can be shown that the unit quaternion that maximizes $q^T N q$ is the eigenvector corresponding to the most positive eigenvalue of the matrix N.
Proof:
• To find the rotation that minimizes the sum of squares of errors, we have to find the quaternion q that maximizes $q^T N q$ subject to the constraint that q·q = 1 (a unit quaternion).
• The symmetric 4×4 matrix N has four real eigenvalues, say λ₁, λ₂, λ₃ and λ₄.
• A corresponding set of orthogonal unit eigenvectors v₁, v₂, v₃ and v₄ can be constructed such that $N v_i = \lambda_i v_i$, where i = 1, 2, 3, 4.
Finding the Rotation
Proof: cont.
• The eigenvectors span the 4D space, so an arbitrary quaternion q can be written as a linear combination of the form:
$$q = \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 + \alpha_4 v_4$$
• Since the eigenvectors are orthogonal, we have:
$$q \cdot q = \alpha_1^2 + \alpha_2^2 + \alpha_3^2 + \alpha_4^2$$
• We know that this has to be equal to one, since we are looking for a unit quaternion.
• Now
$$N q = \alpha_1\lambda_1 v_1 + \alpha_2\lambda_2 v_2 + \alpha_3\lambda_3 v_3 + \alpha_4\lambda_4 v_4$$
• We conclude that
$$q^T N q = q \cdot (Nq) = \alpha_1^2\lambda_1 + \alpha_2^2\lambda_2 + \alpha_3^2\lambda_3 + \alpha_4^2\lambda_4$$
Finding the Rotation
Proof: cont.
• Now suppose we have arranged the eigenvalues in order such that λ₁ ≥ λ₂ ≥ λ₃ ≥ λ₄.
• Then we have:
$$q^T N q = \alpha_1^2\lambda_1 + \alpha_2^2\lambda_2 + \alpha_3^2\lambda_3 + \alpha_4^2\lambda_4 \le \alpha_1^2\lambda_1 + \alpha_2^2\lambda_1 + \alpha_3^2\lambda_1 + \alpha_4^2\lambda_1 = (\alpha_1^2 + \alpha_2^2 + \alpha_3^2 + \alpha_4^2)\,\lambda_1 = (q \cdot q)\,\lambda_1 = \lambda_1$$
Finding the Rotation
Proof: cont.
• We need to maximize $q^T N q$, but $q^T N q \le \lambda_1$, so it is bounded above by λ₁, the largest eigenvalue.
• Thus the maximum of $q^T N q$ is attained when $q^T N q = \lambda_1$.
• This only happens if α₁ = 1 and α₂ = α₃ = α₄ = 0, i.e.:
$$q = \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 + \alpha_4 v_4 = v_1$$
which is the eigenvector corresponding to the largest positive eigenvalue λ₁.
Finding Alignment - Summary
• We can now summarize the algorithm for finding the rotation, scaling factor and translational offset, given pairs of point correspondences, as follows (a code sketch of these steps follows step 8 below):
1. Find the centroids of the two point sets in the scene and model shapes:
$$\mu_P = \frac{1}{N_P}\sum_{i=1}^{N_P} p_i \quad\text{and}\quad \mu_Y = \frac{1}{N_P}\sum_{i=1}^{N_P} y_i$$
2. Compute the point coordinates relative to their centroids:
$$p_i' = p_i - \mu_P \quad\text{and}\quad y_i' = y_i - \mu_Y$$
3. For each pair of points {p_i', y_i'}, compute the nine possible products of the components of the two vectors p_i' and y_i'. Then add them up to obtain S_xx, S_xy, …, S_zz. These nine totals contain all the information needed to find the solution:
$$S_{xx} = \sum_{i=1}^{N_P} p_{xi}'y_{xi}', \qquad S_{xy} = \sum_{i=1}^{N_P} p_{xi}'y_{yi}', \qquad S_{xz} = \sum_{i=1}^{N_P} p_{xi}'y_{zi}', \;\dots\; \text{etc.}$$
Finding Alignment - Summary
4. Compute the ten independent elements of the 4×4 symmetric matrix N by combining the sums obtained in step 3 as follows:
$$N = \begin{bmatrix} S_{xx}+S_{yy}+S_{zz} & S_{yz}-S_{zy} & -S_{xz}+S_{zx} & S_{xy}-S_{yx}\\ -S_{zy}+S_{yz} & S_{xx}-S_{zz}-S_{yy} & S_{xy}+S_{yx} & S_{xz}+S_{zx}\\ S_{zx}-S_{xz} & S_{yx}+S_{xy} & S_{yy}-S_{zz}-S_{xx} & S_{yz}+S_{zy}\\ -S_{yx}+S_{xy} & S_{zx}+S_{xz} & S_{zy}+S_{yz} & S_{zz}-S_{yy}-S_{xx} \end{bmatrix}$$
5. Find the eigenvalues and eigenvectors of N.
6. The quaternion q representing the rotation is the eigenvector corresponding to the largest positive eigenvalue of N.
Finding Alignment - Summary
7. Compute the scaling factor as follows:
$$s = \sum_{i=1}^{N_P} y_i' \cdot (q p_i' q^*) \Big/ \sum_{i=1}^{N_P} \|p_i'\|^2$$
where $q p_i' q^* = (\bar{Q}^T Q)\,p_i'$ and
$$\bar{Q}^T Q = \begin{bmatrix} q\cdot q & 0 & 0 & 0\\ 0 & q_0^2+q_x^2-q_y^2-q_z^2 & 2(q_x q_y - q_0 q_z) & 2(q_x q_z + q_0 q_y)\\ 0 & 2(q_y q_x + q_0 q_z) & q_0^2-q_x^2+q_y^2-q_z^2 & 2(q_y q_z - q_0 q_x)\\ 0 & 2(q_z q_x - q_0 q_y) & 2(q_z q_y + q_0 q_x) & q_0^2-q_x^2-q_y^2+q_z^2 \end{bmatrix}, \qquad q \cdot q = q_0^2 + q_x^2 + q_y^2 + q_z^2$$
8. Compute the translational offset as follows:
$$t = \mu_Y - s\,(q\,\mu_P\,q^*) = \mu_Y - s\,(\bar{Q}^T Q)\,\mu_P$$
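A minimal NumPy sketch of steps 1–8 (the hypothetical find_alignment used in the ICP loop sketch earlier); the unit quaternion comes from numpy.linalg.eigh applied to the N of the build_N sketch above, and the rotation matrix is the lower-right 3×3 block of Q̄ᵀQ:

```python
import numpy as np

def quat_to_rotation_matrix(q):
    # lower-right 3x3 block of (Qbar^T Q) for a unit quaternion [q0, qx, qy, qz]
    q0, qx, qy, qz = q
    return np.array([
        [q0*q0 + qx*qx - qy*qy - qz*qz, 2*(qx*qy - q0*qz),             2*(qx*qz + q0*qy)],
        [2*(qy*qx + q0*qz),             q0*q0 - qx*qx + qy*qy - qz*qz, 2*(qy*qz - q0*qx)],
        [2*(qz*qx - q0*qy),             2*(qz*qy + q0*qx),             q0*q0 - qx*qx - qy*qy + qz*qz],
    ])

def find_alignment(P, Y):
    # steps 1-2: centroids and centred coordinates
    mu_P, mu_Y = P.mean(axis=0), Y.mean(axis=0)
    Pc, Yc = P - mu_P, Y - mu_Y
    # steps 3-4: the nine sums S_ab and the 4x4 matrix N (see the earlier build_N sketch)
    N = build_N(Pc, Yc)
    # steps 5-6: unit quaternion = eigenvector of the most positive eigenvalue
    eigvals, eigvecs = np.linalg.eigh(N)   # eigh: ascending eigenvalues of a symmetric matrix
    q = eigvecs[:, -1]
    R = quat_to_rotation_matrix(q)
    # step 7: scaling factor  s = sum_i y_i'.R(p_i') / sum_i ||p_i'||^2
    s = np.sum(Yc * (R @ Pc.T).T) / np.sum(Pc ** 2)
    # step 8: translational offset  t = mu_Y - s R(mu_P)
    t = mu_Y - s * (R @ mu_P)
    return s, R, t
```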
Let’s do it …
Iterative Closest Point Algorithm
• Initialization
• Find correspondences
• Find alignment
• Apply alignment
• Compute residual error
Function Prototype
Initializations …
ICP Loop: Finding Correspondences

$$d(p_i, m_k) = \sqrt{(p_{xi} - m_{xk})^2 + (p_{yi} - m_{yk})^2 + (p_{zi} - m_{zk})^2}$$
$$j = \arg\min_{k=1,\dots,N_M} d(p_i, m_k)$$

Loop to be continued …
ICP Loop: Finding/Applying Alignment
Explanation to follow.
$$e_i = y_i - (sRp_i + t), \quad i = 1, 2, \dots, N_P$$
$$E = \sum_{i=1}^{N_P} \|e_i\|^2$$
Let’s do it …
Finding Alignment
• Zero mean point sets.
• Quaternion computation.
• Rotation matrix computation.
• Scaling factor computation.
• Translational offset computation.
• Residual error computation.
Function Prototype
Test Given Point Sets
Zero Mean Point Sets
$$\mu_P = \frac{1}{N_P}\sum_{i=1}^{N_P} p_i, \qquad \mu_Y = \frac{1}{N_P}\sum_{i=1}^{N_P} y_i$$
$$p_i' = p_i - \mu_P, \qquad y_i' = y_i - \mu_Y$$
Quaternion Computation
Rotation Matrix Computation

$$\bar{Q} = \begin{bmatrix} q_0 & -q_x & -q_y & -q_z\\ q_x & q_0 & q_z & -q_y\\ q_y & -q_z & q_0 & q_x\\ q_z & q_y & -q_x & q_0 \end{bmatrix}, \qquad Q = \begin{bmatrix} q_0 & -q_x & -q_y & -q_z\\ q_x & q_0 & -q_z & q_y\\ q_y & q_z & q_0 & -q_x\\ q_z & -q_y & q_x & q_0 \end{bmatrix}$$
Scaling Factor Computation

$$s = \left(\frac{\sum_{i=1}^{N_P}\|y_i'\|^2}{\sum_{i=1}^{N_P}\|p_i'\|^2}\right)^{1/2}$$
Translation Offset Computation

$$t = \mu_Y - s\,(\bar{Q}^T Q)\,\mu_P = \mu_Y - sR\,\mu_P$$
Residual Error

$$e_i = y_i - (sRp_i + t), \quad i = 1, 2, \dots, N_P$$
$$E = \sum_{i=1}^{N_P} \|e_i\|^2$$
Experiments
Registering Line to Line
Registering Line to Line – with noise
Registering Line to Polygon
Registering Line to Polygon – with noise
Registering Circle to Polygon
Registering Circle to Circle
Registering Polygon to Circle
Registering Triangle to Triangle
Registering Line to Triangle
Registering Line to Triangle – with noise
Iterative Closest Point (ICP) Algorithm

Some Proofs …
A Tutorial on Rigid Registration

Iterative Closest Point (ICP)

Some Proofs …

Aly A. Farag
University of Louisville
Acknowledgements: help with these slides was provided by Amal Farag and Shireen Elhabian.
Prove that quaternion multiplication preserves the dot product, i.e. $(qv)\cdot(qr) = (q\cdot q)(v\cdot r)$. What happens in the case where q is a unit quaternion?

• Proof:
$$(qv)\cdot(qr) = (Qv)\cdot(Qr) = (Qv)^T(Qr) = v^T Q^T Q\,r = v^T (q\cdot q) I\,r = (q\cdot q)(v\cdot r)$$
• In the case of a unit quaternion: $(qv)\cdot(qr) = v \cdot r$
• A special case follows immediately: $(qr)\cdot(qr) = (q\cdot q)(r\cdot r)$, that is, the magnitude of a product is the product of the magnitudes.
Prove that the composite product leads to a purely imaginary quaternion, hence it can be used to represent rotation, i.e. rotating the vector (point) r by a unit quaternion q can be defined as r' = qrq*, where r' is a purely imaginary quaternion representing the vector r after rotation by q.
Proof:
• The objective is to represent the composite product as a matrix multiplied by the vector r. Let's find this matrix in terms of the matrices associated with the unit quaternion q and its conjugate:
$$q r q^* = (Qr)\,q^* = \bar{Q}^T(Qr) = (\bar{Q}^T Q)\,r$$
where
$$Q = \begin{bmatrix} q_0 & -q_x & -q_y & -q_z\\ q_x & q_0 & -q_z & q_y\\ q_y & q_z & q_0 & -q_x\\ q_z & -q_y & q_x & q_0 \end{bmatrix} \quad\text{and}\quad \bar{Q} = \begin{bmatrix} q_0 & -q_x & -q_y & -q_z\\ q_x & q_0 & q_z & -q_y\\ q_y & -q_z & q_0 & q_x\\ q_z & q_y & -q_x & q_0 \end{bmatrix}$$
Here $Q$ and $\bar{Q}$ are the 4×4 matrices corresponding to the unit quaternion q.
Proof: cont.

$$\bar{Q}^T Q = \begin{bmatrix} q_0 & q_x & q_y & q_z\\ -q_x & q_0 & -q_z & q_y\\ -q_y & q_z & q_0 & -q_x\\ -q_z & -q_y & q_x & q_0 \end{bmatrix} \begin{bmatrix} q_0 & -q_x & -q_y & -q_z\\ q_x & q_0 & -q_z & q_y\\ q_y & q_z & q_0 & -q_x\\ q_z & -q_y & q_x & q_0 \end{bmatrix}$$

$$= \begin{bmatrix} q\cdot q & 0 & 0 & 0\\ 0 & q_0^2+q_x^2-q_y^2-q_z^2 & 2(q_x q_y - q_0 q_z) & 2(q_x q_z + q_0 q_y)\\ 0 & 2(q_y q_x + q_0 q_z) & q_0^2-q_x^2+q_y^2-q_z^2 & 2(q_y q_z - q_0 q_x)\\ 0 & 2(q_z q_x - q_0 q_y) & 2(q_z q_y + q_0 q_x) & q_0^2-q_x^2-q_y^2+q_z^2 \end{bmatrix}$$

where $q \cdot q = q_0^2 + q_x^2 + q_y^2 + q_z^2$.

∴ $qrq^* = (\bar{Q}^T Q)\,r$ is a purely imaginary quaternion if r is a purely imaginary quaternion.
Proof: cont.
• Since q is a quaternion, Q and $\bar{Q}$ are orthogonal matrices by definition.
• Since q is a unit quaternion, Q and $\bar{Q}$ are orthonormal matrices.
• Hence the lower-right-hand 3×3 sub-matrix of $\bar{Q}^T Q$ must also be orthonormal; it is the rotation matrix R that takes r to r', such that r' = Rr.
• The expansion of $\bar{Q}^T Q$ provides an explicit method for computing the orthonormal rotation matrix R from the components of the unit quaternion q.

Prove that the unit quaternion that maximizes $q^T N q$ is the eigenvector corresponding to the most positive eigenvalue of the matrix N.
Proof:
• To find the rotation that minimizes the sum of squares of errors, we have to find the quaternion q that maximizes $q^T N q$ subject to the constraint that q·q = 1 (a unit quaternion).
• The symmetric 4×4 matrix N has four real eigenvalues, say λ₁, λ₂, λ₃ and λ₄.
• A corresponding set of orthogonal unit eigenvectors v₁, v₂, v₃ and v₄ can be constructed such that $N v_i = \lambda_i v_i$, where i = 1, 2, 3, 4.
Proof: cont.
• The eigenvectors span the 4D space, so an arbitrary quaternion q can be written as a linear combination of the form:
$$q = \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 + \alpha_4 v_4$$
• Since the eigenvectors are orthogonal, we have:
$$q \cdot q = \alpha_1^2 + \alpha_2^2 + \alpha_3^2 + \alpha_4^2$$
• We know that this has to be equal to one, since we are looking for a unit quaternion.
• Now
$$N q = \alpha_1\lambda_1 v_1 + \alpha_2\lambda_2 v_2 + \alpha_3\lambda_3 v_3 + \alpha_4\lambda_4 v_4$$
• We conclude that
$$q^T N q = q \cdot (Nq) = \alpha_1^2\lambda_1 + \alpha_2^2\lambda_2 + \alpha_3^2\lambda_3 + \alpha_4^2\lambda_4$$
Proof: cont.
• Now suppose we have arranged the eigenvalues in order such that λ₁ ≥ λ₂ ≥ λ₃ ≥ λ₄.
• Then we have:
$$q^T N q = \alpha_1^2\lambda_1 + \alpha_2^2\lambda_2 + \alpha_3^2\lambda_3 + \alpha_4^2\lambda_4 \le \alpha_1^2\lambda_1 + \alpha_2^2\lambda_1 + \alpha_3^2\lambda_1 + \alpha_4^2\lambda_1 = (\alpha_1^2 + \alpha_2^2 + \alpha_3^2 + \alpha_4^2)\,\lambda_1 = (q \cdot q)\,\lambda_1 = \lambda_1$$
Proof: cont.
• We need to maximize $q^T N q$, but $q^T N q \le \lambda_1$, so it is bounded above by λ₁, the largest eigenvalue.
• Thus the maximum of $q^T N q$ is attained when $q^T N q = \lambda_1$.
• This only happens if α₁ = 1 and α₂ = α₃ = α₄ = 0, i.e.:
$$q = \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 + \alpha_4 v_4 = v_1$$
which is the eigenvector corresponding to the largest positive eigenvalue λ₁.

References
1. P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239-256, Feb. 1992.

2. B. K. P. Horn, "Closed-form solution of absolute orientation using unit quaternions," Journal of the Optical Society of America A, vol. 4, no. 4, pp. 629-642, Apr. 1987.

3. www.cs.virginia.edu/~gfx/Courses/2004/Intro.Spring.04/Lectures/lecture05.ppt

4. gmm.fsksm.utm.my/~graphics/files/Week9-3D_Transformations.ppt

5. www.cs.tau.ac.il/~dcor/Graphics/adv-slides/ICP.ppt
Thank You
