Elhabian ICP09
Registration
Mean

µ_A = (1/n) Σ_{i=1}^{n} a_i

Variance

σ_A² = Σ_{i=1}^{n} (a_i − µ_A)² / (n − 1)

• Variance is the classical statistical measure of the spread of data.
Covariance
• The covariance matrix gives a measure of the similarity between two data sets to be matched.
• If µ_A and µ_B are the centroids of the data sets A and B respectively, then the covariance matrix between the two sets is given by:

Σ_AB = Σ_{i=1}^{n} (a_i − µ_A)(b_i − µ_B) / (n − 1)
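The centroid and covariance computations above can be sketched in Python/NumPy (the point sets here are illustrative, not from the slides):

```python
import numpy as np

# Two corresponding 2D point sets, one point per row (illustrative data).
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
B = np.array([[2.0, 1.0], [4.0, 3.0], [6.0, 5.0]])

mu_A = A.mean(axis=0)            # centroid of A
mu_B = B.mean(axis=0)            # centroid of B

n = A.shape[0]
# Covariance matrix between the two sets (unbiased, divides by n - 1).
Sigma_AB = (A - mu_A).T @ (B - mu_B) / (n - 1)
```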
Inner Product
• Let a and b be two vectors defined as a = [a_1, …, a_n]ᵀ and b = [b_1, …, b_n]ᵀ.
• Inner (dot) product: a · b = aᵀb = Σ_{i=1}^{n} a_i b_i
• Norm: ||a||² = a · a = aᵀa = Σ_{i=1}^{n} a_i²
• The angle between two n-dimensional vectors: cos θ = (a · b) / (||a|| ||b||)
• An inner product is a measure of collinearity:
  – a and b are orthogonal iff a · b = 0

Example:

[ 2  3 ] [ 6 ]   [ 24 ]       [ 6 ]
[ 2  1 ] [ 4 ] = [ 16 ] = 4 × [ 4 ]
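A quick NumPy check of the dot product, the angle formula, and the matrix example above (a sketch; the numbers are taken from the example):

```python
import numpy as np

a = np.array([6.0, 4.0])
b = np.array([3.0, 2.0])

dot = a @ b                                        # inner product a . b
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))

# The example above: multiplying (6,4) by this matrix scales it by 4.
T = np.array([[2.0, 3.0],
              [2.0, 1.0]])
result = T @ a                                     # (24, 16) = 4 * (6, 4)
```

Since (6,4) = 2·(3,2), the two vectors are collinear and cos θ = 1.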
Transformation Matrices
• Scale the vector (3,2) by a value 2 to get (6,4).
• Multiply by the square transformation matrix above.
• We see that the result, (24,16), is still the input scaled by 4. WHY?

A vector consists of both a length and a direction. Scaling a vector only changes its length, not its direction. This is an important observation about how matrices transform vectors, and it leads to the notions of eigenvectors and eigenvalues: irrespective of how much we scale (3,2) by, the result (under the given transformation matrix) is always 4 times the scaled vector.
Eigenvalue Problem
• The eigenvalue problem is any problem having the following form:

A · v = λ · v

where A is an m × m matrix, v is an m × 1 non-zero vector, and λ is a scalar.

• Any value of λ for which this equation has a solution is called an eigenvalue of A, and the vector v which corresponds to this value is called an eigenvector of A.
Eigenvalue Problem
• Going back to our example, A · v = λ · v. Let

A = [  0    1 ]
    [ −2   −3 ]

• Then:

A − λI = [  0    1 ]  −  λ [ 1  0 ]  =  [ −λ      1    ]
         [ −2   −3 ]       [ 0  1 ]     [ −2   −3 − λ  ]

det(A − λI) = (−λ)(−3 − λ) − (−2)(1) = λ² + 3λ + 2

• Setting the determinant to 0, we obtain two eigenvalues: λ1 = −1 and λ2 = −2.
Calculating Eigenvectors & Eigenvalues
• For λ1 = −1, eigenvector v1 is

v1 = k1 [  1 ]
        [ −1 ]

where k1 is some constant.

• Similarly, for λ2 = −2, we find that eigenvector v2 is

v2 = k2 [  1 ]
        [ −2 ]

where k2 is some constant.
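The eigenvalues and eigenvectors of this example can be verified numerically (a NumPy sketch):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Eigenvalues and unit-length eigenvectors (columns of vecs).
vals, vecs = np.linalg.eig(A)
order = np.argsort(vals)[::-1]        # sort so that lambda1 = -1 comes first
vals, vecs = vals[order], vecs[:, order]
```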
Properties of Eigenvectors and Eigenvalues
• Eigenvectors can be found only for square matrices, and not every square matrix has eigenvectors.
• Given an m × m matrix that has eigenvectors, we can find m of them.
• All eigenvectors of a symmetric* matrix are perpendicular to each other, no matter how many dimensions we have.
• In practice, eigenvectors are normalized to have unit length.

*Note: covariance matrices are symmetric!
Mean Square Error
• In statistics, the mean squared error (MSE) of an estimator is one of many ways to quantify the amount by which an estimator differs from the true value of the quantity being estimated.
• The mean squared error between two data sets A and B, each having n points, is given by:

MSE = (1/n) Σ_{i=1}^{n} ||a_i − b_i||²

where ||·|| denotes the L2 (Euclidean) norm between two data points.
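This error measure is one line of NumPy (a sketch; the two point sets are illustrative):

```python
import numpy as np

def mse(A, B):
    """Mean squared Euclidean distance between corresponding points."""
    return np.mean(np.sum((A - B) ** 2, axis=1))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])
err = mse(A, B)        # each pair is 1 apart, so MSE = 1.0
```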
Rigid Transformations
• The transformations used for registration (rigid motions together with scaling) can be classified into:
  – Scaling
  – Rotation
  – Translation
Transformations – Scaling in 2D
• Scaling a coordinate means multiplying each of its components by a scalar.
• Uniform scaling means this scalar is the same for all components (e.g. ×2).

Transformations – Scaling in 2D
• Non-uniform scaling: different scalars per component, e.g. X × 2, Y × 0.5.
Transformations – Scaling in 3D
[ x′ ]   [ s_x  0    0    0 ] [ x ]
[ y′ ] = [ 0    s_y  0    0 ] [ y ]
[ z′ ]   [ 0    0    s_z  0 ] [ z ]
[ 1  ]   [ 0    0    0    1 ] [ 1 ]
Transformations – Translation in 3D
[ x′ ]   [ 1  0  0  t_x ] [ x ]
[ y′ ] = [ 0  1  0  t_y ] [ y ]
[ z′ ]   [ 0  0  1  t_z ] [ z ]
[ 1  ]   [ 0  0  0  1   ] [ 1 ]

The translation T = (t_x, t_y, t_z) takes (x, y, z) to (x′, y′, z′).
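The two homogeneous 4×4 matrices above can be built and applied as follows (a NumPy sketch; the point and parameter values are illustrative):

```python
import numpy as np

def scale3d(sx, sy, sz):
    """Homogeneous 4x4 scaling matrix."""
    return np.diag([sx, sy, sz, 1.0])

def translate3d(tx, ty, tz):
    """Homogeneous 4x4 translation matrix."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

p = np.array([1.0, 2.0, 3.0, 1.0])        # homogeneous point (x, y, z, 1)
p_scaled = scale3d(2, 2, 2) @ p           # -> (2, 4, 6, 1)
p_moved = translate3d(1, 0, -1) @ p       # -> (2, 2, 2, 1)
```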
Transformations – Rotation in 3D
• How should we specify rotations?
• In 2D, it was always counterclockwise in the xy-
plane.
• In 3D, we have more choices
– xz-plane, yz-plane, an arbitrary plane.
Conjugate of Quaternions
• The rows of Q are mutually orthogonal, so its product with its transpose is a diagonal matrix: QQᵀ = (q·q)I, where I is the 4×4 identity matrix. The same holds for Q̄.
• Correspondingly, the product of q and its conjugate q* is real:

qq* = q0² + qx² + qy² + qz² = q·q

• We immediately conclude that a non-zero quaternion has an inverse:

q⁻¹ = q* / (q·q)

where

    [ q0  −qx  −qy  −qz ]         _   [ q0  −qx  −qy  −qz ]
Q = [ qx   q0  −qz   qy ]   and   Q = [ qx   q0   qz  −qy ]
    [ qy   qz   q0  −qx ]             [ qy  −qz   q0   qx ]
    [ qz  −qy   qx   q0 ]             [ qz   qy  −qx   q0 ]

are the 4×4 matrices corresponding to the quaternion q.
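The identity QQᵀ = (q·q)I can be checked numerically (a NumPy sketch with an arbitrary, non-unit quaternion):

```python
import numpy as np

def Q_mat(q):
    """4x4 matrix Q corresponding to the quaternion q = (q0, qx, qy, qz)."""
    q0, qx, qy, qz = q
    return np.array([[q0, -qx, -qy, -qz],
                     [qx,  q0, -qz,  qy],
                     [qy,  qz,  q0, -qx],
                     [qz, -qy,  qx,  q0]])

q = np.array([1.0, 2.0, 3.0, 4.0])
qq = q @ q                      # q.q = 30
check = Q_mat(q) @ Q_mat(q).T   # -> (q.q) I
```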
Unit Quaternions and Rotation
Proof: cont

        [ q0    qx    qy    qz ] [ q0  −qx  −qy  −qz ]
∴ Q̄ᵀQ = [ −qx   q0   −qz    qy ] [ qx   q0  −qz   qy ]
        [ −qy   qz    q0   −qx ] [ qy   qz   q0  −qx ]
        [ −qz  −qy    qx    q0 ] [ qz  −qy   qx   q0 ]

  = [ q·q   0                      0                      0                    ]
    [ 0     (q0²+qx²−qy²−qz²)      2(qx qy − q0 qz)       2(qx qz + q0 qy)     ]
    [ 0     2(qy qx + q0 qz)       (q0²−qx²+qy²−qz²)      2(qy qz − q0 qx)     ]
    [ 0     2(qz qx − q0 qy)       2(qz qy + q0 qx)       (q0²−qx²−qy²+qz²)    ]

Since the first row and column of Q̄ᵀQ are (q·q, 0, 0, 0), the product r′ = qrq* = (Q̄ᵀQ)r of a purely imaginary quaternion r is again a purely imaginary quaternion.
Unit Quaternions and Rotation
Proof: cont
• Since q is a quaternion, Q and Q̄ have mutually orthogonal rows and columns by construction.
• Since q is a unit quaternion, Q and Q̄ are orthonormal matrices.
• Hence the lower-right 3×3 submatrix of Q̄ᵀQ must also be orthonormal; it is the rotation matrix R that takes r to r′, such that r′ = Rr.
• The expansion of Q̄ᵀQ provides an explicit method for computing the orthonormal rotation matrix R from the components of the unit quaternion q.
■
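The lower-right 3×3 submatrix of Q̄ᵀQ gives the rotation matrix directly; a NumPy sketch (the 90° z-axis rotation is an illustrative example):

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (q0, qx, qy, qz),
    i.e. the lower-right 3x3 submatrix of Qbar^T Q."""
    q0, qx, qy, qz = q
    return np.array([
        [q0*q0 + qx*qx - qy*qy - qz*qz, 2*(qx*qy - q0*qz),             2*(qx*qz + q0*qy)],
        [2*(qy*qx + q0*qz),             q0*q0 - qx*qx + qy*qy - qz*qz, 2*(qy*qz - q0*qx)],
        [2*(qz*qx - q0*qy),             2*(qz*qy + q0*qx),             q0*q0 - qx*qx - qy*qy + qz*qz]])

# A rotation of 90 degrees about the z-axis: q = (cos 45, 0, 0, sin 45).
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
R = quat_to_rot(q)              # maps (1, 0, 0) to (0, 1, 0)
```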
Iterative Closest Point (ICP) Algorithm
Problem Statement
• Given a model shape, which may be represented as:
  – point sets, line segment sets, implicit curves, parametric curves, triangle sets, implicit surfaces, or parametric surfaces.
• Given a scene shape, represented as a point set, which may correspond to the model shape.
• It is required to estimate the optimal rotation, translation and scaling that aligns (registers) the scene shape to the model shape.
• The main application is to register digitized (sensed) data from un-fixtured rigid objects with an idealized geometric model prior to shape inspection.
Main Idea of ICP
1. Begin with an initial rotation, translation and scaling (initial values for the registration parameters).
2. Fix the model shape and move the scene shape by applying the registration parameters, i.e. scale, rotate and then translate.
3. Compute an error metric that reflects the dissimilarity of the scene shape from the model shape.
4. If the error is at a minimum, the scene shape is correctly aligned to the model shape; return with the aligned scene shape.
5. Otherwise, calculate new values for the registration parameters and go back to step 2.
Algorithm Outline
1. Initialize the registration parameters (R, t, s) and the registration error: Error = ∞.
2. For each point in the scene shape, find the corresponding closest point in the model shape (point correspondence).
3. Calculate the registration parameters given the point correspondences obtained in step 2 (alignment).
4. Apply the alignment to the scene shape.
5. Calculate the registration error between the currently aligned scene shape and the model shape.
6. If error > threshold, return to step 2; else return with the new scene shape.
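The outline above can be sketched as a short Python/NumPy loop. This is a simplified 2D illustration: it uses brute-force closest points and an SVD-based rigid alignment (rotation + translation only, no scale) in place of the quaternion method derived later; all names and data are illustrative.

```python
import numpy as np

def closest_points(P, M):
    """Step 2: index of the nearest model point for each scene point."""
    d2 = ((P[:, None, :] - M[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def align(P, Y):
    """Step 3: least-squares rigid alignment (R, t) taking P onto Y (SVD)."""
    mu_P, mu_Y = P.mean(axis=0), Y.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - mu_P).T @ (Y - mu_Y))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_Y - R @ mu_P

def icp(P, M, max_iter=50, tol=1e-10):
    err = np.inf
    for _ in range(max_iter):
        Y = M[closest_points(P, M)]                      # step 2
        R, t = align(P, Y)                               # step 3
        P = P @ R.T + t                                  # step 4
        new_err = np.mean(((P - Y) ** 2).sum(axis=1))    # step 5
        if abs(err - new_err) < tol:                     # step 6
            break
        err = new_err
    return P, new_err

# Tiny demo: a slightly rotated copy of the model snaps back onto it.
M = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
c, s = np.cos(0.05), np.sin(0.05)
P0 = M @ np.array([[c, -s], [s, c]]).T
aligned, final_err = icp(P0, M)
```

Because the initial rotation is small, the first round of closest-point matches is already correct and the loop converges in a couple of iterations.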
Nomenclature
• Let the model shape be represented as a set of points m_i, where i = 1, 2, …, N_M.
  – N_M is the number of points in the model shape and M = {m_i} denotes the model point set.
  – Note: m_i = [m_xi, m_yi, m_zi]ᵀ in the 3D case.

Note: R(·) will have a different definition according to the way it is used to represent rotation, e.g. Euler angles (Θx, Θy, Θz), a rotation matrix R (3×3 orthogonal matrix), or a quaternion q (rotation angle and axis of rotation).
Finding Correspondences
• If correct correspondences are known, we can find
correct relative rotation/translation
Finding Correspondences
• How to find correspondences: User input? Feature
detection? Signatures?
• Alternative: assume closest points correspond
Finding Correspondences
• The iteration converges if the starting position is "close enough".
Finding Correspondences
• For every point p_i in the scene shape P = {p_i}, i = 1, …, N_P, we search for the closest point m_j in the model shape M using the Euclidean distance.
• Given a scene point p_i and the model point set M, the Euclidean distance between p_i and M is:

d(p_i, M) = min_{k=1,…,N_M} d(p_i, m_k) = min_{k=1,…,N_M} ||p_i − m_k||
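The closest-point computation d(p_i, M) = min_k ||p_i − m_k|| can be written directly with NumPy broadcasting (a brute-force sketch; in practice a k-d tree is used for large point sets, and the points below are illustrative):

```python
import numpy as np

M = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])   # model points m_k
P = np.array([[4.5, 0.2], [0.1, 4.8]])               # scene points p_i

# All pairwise squared distances ||p_i - m_k||^2 via broadcasting.
d2 = ((P[:, None, :] - M[None, :, :]) ** 2).sum(axis=2)

j = d2.argmin(axis=1)         # index of the closest model point for each p_i
d = np.sqrt(d2.min(axis=1))   # d(p_i, M)
```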
Finding Correspondences
• The closest point m_j ∈ M (the model point set) satisfies the equality d(p_i, m_j) = d(p_i, M); in other words, j is the index of the model point closest to p_i:

j = arg min_{k=1,…,N_M} d(p_i, m_k)

• The closest point in the model set M that yields the minimum distance will be denoted by y_i; that is, y_i = m_j such that d(p_i, y_i) = d(p_i, M). Now p_i corresponds to y_i.
• We then seek the transformation

Y = sR(P) + t        (1)

which registers (aligns) the scene points P to the corresponding model points Y, where
  s is a scale factor,
  t is the translational offset, and
  R(P) denotes the rotated version of the points P.

Note: at this time we do not use any particular notation for rotation.
Alignment Calculation
• Rotation is a linear operation and it preserves lengths, so that ||R(p)||² = ||p||².
• The error for the i-th pair is

e_i = y_i − (sR(p_i) + t)        (2)

where sR(p_i) + t is the transformed version of the scene point p_i.
Alignment Calculation
• We will minimize the total error, defined as the sum of the squares of these errors:

E = Σ_{i=1}^{N_P} ||e_i||²

• Referring both point sets to their centroids, p_i′ = p_i − µ_P and y_i′ = y_i − µ_Y, the centered sets have zero mean:

Σ_{i=1}^{N_P} p_i′ = 0   and   Σ_{i=1}^{N_P} y_i′ = 0
Finding the Translation
• Writing t′ = t − µ_Y + sR(µ_P), so that e_i = (y_i′ − sR(p_i′)) − t′ in terms of the centered points, the total error can be re-written as follows:

E = Σ_{i=1}^{N_P} ||y_i′ − sR(p_i′) − t′||²
  = Σ_{i=1}^{N_P} ||y_i′ − sR(p_i′)||² − 2 t′ · Σ_{i=1}^{N_P} (y_i′ − sR(p_i′)) + N_P ||t′||²        (5)

• The sum in the middle of (5) is zero, since the points are referred to the centroids (i.e. zero mean, and rotation and scaling don't affect the mean). E is therefore minimized by t′ = 0, i.e. t = µ_Y − sR(µ_P), and the error per pair becomes

e_i = y_i′ − sR(p_i′)

• Since t′ = 0, the total error to be minimized can be re-written as:

E = Σ_{i=1}^{N_P} ||y_i′ − sR(p_i′)||²        (8)
Finding the Scale
• Expanding the total error defined in (8), we get:

E = Σ_{i=1}^{N_P} ||y_i′||² − 2s Σ_{i=1}^{N_P} y_i′ · R(p_i′) + s² Σ_{i=1}^{N_P} ||R(p_i′)||²

• Since rotation preserves length, Σ ||R(p_i′)||² = Σ ||p_i′||². Defining

S_Y = Σ_{i=1}^{N_P} ||y_i′||²,   S_P = Σ_{i=1}^{N_P} ||p_i′||²,   D = Σ_{i=1}^{N_P} y_i′ · R(p_i′)

the total error can be written as:

E = ( s√S_P − D/√S_P )² + (S_Y S_P − D²)/S_P        (10)

Let's prove (10) …
Finding the Scale
Proof:

Recall: (s − A)² = s² − 2As + A²

Expanding E and completing the square:

E = s² S_P − 2Ds + S_Y
  = ( s√S_P − D/√S_P )² − D²/S_P + S_Y

Therefore:

E = ( s√S_P − D/√S_P )² + (S_Y S_P − D²)/S_P
■
Finding the Scale
• The total error is minimized with respect to the scaling factor s when the first term in (10) is set to zero, since the second term doesn't depend on s:

( s√S_P − D/√S_P )² = 0,   i.e.   s√S_P = D/√S_P,   thus   s = D/S_P

Hence:

s = Σ_{i=1}^{N_P} y_i′ · R(p_i′)  /  Σ_{i=1}^{N_P} ||p_i′||²        (11)
Symmetry in Scale
• If we exchange the roles of the scene and the model points, as recommended by Horn [2], then instead of fitting the forward transformation

y_i = sR(p_i) + t

we fit the inverse transformation

p_i = s̄ R̄(y_i) + t̄

• The scale factor equation in this case becomes symmetric in the two point sets:

s = ( Σ_{i=1}^{N_P} ||y_i′||²  /  Σ_{i=1}^{N_P} ||p_i′||² )^(1/2)
Finding the Rotation
• Recall (10):

E = ( s√S_P − D/√S_P )² + (S_Y S_P − D²)/S_P

• To minimize the error with respect to scaling, we set the first term to zero, since the second term doesn't depend on scaling:

( s√S_P − D/√S_P )² = 0

• Hence, the error term can now be re-written as:

E = (S_Y S_P − D²)/S_P        (12)

where

S_Y = Σ_{i=1}^{N_P} ||y_i′||²,   S_P = Σ_{i=1}^{N_P} ||p_i′||²,   D = Σ_{i=1}^{N_P} y_i′ · R(p_i′)

• Since S_Y and S_P do not depend on the rotation, minimizing E means maximizing D, which is the sum of the dot products of the model points and the rotated scene points.
Finding the Rotation
• This maximization can be interpreted via quaternions as follows. Suppose that

p_i′ = [p′_xi, p′_yi, p′_zi]ᵀ   and   y_i′ = [y′_xi, y′_yi, y′_zi]ᵀ

are treated as purely imaginary quaternions. Then

q p_i′ = P_i q   while   y_i′ q = Y_i q

where

      [ 0      −p′_xi  −p′_yi  −p′_zi ]         [ 0      −y′_xi  −y′_yi  −y′_zi ]
P_i = [ p′_xi   0       p′_zi  −p′_yi ]   Y_i = [ y′_xi   0      −y′_zi   y′_yi ]
      [ p′_yi  −p′_zi   0       p′_xi ]         [ y′_yi   y′_zi   0      −y′_xi ]
      [ p′_zi   p′_yi  −p′_xi   0     ]         [ y′_zi  −y′_yi   y′_xi   0     ]
Finding the Rotation
• Note that P_i and Y_i are skew-symmetric as well as orthogonal, since they are associated with purely imaginary quaternions.
• The sum that we have to maximize can now be written as:

D = Σ_{i=1}^{N_P} (q p_i′) · (y_i′ q)
  = Σ_{i=1}^{N_P} (P_i q) · (Y_i q) = Σ_{i=1}^{N_P} (P_i q)ᵀ (Y_i q) = Σ_{i=1}^{N_P} qᵀ P_iᵀ Y_i q
  = qᵀ ( Σ_{i=1}^{N_P} P_iᵀ Y_i ) q = qᵀ ( Σ_{i=1}^{N_P} N_i ) q = qᵀ N q

where N_i = P_iᵀ Y_i and N = Σ_{i=1}^{N_P} N_i.
Finding the Rotation
N_i = P_iᵀ Y_i =

[ p′xi y′xi + p′yi y′yi + p′zi y′zi   p′yi y′zi − p′zi y′yi               p′zi y′xi − p′xi y′zi               p′xi y′yi − p′yi y′xi             ]
[ p′yi y′zi − p′zi y′yi               p′xi y′xi − p′yi y′yi − p′zi y′zi   p′xi y′yi + p′yi y′xi               p′xi y′zi + p′zi y′xi             ]
[ p′zi y′xi − p′xi y′zi               p′yi y′xi + p′xi y′yi               p′yi y′yi − p′zi y′zi − p′xi y′xi   p′yi y′zi + p′zi y′yi             ]
[ p′xi y′yi − p′yi y′xi               p′zi y′xi + p′xi y′zi               p′zi y′yi + p′yi y′zi               p′zi y′zi − p′yi y′yi − p′xi y′xi ]

Note that each N_i is a symmetric matrix.
Finding the Rotation
N = Σ_{i=1}^{N_P} N_i =

[ S_xx + S_yy + S_zz    S_yz − S_zy           S_zx − S_xz           S_xy − S_yx         ]
[ S_yz − S_zy           S_xx − S_yy − S_zz    S_xy + S_yx           S_xz + S_zx         ]
[ S_zx − S_xz           S_yx + S_xy           S_yy − S_zz − S_xx    S_yz + S_zy         ]
[ S_xy − S_yx           S_zx + S_xz           S_zy + S_yz           S_zz − S_yy − S_xx  ]

where S_ab = Σ_{i=1}^{N_P} p′_ai y′_bi (e.g. S_xy = Σ p′_xi y′_yi).

• The unit quaternion q that maximizes qᵀNq is found from the eigenvectors of N. The symmetric 4×4 matrix N has four real eigenvalues λ1, λ2, λ3, λ4 with orthogonal unit eigenvectors v1, …, v4, so an arbitrary quaternion can be written as q = α1v1 + α2v2 + α3v3 + α4v4.
• Since the eigenvectors are orthogonal:

q·q = α1² + α2² + α3² + α4²

• We know that this has to be equal to one, since we are looking for a unit quaternion.
• Now Nq = α1λ1v1 + α2λ2v2 + α3λ3v3 + α4λ4v4, and we conclude that

qᵀNq = q·(Nq) = α1²λ1 + α2²λ2 + α3²λ3 + α4²λ4
Finding the Rotation
Proof: cont
• Now, suppose we have arranged the eigenvalues in order, such that λ1 ≥ λ2 ≥ λ3 ≥ λ4.
• Then we have:

qᵀNq = q·(Nq) = α1²λ1 + α2²λ2 + α3²λ3 + α4²λ4
     ≤ α1²λ1 + α2²λ1 + α3²λ1 + α4²λ1
     = (α1² + α2² + α3² + α4²) λ1
     = (q·q) λ1
     = λ1
Finding the Rotation
Proof: cont
• We need to maximize qᵀNq, but qᵀNq ≤ λ1; hence qᵀNq is bounded above by λ1, the largest eigenvalue.
• Thus the maximum of qᵀNq is attained when qᵀNq = λ1.
• This only happens if α1 = 1 and α2 = α3 = α4 = 0, i.e.:

q = α1v1 + α2v2 + α3v3 + α4v4 = v1

which is the eigenvector corresponding to the largest positive eigenvalue λ1.
■
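The bound qᵀNq ≤ λ1 can be checked numerically for an arbitrary symmetric 4×4 matrix (a NumPy sketch; the random matrix stands in for N):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
N = A + A.T                         # an arbitrary symmetric 4x4 matrix

vals, vecs = np.linalg.eigh(N)      # real eigenvalues, ascending order
q_best = vecs[:, -1]                # unit eigenvector of the largest eigenvalue

# Evaluate q^T N q for many random unit quaternions.
qs = rng.standard_normal((1000, 4))
qs /= np.linalg.norm(qs, axis=1, keepdims=True)
quad = np.einsum('ij,jk,ik->i', qs, N, qs)
```

No random draw beats q_best, which attains the maximum λ1 exactly.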
Finding Alignment - Summary
• We can now summarize the algorithm for finding the rotation, scaling factor and translational offset, given pairs of point correspondences, as follows:

1. Find the centroids of the two point sets in the scene and model shapes:

µ_P = (1/N_P) Σ_{i=1}^{N_P} p_i   and   µ_Y = (1/N_P) Σ_{i=1}^{N_P} y_i

2. Refer each point set to its centroid: p_i′ = p_i − µ_P and y_i′ = y_i − µ_Y.
3. Compute the sums S_ab = Σ_{i=1}^{N_P} p′_ai y′_bi.
4. Construct the matrix:

N = [ S_xx + S_yy + S_zz    S_yz − S_zy           S_zx − S_xz           S_xy − S_yx         ]
    [ S_yz − S_zy           S_xx − S_yy − S_zz    S_xy + S_yx           S_xz + S_zx         ]
    [ S_zx − S_xz           S_yx + S_xy           S_yy − S_zz − S_xx    S_yz + S_zy         ]
    [ S_xy − S_yx           S_zx + S_xz           S_zy + S_yz           S_zz − S_yy − S_xx  ]

5. Find the eigenvalues and eigenvectors of N.
6. The unit quaternion q is the eigenvector corresponding to the largest positive eigenvalue.
7. Compute the scaling factor

s = Σ_{i=1}^{N_P} y_i′ · (q p_i′ q*)  /  Σ_{i=1}^{N_P} ||p_i′||²

and the rotation matrix R as the lower-right 3×3 submatrix of:

Q̄ᵀQ = [ q·q   0                      0                      0                    ]
       [ 0     (q0²+qx²−qy²−qz²)      2(qx qy − q0 qz)       2(qx qz + q0 qy)     ]
       [ 0     2(qy qx + q0 qz)       (q0²−qx²+qy²−qz²)      2(qy qz − q0 qx)     ]
       [ 0     2(qz qx − q0 qy)       2(qz qy + q0 qx)       (q0²−qx²−qy²+qz²)    ]

where q·q = q0² + qx² + qy² + qz².
8. Compute the translational offset as follows:

t = µ_Y − s (q µ_P q*) = µ_Y − s (Q̄ᵀQ) µ_P
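The summary above translates almost line-for-line into Python/NumPy; a sketch (using the asymmetric scale s = D/S_P from (11), with illustrative synthetic test data):

```python
import numpy as np

def horn_align(P, Y):
    """Scale s, rotation R and translation t aligning scene points P to
    corresponding model points Y (one 3D point per row)."""
    mu_P, mu_Y = P.mean(axis=0), Y.mean(axis=0)          # step 1: centroids
    Pc, Yc = P - mu_P, Y - mu_Y                          # step 2: zero mean

    S = Pc.T @ Yc                                        # step 3: S_ab sums
    Sxx, Sxy, Sxz = S[0]; Syx, Syy, Syz = S[1]; Szx, Szy, Szz = S[2]
    N = np.array([                                       # step 4: matrix N
        [Sxx + Syy + Szz, Syz - Szy,       Szx - Sxz,       Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz, Sxy + Syx,       Sxz + Szx],
        [Szx - Sxz,       Syx + Sxy,       Syy - Szz - Sxx, Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,       Szy + Syz,       Szz - Syy - Sxx]])

    vals, vecs = np.linalg.eigh(N)                       # step 5: eigensystem
    q0, qx, qy, qz = vecs[:, -1]                         # step 6: top eigenvector

    R = np.array([                                       # step 7: rotation matrix
        [q0*q0 + qx*qx - qy*qy - qz*qz, 2*(qx*qy - q0*qz), 2*(qx*qz + q0*qy)],
        [2*(qy*qx + q0*qz), q0*q0 - qx*qx + qy*qy - qz*qz, 2*(qy*qz - q0*qx)],
        [2*(qz*qx - q0*qy), 2*(qz*qy + q0*qx), q0*q0 - qx*qx - qy*qy + qz*qz]])

    s = np.sum(Yc * (Pc @ R.T)) / np.sum(Pc ** 2)        # step 7: s = D / S_P
    t = mu_Y - s * R @ mu_P                              # step 8: translation
    return s, R, t

# Check on synthetic data built from a known similarity transform.
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 2, 3]])
c, sn = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -sn, 0], [sn, c, 0], [0, 0, 1]])
Y = 2.0 * P @ R_true.T + np.array([1.0, -2.0, 0.5])
s_est, R_est, t_est = horn_align(P, Y)
```

On noiseless data the method recovers the generating transform exactly, since zero total error is attainable.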
Let’s do it …
Iterative Closest Point Algorithm
• Initialization
• Find correspondences
• Find alignment
• Apply alignment
• Compute residual error
Function Prototype
Initializations …
ICP Loop: Finding Correspondences
j = arg min_{k=1,…,N_M} d(p_i, m_k)
Loop to be continued …
ICP Loop: Finding/Applying Alignment
Explanation to follow
e_i = y_i − (sRp_i + t),   i = 1, 2, …, N_P

E = Σ_{i=1}^{N_P} ||e_i||²
Let’s do it …
Finding Alignment
• Zero mean point sets.
• Quaternion computation.
• Rotation matrix computation.
• Scaling factor computation.
• Translational offset computation.
• Residual error computation.
Function Prototype
Test Given Point Sets
Zero Mean Point Sets
µ_P = (1/N_P) Σ_{i=1}^{N_P} p_i,    µ_Y = (1/N_P) Σ_{i=1}^{N_P} y_i

p_i′ = p_i − µ_P,    y_i′ = y_i − µ_Y
Quaternion Computation
Rotation Matrix Computation
    [ q0  −qx  −qy  −qz ]         _   [ q0  −qx  −qy  −qz ]
Q = [ qx   q0  −qz   qy ]   and   Q = [ qx   q0   qz  −qy ]
    [ qy   qz   q0  −qx ]             [ qy  −qz   q0   qx ]
    [ qz  −qy   qx   q0 ]             [ qz   qy  −qx   q0 ]
Scaling Factor Computation
s = ( Σ_{i=1}^{N_P} ||y_i′||²  /  Σ_{i=1}^{N_P} ||p_i′||² )^(1/2)
Translation Offset Computation
t = µ_Y − s (Q̄ᵀQ) µ_P = µ_Y − sRµ_P
Residual Error
e_i = y_i − (sRp_i + t),   i = 1, 2, …, N_P

E = Σ_{i=1}^{N_P} ||e_i||²
Experiments
• Registering line to line (also with noise)
• Registering line to polygon (also with noise)
• Registering circle to polygon
• Registering circle to circle
• Registering polygon to circle
• Registering triangle to triangle
• Registering line to triangle (also with noise)
Iterative Closest Point (ICP) Algorithm
Some Proofs …

A Tutorial on Rigid Registration
Aly A. Farag
University of Louisville

Acknowledgements: Help with these slides was provided by Amal Farag and Shireen Elhabian.
Prove that quaternion multiplication preserves the dot product, i.e. (qv)·(qr) = (q·q)(v·r). What happens in case q is a unit quaternion?

• Proof:

(qv)·(qr) = (Qv)·(Qr) = (Qv)ᵀ(Qr) = vᵀQᵀQr = vᵀ(q·q)I r = (q·q)(v·r)

• In the case of a unit quaternion: (qv)·(qr) = (v·r).
• A special case follows immediately:

(qr)·(qr) = (q·q)(r·r)

that is, the magnitude of a product is just the product of the magnitudes.
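This identity can be verified numerically with an explicit Hamilton product (a NumPy sketch; the random quaternions are illustrative):

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions stored as (q0, qx, qy, qz)."""
    q0, qx, qy, qz = q
    r0, rx, ry, rz = r
    return np.array([
        q0*r0 - qx*rx - qy*ry - qz*rz,
        q0*rx + qx*r0 + qy*rz - qz*ry,
        q0*ry - qx*rz + qy*r0 + qz*rx,
        q0*rz + qx*ry - qy*rx + qz*r0])

rng = np.random.default_rng(1)
q, v, r = rng.standard_normal((3, 4))
lhs = quat_mul(q, v) @ quat_mul(q, r)     # (qv).(qr)
rhs = (q @ q) * (v @ r)                   # (q.q)(v.r)
```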
Prove that the composite product leads to a purely imaginary quaternion, hence it can be used to represent rotation; i.e. rotating the vector (point) r by a unit quaternion q can be defined as r′ = qrq*, where r′ is a purely imaginary quaternion representing the vector r after rotation by q.

Proof:

    [ q0  −qx  −qy  −qz ]         _   [ q0  −qx  −qy  −qz ]
Q = [ qx   q0  −qz   qy ]   and   Q = [ qx   q0   qz  −qy ]
    [ qy   qz   q0  −qx ]             [ qy  −qz   q0   qx ]
    [ qz  −qy   qx   q0 ]             [ qz   qy  −qx   q0 ]

where Q and Q̄ are the 4×4 matrices corresponding to the unit quaternion q.
Proof: cont

        [ q0    qx    qy    qz ] [ q0  −qx  −qy  −qz ]
∴ Q̄ᵀQ = [ −qx   q0   −qz    qy ] [ qx   q0  −qz   qy ]
        [ −qy   qz    q0   −qx ] [ qy   qz   q0  −qx ]
        [ −qz  −qy    qx    q0 ] [ qz  −qy   qx   q0 ]

  = [ q·q   0                      0                      0                    ]
    [ 0     (q0²+qx²−qy²−qz²)      2(qx qy − q0 qz)       2(qx qz + q0 qy)     ]
    [ 0     2(qy qx + q0 qz)       (q0²−qx²+qy²−qz²)      2(qy qz − q0 qx)     ]
    [ 0     2(qz qx − q0 qy)       2(qz qy + q0 qx)       (q0²−qx²−qy²+qz²)    ]

Since the first row and column of Q̄ᵀQ are (q·q, 0, 0, 0), the product r′ = qrq* = (Q̄ᵀQ)r of a purely imaginary quaternion r is again a purely imaginary quaternion.
Proof: cont
• Since q is a quaternion, Q and Q̄ have mutually orthogonal rows and columns by construction.
• Since q is a unit quaternion, Q and Q̄ are orthonormal matrices.
• Hence the lower-right 3×3 submatrix of Q̄ᵀQ must also be orthonormal; it is the rotation matrix R that takes r to r′, such that r′ = Rr.
• The expansion of Q̄ᵀQ provides an explicit method for computing the orthonormal rotation matrix R from the components of the unit quaternion q.
■
Prove that the unit quaternion that maximizes qᵀNq is the eigenvector corresponding to the most positive eigenvalue of the matrix N.

Proof:
• To find the rotation that minimizes the sum of squares of errors, we have to find the quaternion q that maximizes qᵀNq subject to the constraint that q·q = 1 (unit quaternion).
• The symmetric 4×4 matrix N will have four real eigenvalues, say λ1, λ2, λ3 and λ4.
• A corresponding set of orthogonal unit eigenvectors v1, v2, v3 and v4 can be constructed such that Nv_i = λ_i v_i, where i = 1, 2, 3, 4.
Proof: cont
• The eigenvectors span the 4D space, so an arbitrary quaternion q can be written as a linear combination of the form:

q = α1v1 + α2v2 + α3v3 + α4v4

• Since the eigenvectors are orthogonal, we have:

q·q = α1² + α2² + α3² + α4²

• We know that this has to be equal to one, since we are looking for a unit quaternion.
• Now Nq = α1λ1v1 + α2λ2v2 + α3λ3v3 + α4λ4v4.
• We conclude that

qᵀNq = q·(Nq) = α1²λ1 + α2²λ2 + α3²λ3 + α4²λ4
Proof: cont
• Now, suppose we have arranged the eigenvalues in order, such that λ1 ≥ λ2 ≥ λ3 ≥ λ4.
• Then we have:

qᵀNq = q·(Nq) = α1²λ1 + α2²λ2 + α3²λ3 + α4²λ4
     ≤ α1²λ1 + α2²λ1 + α3²λ1 + α4²λ1
     = (α1² + α2² + α3² + α4²) λ1
     = (q·q) λ1
     = λ1
Proof: cont
• We need to maximize qᵀNq, but qᵀNq ≤ λ1; hence qᵀNq is bounded above by λ1, the largest eigenvalue.
• Thus the maximum of qᵀNq is attained when qᵀNq = λ1.
• This only happens if α1 = 1 and α2 = α3 = α4 = 0, i.e.:

q = α1v1 + α2v2 + α3v3 + α4v4 = v1

which is the eigenvector corresponding to the largest positive eigenvalue λ1.
■
References
1. P.J. Besl, N.D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence 14 (1992) 239–254.
3. www.cs.virginia.edu/~gfx/Courses/2004/Intro.Spring.04/Lectures/lecture05.ppt
4. gmm.fsksm.utm.my/~graphics/files/Week9-3D_Transformations.ppt
5. www.cs.tau.ac.il/~dcor/Graphics/adv-slides/ICP.ppt
Thank You