

Inverse Consistency Constraint

Minimizing a symmetric cost function like Eq. 1 is not sufficient to guarantee that $ h$ and $ g$ are inverses of each other because the contributions of $ h$ and $ g$ to the cost function are independent. In order to couple the estimation of $ h$ with that of $ g$, an inverse consistency constraint is imposed that is minimized when $ h = g^{-1}$. The inverse consistency constraint is given by

$\displaystyle C_{\text{ICC}}(u, \tilde{w}) + C_{\text{ICC}}(w, \tilde{u}) = \int_{\Omega_c} \|h_c(x) - g_c^{-1}(x)\|^2 \, dx + \int_{\Omega_c} \|g_c(x) - h_c^{-1}(x)\|^2 \, dx$
$\displaystyle = \int_{\Omega_c} \|u_c(x) - \tilde{w}_c(x)\|^2 \, dx + \int_{\Omega_c} \|w_c(x) - \tilde{u}_c(x)\|^2 \, dx$ (3)

where $h_c(x) = x + u_c(x)$, $h_c^{-1}(x) = x + \tilde{u}_c(x)$, $g_c(x) = x + w_c(x)$, and $g_c^{-1}(x) = x + \tilde{w}_c(x)$. Notice that the inverse consistency constraint is written in a symmetric form, like the symmetric similarity cost function and for the same reasons. The discretized inverse consistency constraint is given by

$\displaystyle C_{\text{ICC}}(u, \tilde{w}) + C_{\text{ICC}}(w, \tilde{u}) = \sum_{n \in \Omega_d} \|u_d[n] - \tilde{w}_d[n]\|^2 + \|w_d[n] - \tilde{u}_d[n]\|^2 .$ (4)
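As a concrete illustration, here is a minimal numerical sketch of Eq. 4, assuming the displacement fields are stored as NumPy arrays of shape (3, N1, N2, N3); the function and argument names and the storage layout are illustrative assumptions, not part of the original algorithm.

import numpy as np

def icc_cost(u, w_tilde, w, u_tilde):
    """Discretized inverse consistency constraint of Eq. 4.

    Each argument is a displacement field sampled on the voxel
    lattice Omega_d, stored as an array of shape (3, N1, N2, N3)
    (assumed layout: vector component first, then voxel indices).
    """
    # Sum over all voxels n of ||u_d[n] - w~_d[n]||^2 + ||w_d[n] - u~_d[n]||^2.
    return np.sum((u - w_tilde) ** 2) + np.sum((w - u_tilde) ** 2)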

The procedure used to compute the inverse of a transformation whose minimum Jacobian is greater than zero is as follows. Assume that $h_c(x)$ is a continuously differentiable transformation that maps $\Omega_c$ onto $\Omega_c$ and has a positive Jacobian for all $x \in \Omega_c$. A positive Jacobian at a point $x \in \Omega_c$ implies that $h_c$ is locally one-to-one there and therefore has a local inverse. It is therefore possible to select a point $y \in \Omega_c$ and iteratively search for a point $x \in \Omega_c$ such that $\|y - h_c(x)\|$ is less than some threshold, provided that the initial guess is close to the solution.
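To make the local search concrete, the following is a minimal single-point sketch of this damped fixed-point iteration on a hypothetical one-dimensional transformation; the function, starting point, and values are illustrative, not from the paper.

import math

def local_inverse(h, y, x0, threshold=1e-4, max_iteration=1000):
    # Iteratively search for x such that |y - h(x)| < threshold,
    # using the damped update x <- x + (y - h(x)) / 2.
    x = x0
    for _ in range(max_iteration):
        delta = y - h(x)
        if abs(delta) < threshold:
            return x
        x = x + delta / 2
    raise RuntimeError("failed to converge")

# Invert h(x) = x + 0.1 sin(2 pi x), whose Jacobian
# 1 + 0.2 pi cos(2 pi x) is positive everywhere.
x = local_inverse(lambda x: x + 0.1 * math.sin(2 * math.pi * x), y=0.3, x0=0.3)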

The discretized transformation is related to the continuous transformation by $h_d[n] = h_c(\frac{n}{N})$, and the same relation holds for their inverses. This implies that the discrete inverse transformation only needs to be calculated at each sample point $n \in \Omega_d$. The following procedure is used to compute the discrete inverse $h_d^{-1}$ of the transformation $h_d$.


For each $n \in \Omega_d$ do {
    Set $\delta = [1, 1, 1]^T$, $x = \frac{n}{N}$, iteration $= 0$.
    While ( $\|\delta\| >$ threshold ) do {
        $\delta = \frac{n}{N} - h_d[Nx]$
        $x = x + \frac{\delta}{2}$
        iteration $=$ iteration $+ 1$
        if (iteration $>$ max_iteration) then
            Report algorithm failed to converge and exit.
    }
    $h_d^{-1}[n] = x$
}
As before, $\frac{n}{N} = [\frac{n_1}{N_1}, \frac{n_2}{N_2}, \frac{n_3}{N_3}]^T$, $Nx = [N_1 x_1, N_2 x_2, N_3 x_3]^T$, and $h_d[Nx]$ is computed using trilinear interpolation. We typically set threshold $= 10^{-4}$ and max_iteration $= 1000$. In practice, the algorithm converges whenever the minimum Jacobian of $h$ is greater than zero, although we have not proved this mathematically. Reducing the threshold gives a more accurate inverse but increases the number of iterations required. To reduce computation time, the above algorithm is modified by rastering through the elements of $\Omega_d$ and initializing $x = \frac{n}{N} + \varepsilon$, where $\varepsilon$ is the displacement of the previously estimated grid point.
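For reference, here is a sketch of the full procedure in Python, assuming $h_d$ is stored as a NumPy array of shape (3, N1, N2, N3) holding $h_c(\frac{n}{N})$ in the normalized coordinates of $\Omega_c$, and using scipy.ndimage.map_coordinates for the trilinear interpolation of $h_d[Nx]$; the storage layout and boundary handling are assumptions, not specified above.

import numpy as np
from scipy.ndimage import map_coordinates

def invert_transform(h_d, threshold=1e-4, max_iteration=1000):
    # h_d[:, n1, n2, n3] = h_c(n / N); returns h_inv with
    # h_inv[:, n1, n2, n3] approximating h_c^{-1}(n / N).
    dims = h_d.shape[1:]
    N = np.array(dims, dtype=float).reshape(3, 1)
    h_inv = np.empty_like(h_d)
    eps = np.zeros((3, 1))                  # warm start: displacement of previous voxel
    for n in np.ndindex(*dims):             # raster through Omega_d
        y = np.array(n, dtype=float).reshape(3, 1) / N   # target point n / N
        x = y + eps                         # initial guess x = n/N + epsilon
        for _ in range(max_iteration):
            # Evaluate h_d[Nx] by trilinear interpolation of each component.
            hx = np.array([map_coordinates(h_d[i], x * N, order=1,
                                           mode="nearest")[0]
                           for i in range(3)]).reshape(3, 1)
            delta = y - hx                  # delta = n/N - h_d[Nx]
            x = x + delta / 2               # damped fixed-point update
            if np.linalg.norm(delta) <= threshold:
                break
        else:
            raise RuntimeError("failed to converge at voxel %s" % (n,))
        h_inv[(slice(None),) + n] = x[:, 0]
        eps = x - y                         # reuse displacement as next warm start
    return h_inv

A vectorized variant that updates all voxels simultaneously would be much faster in NumPy, but the per-voxel loop above stays closest to the pseudocode and its raster-order warm start.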

