The Fast Fourier Transform Algorithm


This is how the DFT may be computed efficiently.
1D Case

The discrete Fourier transform

    F(u) = \frac{1}{N} \sum_{x=0}^{N-1} f(x)\, e^{-2\pi i u x / N}

has to be evaluated for N values of u, which if done in the obvious way clearly takes O(N^2) multiplications.
It is possible to calculate the DFT more efficiently than this, using the fast Fourier transform or FFT algorithm, which reduces the number of operations to O(N \log N).
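For reference, a direct evaluation of the definition might look like the following sketch (Python with numpy is assumed here and in the later sketches; the function name is ours):

```python
import numpy as np

def dft_direct(f):
    """Direct O(N^2) evaluation of the 1/N-normalized DFT defined above.
    For comparison only; the FFT derived below is the efficient route."""
    f = np.asarray(f, dtype=complex)
    N = len(f)
    u = np.arange(N)
    x = np.arange(N)
    # N x N matrix of exponentials exp(-2*pi*i*u*x/N): one multiply per entry.
    W = np.exp(-2j * np.pi * np.outer(u, x) / N)
    return (W @ f) / N
```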
We shall assume for simplicity that N is a power of 2, N = 2^n.
If we define w_N to be the N-th root of unity given by w_N = e^{-2\pi i / N}, and set M = N/2, we have

    F(u) = \frac{1}{N} \sum_{x=0}^{N-1} f(x)\, w_N^{ux} = \frac{1}{2M} \sum_{x=0}^{2M-1} f(x)\, w_{2M}^{ux}
This can be split apart into two separate sums of alternate terms from the original sum,
    F(u) = \frac{1}{2} \left[ \frac{1}{M} \sum_{x=0}^{M-1} f(2x)\, w_{2M}^{2ux} \;+\; \frac{1}{M} \sum_{x=0}^{M-1} f(2x+1)\, w_{2M}^{(2x+1)u} \right]

Now, since the square of a (2M)-th root of unity is an M-th root of unity, we have that

    w_{2M}^{2} = w_M
and hence
    F(u) = \frac{1}{2} \left[ \frac{1}{M} \sum_{x=0}^{M-1} f(2x)\, w_M^{ux} \;+\; \left( \frac{1}{M} \sum_{x=0}^{M-1} f(2x+1)\, w_M^{ux} \right) w_{2M}^{u} \right]
If we call the two sums demarcated above F_e(u) and F_o(u) respectively (the transforms of the even- and odd-numbered samples), then we have

    F(u) = \frac{1}{2} \left[ F_e(u) + F_o(u)\, w_{2M}^{u} \right]

Note that each of F_e(u) and F_o(u), for u = 0, 1, \ldots, M-1, is in itself a discrete Fourier transform over N/2 = M points.
How does this help us?
Well, since w_M^{(u+M)x} = w_M^{ux}\,(w_M^{M})^{x} = w_M^{ux}, both F_e and F_o are periodic in u with period M,

    F_e(u+M) = F_e(u), \qquad F_o(u+M) = F_o(u),
and, since w_{2M}^{M} = e^{-\pi i} = -1, we can also write

    F(u+M) = \frac{1}{2} \left[ F_e(u+M) + F_o(u+M)\, w_{2M}^{u+M} \right] = \frac{1}{2} \left[ F_e(u) - F_o(u)\, w_{2M}^{u} \right]

Thus, we can compute an N-point DFT by dividing it into two parts:

1. The first half, F(u) for 0 \le u < M, is found from the two half-size transforms as F(u) = \frac{1}{2}[ F_e(u) + F_o(u)\, w_{2M}^{u} ].
2. The second half, F(u+M) for 0 \le u < M, is F(u+M) = \frac{1}{2}[ F_e(u) - F_o(u)\, w_{2M}^{u} ], reusing the same F_e(u), F_o(u) and w_{2M}^{u}.

Each half-size transform is computed recursively in the same way; a minimal sketch is given below.
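The following is a minimal recursive sketch of this scheme, keeping the 1/N normalization used above; the function name is ours and the input length is assumed to be a power of two.

```python
import numpy as np

def fft_recursive(f):
    """Radix-2 FFT of the 1/N-normalized DFT derived above.
    Assumes len(f) is a power of two.  Illustrative sketch only."""
    f = np.asarray(f, dtype=complex)
    N = len(f)
    if N == 1:
        return f                                  # F(0) = f(0) for one sample
    M = N // 2
    F_e = fft_recursive(f[0::2])                  # M-point DFT of even samples
    F_o = fft_recursive(f[1::2])                  # M-point DFT of odd samples
    w = np.exp(-2j * np.pi * np.arange(M) / N)    # w_{2M}^u for u = 0..M-1
    first = 0.5 * (F_e + w * F_o)                 # F(u),     u = 0..M-1
    second = 0.5 * (F_e - w * F_o)                # F(u + M), u = 0..M-1
    return np.concatenate([first, second])

# Sanity check against numpy's unnormalized FFT:
#   np.allclose(fft_recursive(x), np.fft.fft(x) / len(x))
```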
To show how many operations this requires, let T(n) be the time taken to perform a transform of size N = 2^n, measured by the number of multiplications performed. The above analysis shows that

    T(n) = 2\, T(n-1) + 2^{n-1}

the first term on the right hand side coming from the two transforms of half the original size, and the second term coming from the multiplications of F_o(u) by w_{2M}^{u}. Induction can be used to prove that

    T(n) = n\, 2^{n-1} = \tfrac{N}{2} \log_2 N

A similar argument can also be applied to the number of additions required, to show that the algorithm as a whole takes O(N \log N) time.
Note also that, with a little modification, the same algorithm can be used to perform the inverse DFT too. Going back to the definitions of the DFT and its inverse,

    F(u) = \frac{1}{N} \sum_{x=0}^{N-1} f(x)\, e^{-2\pi i u x / N}
and
    f(x) = \sum_{u=0}^{N-1} F(u)\, e^{2\pi i u x / N}

If we take the complex conjugate of the second equation, we have that
    f^{*}(x) = \sum_{u=0}^{N-1} F^{*}(u)\, e^{-2\pi i u x / N}
This now looks (apart from a factor of 1/N) like a forward DFT, rather than an inverse DFT.
Thus, to compute an inverse DFT:

1. take the complex conjugate of F(u);
2. apply the forward FFT algorithm to the conjugated sequence;
3. take the complex conjugate of the result and multiply by N, to undo the 1/N factor in the forward transform.

A short sketch follows.
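In numpy this can be written as below; np.fft.fft is unnormalized, so it is divided by N here to match the 1/N-normalized forward DFT used in this section (the function name is ours).

```python
import numpy as np

def ifft_via_forward(F):
    """Inverse DFT computed from a forward transform, following the steps above."""
    F = np.asarray(F, dtype=complex)
    N = len(F)
    forward_of_conj = np.fft.fft(np.conj(F)) / N   # = conj(f(x)) / N
    return N * np.conj(forward_of_conj)            # = f(x)
```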
2D Case
The same fast Fourier transform algorithm can be used, by applying the separability property of the 2D transform.
Rewrite the 2D DFT as
    F(u,v) = \frac{1}{N} \sum_{x=0}^{N-1} e^{-2\pi i u x / N} \left[ \frac{1}{N} \sum_{y=0}^{N-1} f(x,y)\, e^{-2\pi i v y / N} \right]

The right hand (inner) sum is just a one-dimensional DFT over y, with x held constant. The left hand (outer) sum is then another one-dimensional DFT, performed on the numbers that come out of the first set of sums.
So we can compute a two-dimensional DFT by:

1. performing N one-dimensional DFTs, one for each value of x (i.e. transforming each row of the image), then
2. performing N more one-dimensional DFTs on the result, one for each value of v (i.e. transforming each column of the intermediate array).

This requires a total of 2N one-dimensional transforms, each taking O(N \log N) time, so the overall process takes O(N^2 \log N) time. A short sketch follows.
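A sketch of this row-column decomposition, using numpy's library 1D FFT and matching the 1/N^2 normalization above (the function name is ours; a square N x N image is assumed):

```python
import numpy as np

def fft2_separable(f):
    """2-D DFT of an N x N image by separability: 1-D transforms of every
    row, then 1-D transforms of every column of the result."""
    f = np.asarray(f, dtype=complex)
    rows_done = np.fft.fft(f, axis=1)            # N one-dimensional DFTs (rows)
    both_done = np.fft.fft(rows_done, axis=0)    # N more (columns)
    return both_done / f.size                    # 1/N^2 normalization

# Sanity check: np.allclose(fft2_separable(a), np.fft.fft2(a) / a.size)
```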

Fast Normalized Cross-Correlation

J. P. Lewis

Abstract:

Although it is well known that cross correlation can be efficiently implemented in the transform domain, the normalized form of cross correlation preferred for feature matching applications does not have a simple frequency domain expression. Normalized cross correlation has been computed in the spatial domain for this reason. This short paper shows that unnormalized cross correlation can be efficiently normalized using precomputed integrals of the image and image squared over the search window.

Introduction


The correlation between two signals (cross correlation) is a standard approach to feature detection [6, 7] as well as a component of more sophisticated techniques (e.g. [3]).
Textbook presentations of correlation describe the convolution theorem and the attendant possibility of efficiently computing correlation in the frequency domain using the fast Fourier transform. Unfortunately the normalized form of correlation (correlation coefficient) preferred in template matching does not have a correspondingly simple and efficient frequency domain expression. For this reason normalized cross-correlation has been computed in the spatial domain (e.g., [7], p. 585). Due to the computational cost of spatial domain convolution, several inexact but fast spatial domain matching methods have also been developed [2]. This paper describes a recently introduced algorithm [10] for obtaining normalized cross correlation from transform domain convolution. The new algorithm in some cases provides an order of magnitude speedup over spatial domain computation of normalized cross correlation (Section 5).
Since we are presenting a version of a familiar and widely used algorithm no attempt will be made to survey the literature on selection of features, whitening, fast convolution techniques, extensions, alternate techniques, or applications. The literature on these topics can be approached through introductory texts and handbooks [16, 7, 13] and recent papers such as [1, 19]. Nevertheless, due to the variety of feature tracking schemes that have been advocated it may be necessary to establish that normalized cross-correlation remains a viable choice for some if not all applications. This is done in section 3.
In order to make the paper self contained, section 2 describes normalized cross-correlation and section 4 briefly reviews transform domain and other fast convolution approaches and the phase correlation technique. These sections can be skipped by most readers. Section 5 describes how normalized cross-correlation can be obtained from a transform domain computation of correlation. Section 6 presents performance results.

Template Matching by Cross-Correlation

The use of cross-correlation for template matching is motivated by the distance measure (squared Euclidean distance)
    d_{f,t}^{2}(u,v) = \sum_{x,y} \left[ f(x,y) - t(x-u, y-v) \right]^{2}

(where f is the image and the sum is over x,y under the window containing the feature t positioned at u,v). In the expansion of d^{2}
    d_{f,t}^{2}(u,v) = \sum_{x,y} \left[ f^{2}(x,y) - 2 f(x,y)\, t(x-u,y-v) + t^{2}(x-u,y-v) \right]

the term \sum t^{2}(x-u,y-v) is constant. If the term \sum f^{2}(x,y) is approximately constant then the remaining cross-correlation term

    c(u,v) = \sum_{x,y} f(x,y)\, t(x-u,y-v)                                   (1)
is a measure of the similarity between the image and the feature.
There are several disadvantages to using (1) for template matching:

- If the image energy \sum f^{2}(x,y) varies with position, matching using (1) can fail. For example, the correlation between the feature and an exactly matching region of the image may be less than the correlation between the feature and a bright spot.
- The range of c(u,v) is dependent on the size of the feature.
- Eq. (1) is not invariant to changes in image amplitude such as those caused by changing lighting conditions across the image sequence.
The correlation coefficient overcomes these difficulties by normalizing the image and feature vectors to unit length, yielding a cosine-like correlation coefficient

    \gamma(u,v) = \frac{ \sum_{x,y} \left[ f(x,y) - \bar{f}_{u,v} \right] \left[ t(x-u,y-v) - \bar{t}\, \right] }{ \left\{ \sum_{x,y} \left[ f(x,y) - \bar{f}_{u,v} \right]^{2} \; \sum_{x,y} \left[ t(x-u,y-v) - \bar{t}\, \right]^{2} \right\}^{0.5} }        (2)

where \bar{t} is the mean of the feature and \bar{f}_{u,v} is the mean of f(x,y) in the region under the feature. We refer to (2) as normalized cross-correlation.
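A straightforward spatial-domain sketch of (2), before any of the speedups discussed later (Python/numpy; the function name is ours):

```python
import numpy as np

def ncc_direct(f, t):
    """Direct evaluation of the correlation coefficient (2) for every
    placement of the feature t fully inside the image f.  No attempt at
    efficiency; used here only to define the quantity being computed."""
    f = np.asarray(f, dtype=np.float64)
    t = np.asarray(t, dtype=np.float64)
    th, tw = t.shape
    out_v = f.shape[0] - th + 1
    out_u = f.shape[1] - tw + 1
    t_zero = t - t.mean()                      # feature with its mean removed
    t_norm = np.sqrt((t_zero ** 2).sum())
    gamma = np.zeros((out_v, out_u))
    for v in range(out_v):
        for u in range(out_u):
            win = f[v:v + th, u:u + tw]        # image region under the feature
            w_zero = win - win.mean()
            denom = np.sqrt((w_zero ** 2).sum()) * t_norm
            gamma[v, u] = (w_zero * t_zero).sum() / denom if denom > 0 else 0.0
    return gamma
```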

Feature Tracking Approaches and Issues


It is clear that normalized cross-correlation (NCC) is not the ideal approach to feature tracking since it is not invariant with respect to imaging scale, rotation, and perspective distortions. These limitations have been addressed in various schemes including some that incorporate NCC as a component. This paper does not advocate the choice of NCC over alternate approaches. Rather, the following discussion will point out some of the issues involved in various approaches to feature tracking, and will conclude that NCC is a reasonable choice for some applications.
SSDA. The basis of the sequential similarity detection algorithm (SSDA) [2] is the observation that full precision is only needed near the maximum of the cross-correlation function, while reduced precision can be used elsewhere. The authors of [2] describe several ways of implementing `reduced precision'. An SSDA implementation of cross-correlation proceeds by computing the summation in (1) in random order and uses the partial computation as a Monte Carlo estimate of whether the particular match location will be near a maximum of the correlation surface. The computation at a particular location is terminated before completing the sum if the estimate suggests that the location corresponds to a poor match.
The SSDA algorithm is simple and provides a significant speedup over spatial domain cross-correlation. It has the disadvantage that it does not guarantee finding the maximum of the correlation surface. SSDA performs well when the correlation surface has shallow slopes and broad maxima. While this condition is probably satisfied in many applications, it is evident that images containing arrays of objects (pebbles, bricks, other textures) can generate multiple narrow extrema in the correlation surface and thus mislead an SSDA approach. A secondary disadvantage of SSDA is that it has parameters that need to be determined (the number of terms used to form an estimate of the correlation coefficient, and the early termination threshold on this estimate).
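As an illustration of the partial-sum idea (not a reconstruction of the specific scheme in [2]), the following sketch accumulates an error measure in random pixel order and stops once it exceeds a threshold; the use of absolute differences and the threshold parameter are assumptions made here for illustration only.

```python
import numpy as np

def ssda_partial_score(f, t, u, v, threshold, rng=None):
    """Accumulate |f - t| over the window at (u, v) in random order and
    terminate early once the partial sum exceeds the threshold.
    Returns the (possibly partial) error sum and the number of terms used."""
    rng = np.random.default_rng() if rng is None else rng
    f = np.asarray(f, dtype=np.float64)
    t = np.asarray(t, dtype=np.float64)
    th, tw = t.shape
    win = f[v:v + th, u:u + tw]
    order = rng.permutation(t.size)            # random visiting order
    err = 0.0
    for count, idx in enumerate(order, start=1):
        y, x = divmod(int(idx), tw)
        err += abs(win[y, x] - t[y, x])
        if err > threshold:                    # early termination: poor match
            return err, count
    return err, t.size
```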
Gradient Descent Search. If it is assumed that feature translation between adjacent frames is small then the translation (and parameters of an affine warp in [19]) can be obtained by gradient descent [12].
Successful gradient descent search requires that the interframe translation be less than the radius of the basin surrounding the minimum of the matching error surface. This condition may be satisfied in many applications. Image sequences from hand-held cameras can violate this requirement, however: small rotations of the camera can cause large object translations. Small or (as with SSDA) textured templates result in matching error surfaces with narrow extrema and thus constrain the range of interframe translation that can be successfully tracked. Another drawback of gradient descent techniques is that the search is inherently serial, whereas NCC permits parallel implementation.
Snakes. Snakes (active contour models) have the disadvantage that they cannot track objects that do not have a definable contour. Some ``objects'' do not have a clearly defined boundary (whether due to intrinsic fuzziness or due to lighting conditions), but nevertheless have a characteristic distribution of color that may be trackable via cross-correlation. Active contour models address a more general problem than that of simple template matching in that they provide a representation of the deformed contour over time. Cross-correlation can track objects that deform over time, but with obvious and significant qualifications that will not be discussed here. Cross-correlation can also easily track a feature that moves by a significant fraction of its own size across frames, whereas this amount of translation could put a snake outside of its basin of convergence.
Wavelets and other multi-resolution schemes. Although the existence of a useful convolution theorem for wavelets is still a matter of discussion (e.g., [11]; in some schemes wavelet convolution is in fact implemented using the Fourier convolution theorem), efficient feature tracking can be implemented with wavelets and other multi-resolution representations using a coarse-to-fine multi-resolution search. Multi-resolution techniques require, however, that the images contain sufficient low frequency information to guide the initial stages of the search. As discussed in section 6, ideal features are sometimes unavailable and one must resort to poorly defined ``features'' that may have little low-frequency information, such as a configuration of small spots on an otherwise uniform surface.
Each of the approaches discussed above has been advocated by various authors, but there are fewer comparisons between approaches. Reference [19] derives an optimal feature tracking scheme within the gradient search framework, but the limitations of this framework are not addressed. An empirical study of five template matching algorithms in the presence of various image distortions [4] found that NCC provides the best performance in all image categories, although one of the cheaper algorithms performs nearly as well for some types of distortion. A general hierarchical framework for motion tracking is discussed in [1]. A correlation based matching approach is selected though gradient approaches are also considered.
Despite the age of the NCC algorithm and the existence of more recent techniques that address its various shortcomings, it is probably fair to say that a suitable replacement has not been universally recognized. NCC makes few requirements on the image sequence and has no parameters to be searched by the user. NCC can be used `as is' to provide simple feature tracking, or it can be used as a component of a more sophisticated (possibly multi-resolution) matching scheme that may address scale and rotation invariance, feature updating, and other issues. The choice of the correlation coefficient over alternative matching criteria such as the sum of absolute differences has also been justified as maximum-likelihood estimation [18]. We acknowledge NCC as a default choice in many applications where feature tracking is not in itself a subject of study, as well as an occasional building block in vision and pattern recognition research (e.g. [3]). A fast algorithm is therefore of interest.

Transform Domain Computation


Consider the numerator in (2) and assume that we have images f'(x,y) and t'(x,y) in which the mean value has already been removed:

    \sum_{x,y} f'(x,y)\, t'(x-u, y-v)                                          (3)

For a search window of size M^2 and a feature of size N^2, (3) requires approximately N^2 (M-N+1)^2 additions and N^2 (M-N+1)^2 multiplications.
Eq. (3) is a convolution of the image with the reversed feature t'(-x,-y) and can be computed by
    \mathcal{F}^{-1}\left\{ \mathcal{F}(f')\, \mathcal{F}^{*}(t') \right\}                                (4)

where \mathcal{F} is the Fourier transform. The complex conjugate accomplishes reversal of the feature via the Fourier transform property \mathcal{F}\left[ t(-x,-y) \right] = \mathcal{F}^{*}\left[ t(x,y) \right] for real t.
Implementations of the FFT algorithm generally require that f' and t' be extended with zeros to a common power of two. The cost of computing (3) via the transforms in (4) is then proportional to M^2 \log_2 M real multiplications and additions/subtractions.
When M is much larger than N the complexity of the direct `spatial' computation (3) is approximately N^2 M^2 multiplications/additions, and the direct method is faster than the transform method. The transform method becomes relatively more efficient as N approaches M and with larger M, N.
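A sketch of the transform-domain correlation (4) with the zero-padding just described (Python/numpy; the function name and the padding details are ours):

```python
import numpy as np

def correlate_fft(f_prime, t_prime):
    """Cross-correlation (3) computed via Eq. (4): inverse transform of
    F(f') times the conjugate of F(t'), after zero-padding both arrays to a
    common power-of-two size."""
    fh, fw = f_prime.shape
    th, tw = t_prime.shape
    H = 1 << int(np.ceil(np.log2(fh + th - 1)))    # padded sizes (powers of two)
    W = 1 << int(np.ceil(np.log2(fw + tw - 1)))
    Ff = np.fft.fft2(f_prime, s=(H, W))
    Ft = np.fft.fft2(t_prime, s=(H, W))
    cc = np.real(np.fft.ifft2(Ff * np.conj(Ft)))
    # cc[v, u] = sum_{x,y} f'(x,y) t'(x-u, y-v) for non-negative shifts
    # within the padded area.
    return cc
```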

Fast Convolution


There are several well known ``fast'' convolution algorithms that do not use transform domain computation [13]. These approaches fall into two categories: algorithms that trade multiplications for additional additions, and approaches that find a lower point on the O(N^2) characteristic of (one-dimensional) convolution by embedding sections of a one-dimensional convolution into separate dimensions of a smaller multidimensional convolution. While faster than direct convolution these algorithms are nevertheless slower than transform domain convolution at moderate sizes [13] and in any case they do not address computation of the denominator of (2).

Phase Correlation


Because (4) can be efficiently computed in the transform domain, several transform domain methods of approximating the image energy normalization in (2) have been developed. Variation in the image energy under the template can be reduced by high-pass filtering the image before cross-correlation. This filtering can be conveniently added to the frequency domain processing, but selection of the cutoff frequency is problematic--a low cutoff may leave significant image energy variations, whereas a high cutoff may remove information useful to the match.
A more robust approach is phase correlation [9]. In this approach the transform coefficients are normalized to unit magnitude prior to computing correlation in the frequency domain. Thus, the correlation is based only on phase information and is insensitive to changes in image intensity. Although experience has shown this approach to be successful, it has the drawback that all transform components are weighted equally, whereas one might expect that insignificant components should be given less weight. In principle one should select the spectral pre-filtering so as to maximize the expected correlation signal-to-noise ratio given the expected second order moments of the signal and signal noise. This approach is discussed in [16] and is similar to the classical matched filtering random signal processing technique. With typical (low-pass) image spectra the best pre-filtering is approximately Laplacian rather than a pure whitening.
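A minimal sketch of phase correlation as described above (equal-sized images are assumed; the function name and the eps guard are ours):

```python
import numpy as np

def phase_correlation(f, g, eps=1e-12):
    """Normalize the cross-power spectrum to unit magnitude so the result
    depends only on phase, then locate the peak of the correlation surface."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    R = F * np.conj(G)
    R = R / np.maximum(np.abs(R), eps)          # unit-magnitude coefficients
    surface = np.real(np.fft.ifft2(R))
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    return peak, surface                        # peak location gives the shift
```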

Normalizing


Examining again the numerator of (2), we note that the mean of the feature can be precomputed, leaving

    \sum_{x,y} \left[ f(x,y) - \bar{f}_{u,v} \right] t'(x-u,y-v) \;=\; \sum_{x,y} f(x,y)\, t'(x-u,y-v) \;-\; \bar{f}_{u,v} \sum_{x,y} t'(x-u,y-v)

Since t' has zero mean and thus zero sum, the term \bar{f}_{u,v} \sum_{x,y} t'(x-u,y-v) is also zero, so the numerator of the normalized cross-correlation can be computed using (4).
Examining the denominator of (2), the length of the feature vector can be precomputed in approximately N^2 operations (small compared to the cost of the cross-correlation), and in fact the feature can be pre-normalized to length one.
The problematic quantities are those in the expression \sum_{x,y} \left[ f(x,y) - \bar{f}_{u,v} \right]^{2}. The image mean and local (RMS) energy must be computed at each u,v, i.e. at (M-N+1)^2 locations, resulting in almost 3 N^2 (M-N+1)^2 operations (counting add, subtract, multiply as one operation each). This computation is more than is required for the direct computation of (3) and it may considerably outweigh the computation indicated by (4) when the transform method is applicable. A more efficient means of computing the image mean and energy under the feature is desired.
These quantities can be efficiently computed from tables containing the integral (running sum) of the image and of the image squared over the search area, i.e.,

    s(u,v) = f(u,v) + s(u-1,v) + s(u,v-1) - s(u-1,v-1)

    s^{2}(u,v) = f^{2}(u,v) + s^{2}(u-1,v) + s^{2}(u,v-1) - s^{2}(u-1,v-1)

with s(u,v) = s^{2}(u,v) = 0 when either u,v < 0. The energy of the image under the feature positioned at u,v is then

    e_{f}(u,v) = s^{2}(u+N-1, v+N-1) - s^{2}(u-1, v+N-1) - s^{2}(u+N-1, v-1) + s^{2}(u-1, v-1)
and similarly for the image sum under the feature.
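The table construction and the four-reference window sum can be written directly from these recurrences. The following sketch uses floating point for brevity, although (as noted below) large-integer arithmetic is preferable; the function names are ours.

```python
import numpy as np

def sum_tables(f):
    """Running-sum tables s(u,v) and s2(u,v) built from the recurrences above;
    entries with a negative index are treated as zero."""
    f = np.asarray(f, dtype=np.float64)
    s = np.zeros(f.shape)
    s2 = np.zeros(f.shape)
    for u in range(f.shape[0]):
        for v in range(f.shape[1]):
            a = s[u - 1, v] if u > 0 else 0.0
            b = s[u, v - 1] if v > 0 else 0.0
            c = s[u - 1, v - 1] if (u > 0 and v > 0) else 0.0
            s[u, v] = f[u, v] + a + b - c
            a = s2[u - 1, v] if u > 0 else 0.0
            b = s2[u, v - 1] if v > 0 else 0.0
            c = s2[u - 1, v - 1] if (u > 0 and v > 0) else 0.0
            s2[u, v] = f[u, v] ** 2 + a + b - c
    return s, s2

def window_energy(s2, u, v, N):
    """Energy (sum of f^2) under an N x N feature positioned at (u, v),
    from four references into the s2 table as in the expression above."""
    a = s2[u + N - 1, v + N - 1]
    b = s2[u - 1, v + N - 1] if u > 0 else 0.0
    c = s2[u + N - 1, v - 1] if v > 0 else 0.0
    d = s2[u - 1, v - 1] if (u > 0 and v > 0) else 0.0
    return a - b - c + d
```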
The problematic quantity \sum_{x,y} \left[ f(x,y) - \bar{f}_{u,v} \right]^{2} can now be computed with very few operations, since it expands into an expression involving only the image sum and the sum of the image squared under the feature,

    \sum_{x,y} \left[ f(x,y) - \bar{f}_{u,v} \right]^{2} = \sum_{x,y} f^{2}(x,y) - \frac{1}{N^{2}} \Big( \sum_{x,y} f(x,y) \Big)^{2}

both of which are obtained from the tables. The construction of the tables requires only a few operations per pixel of the search window (of order M^2 operations in total), which is less than the cost of computing the numerator by (4) and considerably less than the 3 N^2 (M-N+1)^2 operations required to compute \sum_{x,y} \left[ f(x,y) - \bar{f}_{u,v} \right]^{2} directly at each u,v.
This technique of computing a definite sum from a precomputed running sum has been independently used in a number of fields; a computer graphics application is developed in [5]. If the search for the maximum of the correlation surface is done in a systematic row-scan order it is possible to combine the table construction and reference through state variables and so avoid explicitly storing the table. When implemented on a general purpose computer the size of the table is not a major consideration, however, and flexibility in searching the correlation surface can be advantageous. Note that the s(u,v) and s^{2}(u,v) expressions are marginally stable, meaning that their z-transform 1/(1 - z^{-1}) (one-dimensional version here) has a pole at z = 1, whereas stability requires poles to be strictly inside the unit circle [14]. The computation should thus use large integer rather than floating point arithmetic.
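Putting the pieces together, the following is a self-contained sketch of the fast normalized cross-correlation described in this section: the numerator via transform-domain correlation with the zero-mean feature, the denominator via running-sum tables (built here with cumulative sums and an extra zero row and column instead of explicit index tests). Function and variable names are ours, and floating point is used for brevity.

```python
import numpy as np

def fast_ncc(f, t, eps=1e-12):
    """Fast normalized cross-correlation: FFT-based numerator,
    running-sum-table denominator.  Returns gamma(u, v) for every placement
    of the feature t fully inside the image f."""
    f = np.asarray(f, dtype=np.float64)
    t = np.asarray(t, dtype=np.float64)
    th, tw = t.shape
    out_v = f.shape[0] - th + 1              # number of valid placements
    out_u = f.shape[1] - tw + 1
    n = th * tw

    t_zero = t - t.mean()                    # pre-normalized (zero-mean) feature
    t_norm = np.sqrt((t_zero ** 2).sum())

    # Numerator, Eq. (4): zero-pad to a common power of two and correlate.
    H = 1 << int(np.ceil(np.log2(f.shape[0] + th - 1)))
    W = 1 << int(np.ceil(np.log2(f.shape[1] + tw - 1)))
    num = np.real(np.fft.ifft2(np.fft.fft2(f, s=(H, W)) *
                               np.conj(np.fft.fft2(t_zero, s=(H, W)))))
    num = num[:out_v, :out_u]                # t_zero has zero sum, so this equals
                                             # the mean-removed numerator of (2)

    # Running-sum tables of the image and image squared, padded with a zero
    # row and column so negative indices need no special case.
    s = np.zeros((f.shape[0] + 1, f.shape[1] + 1))
    s2 = np.zeros_like(s)
    s[1:, 1:] = f.cumsum(axis=0).cumsum(axis=1)
    s2[1:, 1:] = (f * f).cumsum(axis=0).cumsum(axis=1)

    def window(tab):
        # Sum over every feature-sized window, four table references each.
        return tab[th:, tw:] - tab[:-th, tw:] - tab[th:, :-tw] + tab[:-th, :-tw]

    win_sum = window(s)                      # sum of f under the feature
    win_energy = window(s2)                  # sum of f^2 under the feature
    var = np.maximum(win_energy - win_sum ** 2 / n, 0.0)   # sum of (f - mean)^2

    denom = np.sqrt(var) * t_norm
    return np.where(denom > eps, num / denom, 0.0)

# Example use:
#   gamma = fast_ncc(image, template)
#   v, u = np.unravel_index(np.argmax(gamma), gamma.shape)
```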

Figure 1: Measured relative performance of transform domain versus spatial domain normalized cross-correlation as a function of the search window size (depth axis) and the ratio of the feature size to search window size.



 

Table 1: Two tracking sequences from Forrest Gump were re-timed using both direct and fast NCC algorithms, using identical features and search windows, on a 100 MHz R4000 processor. These times include a sub-pixel search [17] at the location of the best whole-pixel match. The sub-pixel search was computed using Eq. (2) (direct convolution) in all cases.



 

Table 2: Measured tracking times on a short sequence using the commercial Flint system and the algorithm described in the text. These are wall-clock times obtained on an unloaded 200 MHz R4400 processor with 380 megabytes of memory (no swapping occurred). Flint settings were MATCH LUM(ONLY), FIXED REF, OVERSAMPLE OFF. It appears that subpixel search is only available in the more expensive Flame system.

Performance



Figure 2: A tracked feature from a special effects sequence in the movie Forrest Gump. The region is out of focus and has noticeable film-grain noise across frames. A small area from this region would not provide a usable feature; the chosen feature is considerably larger.


The performance of this algorithm will be discussed in the context of special effects image processing. The integration of synthetic and processed images into special effects sequences often requires accurate tracking of sequence movement and features. The use of automated feature tracking in special effects was pioneered in movies such as Cliffhanger, Forrest Gump, and Speed. Recently cross-correlation based feature trackers have been introduced in commercial image compositing systems such as Flame/Flint [20], Matador, Advance [21], and After Effects [22].
The algorithm described in this paper was developed for the movie Forrest Gump (1994), and has been used in a number of subsequent projects. Special effects sequences in that movie included the replacement of various moving elements and the addition of a contemporary actor into historical film and video sequences. Manually picked features from one frame of a sequence were automatically tracked over the remaining frames; this information was used as the basis for further processing.
The relative performance of our algorithm is a function of both the search window size and the ratio of the feature size to search window size. Relative performance increases along the window size axis (Fig. 1); a higher resolution plot would show an additional ripple reflecting the relation between the search window size and the bounding power of two. The property that the relative performance is greater on larger problems is desirable. Table 1 illustrates the performance obtained in a special effects feature tracking application. Table 2 compares the performance of our algorithm with that of a high-end commercial image compositing package.
Note that while a small feature size would suffice in an ideal digital image, in practice much larger feature sizes and search windows are sometimes required or preferred, for example when the candidate region is out of focus, noisy, or lacking in well defined detail (cf. Fig. 2). As a result, considerably larger features and search windows are often employed.
The fast algorithm in some cases reduces high-resolution feature tracking from an overnight to an over-lunch procedure. With lower (``proxy'') resolution and faster machines, semi-automated feature tracking is tolerable in an interactive system. Certain applications in other fields may also benefit from the algorithm described here.

References


1
P. Anandan, ``A Computational Framework and an Algorithm for the Measurement of Visual Motion'', Int. J. Computer Vision, 2(3), p. 283-310, 1989.
2
D. I. Barnea, H. F. Silverman, ``A class of algorithms for fast digital image registration'', IEEE Trans. Computers, 21, pp. 179-186, 1972.
3
R. Brunelli and T. Poggio, ``Face Recognition: Features versus Templates'', IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 10, pp. 1042-1052, 1993.
4
P. J. Burt, C. Yen, X. Xu, ``Local Correlation Measures for Motion Analysis: a Comparative Study'', IEEE Conf. Pattern Recognition Image Processing 1982, pp. 269-274.
5
F. Crow, ``Summed-Area Tables for Texture Mapping'', Computer Graphics, vol 18, No. 3, pp. 207-212, 1984.
6
R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, New York: Wiley, 1973.
7
R. C. Gonzalez and R. E. Woods, Digital Image Processing (third edition), Reading, Massachusetts: Addison-Wesley, 1992.
8
A. Goshtasby, S. H. Gage, and J. F. Bartholic, ``A Two-Stage Cross-Correlation Approach to Template Matching'', IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 6, no. 3, pp. 374-378, 1984.
9
C. Kuglin and D. Hines, ``The Phase Correlation Image Alignment Method,'' Proc. Int. Conf. Cybernetics and Society, 1975, pp. 163-165.
10
J. P. Lewis, ``Fast Template Matching'', Vision Interface, p. 120-123, 1995.
11
A. R. Lindsey, ``The Non-Existence of a Wavelet Function Admitting a Wavelet Transform Convolution Theorem of the Fourier Type'', Rome Laboratory Technical Report C3BB, 1995.
12
B. D. Lucas and T. Kanade, ``An Iterative Image Registration Technique with an Application to Stereo Vision'', IJCAI 1981.
13
S. K. Mitra and J. F. Kaiser, Handbook for Digital Signal Processing, New York: Wiley, 1993.
14
A. V. Oppenheim and R. W. Schafer, Digital Signal Processing, Englewood Cliffs, New Jersey: Prentice-Hall, 1975.
15
D. Polk, ``Product Probe'' - Panasonic PV-IQ604, Videomaker, October 1994, pp. 55-57.
16
W. Pratt, Digital Image Processing, John Wiley, New York, 1978.
17
Qi Tian and M. N. Huhns, ``Algorithms for Subpixel Registration'', CVGIP 35, p. 220-233, 1986.
18
T. W. Ryan, ``The Prediction of Cross-Correlation Accuracy in Digital Stereo-Pair Images'', PhD thesis, University of Arizona, 1981.
19
J. Shi and C. Tomasi, ``Good Features to Track'', Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 1994.
20
Flame effects compositing software, Discreet Logic, Montreal, Quebec.
21
Advance effects compositing software, Avid Technology, Inc., Tewksbury, Massachusetts.
22
After Effects effects compositing software, Adobe (COSA), Mountain View, California.