This invention relates to surveillance and counter-surveillance classification of military vehicles using one-dimensional analysis of target images. A two-dimensional image is digitized into an n by n pixel matrix which is summed along each row and column to produce a pair of n-component vectors which are invariant under image translation or rotation. A one-dimensional Fourier transform can be obtained from either or both of these two vectors. The image vectors are useful in the identification of U.S. and threat vehicle targets, or in the detection of target image motion.

Patent: 4,490,851
Priority: Apr 16, 1982
Filed: Apr 16, 1982
Issued: Dec 25, 1984
Expiry: Apr 16, 2002
Status: Expired, Reinstated
1. A method for obtaining target vehicle identification values representing a target outline configuration within a detector's field of view, comprising the steps of:
a. digitizing the target image into a two dimensional array of pixels;
b. summing target image pixels along one scan dimension to provide a first image vector b;
c. summing target image pixels along the other orthogonal dimension to provide a second image vector c;
d. shifting image vector b normal to the scan direction such that the maximum component is at a fixed reference point;
e. Fourier transforming the shifted image vector in step d to obtain F;
f. obtaining the real part of F by summing F and its conjugate F* and dividing the resultant vector by 2;
g. obtaining the imaginary part of F by subtracting F* from F and dividing the resultant vector by 2;
h. obtaining a numerical value representing the symmetric part of F in step e by integrating over the square of the real part of F;
i. obtaining a numerical value representing the asymmetric part of F by integrating over the product of the imaginary part of F and the conjugate of the imaginary part of F;
j. obtaining the identification value of image vector b by dividing the numerical value in step h by the sum of the values in steps h and i;
k. repeating steps d through j, using the second image vector c; and
l. in a look-up table predefined in memory, looking up the vehicle corresponding to the identification values obtained by following steps a through k.

The invention described herein may be manufactured, used, and licensed by or for the Government for governmental purposes without payment to us of any royalty thereon.

U.S. Pat. No. 3,836,712 to P. G. Kornreich and S. T. Kowel discloses a pictorial information digital simulation technique that operates on the irradiance distribution of an image as a function of position. The device is a real time sensor which optimally determines the one-dimensional Fourier transform of a two-dimensional image, using the photo-acoustic surface properties of physical materials. This incoherent light device outputs to a single pair of terminals a set of time sequential voltages, each of which indicates the phase and amplitude of a particular Fourier component. The resulting ac output represents a bandwidth limited, unidirectional Fourier transform of the image. This procedure creates a one-dimensional function out of the original two-dimensional image.

The present invention involves a somewhat analogous technique of Fourier transforming an image, using a digital technique that treats the image irradiance as a function of position. The detected image is digitized, either directly on a detector or subsequently through an analog-to-digital converter, to produce an n×n pixel matrix whose elements correspond to the irradiance of the image at each pixel position. A large amount of data is quickly reduced to a single pair of n-component vectors by summing along the rows or columns of the pixel matrix. A one-dimensional Fourier transform representation of the image can be obtained from either of the two vectors. The n²-component pixel matrix is thus reduced to a pair of n-component image vectors, so the number of data elements is effectively reduced from n² to 2n. In an illustrative case, a 512×512 matrix is reduced from over a quarter million elements to 1024 elements, a reduction in data of more than 99%. The signal-to-noise ratio of each component in the image vectors is increased approximately by a factor of √n. Data storage requirements are reduced significantly by scanning the image line by line and repeatedly adding into a single storage block of n words. Computation times are also reduced compared with operations on a two-dimensional Fourier transform of the image, which may require over a million complex operations and take a digital computer several minutes to complete, depending on the speed and memory capacity of the computer.
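As a minimal sketch of this row/column reduction, assuming the digitized frame is held in a numpy array (the function and variable names below are illustrative, not part of the patent):

```python
import numpy as np

def image_vectors(pixels):
    """Reduce an n x n pixel matrix [a] to its two n-component image vectors.

    b[i] is the sum of row i (one scan dimension); c[j] is the sum of
    column j (the orthogonal dimension).
    """
    b = pixels.sum(axis=1)  # sum along each horizontal scan line
    c = pixels.sum(axis=0)  # sum along each vertical scan line
    return b, c

# Illustrative 512 x 512 frame: 262,144 pixels reduce to 2 x 512 = 1024 values.
frame = np.random.rand(512, 512)
b, c = image_vectors(frame)
F_b = np.fft.fft(b)  # one-dimensional Fourier transform of one image vector
```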

FIG. 1 schematically illustrates apparatus embodying our invention.

FIGS. 2 and 3 show typical vehicle silhouettes together with their corresponding image vectors and one-dimensional Fourier transforms obtained using the FIG. 1 apparatus.

FIG. 4 illustrates a method for utilizing Fourier transforms to calculate vehicle identification numbers.

FIG. 5 illustrates typical vehicle identification numbers obtained by using the FIG. 4 calculation methods.

Referring in greater detail to FIG. 1, there is shown a detector 10, which may be a visual or infrared imaging sensor, directed on a distant target vehicle 12 of unknown friendly or unfriendly character. Detector 10 includes, or has associated therewith, a digitizer 14 that is electrically connected to a Fourier analyzer 16. Detector 10 and digitizer 14 typically reproduce the target image on a monitor 18, shown in FIGS. 2 and 3. Digital adders sum the target video picture (pixel) elements along each horizontal or vertical scan direction. Pixel elements in the background (non-target) scene are ignored or discarded. A representative video image system may include 512 scan lines, each line containing 512 pixel elements.

In FIGS. 2 and 3 curve 22 depicts a vector where the target pixel elements are summed along the horizontal scan lines; curve 24 depicts the corresponding vectors of target pixel elements which are summed along the vertical scan lines. These summation vectors can be processed in a Fourier analyzer 16 (FIG. 1) to produce the one-dimensional Fourier transforms 26 and 28 of FIGS. 2 and 3. This process transforms signals 22 and 24 from the time domain into the frequency domain. U.S. Pat. No. 3,466,431 to A. M. Fuchs and S. C. Catania describes the general operation of a Fourier analyzer.

Summation vectors 22 and 24 are obtained according to the following general equations:

b_i = Σ_j a_ij,   c_j = Σ_i a_ij   (i, j = 1, . . . , n)

where [a] is the pixel matrix, and b and c are the resultant orthogonal n-component image vectors. A one-dimensional Fourier transform of the image can be obtained from either of these two vectors. One particular application for this concept is in determining a measure of detectability and identification for U.S. and threat vehicle targets. This particular algorithm involves the symmetry of the image vector about some unique point. If the maximum component 30 (FIGS. 2 and 3) in the image vector is chosen as the reference point, then the vector functions b and c can be separated into respective even and odd functions about this point of reference. If the image vectors are translated so the peak component is centered at the origin, then the even-odd function decomposition has a particularly simple form. The decomposition for b is given as follows:

b_s,i = 1/2 [b_i + b_-i]

b_a,i = 1/2 [b_i - b_-i]

where b_s and b_a are respectively the symmetric and asymmetric parts of the general vector function b. This decomposition procedure is performed in the time domain, and it is unique only when the reference point is positioned at the origin.

A numerical vehicle identification value I, normalized from zero to 100%, can be calculated from the symmetric/asymmetric function decomposition. A method for calculating I is illustrated in FIG. 4. The image vector f is positioned to a fixed reference point in the time domain. The vector F is the Fourier transform of f. The real part of F(ω) corresponds to the symmetric component of f and the imaginary part of F(ω) to the asymmetric component of f. If the DC frequency component is zero, then the area under the real part F_R and the imaginary part F_I of F = F_R + iF_I is zero. The components of |F_R|^2 and |F_I|^2 are positive; consequently, the total integrated areas under |F_R|^2 and |F_I|^2 are positive and sum to the area under |F|^2. The identification value I is calculated by the following expression:

I = ∫|F_R|^2 dω / ( ∫|F_R|^2 dω + ∫|F_I|^2 dω )

The quantity I can be calculated for either of the two image vectors and is invariant under scale and translation transformations. These values of I can be tabulated in a look-up table for vehicle identification. FIG. 5 shows identification values for the vehicles depicted in FIGS. 2 and 3.
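The FIG. 4 calculation can be sketched as follows, assuming a discrete FFT and treating the integrals as sums over frequency components (the function name and the use of numpy are assumptions, not part of the patent):

```python
import numpy as np

def identification_value(v):
    """Identification value I (0 to 100%) for one image vector, per FIG. 4.

    The peak component is shifted to the origin, the shifted vector is
    Fourier transformed, and I is the symmetric (real-part) energy divided
    by the total energy of the transform.
    """
    shifted = np.roll(v, -int(np.argmax(v)))  # peak moved to the fixed reference point
    F = np.fft.fft(shifted)
    sym = np.sum(np.real(F) ** 2)   # area under |F_R|^2 (symmetric part)
    asym = np.sum(np.imag(F) ** 2)  # area under |F_I|^2 (asymmetric part)
    return 100.0 * sym / (sym + asym)
```

Applied to both image vectors b and c, the resulting pair of values indexes the look-up table of identification numbers illustrated in FIG. 5.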

The use of the derivative f' of the time space vector f adds flexibility to the above algorithm. This operation multiplies F point by point by -iω and enhances the higher frequencies, attenuates the lower frequencies and eliminates any DC component. The identification factor I is now computed for this new f' vector to identify the vehicle.
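A sketch of this derivative variant, assuming the same peak-shift step is applied before differentiating in the frequency domain (the ordering and names are assumptions):

```python
import numpy as np

def identification_value_derivative(v):
    """Identification value computed on the derivative f' of the image vector.

    Multiplying F point by point by -i*omega enhances high frequencies,
    attenuates low frequencies, and removes the DC component before the
    symmetric/asymmetric energy ratio is formed.
    """
    shifted = np.roll(v, -int(np.argmax(v)))
    F = np.fft.fft(shifted)
    omega = 2.0 * np.pi * np.fft.fftfreq(len(v))
    Fp = -1j * omega * F  # transform of the derivative f'
    sym = np.sum(np.real(Fp) ** 2)
    asym = np.sum(np.imag(Fp) ** 2)
    return 100.0 * sym / (sym + asym)
```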

A useful property of the above algorithm is that I is the same for the right and left sides of the vehicle if they are mirror images of one another. The mirror image vector is obtained by setting t→-t. The factor I is independent of this time inversion.

Since a translational shift in the time domain corresponds to a phase shift in the frequency domain, the quantity I is highly dependent upon the first step in the algorithm, which shifts the peak component of f to a fixed reference point. The presence of noise spikes and other irregularities in the vehicle signal causes serious errors in the location of the peak vector component. Image vectors which are nearly flat in regions near the peak component, or which have multiple relative maxima with nearly the same values, can also produce major errors. One solution which reduces these problems smooths the interval containing the peak-valued component of f by convolving it with a mean filter consisting of a rectangular bar n channels wide with unit area. This operation decreases the error due to peak value location, at the expense of decreased high frequency resolution. This procedure may be repeated for several iterations to obtain the time coordinate of the peak of the smoothed vector f. This point is taken as the reference point for the image vector f and is shifted to the origin to compute the identification value I of the image. Shifting to this point improves the consistency of the results.
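A brief sketch of this peak-location step, assuming the mean filter is applied over the whole vector rather than only the interval around the peak (a simplification; the names and default parameters are illustrative):

```python
import numpy as np

def smoothed_peak_index(v, width=5, iterations=3):
    """Locate the peak of an image vector after repeated mean-filter smoothing.

    The vector is convolved with a rectangular window `width` channels wide
    with unit area; repeating the smoothing reduces errors from noise spikes
    and flat or multi-peaked regions, at the cost of high-frequency resolution.
    """
    kernel = np.ones(width) / width  # rectangular bar, unit area
    smoothed = np.asarray(v, dtype=float)
    for _ in range(iterations):
        smoothed = np.convolve(smoothed, kernel, mode="same")
    return int(np.argmax(smoothed))  # reference point to be shifted to the origin
```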

The two image vectors b and c are also useful to detect motion in an image. This can be very beneficial in surveillance applications. By continuously scanning and summing an image from a fixed camera position, and then computing the difference between scans, we can detect if anything in the image has changed. Any non-zero components in the difference vectors imply some form of motion along the summed line corresponding to that point. By comparing the differences between three or more scans and knowing the approximate distance to the target we can determine the direction and velocity of the motion.
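A minimal sketch of this frame-differencing check, assuming two successive digitized frames from the fixed camera and a hypothetical noise threshold:

```python
import numpy as np

def changed_scan_lines(prev_frame, next_frame, threshold=0.0):
    """Return indices of horizontal scan lines whose summed image vector changed.

    Non-zero components of the difference vector imply motion somewhere along
    the corresponding summed line; `threshold` is a hypothetical noise margin.
    """
    diff = next_frame.sum(axis=1) - prev_frame.sum(axis=1)
    return np.flatnonzero(np.abs(diff) > threshold)
```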

A second motion detection algorithm utilizing the image vectors is based on a theorem of Fourier transforms which states that translational motion in the time domain corresponds to a pure phase shift in the frequency domain. That is, if f(t) has the Fourier transform F(ω), then f(t-Δt) has the Fourier transform e^(-iωΔt) F(ω). Each component is delayed in phase by an amount proportional to ω. The higher the frequency, the greater the change in phase angle.

Assume we are analyzing a single one-dimensional scan of an image along which an object is moving. This signal f(t) will be composed of a stationary background component b(t) and a moving part m(t). As the moving object travels across our field-of-view it masks and unmasks part of the stationary background. The image signal f(t) observed is given by:

f(t)=b(t)·h(t)+m(t)

where h(t) is a binary function which has a value of zero in the masked region and one elsewhere.

Since multiplication in the time domain corresponds to convolution (denoted ∗) in the frequency domain, we have

F(ω) = B(ω)∗H(ω) + M(ω)

where F, B, H and M are the respective Fourier transforms of f, b, h and m. If the moving part m(t) has shifted by an amount Δt, then

m(t) → m(t+Δt),  M(ω) → M(ω) e^(iΔφ)

h(t) → h(t+Δt),  H(ω) → H(ω) e^(iΔφ)

where Δφ = ωΔt.

This implies that the transform of the observed image signal becomes

F(ω) → [B(ω)∗H(ω) + M(ω)] e^(iΔφ) = F(ω) e^(iΔφ)

so the phase has shifted by Δφ while the magnitude of the transform is unaffected.

The phase of any Fourier component can be analyzed to give the time shift Δt of the signal m(t). Values of the phase of the image signal can be computed and plotted at successive time intervals. The derivative of this plot is proportional to the instantaneous velocity of the moving object.
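A short sketch of this phase-based estimate for one Fourier component of a one-dimensional scan, under the usual discrete shift-theorem sign convention (the function name and the choice of component k are assumptions):

```python
import numpy as np

def shift_from_phase(scan_prev, scan_next, k=1):
    """Estimate the translational shift (in samples) between two scans.

    For a pure translation, component k of the discrete Fourier transform
    changes phase by -omega_k * shift, so the measured phase change divided
    by -omega_k recovers the shift; tracking it over successive frames gives
    the instantaneous velocity.
    """
    dphi = np.angle(np.fft.fft(scan_next)[k]) - np.angle(np.fft.fft(scan_prev)[k])
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi  # wrap phase change to (-pi, pi]
    omega_k = 2.0 * np.pi * k / len(scan_prev)     # angular frequency of component k
    return -dphi / omega_k                          # positive = shift toward larger sample index
```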

We wish it to be understood that we do not desire to be limited to the exact details of construction shown and described, for obvious modifications will occur to a person skilled in the art.

Inventors: Graziano, James M.; Gerhart, Grant R.

References Cited:
U.S. Pat. No. 3,466,431
U.S. Pat. No. 3,803,553
U.S. Pat. No. 3,836,712
U.S. Pat. No. 3,846,752
U.S. Pat. No. 3,930,230
U.S. Pat. No. 4,316,218, Mar 28, 1980, The United States of America as represented by the Secretary, Video tracker
Assignments:
Apr 16, 1982: The United States of America as represented by the Secretary of the Army (assignment on the face of the patent)
Nov 01, 1982: Gerhart, Grant R. to The United States of America as represented by the Secretary of the Army (assignment of assignors interest, doc. 0040970497)
Nov 01, 1982: Graziano, James M. to The United States of America as represented by the Secretary of the Army (assignment of assignors interest, doc. 0040970497)
Maintenance Fee Events:
Jul 26, 1988: Maintenance fee reminder mailed
Dec 25, 1988: Patent reinstated after maintenance fee payment confirmed
Jul 28, 1992: Maintenance fee reminder mailed
Dec 27, 1992: Patent expired for failure to pay maintenance fees

