Information defining elements of a picture is estimated by interpolation using information from related locations in preceding and succeeding versions of the picture. The related locations are determined by forming an estimate of the displacement of objects in the picture. Displacement estimates are advantageously formed recursively, with updates being formed only in moving areas of the picture. If desired, an adaptive technique can be used to select between motion compensated interpolation and fixed position interpolation, depending upon which produces better results.

Patent
   4383272
Priority
Apr 13 1981
Filed
Apr 13 1981
Issued
May 10 1983
Expiry
Apr 13 2001
18. A method of estimating the intensity values of each element (pel) of a picture being processed by interpolating between the intensity values of related pels in first and second other versions of said picture, including the steps of
estimating the displacement of objects in said picture occurring between said other versions, and
selecting said related pels in accordance with said displacement estimate.
6. Apparatus for estimating the intensity values of each element (pel) of a picture being processed by interpolating between the intensity values of related pels in first and second other versions of said picture, including:
means for estimating the displacement of objects in said picture occurring between said other versions, and
means for selecting said related pels in accordance with said displacement estimate.
13. A method of estimating the intensities of elements (pels) in a picture in accordance with information defining intensities of pels in preceding and succeeding versions of the picture including the step of determining by interpolation intensities of pels in said picture in accordance with intensities of pels in related locations in said preceding and succeeding versions,
characterized in that
said determining step includes selecting said related locations as a function of the displacement of objects in said picture.
1. Apparatus for estimating the intensities of elements (pels) in a picture in accordance with information defining intensities of pels in preceding and succeeding versions of the picture including means for determining by interpolation intensities of pels in said picture in accordance with intensities of pels in related locations in said preceding and succeeding versions,
characterized in that
said determining means includes means for selecting said related locations as a function of the displacement of objects in said picture.
22. A method of reducing the bandwidth needed to transmit a video signal representing a sequence of pictures by encoding the intensity values of pels in ones of said pictures in said sequence and reconstructing missing pictures using information from encoded pictures, including:
computing the intensity of pels in a missing picture by interpolating the intensity of pels in corresponding locations in the encoded ones of said pictures which precede and follow said missing picture, and
selecting said corresponding locations as a function of the displacement of objects in said picture between said preceding and following pictures.
10. Apparatus for reducing the bandwidth needed to transmit a video signal representing a sequence of pictures by encoding the intensity values of pels in ones of said pictures in said sequence and reconstructing missing pictures using information from encoded pictures, including:
means for computing the intensity of pels in a missing picture by interpolating the intensity of pels in corresponding locations in the encoded ones of said pictures which precede and follow said missing picture, and
means for selecting said corresponding locations as a function of the displacement of objects in said picture between said preceding and following pictures.
2. The invention defined in claim 1 wherein said apparatus includes:
means for storing a present estimate Di of said displacement, and
means for recursively updating said estimate for each element in said picture.
3. The invention defined in claim 2 wherein said apparatus includes means for operating said updating means only in moving areas in said picture.
4. The invention defined in claim 3 wherein said apparatus further includes:
means for computing a frame difference FD(x) indicating the intensity difference at spatially corresponding locations in preceding and succeeding versions, and
means for computing a displaced frame difference [DFD(x,D)] indicating the intensity difference at the related locations determined by said displacement estimate,
wherein said selecting means is arranged to select said displaced locations if said displaced frame difference is smaller than said frame difference and to select said corresponding locations otherwise.
5. The invention defined in claim 1 wherein said apparatus further includes:
means for storing the intensity values for pels in said preceding and succeeding versions, and
means responsive to said present displacement estimate for addressing selected ones of said stored values.
7. The invention defined in claim 6 wherein said first and second other versions occur at intervals K1 τ before and K2 τ after said picture being processed, where K1 and K2 are positive integers and τ is a predetermined constant, and wherein said related pels are at displaced locations x-K1 D and x+K2 D in said first and second versions, respectively, where x is the vector location of the pel in said presently processed picture and D is the vector representing said displacement estimate per time τ.
8. The invention defined in claim 7 wherein said displacement estimate is recursively updated such that an update term is added to each estimate to form the next estimate, where said update term is a function of the intensity difference at said displaced locations.
9. The invention defined in claim 8 wherein said apparatus further includes means for comparing said intensity difference at said displaced location with the intensity difference at the same location x in said other versions, and
means for operating said selecting means only if said displaced location intensity difference is smaller than said same location intensity difference.
11. The invention defined in claim 10 further including:
means for storing an estimate Di of said displacement, and
means for recursively updating said estimate to form a new estimate Di+1 by adding a correction term which is a joint function of (a) the intensity difference at said corresponding location, and (b) the spatial gradient of said intensity difference.
12. The invention defined in claim 11 wherein said apparatus further includes:
means for storing the intensity values of pels in said preceding and following pictures, and
means for addressing said stored values in accordance with Di to obtain the intensities at said corresponding locations.
14. The method defined in claim 13 further including the steps of:
storing a present estimate Di of said displacement, and
recursively updating said estimate for each element in said picture.
15. The method defined in claim 14 further including the step of operating said updating means only in moving areas in said picture.
16. The method defined in claim 15 further including the steps of:
computing a frame difference FD(x) indicating the intensity difference at spatially corresponding locations in preceding and succeeding versions, and
computing a displaced frame difference [DFD(x,D)] indicating the intensity difference at the related locations determined by said displacement estimate,
wherein said selecting step includes selecting said displaced locations if said displaced frame difference is smaller than said frame difference and selecting said corresponding locations otherwise.
17. The method defined in claim 13 wherein said determining step further includes:
storing the intensity values for pels in said preceding and succeeding versions, and addressing selected ones of said stored values in response to said present displacement estimate.
19. The method defined in claim 18 wherein said first and second other versions occur at intervals K1 τ before and K2 τ after said picture being processed, where K1 and K2 are positive integers and τ is a predetermined constant, and wherein said related pels are at displaced locations x-K1 D and x+K2 D in said first and second versions, respectively, where x is the vector location of the pel in said presently processed picture and D is the vector representing said displacement estimate per time τ.
20. The method defined in claim 19 wherein said displacement estimating step includes recursive updating such that an update term is added to each estimate to form the next estimate, where said update term is a function of the intensity difference at said displaced locations.
21. The method defined in claim 20 further including the steps of comparing said intensity difference at said displaced location with the intensity difference at the same location x in said other versions, and
precluding said selecting step if said displaced location intensity difference is larger than said same location intensity difference.
23. The method defined in claim 22 further including the steps of:
storing an estimate Di of said displacement, and
recursively updating said estimate to form a new estimate Di+1 by adding a correction term which is a joint function of (a) the intensity difference at said corresponding location, and (b) the spatial gradient of said intensity difference.
24. The method defined in claim 23 further including the steps of:
storing the intensity values of pels in said preceding and following pictures, and
addressing said stored values in accordance with Di to obtain the intensities at said corresponding locations.

This invention relates generally to interpolation of video signals and, in particular, to interpolation of video signals using motion estimation.

In various schemes for interframe coding of television pictures, it is advantageous to drop or discard information from some fields or frames by subsampling the video signal at a fraction of the normal rate. This is done in order to prevent overflow of the data rate equalization buffers disposed in the transmission path, or simply to increase the efficiency of the encoder by removing redundant information. At the receiver, a reconstructed version of the information contained in the nontransmitted fields or frames is obtained by interpolation, using information derived from the transmitted fields. Simple linear interpolation may be performed by averaging the intensity information defining picture elements (pels) in the preceding and succeeding transmitted fields at fixed locations which are most closely related to the location of the picture element that is presently being processed. In certain instances, the interpolation may be performed adaptively, such that the pels used to form certain reconstructed or estimated intensity values are selected from two or more groups having different spatial patterns, or such that the information obtained from pels in the same relative spatial positions in the prior and succeeding frames is combined in two or more different ways.

Both the aforementioned fixed and adaptive interpolative techniques are adequate to estimate and thus recover the nontransmitted picture information when little motion occurs in the picture. However, where objects, particularly those with a high degree of detail, are moving quickly in the field of view of the television camera, dropping fields or frames in the encoder and subsequent reconstruction using interpolation often causes blurring and other objectionable visual distortion. Accordingly, the broad object of the present invention is to enable improved estimation of intensity information defining elements in a picture using interpolative techniques on information derived from preceding and succeeding versions of the picture. A specific object is to improve the reconstruction of a nontransmitted field of a video signal using information from previous and succeeding fields, so as to eliminate annoying distortion and flicker.

The foregoing and additional objects are achieved in accordance with the instant invention by estimating the intensity information defining elements in a picture (which may be a nontransmitted field or other portion of a video signal) based on information defining pels in related locations in preceding and succeeding versions of the same picture, using an interpolative technique which takes account of the motion of objects in the picture to identify the related locations. More specifically, apparatus for estimating the desired intensity information includes a recursive motion estimator for providing an indication of the displacement of objects between the two available versions of the picture which precede and follow the picture being processed and an interpolator arranged to utilize information defining pels at the appropriate displaced locations within the preceding and succeeding versions to form an estimate of the desired information. In a preferred embodiment, an adaptive technique is used to switch between displacement compensated interpolation and fixed position interpolation, depending upon which produces the best results in the local picture area.

The features and advantages of the present invention will be more readily understood from the following detailed description when read in light of the accompanying drawing in which:

FIG. 1 is a representation of a series of video fields indicating the locations of picture elements used to form estimates of nontransmitted information in accordance with prior art fixed position interpolative techniques;

FIG. 2 is a similar representation of a series of video fields indicating the displaced pel locations in the preceding and succeeding frames used to estimate the information defining the presently processed picture element in accordance with the present invention;

FIG. 3 is a block diagram of apparatus arranged in accordance with the present invention for reconstructing information defining pels in a nontransmitted field of a video signal by processing information derived from preceding and succeeding versions of the picture using motion compensated interpolation; and

FIG. 4 illustrates spatial interpolation performed in interpolators 305 and 306 of FIG. 3.

One embodiment of the present invention, which permits reconstruction of a nontransmitted field of a video signal using information derived from transmitted preceding and succeeding fields, will be better appreciated by consideration of FIG. 1, which illustrates the time-space relationship of a sequence of television fields 101-105, each of which can be thought of as a "snap-shot" or version of a moving picture which is electrically represented by the video signal being processed. Vector 106 indicates the direction of time progression, such that field 101 occurs first and is followed in succession by fields 102 . . . 105. The time interval between successive fields is given by τ, and is generally 1/60th of a second for conventional video encoding. Each field is obtained by scanning the picture being processed along a plurality of generally parallel scan lines, such as lines 110-115 in field 104. In order to conserve bandwidth, a conventional video signal generator is arranged to interlace the scan lines in each pair of successive fields. Thus, each line in an odd numbered field is offset from the corresponding line in the previous (and next) field by half the distance between adjacent lines. The NTSC standard requires a total of 525 scan lines for each pair of fields, which together constitute a frame.

Assuming that even numbered fields 102 and 104 shown in FIG. 1 were encoded for transmission using conventional subsampling and/or other compression techniques, and that these fields have been reconstructed at the receiver, it is known to reconstruct information defining pels in the nontransmitted odd fields 101, 103 and 105 by interpolation. As used herein, "information" can include intensity information describing the different color components (red, green and blue) of a composite signal or combinations thereof, such as luminance and chrominance information. Using "intensity" generally in the foregoing sense, to reconstruct or estimate the intensity value IE for a pel E on line 120 in field 103, it is typical to use intensity information from spatially corresponding locations in the transmitted preceding field 102 and the succeeding field 104. Since the scan lines in the even and odd fields are offset from one another, the intensity values in fields 102 and 104 at the precisely corresponding spatial location of pel E are not available. However, intensity values for pels on the scan lines just above and just below line 120 may be used. Thus, the intensity of pel E can be estimated as the average (IA +IB +IC +ID)/4 of the intensities of pels A and B in field 104 and pels C and D in field 102. As stated previously, this fixed position interpolation procedure for reconstructing the nontransmitted fields is generally satisfactory, as long as objects in the picture are relatively still. However, in areas of the picture which are changing quickly, the reconstructed version generally appears noticeably blurred and distorted. This significantly reduces the utility of the subsampling, and limits the number of fields which may be dropped at the transmitter and successfully recovered at the receiver.
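The fixed position average described above can be sketched as follows (a minimal illustration; the function name and scalar intensity arguments are hypothetical, not from the patent):

```python
def fixed_position_estimate(i_a, i_b, i_c, i_d):
    """Estimate pel E as the average of pels A and B (succeeding field 104)
    and pels C and D (preceding field 102): (IA + IB + IC + ID) / 4."""
    return (i_a + i_b + i_c + i_d) / 4.0
```

This works well for static areas, but for moving detail the four fixed-position samples no longer depict the same object, which produces the blurring described above.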

The motion compensated interpolation strategy of the present invention, again considered in the context of reconstruction of a nontransmitted field using information from preceding and succeeding fields which are available at the receiver, can be explained by reference to FIG. 2, which again depicts the time-space relationship of a series of fields 201 . . . 205. For the sake of generality, it is assumed that K1 field intervals τ including field 202 intervene between the previous transmitted field 201 and the present (nontransmitted) field 203, and that K2 field intervals including field 204 intervene between field 203 and the succeeding transmitted field 205. K1 and K2 are, of course, positive integers. In order to obtain an estimate of the intensity value of each pel in nontransmitted field 203, it is first necessary to form an estimate D of the displacement per field interval of moving objects in the picture between the transmitted fields 201 and 205 which bracket field 203. The underscore used for the variable D and hereinbelow indicates a vector having components in the horizontal (picture element to element) and vertical (scan line to line) directions. It is assumed here that objects in the pictures being processed are in simple uniform translation during this period. Second, the intensity values at the displaced locations in the previous and succeeding transmitted fields which "correspond" to the location in the field being processed are determined. As used here, the "correspondence" indicates locations at which the same object is expected to be in different versions of the picture. Finally, the desired intensity value is derived using interpolation or averaging.

To illustrate, if the position of a presently processed pel 250 in field 203 is denoted by vector x, then the location of the "corresponding" displaced pel 260 in field 201 is given by x-K1 D and the intensity at that location is written I(x-K1 D,t-K1 τ). Similarly, the location in field 205 of pel 270 which contains the object depicted in pel 250 is given by x+K2 D, and the intensity at this location is I(x+K2 D,t+K2 τ). In this example, the desired intensity value I(x,t) for pel 250 is determined by interpolation, such that:

I(x,t)=[K2 I(x-K1 D,t-K1 τ)+K1 I(x+K2 D,t+K2 τ)]/(K1 +K2). (1)

From Equation (1) it is seen that interpolation produces a weighted average of intensity values from fields displaced timewise from the present field 203. If K1 >K2, field 203 is closer in time to field 205, and more weight is given to the intensity value information derived from the latter. On the other hand, if K2 >K1, more weight is given to the intensity value I(x-K1 D,t-K1 τ) from field 201. When alternate field subsampling is used, K1 =K2 =1 and the interpolated intensity value I(x,t) is a simple average of I(x+D,t+τ) and I(x-D,t-τ).
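Equation (1) amounts to a weighted temporal average. A minimal sketch (hypothetical names, with scalar intensities standing in for the vector-addressed values):

```python
def motion_compensated_estimate(i_prev, i_next, k1, k2):
    """Eq. (1): i_prev = I(x - K1*D, t - K1*tau), i_next = I(x + K2*D, t + K2*tau).
    The transmitted field nearer in time to the missing field gets the larger weight."""
    return (k2 * i_prev + k1 * i_next) / (k1 + k2)
```

With K1 = K2 = 1 (alternate field subsampling) this reduces to a simple average of the two displaced values.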

With respect to formation of the displacement estimate D, it must be understood that the magnitude and direction of this vector varies, in a real television scene, as a function of both time and space. Accordingly, the intensity values at pels 250, 260 and 270 are not likely to be exactly equal. For convenience, a displaced frame difference DFD(x,D) which is a function both of location x and displacement estimate D, is defined such that:

DFD(x,D)=I(x+K2 D,t+K2 τ)-I(x-K1 D,t-K1 τ)(2)

To estimate the value of D, it is advantageous to minimize |DFD(x,D)|2 recursively for every pel position x within the moving area of the picture. This is analogous to minimizing mean square error (since DFD is an error indicator) and can be done using a steepest descent technique. Thus:

Di+1 =Di -(ε/2)∇D [DFD(x,Di)]2 (3)

=Di -ε DFD(x,Di)∇D DFD(x,Di) (4)

=Di -ε DFD(x,Di)[K2 ∇I(x+K2 Di,t+K2 τ)+K1 ∇I(x-K1 Di,t-K1 τ)] (5)

In Equations (3)-(5), Di is a present estimate of the displacement vector D and Di+1 is the next estimate, with the recursion being performed for each picture element i=1, 2, . . . . The symbol ∇D indicates a gradient or spatial rate of change calculated assuming a displacement vector D. In the horizontal picture direction, the rate of change can be determined from "element differences" ED(x,t), i.e., the intensity differences between successive picture elements on a single scan line evaluated at the location x in the field occurring at time t. The rate of change in the vertical direction is similarly determined from "line differences" LD(x,t), which are intensity differences between pels in the same horizontal position on different scan lines, again evaluated at the location x in the field occurring at time t. Scaling factor ε is used in Equations (3) through (5) to limit large changes; ε is always less than one and is preferably in the range of 0.1 to 0.001. Further details concerning the recursive displacement estimation technique described herein may be obtained from applicants' U.S. Pat. No. 4,218,703 issued Aug. 19, 1980.
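One steepest-descent step per Eq. (5) can be sketched as follows (a hypothetical helper, not the patent's hardware; gradients are the element/line difference pairs described above):

```python
def update_displacement(d_i, dfd, grad_prev, grad_next, k1, k2, eps):
    """Steepest-descent update of the displacement estimate, Eq. (5).
    d_i       -- current estimate Di as (horizontal, vertical)
    dfd       -- displaced frame difference DFD(x, Di)
    grad_prev -- (element diff, line diff) at x - K1*Di in the preceding field
    grad_next -- (element diff, line diff) at x + K2*Di in the succeeding field
    eps       -- small scaling factor (preferably 0.1 to 0.001) limiting large changes"""
    gx = k2 * grad_next[0] + k1 * grad_prev[0]
    gy = k2 * grad_next[1] + k1 * grad_prev[1]
    return (d_i[0] - eps * dfd * gx, d_i[1] - eps * dfd * gy)
```

When DFD is zero the displaced locations already match and the estimate is left unchanged, which is the fixed point of the recursion.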

The displacement recursion specified in Equations (3)-(5) is carried out only in the moving areas of the picture. These areas can be identified when the frame difference, denoted FD(x), has a magnitude which exceeds a preselected threshold value. The frame difference is defined as the intensity difference, at pel location x, as measured in the previous and succeeding frames. Thus:

FD(x)=I(x,t+K2 τ)-I(x,t-K1 τ). (6)

While it is possible to implement the interpolation specified in Eq. (1) and the displacement estimation specified in Eq. (5), several simplifications can significantly reduce circuit complexity. For example, the displacement estimates calculated in Equations (3)-(5) require several multiplications for each iteration. This can be reduced by considering only the sign of the two right-hand terms, i.e.,

Di+1 =Di -ε SIGN[DFD(x,Di)] SIGN[∇D DFD(x,Di)] (7)

where the SIGN function is defined by

SIGN(z)=0 if |z|≤T; z/|z| otherwise, (8)

where T is a small non-negative number. A second simplification results by use of spatial gradients in only one transmitted field rather than in both the previous and succeeding fields. This modification simplifies Eq. (5) as follows:

Di+1 =Di -ε DFD(x,Di)∇I(x-K1 Di,t-K1 τ). (9)
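The thresholded SIGN function of Eq. (8) can be sketched directly (hypothetical function name):

```python
def sign_t(z, t):
    """SIGN of Eq. (8): 0 inside a dead zone of half-width T (T >= 0),
    otherwise +1 or -1 according to the sign of z."""
    if abs(z) <= t:
        return 0
    return 1 if z > 0 else -1
```

The dead zone prevents noise-level differences from driving displacement updates, while the unit outputs replace multiplications by sign changes.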

Yet another modification is quite desirable in order to simplify the hardware implementation described below. In this modification, the present displacement estimate Di is used to compute the intensity value I(x,t) in Equation (1), instead of the next displacement estimate Di+1 which more precisely belongs in the intensity value equation. This modification permits the same set of intensity values to be used for both the computation of the displacement estimate and the interpolation of the missing field intensity values.

While it is not essential in practicing the present invention, an adaptive technique is preferred in the interpolative recovery of nontransmitted fields, such that "displacement compensated interpolation" in accordance with the present invention is used instead of conventional "fixed position" interpolation only when it produces better results. Switching between the two types of interpolation is accomplished under the control of adaptation logic which compares the magnitude of the frame difference FD(x) and the displaced frame difference DFD(x,D) to determine which is smaller. If |DFD(x,D)|<|FD(x)|, displacement compensation is better, and Equation (1) is used in the interpolation. If the frame difference is smaller, the interpolated intensity value I(x,t) is computed conventionally using the same location in the previous and succeeding transmitted fields, as follows:

I(x,t)=[K2 I(x,t-K1 τ)+K1 I(x,t+K2 τ)]/(K1 +K2). (10)
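The adaptation logic reduces to a magnitude comparison. A minimal sketch (hypothetical names; the two candidate values would come from Eqs. (1) and (10)):

```python
def select_estimate(fd, dfd, fixed_value, compensated_value):
    """Adaptive switch: use displacement compensated interpolation (Eq. (1))
    only when |DFD| < |FD|; otherwise fall back to fixed position
    interpolation (Eq. (10))."""
    return compensated_value if abs(dfd) < abs(fd) else fixed_value
```

This guarantees the output is never worse than plain fixed position interpolation in areas where the displacement estimate has not converged.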

A block diagram of apparatus arranged to estimate the intensity values of elements in a picture (such as a nontransmitted field of a video signal) using either motion compensated interpolation or fixed position interpolation is shown in FIG. 3. Intensity information representing the versions of the picture which precede and follow the picture being estimated, obtained, for example, by decoding transmitted information representing fields such as fields 201 and 205 of FIG. 2, is entered in random access memories 301 and 302, respectively, via lines 370 and 371. The information is stored within these memories such that intensity values for specific addressed groups of pels can be recovered. For this purpose, each of the memories 301 and 302 includes address inputs 303 and 304, respectively, which receive the integer portions of K1 Di and K2 Di, which indicate the position in fields 201 and 205 of the same object which is depicted in the pel for which an intensity is being estimated. The products of the displacement estimate Di stored in a one pel delay element 350 and the factors K1 and K2, respectively, are formed by multipliers 331 and 332. The intensity values for several (usually four) picture elements nearest the addressed displaced locations are output from memories 301 and 302 and applied to a pair of interpolators 305 and 306 via lines 307 and 308, respectively.

Interpolators 305 and 306 are each arranged to use the intensity values output from memories 301 and 302 and the fractional part of the displacement estimates K1 Di and K2 Di received on lines 365 and 366, respectively, to compute the intensity values I(x-K1 Di, t-K1 τ) and I(x+K2 Di,t+K2 τ). This procedure, which is essentially intrafield interpolation "in space", is used because the displacement estimates usually do not indicate a single pel location, but rather a position between pels; a second interpolation step described below, which is interfield interpolation "in time", actually calculates the nontransmitted intensity values being reconstructed. To determine the intensity at the in-between locations, the fractional portions of the displacement estimates are resolved into horizontal and vertical components. The intensity values for pels which bracket the specified location both vertically and horizontally are chosen, and the in-between values computed by linear interpolation. An example of this interpolation is given below. The resulting displaced interpolated intensity values are output on lines 309 and 310.

In order to determine the desired intensity value I(x,t) in accordance with Eq. (1), the intensity values at the displaced locations in fields 201 and 205 are timewise interpolated. For this purpose the outputs on lines 309 and 310 are weighted by factors K2 and K1 in multipliers 311 and 312, and a sum of the weighted values is formed in adder 313. The sum is then scaled by 1/(K1 +K2) in multiplier 314. From the foregoing, it is seen that more emphasis is given to the intensity value of the field closest in time to field 203, and less emphasis is given the more remote field. The output of multiplier 314 on line 316 which represents the motion compensated interpolated intensity value specified in Eq. (1), is applied to one input of switch 315.

Field memories 301 and 302 are also arranged to make available on lines 317 and 318, respectively, the intensity values I(x,t-K1 τ) and I(x,t+K2 τ) for pels in the transmitted fields which are in the same spatial position as the pel presently being processed. To obtain an estimate of I(x,t) by fixed position interpolation, the intensity values are weighted in like manner, by forming the sum of K1 times the intensity in field 205 and K2 times the intensity in field 201, and by multiplying the sum by 1/(K1 +K2). This is accomplished by applying the signals on lines 317 and 318 to inputs of adder 319 via multipliers 341 and 342, which weight them by coefficients K2 and K1, respectively. The output of adder 319 is applied, in turn, to a multiplier circuit 330 via line 320, where the sum is multiplied by the factor 1/(K1 +K2). The resulting value represents the intensity estimate specified in Eq. (10), which is applied to a second input of switch 315.

As mentioned previously, the position of switch 315 is adaptively controlled so as to select either the motion compensated interpolated value on line 316 or the fixed position interpolated value output from multiplier 330, depending upon the relative magnitudes of the frame difference FD(x) and the displaced frame difference DFD(x,D). The magnitude of FD(x) is obtained by forming the difference between I(x,t-K1 τ) and I(x,t+K2 τ) in a subtractor 325 and applying the subtractor output on line 326 to a magnitude circuit 327, which disregards sign information. The magnitude of DFD(x,D) is obtained by forming the difference between I(x-K1 Di,t-K1 τ) and I(x+K2 Di,t+K2 τ) in a subtractor circuit 322 and applying the difference to a magnitude circuit 324. |FD(x)| and |DFD(x,D)| are compared in a subtractor circuit 321, and a sign bit output is used to control the position of switch 315. Thus, when the frame difference FD(x) is smaller than the displaced frame difference DFD(x,D) in the local area of the picture being processed, switch 315 is arranged to couple the output of multiplier 330, representing a fixed position interpolation, through the switch to output line 390. On the other hand, if the displaced frame difference is smaller, switch 315 is positioned to couple the output of multiplier 314, representing motion compensated interpolation, through to output line 390. The estimated intensity value available on line 390 can be accumulated in a memory, not shown, and the entire process described above repeated for the remaining picture elements in the field. When the entire field has been reconstructed, the contents of the memory may be applied to a display medium in the appropriate time position with respect to the transmitted fields, by multiplexing apparatus, not shown.

As mentioned previously, the displacement estimate Di is stored in a one pel delay element 350, and recursively updated for each pel. To implement the updating, the output of delay element 350 is applied to one input of an adder 351, which receives an update or correction term as its second input on line 352. The output of adder 351 is the next displacement estimate Di+1, which is, in turn, coupled back to the input of delay element 350 to yield the next estimate. Displacement estimates are updated in accordance with Eq. (5) only in the moving area of the picture, and a "zero" update is used otherwise. For this purpose, the position of switch 353 is controlled by the output of comparator 354, the latter serving to determine whether or not |FD(x)| output from magnitude circuit 327 exceeds a predetermined threshold value T. If the threshold is not exceeded, the picture area being processed is not moving. In this circumstance, switch 353 is positioned as shown in FIG. 3 so as to couple a "0" update to adder 351. On the other hand, if the output of comparator 354 indicates that the frame difference does exceed T, a moving area in the picture has been detected, switch 353 is repositioned, and the displacement correction term (from multiplier 360) is coupled through switch 353 to adder 351. The magnitude of the update term is calculated in accordance with Eq. (5) by multiplying second outputs of interpolators 305 and 306 on lines 343 and 344, which represent the displaced element and line differences in the previous and succeeding fields, by K1 and K2, respectively, in multipliers 368 and 369 and forming the sum of these products in adder 367. The sum is multiplied, in turn, by the displaced frame difference output from subtractor 322, and this product is scaled by the factor ε in multiplier 360 before being applied to the second input of switch 353. 
The manner in which the element and line differences are formed in interpolators 305 and 306 is explained below, in connection with FIG. 4.
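One recursion step of the displacement update can be sketched as follows. Eq. (5) itself is not reproduced in this excerpt, so the steepest-descent sign convention below is an assumption based on the circuit description; parameter names are illustrative.

```python
def update_displacement(d, fd, dfd, grad_prev, grad_next, k1, k2, eps, t):
    """One step of the recursive displacement estimate (delay 350 / adder 351).

    d: current estimate Di as an (x, y) pair.
    grad_prev, grad_next: (ED, LD) displaced element/line differences from the
    preceding and succeeding fields (lines 343/344).  The negative-gradient
    sign used here is assumed, since Eq. (5) is not shown in this excerpt.
    """
    if abs(fd) <= t:
        # comparator 354: no motion detected, zero update through switch 353
        return d
    # adder 367 combines the scaled gradients (multipliers 368, 369)
    gx = k1 * grad_prev[0] + k2 * grad_next[0]
    gy = k1 * grad_prev[1] + k2 * grad_next[1]
    # multiplier 360 scales by the displaced frame difference and epsilon
    return (d[0] - eps * dfd * gx, d[1] - eps * dfd * gy)
```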

After each nontransmitted field has been reconstructed by interpolation in accordance with the present invention, it is necessary to update multiplier coefficients K1 and K2 (when either or both exceeds one) before the next nontransmitted field is processed. For example, in FIG. 2, assuming that fields 201 and 205 are transmitted and fields 202, 203, and 204 are not, then K1 = K2 = 2 when field 203 is being processed. When field 202 is processed, K1 = 1 and K2 = 3. On the other hand, when field 204 is processed, K1 = 3 and K2 = 1. Updating of the coefficients K1 and K2 is easily carried out by storing their values in random access memories and by reading out appropriate coefficients under control of clocking circuitry not shown. Information needed to control clocking is derived from sync signals recovered from the transmitted video fields.
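The coefficient schedule follows directly from the field positions and can be stated in one line; the function and its parameters are illustrative, not part of the patent.

```python
def field_coefficients(gap, j):
    """K1, K2 for the j-th nontransmitted field (1 <= j < gap) between two
    transmitted fields spaced `gap` field periods apart.  Illustrative names.
    K1 is the distance back to the previous transmitted field, K2 the
    distance forward to the next one."""
    return j, gap - j
```

For the FIG. 2 example (fields 201 and 205 transmitted, gap of 4), fields 202, 203, and 204 correspond to j = 1, 2, 3 and yield (1, 3), (2, 2), and (3, 1).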

An example of the spatial (intrafield) interpolation performed by interpolators 305 and 306 is illustrated graphically in FIG. 4. Locations P, Q, R and S represent four picture elements in a transmitted field of the signal being processed, and IP, IQ, IR and IS represent the intensity values at these locations. For convenience, it is assumed that location Q is at the origin of an orthogonal coordinate system, and locations P, R and S are at coordinates (0,1), (1,1) and (1,0), respectively. If the displacement estimate Di shown by vector 401 has a horizontal component with a fractional portion x, 0<x<1 and a vertical component with a fractional portion y, 0<y<1, then the intensity value at location (x,0) is obtained by linear interpolation such that:

I(x,0) = (1-x)IQ + (x)IS (11)

and the intensity value at location (x,1) is given by:

I(x,1) = (1-x)IP + (x)IR. (12)

The intensity at location (x,y) is also obtained by interpolation, such that:

I(x,y) = (1-y)I(x,0) + (y)I(x,1) (13)

which, upon substitution of Eqs. (11) and (12), becomes:

I(x,y) = (1-x)(1-y)IQ + (x)(1-y)IS + (1-x)(y)IP + (x)(y)IR. (14)

From Eq. (14), it is seen that I(x,y) is a weighted sum of the intensities of the pels surrounding the location specified by the displacement estimate, with more weight being given to the closest pels. If desired, other interpolation weights can be used, or additional samples can be used to contribute to the weighting pattern.
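This bilinear weighting can be sketched directly from the coordinate assignments of FIG. 4 (Q at the origin, S at (1,0), P at (0,1), R at (1,1)); the function name is illustrative.

```python
def bilinear(i_q, i_s, i_p, i_r, x, y):
    """Bilinear interpolation per Eqs. (11)-(14) of the description.
    Q at (0,0), S at (1,0), P at (0,1), R at (1,1); 0 <= x, y <= 1."""
    i_x0 = (1 - x) * i_q + x * i_s    # Eq. (11): along the line through Q and S
    i_x1 = (1 - x) * i_p + x * i_r    # Eq. (12): along the line through P and R
    return (1 - y) * i_x0 + y * i_x1  # Eq. (13), equivalently the Eq. (14) sum
```

At the four corners the function returns the corner intensities exactly, and interior locations receive the distance-proportional weights described above.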

The manner in which element and line differences ED and LD are formed in interpolators 305 and 306 can also be illustrated by reference to FIG. 4. For example, if IP, IQ, IR and IS represent the intensity values for pels in field 201 at time t-K1 τ and at spatial locations which surround the location indicated by vector 401, then the element difference ED can be represented by 1/2[(IR -IP)+(IS -IQ)] and the line difference LD can be represented by 1/2[(IQ -IP)+(IS -IR)]. This calculation averages the differences, in both the horizontal (element) and vertical (line) directions, for pels surrounding the location for which an intensity value is being estimated. Alternatively, a simple calculation can use a single difference (IR -IP) for ED and (IQ -IP) for LD. In either event, interpolators 305 and 306 may include suitable arithmetic circuits for forming the desired differences.
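The averaged-difference form of ED and LD maps onto a few lines of Python; the function name is illustrative.

```python
def element_line_differences(i_p, i_q, i_r, i_s):
    """Averaged element (horizontal) and line (vertical) differences for the
    four pels surrounding the displaced location, per FIG. 4."""
    ed = 0.5 * ((i_r - i_p) + (i_s - i_q))  # element difference ED
    ld = 0.5 * ((i_q - i_p) + (i_s - i_r))  # line difference LD
    return ed, ld
```

The simpler single-difference alternative mentioned in the text corresponds to returning (i_r - i_p) and (i_q - i_p) directly.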

The timewise interpolation performed by multipliers 311, 312 and 314 can be further illustrated by several examples. If a series of fields is designated a, b, c, d . . . and if every third field a, d, g . . . is transmitted, then the intensity value Ib in field b is reconstructed using intensity values Ia and Id from the transmitted fields a and d as follows:

Ib = 1/3(2Ia + Id)

The intensity value Ic in field c is given by:

Ic = 1/3(Ia + 2Id).

As a second example, if every fourth field a, e, i . . . in the series is transmitted, the reconstructed intensity value Ib in field b is given by:

Ib = 1/4(3Ia + Ie)

Similarly, the reconstructed values for fields c and d are:

Ic = 1/4(2Ia + 2Ie), and

Id = 1/4(Ia + 3Ie).
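All of these examples are instances of a single linear weighting rule, sketched here with illustrative names:

```python
def temporal_interpolate(i_prev, i_next, k1, k2):
    """Timewise linear interpolation between transmitted fields: the nearer
    transmitted field (smaller K) receives the larger weight."""
    return (k2 * i_prev + k1 * i_next) / (k1 + k2)
```

With every third field transmitted, field b uses (k1, k2) = (1, 2), reproducing Ib = 1/3(2Ia + Id); with every fourth field transmitted, field d uses (3, 1), reproducing Id = 1/4(Ia + 3Ie).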

Various modifications and adaptations may be made to the present invention by those skilled in the art. For example, while the preceding description primarily described reconstruction of nontransmitted interlaced fields, it should be clearly understood that the present invention enables efficient reconstruction of information defining a picture or a portion of a picture using similar information derived from preceding and succeeding versions of the picture which include the same spatial area. Accordingly, it is intended that the invention be limited only by the appended claims.

Netravali, Arun N., Robbins, John D.

Patent Priority Assignee Title
10250885, Dec 06 2000 Intel Corporation System and method for intracoding video data
10326993, Jul 15 2002 InterDigital VC Holdings, Inc Adaptive weighting of reference pictures in video encoding
10404994, Jul 03 2009 TAHOE RESEARCH, LTD Methods and systems for motion vector derivation at a video decoder
10701368, Dec 06 2000 Intel Corporation System and method for intracoding video data
10721472, Jul 15 2002 InterDigital VC Holdings, Inc. Adaptive weighting of reference pictures in video encoding
10863194, Jul 03 2009 TAHOE RESEARCH, LTD Methods and systems for motion vector derivation at a video decoder
11102486, Jul 15 2002 InterDigital VC Holdings, Inc. Adaptive weighting of reference pictures in video encoding
11765380, Jul 03 2009 TAHOE RESEARCH, LTD Methods and systems for motion vector derivation at a video decoder
4482970, Nov 06 1981 Grumman Aerospace Corporation Boolean filtering method and apparatus
4496972, May 10 1980 DEUTSCHE FORSCHUNGSANSTALT FUR LUFT-UND RAUMFAHRT E V Method for the representation of video images or scenes, in particular aerial images transmitted at reduced frame rate
4543607, Oct 28 1982 Quantel Limited Video processors
4563703, Mar 19 1982 Quantel Limited Video processing systems
4612441, Aug 09 1984 ARMY, UNITED STATES OF AMERICA AS REPRESENTED BY THE SECRETARY OF THE, THE Moving object detection system using infrared scanning
4630114, Mar 05 1984 ANT Nachrichtentechnik GmbH Method for determining the displacement of moving objects in image sequences and arrangement as well as uses for implementing the method
4651207, Mar 05 1984 ANT Nachrichtentechnik GmbH Motion adaptive interpolation of television image sequences
4663665, Jan 07 1985 Nippon Hoso Kyokai TV system conversion apparatus
4667233, Sep 17 1984 NEC Corporation Apparatus for discriminating a moving region and a stationary region in a video signal
4668987, Mar 09 1984 Fujitsu Limited Apparatus for band compression processing of a picture signal
4679084, Jun 20 1986 RCA LICENSING CORPORATION, TWO INDEPENDENCE WAY, PRINCETON, NJ 08540, A CORP OF DE Method and apparatus for freezing a television picture
4692801, May 20 1985 Nippon Hoso Kyokai Bandwidth compressed transmission system
4695882, Jan 30 1984 KDDI Corporation Movement estimation system for video signals using a recursive gradient method
4703350, Jun 03 1985 Polycom, Inc Method and apparatus for efficiently communicating image sequences
4709393, Mar 19 1982 Quantel Limited Video processing systems
4710809, Jun 29 1985 DEUTSCHE FORSCHUNGSANSTALT FUR LUFT-UND RAUMFAHRT E V Method for the representation of video images or scenes, in particular aerial images transmitted at reduced frame rate
4716453, Jun 20 1985 AT&T Bell Laboratories Digital video transmission system
4717956, Aug 20 1985 NORTH CAROLINA STATE UNIVERSITY, RALEIGH, N C , A CONSTITUENT INSTITUTION OF N C Image-sequence compression using a motion-compensation technique
4720743, Nov 09 1984 NEC Corporation and Nippon Telegraph and Telephone Corporation Predictine coding/decoding system for block-formed picture signals
4727422, Jun 03 1985 Polycom, Inc Method and apparatus for efficiently communicating image sequence having improved motion compensation
4771331, Mar 08 1986 ANT Nachrichtentechnik GmbH Motion compensating field interpolation method using a hierarchically structured displacement estimator
4791487, Jun 28 1985 Canon Kabushiki Kaisha Picture signal conversion device
4838685, Apr 03 1987 Massachusetts Institute of Technology Methods and apparatus for motion estimation in motion picture processing
4858005, Feb 17 1987 Independent Broadcasting Authority System for encoding broadcast quality television signals to enable transmission as an embedded code
4862264, Dec 24 1985 British Broadcasting Corporation Method of coding a video signal for transmission in a restricted bandwidth
4958226, Sep 27 1989 MULTIMEDIA PATENT TRUST C O Conditional motion compensated interpolation of digital motion video
4979037, Apr 15 1988 Sanyo Electric Co., Ltd. Apparatus for demodulating sub-nyquist sampled video signal and demodulating method therefor
4982285, Apr 27 1989 Victor Company of Japan, LTD Apparatus for adaptive inter-frame predictive encoding of video signal
4985768, Jan 20 1989 Victor Company of Japan, LTD Inter-frame predictive encoding system with encoded and transmitted prediction error
4987480, Jul 11 1989 MASSACHUSETTS INSTITUTE OF TECHNOLOGY, A MA CORP Multiscale coding of images
4999705, May 03 1990 AT&T Bell Laboratories Three dimensional motion compensated video coding
5049991, Feb 20 1989 Victor Company of Japan, LTD Movement compensation predictive coding/decoding method
5055925, Mar 24 1989 INNOVUS PRIME LLC Arrangement for estimating motion in television pictures
5055927, Sep 13 1988 DEUTSCHE THOMSON-BRANDT GMBH, D-7730 VILLINGEN-SCHWENNINGEN Dual channel video signal transmission system
5057921, Mar 31 1989 Thomson Consumer Electronics Process and device for temporal image interpolation, with corrected movement compensation
5081525, Jan 30 1989 Hitachi Denshi Kabushikigaisha Opto-electric converting image pickup element and image pickup apparatus employing the same
5107348, Jul 11 1990 CITICORP NORTH AMERICA, INC , AS AGENT Temporal decorrelation of block artifacts
5113255, May 11 1989 Matsushita Electric Industrial Co., Ltd. Moving image signal encoding apparatus and decoding apparatus
5191413, Nov 01 1990 International Business Machines; International Business Machines Corporation System and method for eliminating interlace motion artifacts in captured digital video data
5258836, May 08 1991 NEC Corporation Encoding of motion picture signal
5301018, Feb 13 1991 Ampex Corporation Method and apparatus for shuffling image data into statistically averaged data groups and for deshuffling the data
5339108, Apr 09 1992 Ampex Corporation Ordering and formatting coded image data and reconstructing partial images from the data
5434623, Dec 20 1991 Ampex Corporation Method and apparatus for image data compression using combined luminance/chrominance coding
5581302, Nov 30 1994 National Semiconductor Corporation Subsampled frame storage technique for reduced memory size
5592226, Jan 26 1994 TRUSTEES OF PRINCETON UNIVERSITY, THE Method and apparatus for video data compression using temporally adaptive motion interpolation
5600731, May 09 1991 Intellectual Ventures Fund 83 LLC Method for temporally adaptive filtering of frames of a noisy image sequence using motion estimation
5623313, Sep 22 1995 France Brevets Fractional pixel motion estimation of video signals
5627601, Nov 30 1994 National Semiconductor Corporation Motion estimation with bit rate criterion
5644361, Nov 30 1994 NATIONAL SEMICONDUCTOR, INC Subsampled frame storage technique for reduced memory size
5682209, Nov 13 1995 GRASS VALLEY US INC Motion estimation using limited-time early exit with prequalification matrices and a predicted search center
5943096, Mar 24 1995 National Semiconductor Corporation Motion vector based frame insertion process for increasing the frame rate of moving images
5970504, Jan 31 1996 Mitsubishi Denki Kabushiki Kaisha Moving image anchoring apparatus and hypermedia apparatus which estimate the movement of an anchor based on the movement of the object with which the anchor is associated
6144972, Jan 31 1996 Mitsubishi Denki Kabushiki Kaisha Moving image anchoring apparatus which estimates the movement of an anchor based on the movement of the object with which the anchor is associated utilizing a pattern matching technique
6192080, Dec 04 1998 Mitsubishi Electric Research Laboratories, Inc Motion compensated digital video signal processing
6269174, Oct 28 1997 HANGER SOLUTIONS, LLC Apparatus and method for fast motion estimation
6331874, Jun 29 1999 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED Motion compensated de-interlacing
6618439, Jul 06 1999 Mstar Semiconductor, Inc Fast motion-compensated video frame interpolator
6621864, Mar 24 1995 National Semiconductor Corporation Motion vector based frame insertion process for increasing the frame rate of moving images
6650624, Oct 30 1998 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED Cable modem apparatus and method
6731818, Jun 30 1999 Intel Corporation System and method for generating video frames
6735338, Jun 30 1999 Intel Corporation System and method for generating video frames and detecting text
6753865, Jun 30 1999 Intel Corporation System and method for generating video frames and post filtering
6760378, Jun 30 1999 Intel Corporation System and method for generating video frames and correcting motion
6961314, Oct 30 1998 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED Burst receiver for cable modem system
6987866, Jun 05 2001 U S BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT Multi-modal motion estimation for video sequences
7103065, Oct 30 1998 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED Data packet fragmentation in a cable modem system
7120123, Oct 30 1998 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED Pre-equalization technique for upstream communication between cable modem and headend
7139283, Oct 30 1998 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED Robust techniques for optimal upstream communication between cable modem subscribers and a headend
7342963, Aug 24 2000 France Telecom Method for calculating an image interpolated between two images of a video sequence
7412114, Mar 25 2003 Kabushiki Kaisha Toshiba Method of generating an interpolation image, an interpolation image generating apparatus, and an image display system using the same
7512154, Oct 30 1998 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED Data packet fragmentation in a wireless communication system
7512179, Jan 26 2001 France Telecom Image coding and decoding method, corresponding devices and applications
7519082, Oct 30 1998 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED Data packet fragmentation in a wireless communication system
7564902, Nov 22 2002 RAKUTEN GROUP, INC Device, method and program for generating interpolation frame
7620254, Aug 07 2003 DYNAMIC DATA TECHNOLOGIES LLC Apparatus and method for motion-vector-aided interpolation of a pixel of an intermediate image of an image sequence
7706444, Dec 06 2000 Intel Corporation System and method for intracoding video data
7720152, Oct 01 2002 InterDigital VC Holdings, Inc Implicit weighting of reference pictures in a video decoder
7738562, Jun 30 1999 Intel Corporation System and method for generating video frames and correcting motion
7801217, Oct 01 2002 INTERDIGITAL MADISON PATENT HOLDINGS Implicit weighting of reference pictures in a video encoder
7821954, Oct 30 1998 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED Methods to compensate for noise in a wireless communication system
7843847, Oct 30 1998 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED Compensating for noise in a wireless communication system
7899034, Oct 30 1998 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED Methods for the synchronization of multiple base stations in a wireless communication system
8005072, Oct 30 1998 AVAGO TECHNOLOGIES GENERAL IP SINGAPORE PTE LTD Synchronization of multiple base stations in a wireless communication system
8189670, Nov 22 2002 RAKUTEN GROUP, INC Device, method and program for generating interpolation frame
8755440, Sep 27 2005 Qualcomm Incorporated Interpolation techniques in wavelet transform multimedia coding
8908764, Dec 06 2000 Intel Corporation System and method for intracoding and decoding video data
9301310, Oct 30 1998 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED Robust techniques for upstream communication between subscriber stations and a base station
9432682, Dec 06 2000 Intel Corporation System and method for intracoding video data
9549191, Jul 15 2002 InterDigital VC Holdings, Inc Adaptive weighting of reference pictures in video encoding
9930335, Jul 15 2002 InterDigital VC Holdings, Inc Adaptive weighting of reference pictures in video encoding
9930343, Dec 06 2000 Intel Corporation System and method for intracoding video data
9955179, Jul 03 2009 TAHOE RESEARCH, LTD Methods and systems for motion vector derivation at a video decoder
RE34965, Jan 20 1989 Victor Company of Japan Limited Inter-frame predictive encoding system with encoded and transmitted prediction error
RE35158, Apr 27 1989 Victor Company of Japan Limited Apparatus for adaptive inter-frame predictive encoding of video signal
RE35910, May 11 1989 Matsushita Electric Industrial Co., Ltd. Moving image signal encoding apparatus and decoding apparatus
RE36999, Sep 27 1989 Sony Corporation Video signal coding method
RE45169, Jun 30 1999 Intel Corporation System method and for generating video frames and correcting motion
Patent Priority Assignee Title
4218703, Mar 16 1979 Bell Telephone Laboratories, Incorporated Technique for estimation of displacement and/or velocity of objects in video scenes
4218704, Mar 16 1979 Bell Telephone Laboratories, Incorporated Method and apparatus for video signal encoding with motion compensation
4232338, Jun 08 1979 Bell Telephone Laboratories, Incorporated Method and apparatus for video signal encoding with motion compensation
4307420, Jun 07 1979 Nippon Hoso Kyokai Motion-compensated interframe coding system
Executed on | Assignor | Assignee | Conveyance | Frame/Reel/Doc
Apr 02 1981 | NETRAVALI, ARUN N. | BELL TELEPHONE LABORATORIES, INCORPORATED, A CORP. OF N.Y. | ASSIGNMENT OF ASSIGNORS INTEREST | 0038790319 (pdf)
Apr 03 1981 | ROBBINS, JOHN D. | BELL TELEPHONE LABORATORIES, INCORPORATED, A CORP. OF N.Y. | ASSIGNMENT OF ASSIGNORS INTEREST | 0038790319 (pdf)
Apr 13 1981 | Bell Telephone Laboratories, Incorporated (assignment on the face of the patent)
Mar 29 1996 | AT&T Corp | Lucent Technologies, Inc. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 0116580857 (pdf)
Feb 22 2001 | LUCENT TECHNOLOGIES INC. (DE CORPORATION) | THE CHASE MANHATTAN BANK, AS COLLATERAL AGENT | CONDITIONAL ASSIGNMENT OF AND SECURITY INTEREST IN PATENT RIGHTS | 0117220048 (pdf)
Oct 26 2006 | JPMORGAN CHASE BANK, N.A. (F/K/A THE CHASE MANHATTAN BANK), AS ADMINISTRATIVE AGENT | Lucent Technologies Inc | TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS | 0185050493 (pdf)
Nov 28 2006 | Lucent Technologies Inc | MULTIMEDIA PATENT TRUST C/O | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 0185730978 (pdf)
Date Maintenance Fee Events
Sep 12 1986 | M170: Payment of Maintenance Fee, 4th Year, PL 96-517.
Sep 20 1986 | ASPN: Payor Number Assigned.
Oct 15 1990 | M171: Payment of Maintenance Fee, 8th Year, PL 96-517.
Sep 26 1994 | M185: Payment of Maintenance Fee, 12th Year, Large Entity.


Date Maintenance Schedule
May 10 1986 | 4 years fee payment window open
Nov 10 1986 | 6 months grace period start (w/ surcharge)
May 10 1987 | patent expiry (for year 4)
May 10 1989 | 2 years to revive unintentionally abandoned end (for year 4)
May 10 1990 | 8 years fee payment window open
Nov 10 1990 | 6 months grace period start (w/ surcharge)
May 10 1991 | patent expiry (for year 8)
May 10 1993 | 2 years to revive unintentionally abandoned end (for year 8)
May 10 1994 | 12 years fee payment window open
Nov 10 1994 | 6 months grace period start (w/ surcharge)
May 10 1995 | patent expiry (for year 12)
May 10 1997 | 2 years to revive unintentionally abandoned end (for year 12)