A plurality of motion detecting units, differing from one another in at least template block size (i.e., the number of estimation pixels) or search area size, are used adaptively according to the characteristic, or the prediction coding type, of a target picture. This enables efficient motion vector detection without increasing the amount of hardware or power consumption.
1. An apparatus for detecting a motion vector according to a block matching method, comprising:
a plurality of motion vector detecting units each receiving pixel data of a template block included in a current picture and pixel data of a search area included in a reference picture for performing a prescribed operational process according to said block matching method to detect a motion vector of said template block based on a result of the process, said plurality of motion vector detecting units including motion vector detecting units differing in size of at least one of said template block and said search area from each other, the size of said template block representing a number of pixels to be used for motion vector estimation among pixels in a macroblock serving as a unit in said block matching method, and the size of said search area representing a size of a range for searching the motion vector; control circuitry for setting the search areas of said plurality of motion vector detecting units so as to allow a motion vector search range for a whole of said plurality of motion vector detecting units in the reference picture to be varied in accordance with a characteristic of the current picture including said template block; and motion vector determining circuitry receiving motion vector information from said plurality of motion vector detecting units to determine a final motion vector for said template block.
2. The motion vector detecting apparatus according to
3. The motion vector detecting apparatus according to
said control circuitry includes a circuit for allocating, when said current picture is a bi-directionally predictive picture for which the motion vector is detected using two reference pictures different in time, a reference picture out of the two reference pictures that is closer in time to said current picture to a motion vector detecting unit larger in the template block size, and also for allocating another reference picture that is more distant in time from said current picture of said two reference pictures to a motion vector detecting unit smaller in the template block size.
4. The motion vector detecting apparatus according to
said control circuitry includes a circuit for allocating, when said current picture is a bi-directionally predictive picture for which the motion vector is detected using two reference pictures different in time, a reference picture closer in time to said current picture out of the two reference pictures to said first motion vector detecting unit, and also for allocating another reference picture that is more distant in time from said current picture to said second motion vector detecting unit.
5. The motion vector detecting apparatus according to
said control circuit includes a circuit for allocating, when said current picture is a bi-directionally predictive picture for which the motion vector is detected using two reference pictures different in time, a reference picture of said two reference pictures that is closer in time to said current picture to the first motion vector detecting unit, and also for allocating another reference picture of said two reference pictures to the second motion vector detecting unit.
6. The motion vector detecting apparatus according to
7. The motion vector detecting apparatus according to
8. The motion vector detecting apparatus according to
9. The motion vector detecting apparatus according to
10. The motion vector detecting apparatus according to
11. The motion vector detecting apparatus according to
12. The motion vector detecting apparatus according to
1. Field of the Invention
The present invention relates to a motion vector detecting apparatus, and more particularly to a motion vector detecting apparatus used in a digital motion picture compression system.
2. Description of the Background Art
To transmit a huge amount of image data efficiently at high speed, the image data are compressed to reduce the amount of data to be transferred. One such image data compression method is the MPEG method, which deals with motion pictures. In the MPEG method, a motion vector is detected on a picture block basis according to a block matching method. This motion vector, together with difference values between pixels in a current picture block and in a predictive picture block, is transmitted. Since the difference values constitute the data to be transmitted, if the predictive picture block closely matches the current picture block (i.e., if the degree of motion is small), the difference values are small, which leads to a reduced amount of data to be transmitted. Conversely, if the predictive picture block matches the current picture block poorly, the amount of data to be transmitted increases.
As the block size decreases, detection precision increases, but the resultant amount of transmission data also increases. Typically, in the block matching method of the MPEG method, a macroblock having a size of 16 pixels by 16 pixels is used as the unit block for motion vector detection. In the MPEG method, picture data are transmitted on a frame basis, each frame consisting of a plurality of macroblocks.
When pixel data of current picture block CB are expressed as "aij" and pixel data of reference picture block RB as "bij", the estimation value E is calculated from the following equation:

E = Σi Σj |aij − bij|,

where the summation is taken over all pixels (i, j) in the block.
Thus, a huge number of calculations are necessary to obtain the estimation value. Moreover, after obtaining the estimation value for every reference block RB in search area SE, the motion vector should be determined, for which another huge number of calculations are required. To perform this motion vector detection at high speed, a variety of operation algorithms have been proposed. Operation algorithms in the motion picture compression system according to the MPEG method are described in the following articles.
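The block matching computation described above can be sketched as follows. This is an illustrative model only; the NumPy representation, array sizes, and function names are assumptions for illustration, not taken from the embodiments.

```python
import numpy as np

def sad(template, candidate):
    # Estimation value: sum of absolute differences |aij - bij|
    # over all pixels of the 16x16 block.
    return np.abs(template.astype(int) - candidate.astype(int)).sum()

def full_search(template, reference, cx, cy, r):
    # Evaluate every candidate displacement (dx, dy) in the search
    # area [-r, r-1] and keep the one giving the minimum SAD; the
    # positional vector of that candidate is the motion vector.
    best, best_mv = None, (0, 0)
    for dy in range(-r, r):
        for dx in range(-r, r):
            cand = reference[cy + dy:cy + dy + 16, cx + dx:cx + dx + 16]
            e = sad(template, cand)
            if best is None or e < best:
                best, best_mv = e, (dx, dy)
    return best_mv, best
```

Even this small sketch makes the computational burden visible: a search range of 2·r in each direction requires (2·r)² SAD evaluations of 256 pixel differences each.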
P. Pirsch et al., "VLSI Architecture for Video Compression--A Survey", Proc. IEEE, Vol. 83, No. 2, pp. 220-246, 1995.
M. Yoshimoto et al., "ULSI Realization of MPEG2 Realtime Video Encoder and Decoder--An Overview", IEICE Trans. Electron., Vol. E78-C, No. 12, pp. 1668-1681, 1995.
Tanaka et al., "MPEG2 Encoding LSIs About To Change Audiovisual Equipment For Household Use", Nikkei Electronics, No. 711, Mar. 9, 1998.
LSIs for motion detecting operations designed to detect motion vectors at high speed are described in the following articles.
K. Ishihara et al., "A-Half-Pel Precision MPEG2 Motion Estimation Processor with Concurrent Three-Vector Search", ISSCC Digest of Technical Papers, pp. 288-289, 1995.
A. Ohtani et al., "A Motion Estimation Processor for MPEG2 Video Real Time Encoding at Wide Search Range", Proc. IEEE Custom Integrated Circuits Conference, pp. 405-408, 1995.
A. Hanami et al., "A 165-GOPS Motion Estimation Processor with Adaptive Dual-Array Architecture for High Quality Video-Encoding Applications", Proc. IEEE Custom Integrated Circuits Conference, pp. 169-172, 1998.
An all-sample full-search method enables the most accurate motion vector detection, in which method the above-described estimation values are calculated for all motion vector candidates within a search area. Specifically, an estimation value is obtained by calculating differences for all the pixel data in a picture block under search (i.e., a template block) and in a reference picture block. A positional vector of the reference picture block having the minimal estimation value among the estimation values estimated for all the estimation points in the search area is determined as a motion vector.
This all-sample full-search method, however, requires a huge amount of calculation and takes a long time to determine the motion vector. Thus, for high-speed motion vector detection through reduction of the computational amount, it is necessary to narrow the search area to reduce the number of estimation points. This means that, when a motion vector detecting apparatus is implemented with a single LSI, the motion vector search range of that single-LSI apparatus must be narrowed. Consequently, in order to search for a motion vector over a wide range according to the all-sample full-search method, a plurality of LSIs (motion vector detecting apparatuses) must be operated in parallel. This increases the number of LSIs to be used, resulting in increased power consumption as well as increased apparatus scale.
To reduce the amount of computation in the all-sample full-search method, a variety of approaches have been proposed, as follows: a sub-sampling approach, in which the differential operation is performed on only the data of a part of the pixels at each searching position (for each motion vector candidate, or at each estimation point); an algorithmic searching approach, in which the differential operation between a template block and a reference picture block is performed at only a part of the coordinate positions in a search area according to a specific algorithm; and a combination of the sub-sampling approach and the algorithmic searching approach. A motion picture compression apparatus with a motion vector detecting circuit using the combined approach of sub-sampling and algorithmic searching is described, for example, in the following article.
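The two computation-reducing approaches can be sketched together as follows. The sub-sampling rate (4:1 horizontal) and the estimation-point grid (every second position) are assumed parameters for illustration, not values taken from any cited apparatus.

```python
import numpy as np

def sad_subsampled(template, candidate, step=4):
    # Sub-sampling approach: the differential operation is
    # performed on only part of the pixels, here every `step`-th
    # column of the block (4:1 horizontal sub-sampling).
    return np.abs(template[:, ::step].astype(int)
                  - candidate[:, ::step].astype(int)).sum()

def coarse_search(template, reference, cx, cy, r, grid=2):
    # Algorithmic searching approach: the estimation value is
    # computed at only part of the coordinate positions, here
    # every `grid`-th position of the search area.
    best, best_mv = None, (0, 0)
    for dy in range(-r, r, grid):
        for dx in range(-r, r, grid):
            cand = reference[cy + dy:cy + dy + 16, cx + dx:cx + dx + 16]
            e = sad_subsampled(template, cand)
            if best is None or e < best:
                best, best_mv = e, (dx, dy)
    return best_mv
```

Combining both reduces the work here by a factor of roughly 16 (4 from pixel sub-sampling, 4 from the coarser estimation-point grid) at the cost of search precision.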
M. Mizuno et al., "A 1.5 W Single-Chip MPEG2 MP@ML Encoder with Low-Power Motion Estimation and Clocking", ISSCC Digest of Technical Papers, pp. 256-257, 1997.
In these motion vector searching approaches, however, the size of the search area (the motion vector search range) is fixed, and the motion vector is searched for according to an algorithm that is statically determined independently of the characteristics of the input current picture data. Therefore, a situation may arise in which, for some current picture data, a motion vector having an estimation value substantially equal to that obtained using the all-sample full-search method can be detected, but for other current picture data, a motion vector having an estimation value considerably greater than that derived by the all-sample full-search method is detected, thereby hindering efficient data compression. In the MPEG method, a current picture is coded using one of the following coding schemes: intra-frame (or intra-field) predictive coding; unidirectional predictive coding, in which a picture is predicted from a picture preceding in time; and bi-directional predictive coding, in which a predictive picture is created using two pictures, one preceding and one succeeding in time.
For prediction of B picture 402, predictions 411 and 412 are performed. Prediction 411 is performed using I picture 401 preceding in time, and prediction 412 is performed using P picture 404 succeeding in time. B pictures are not used for prediction. Prediction 413 for P picture 404 is performed using I picture 401 preceding in time.
Between B picture 402 and P picture 404 there exists B picture 403. Therefore, the time difference (hereinafter referred to as the "inter-frame distance") between B picture 402 and I picture 401 is not the same as that between P picture 404 and B picture 402. As the inter-frame distance becomes larger, the degree of motion increases. Thus, it is necessary to set the prediction range as large as possible, which means that a wider search range is required for detecting the motion vector. Conversely, if the inter-frame distance is small, the degree of motion is small, and an optimal motion vector can be detected using a small prediction range. Accordingly, in the case of the P picture having a long inter-frame distance for unidirectional prediction 413, it is necessary to set the search area as large as possible. By contrast, in the case of forward (or backward) prediction in bi-directional prediction 411 or 412, it is desirable to determine the search area allocated to each reference picture in consideration of its inter-frame distance.
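The relationship between inter-frame distance and required search range can be sketched as a simple linear rule. The proportional parameterization and the base range of 32 are assumptions for illustration (chosen so that a distance of 3 frames yields the ±96 range used later in the description), not a rule stated in the text.

```python
def search_range(base_range, inter_frame_distance):
    # The degree of motion, and hence the required search range,
    # grows with the inter-frame distance. `base_range` is the
    # horizontal half-range needed per frame of distance (an
    # assumed linear parameterization).
    return base_range * inter_frame_distance
```

Under this rule, prediction 413 from I picture 401 to P picture 404 (inter-frame distance 3) would need a three times wider range than a prediction over a single frame interval.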
Conventionally, however, motion vector detection has been performed in a fixed manner, without taking the inter-frame distance at the time of prediction into consideration. This causes a problem in that an unnecessarily large or an overly narrow search area is set for the current picture of a search target, making the motion vector detection inefficient and inaccurate.
An object of the present invention is to provide a motion vector detecting apparatus that enables efficient motion vector detection in both unidirectional and bi-directional predictions, without increasing an amount of hardware.
Another object of the present invention is to provide a motion vector detecting apparatus that improves efficiency in motion vector detection relative to the amount of hardware, for both unidirectional and bi-directional predictions.
A motion vector detecting apparatus according to the present invention includes a plurality of motion vector detecting units that receive template block pixel data and search area pixel data and perform a prescribed operational process according to a block matching method to detect a motion vector for the template block according to the results of the process. The plurality of motion vector detecting units include motion vector detecting units differing in either or both of template block size and search area size. The template block size indicates the number of pixels to be used for motion vector estimation among the pixels in a macroblock, where the macroblock is a unit for the process according to the block matching method. The search area size represents a motion vector search range.
The motion vector detecting apparatus according to the present invention further includes: a control circuit that sets search areas for the plurality of motion vector detecting units so that the search area for the plurality of motion vector detecting units as a whole in a reference picture is varied according to characteristics of a current picture including the template block pixels; and a motion vector determining circuit that receives motion vector data from the plurality of motion vector detecting units to determine a final motion vector for the template block.
Since the search area for the plurality of motion vector detecting units as a whole is allowed to vary according to the characteristics (or the type of predictive coding) of the current picture of interest, it is possible to optimize a motion vector search range depending on the inter-frame distance. This enables efficient motion vector detection.
Furthermore, since the plurality of detecting units having different template block sizes and/or different search area sizes are used, it is possible to readily change the overall search area size as well as precision in searching. Thus, the motion vector can be detected efficiently, without increasing the amount of hardware.
The foregoing and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
First Embodiment
For the final detection result FRM generated from final motion determining unit FMD#, a plurality of motion vectors may be output in parallel for the respective predictive coding methods, or one optimal, final motion vector may be generated from a plurality of detection results, depending on the applied system.
Motion detecting unit MD# further includes an adder 3 that adds estimation value components (e.g., absolute difference values) output in parallel from calculating unit 2, and a comparison unit 4 that receives the sum from adder 3 for comparison with a sum obtained in a past cycle, and determines a positional vector of a search window block giving the minimum sum to be a motion vector candidate. Detection result RM is output from comparison unit 4.
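The behavior of comparison unit 4 can be sketched as a running-minimum register. The class and method names are assumptions for illustration; the unit simply retains the smallest sum seen so far together with its positional vector.

```python
class ComparisonUnit:
    # Keeps the minimum estimation value (sum of absolute
    # differences) received so far and the positional vector of
    # the search window block that produced it.
    def __init__(self):
        self.best = None
        self.best_vector = None

    def update(self, estimation_value, positional_vector):
        # One call per estimation cycle, comparing the new sum
        # with the sum obtained in past cycles.
        if self.best is None or estimation_value < self.best:
            self.best = estimation_value
            self.best_vector = positional_vector
        return self.best_vector
```

After all estimation points have been processed, `best_vector` holds the motion vector candidate output as detection result RM.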
Delay buffers DL#0-DL#n transmit data for one pixel in each estimation value calculating cycle to corresponding columns of element processor columns PEL#0-PEL#n. Each of delay buffers DL#0-DL#n has a first-in first-out arrangement, and provides a delay time corresponding to the number of stages included therein.
Element processor columns PEL#0-PEL#n transfer, in one direction, the search window pixel data received from corresponding delay buffers DL#0-DL#n. Search window pixel data shifted out from the columns of element processors PEL#0-PEL#n are sent to delay buffers DL#1-DL#n disposed adjacent on their upstream sides. From the column of element processors PEL#n, the search window pixel data are shifted out pixel by pixel every time the operation for calculating an estimation value is completed. Meanwhile, new search window pixel data PY are input to delay buffer DL#0 pixel by pixel.
In addition, template block pixel data PX are provided to the respective columns of element processors PEL#0-PEL#n. Thus, in the columns of element processors PEL#0-PEL#n, the template block pixels reside all the time, whereas the search window pixels are shifted pixel by pixel.
Element processor PE may be configured to contain a plurality of registers 5a and a plurality of registers 5b and to calculate the absolute difference values for a plurality of template block pixels in a time division multiplex manner by absolute difference value circuit 5c. Here, for simplicity, element processor PE is shown storing one pixel of template block pixel data and one pixel of search window pixel data.
Therefore, at the initial state, pixel data of a positional vector (0, -4) in a search window are stored in the columns of element processors. In this state, absolute difference values between the pixels in template block 6 and corresponding pixels in the corresponding search window block are calculated as estimation value components. These absolute difference values are sent in parallel from absolute difference value circuit 5c in
When the calculation of this estimation value is completed, search window pixel data for one pixel is shifted in from the outside. Through this shifting operation, a shift of the search window pixel data by one pixel is performed in each of delay buffers DL#0-DL#n and each of the columns of element processors PEL#0-PEL#n. This shifting operation is always in one direction, and as a result, the pixel data of one pixel at the upper left corner of search window 7 is shifted out, as shown in FIG. 6A.
Since the search window pixel data are all shifted by one pixel while the pixel data of template block 6 reside in the columns of element processors PEL#0-PEL#n, the columns of element processors PEL#0-PEL#n now store pixel data in a search window block of a positional vector (0, -3). Next, in a similar manner, each absolute difference value is calculated in each element processor PE in the columns of element processors to obtain an estimation value. When the calculation of the estimation value is completed, the shifting-in operation of search window pixel data PY is performed again. This operation is repeated seven times in total, so that the search window block is moved to the bottom of the search window 7, as shown in FIG. 6B. The positional vector of this search window block is (0, 3). After calculating the estimation value for this positional vector (0, 3), three pixels in the search window pixel data are consecutively shifted in, and thus, the next search window is formed.
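The scan order produced by the one-directional shifting can be sketched as follows, using the vertical range of -4 to +3 from the example above. This is a simplified single-column model; the function name and parameterization are assumptions for illustration.

```python
def scan_order(v_range=(-4, 3), columns=1):
    # Model of the shift-based scan: within each column of the
    # search area the search window block slides down from the
    # top positional vector (e.g. (0, -4)) to the bottom one
    # (e.g. (0, +3)), one pixel shift per estimation cycle;
    # extra shifts then move the window to the next column.
    order = []
    for c in range(columns):
        for dy in range(v_range[0], v_range[1] + 1):
            order.append((c, dy))
    return order
```

For the example range, eight positional vectors per column are visited with seven one-pixel shifts in between, matching the "repeated seven times" of the description.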
This state is the same as that shown in
In this motion detecting unit, the comparison unit detects a motion vector based on the estimation values received in a predetermined number of operation cycles. As the number of estimation points (positional vectors in the search window block) increases, so does the number of operation cycles, and in response, the time required for motion vector detection is lengthened. Basically in the present invention, the motion detecting units are used adaptively based on the characteristics of a current picture including the template block.
Conversely, if the picture of interest is a bi-directionally predictive picture (B picture), motion detecting units MD#1-MD#n are divided into two groups DGB and DGF, as shown in FIG. 7B. The number of motion detecting units included in each of groups DGB and DGF is determined appropriately depending on the system configuration. To motion detecting units MD#1-MD#i included in group DGB, a backward picture succeeding the B picture in time is allocated, and to motion detecting units MD#j-MD#n included in group DGF, a forward picture preceding the B picture in time is allocated. Forward prediction and backward prediction are performed in parallel in groups DGF and DGB, so that high-speed bi-directional prediction is accomplished. Furthermore, since the forward picture and the backward picture have different inter-frame distances with respect to the B picture, the search areas for groups DGF and DGB may be adapted to differ in range and/or location from each other, taking the inter-frame distances into consideration. Thus, motion vector searching can be performed more effectively.
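The adaptive grouping of the units can be sketched as a small allocation function. The even split and the dictionary keys are assumptions for illustration; the description leaves the group sizes to the system configuration.

```python
def allocate_units(units, picture_type, split=None):
    # For a P picture, all units jointly search the single
    # reference picture. For a B picture, the units are divided
    # into a backward group (DGB, for the succeeding reference
    # picture) and a forward group (DGF, for the preceding one).
    if picture_type == 'P':
        return {'forward': units}
    split = split if split is not None else len(units) // 2
    return {'backward': units[:split], 'forward': units[split:]}
```

The `split` parameter models the freedom to assign more units to the reference picture with the longer inter-frame distance.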
The motion detecting units MD#1-MD#n can be used adaptively depending on the characteristics of the picture to be searched, and the search area as well as the search precision can be optimized for both the unidirectional and bi-directional predictions. Therefore, the amount of hardware of motion detecting units MD#1-MD#n can also be optimized.
Meanwhile, for the bi-directionally predictive picture, motion vectors for the forward and backward pictures are output in parallel from the final motion determining unit.
Motion detecting units MD#1-MD#3 include element processors disposed corresponding to respective estimation pixels as shown in
As shown in
In search areas SA1 and SA3, coarse search is performed by sub-sampling the estimation points. On the other hand, in search area SA2, fine (full) search is performed, in which all the estimation points are used in calculating an estimation value. In this case, since the sub-search areas (i.e., the search areas allocated to the respective motion detecting units) do not overlap with one another, the entire search area for the template block has a horizontal range from -96 to +95, allowing a motion vector to be searched for over a wide area. In the central region of the entire search area, fine search is performed, so that the estimation values can be calculated accurately. As described above, full search (or fine search) is performed in search area SA2, where the motion vector is very likely to exist, while coarse search is performed in search areas SA1 and SA3, where the possibility that the motion vector exists is small. This enables efficient motion vector search for a P picture having a large degree of motion and a long inter-frame distance.
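The division into sub-search areas, and the final selection among the candidates the units report, can be sketched as follows. The horizontal ranges come from the description; the data-structure shape and the assumption that each unit reports a (vector, estimation value) pair are illustrative.

```python
def hybrid_search_areas():
    # Three non-overlapping sub-search areas for a P picture:
    # coarse search at the periphery (SA1, SA3), fine search in
    # the central region (SA2) where the vector likely lies.
    return [
        {'name': 'SA1', 'h_range': (-96, -33), 'mode': 'coarse'},
        {'name': 'SA2', 'h_range': (-32, 31), 'mode': 'fine'},
        {'name': 'SA3', 'h_range': (32, 95), 'mode': 'coarse'},
    ]

def covers(areas):
    # Overall horizontal range of the whole search area.
    lo = min(a['h_range'][0] for a in areas)
    hi = max(a['h_range'][1] for a in areas)
    return (lo, hi)

def determine_final_vector(results):
    # Motion vector determining circuitry: among the candidates
    # reported by the units as (vector, estimation_value) pairs,
    # pick the one with the smallest estimation value.
    return min(results, key=lambda r: r[1])[0]
```

Together the three sub-areas cover -96 to +95 horizontally while only the central third is searched at full precision.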
As shown in
Each of buffer memories 43 to 48 is set at an output high-impedance state when unselected. The template block pixel data sub-sampled by sub-sampling circuit 49 are provided to motion detecting units MD#1 and MD#3 included in motion vector detecting apparatus 1. The template block pixel data read out from current picture memory 40 are supplied to motion detecting unit MD#2. The sub-sampled search window pixel data from sub-sampling circuit 50 are provided to motion detecting unit MD#1, and the sub-sampled search window pixel data from sub-sampling circuit 51 are provided to motion detecting unit MD#3. Sub-sampling circuit 49 performs 4-to-1 sub-sampling in the horizontal direction to generate one pixel per four pixels. The sub-sampling rates of sub-sampling circuits 50 and 51 may also be 1/4, or they can be any other rate.
A control circuit 55 is provided to control reading and writing of pixel data. Now, an operation of the motion vector detecting system shown in
From current picture memory 40, template block pixel data are read out in a raster scan order. Sub-sampling circuit 49 performs the 4 to 1 sub-sampling operation on the pixel data received, and generates one pixel for each four pixels in a horizontal direction. In this sub-sampling operation, the one pixel may be generated by applying an operational process to four pixels (to generate an arithmetic average value or weighted mean value, for example), or, one pixel of data in a specific location may simply be selected out of the four pixels. What is needed is to generate one pixel per four pixels in the horizontal direction. The template block pixel data from sub-sampling circuit 49 are sent to motion detecting units MD#1 and MD#3 included in motion vector detecting apparatus 1. Thus, motion detecting units MD#1 and MD#3 each store template block pixel data of the size of 4 pixels in a horizontal direction and 16 pixels in a vertical direction. Pixel data read out from current picture memory 40 are stored in motion detecting unit MD#2. Thus, motion detecting unit MD#2 stores template block pixel data of the size of 16 pixels by 16 pixels.
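The 4-to-1 horizontal sub-sampling admits either realization mentioned above: an operational process over the four pixels (e.g., an arithmetic average) or simple selection of one pixel at a specific location. Both can be sketched as follows; the function name, the `mode` parameter, and the choice of the first pixel for selection are assumptions for illustration.

```python
import numpy as np

def subsample_4to1(row, mode='average'):
    # Generate one pixel per four pixels in the horizontal
    # direction. `row` length must be a multiple of 4.
    row = np.asarray(row, dtype=float)
    groups = row.reshape(-1, 4)
    if mode == 'average':
        # operational process over the four pixels
        return groups.mean(axis=1)
    # simple selection: the pixel at a specific (here the first)
    # location out of each group of four
    return groups[:, 0]
```

Applied to each of the 16 rows of a macroblock, this yields the 4-pixel-by-16-pixel template blocks stored in motion detecting units MD#1 and MD#3.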
At the time of writing of the template block pixel data, as shown in
Pictures are written alternately into reference picture memories 41 and 42 on a picture basis. In the case where the pixel data stored in current picture memory 40 is pixel data of a bi-directionally predictive picture (B picture), no picture is stored into reference picture memory 41 or 42. Thus, pictures stored into reference picture memories 41 and 42 are unidirectionally predictive pictures (P pictures) or intra-frame (or intra-field) predictive pictures (I pictures).
When the current picture is a unidirectionally predictive picture (P picture), the pixel data from reference picture memory 41 or 42, whichever has had data written later, are employed. If the current picture is a bi-directionally predictive picture (B picture), both reference picture memories 41 and 42 are used. Which of the reference pictures stored in reference picture memories 41 and 42 is closer in time to the bi-directionally predictive picture is determined by control circuit 55. In the MPEG method, two consecutive bi-directionally predictive pictures (B pictures) are provided either between an I picture and a P picture or between two P pictures. Thus, when the first of the two bi-directionally predictive pictures is to be processed, the reference picture that has been written more recently into reference picture memory 41 or 42 is the more distant in time, and when the second of the bi-directionally predictive pictures is to be processed, that same more recently written picture is the closer in time. This is as shown in the picture sequence in
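The decision rule just described can be sketched as a small function. The memory identifiers 41 and 42 follow the description; the `b_index` parameter (0 for the first of the two consecutive B pictures, 1 for the second) is an assumed encoding for illustration.

```python
def closer_reference(most_recent_memory, b_index):
    # For the first of the two consecutive B pictures, the most
    # recently written reference picture is the more distant one
    # in time, so the *other* memory holds the closer picture.
    # For the second B picture, the most recently written
    # reference picture is itself the closer one.
    other = 42 if most_recent_memory == 41 else 41
    if b_index == 0:
        return other
    return most_recent_memory
```

This reflects the GOP structure in which two B pictures sit between successive anchor (I or P) pictures.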
After the picture data are once written into reference picture memory 41 in a raster scan order, search area pixel data are read out from memory 41 in the order of pixels aligned in the vertical direction, to be stored into buffer memories 43-45. Similarly, pixel data are read out from reference picture memory 42 sequentially in the vertical direction on the screen when it is activated. In buffer memories 43-48, pixel data of the search window size are stored. Sub-sampling circuits 50 and 51 each perform a sampling operation on the received pixel data at a prescribed sub-sampling rate. Since sub-sampling circuits 50 and 51 receive pixel data that are read out from buffer memories 43 and 44, and 46 and 47, along the vertical direction on the screen, sub-sampling circuits 50 and 51 perform the sub-sampling operation using pixel data having a delay of a vertical search window size.
Delay stages 56-58 are each formed of a first-in, first-out memory or a shift register, and delay the received pixel data by a time corresponding to the size of the search block in the vertical direction. The delay time provided by delay stages 56-58 is expressed as 2·r+M cycles, where the template block and the search window block have a vertical size of M rows and the search range has a size of 2·r in the vertical direction.
As shown in
By using sub-sampling circuits 50 and 51, buffer memories 43-48 can all be made into the same configuration, and the search window pixel data can all be read out at the same timing. When pixel data are read out sequentially in the vertical direction from reference picture memories 41 and 42, delay stages 56-58 can be used as each of buffer memories 43-48. In any case, motion detecting units MD#1 and MD#3 operate at one fourth the frequency of motion detecting unit MD#2 to calculate estimation values. Thus, even if the number of estimation points in coarse search differs from that in fine search, the operation cycle for calculating a motion vector for one template block can be made the same. This is because, as shown in
In
Whether the target picture, or the current picture, is a B picture or a P picture is determined at the encoder appropriately, and the determined result is placed in a picture header as its picture type. For a P picture, address generating circuit 55c generates, for the two reference picture memories 41 and 42, an address for a search area, with the template block address AD received from current picture memory read control circuit 55b being its central address (real rear point). Reference picture memory read control circuit 55e sequentially reads out pixel data from reference picture memories 41 and 42 according to the address signal received from address generating circuit 55c. Which reference picture memory is to be accessed according to the address signal received from address generating circuit 55c is determined by memory distance decision circuit 55d.
In the MPEG method, memory distance decision circuit 55d identifies the order of the current picture in the group of pictures (GOP) according to a temporal reference value, and determines which of reference picture memories 41 and 42 stores the pixel data closer in time. Normally, the temporal reference value is placed after a start code of a picture layer and transmitted, and represents the order of the picture within the group of pictures GOP.
In the cycle #5, P picture P3 is coded referring to data of P picture P2 stored in reference picture memory 2 (42). In the coding of a P picture, a reference picture stored in a reference picture memory that is different from the reference picture memory storing pixel data of the target P picture is used as a reference picture for coding. Thus, it is possible to readily identify the reference picture memories based on these characteristics.
Address generating circuit 55c generates addresses sequentially, using a counter for example, with three consecutive addresses AD+(-96, -32), AD+(-32, -32), and AD+(32, -32) being the leading addresses, according to a result of a determination by a B/P determining circuit, to read out search window pixel data (in this case, one reference picture memory is accessed). In the case of a B picture, address generating circuit 55c generates addresses AD+(-64, -32) and AD+(0, -32) for the reference picture memory storing the pixel data of the picture that is distant in time, and generates address AD+(-32, -32) for the reference picture memory storing the pixel data of the closer-in-time picture. Here, address AD represents the center point of the template block, or the real rear point (0, 0). Thus, both reference picture memories 41 and 42 are accessed.
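The leading-address generation can be sketched as follows. The offsets are those stated above; representing the address AD as an (x, y) pair and the dictionary keys are assumptions for illustration.

```python
def search_window_addresses(ad, picture_type):
    # Leading addresses of the search window columns, generated
    # relative to the template block address AD (the real rear
    # point). For a P picture, three sub-areas in one reference
    # memory; for a B picture, two sub-areas in the distant-in-
    # time reference memory and one in the closer-in-time one.
    ax, ay = ad
    def at(offsets):
        return [(ax + dx, ay + dy) for dx, dy in offsets]
    if picture_type == 'P':
        return {'near': at([(-96, -32), (-32, -32), (32, -32)])}
    return {'distant': at([(-64, -32), (0, -32)]),
            'near': at([(-32, -32)])}
```

For a B picture, both reference picture memories are thus addressed in the same pass, with twice the search width given to the more distant reference picture.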
Buffer memory read control circuit 55f selectively activates buffer memory groups according to the output signal of memory distance decision circuit 55d that is activated when a B picture is coded, and generates addresses so as to sequentially read out vertically consecutive pixel data.
As explained above, according to the second embodiment of the present invention, the search areas for motion detecting units are changed depending on whether the target picture is a bi-directionally predictive picture or a unidirectionally predictive picture. In addition, the motion detecting units are used adaptively according to the characteristics of the target picture. Thus, motion vector detection can be performed efficiently.
Also in the third embodiment, motion detecting units MD#1 and MD#3 each have a template block of 16 pixel rows and 4 pixel columns, and element processors are placed corresponding to these pixels. The template block size of motion detecting unit MD#2 is 16 pixel rows by 16 pixel columns, and element processors are disposed corresponding to the pixels. Thus, the amount of hardware is reduced in each of motion detecting units MD#1 and MD#3 to approximately one fourth compared to the case in which three motion detecting units each corresponding to 16 pixel rows by 16 pixel columns are used in parallel.
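The hardware saving described above follows directly from the block matching operation: each candidate position costs only as many absolute-difference operations as there are estimation pixels in the template. The following is a minimal sketch under stated assumptions (hypothetical function names; a simple software full search, not the element-processor array): with col_step=4, only 16 pixel rows by 4 pixel columns (64 estimation pixels) of the 16 by 16 macroblock enter each SAD, roughly quartering the per-candidate work.

```python
import numpy as np

def sad(template, candidate, col_step=1):
    """Sum of absolute differences; col_step > 1 sub-samples pixel columns."""
    return int(np.abs(template[:, ::col_step].astype(int)
                      - candidate[:, ::col_step].astype(int)).sum())

def full_search(current, reference, top, left, search=4, col_step=4, block=16):
    """Exhaustive block matching around (top, left).

    Returns the displacement (dy, dx) minimizing the SAD over the search
    area and the corresponding SAD value.
    """
    template = current[top:top + block, left:left + block]
    best, best_sad = (0, 0), None
    for dy in range(-search, search):
        for dx in range(-search, search):
            y, x = top + dy, left + dx
            if 0 <= y <= reference.shape[0] - block and 0 <= x <= reference.shape[1] - block:
                s = sad(template, reference[y:y + block, x:x + block], col_step)
                if best_sad is None or s < best_sad:
                    best_sad, best = s, (dy, dx)
    return best, best_sad
```

The choice of col_step trades estimation pixels (and hence hardware) against matching fidelity, which is exactly the trade the third embodiment makes between motion detecting units MD#1/MD#3 and MD#2.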
When the search areas shown in
In addition, the amount of hardware can be reduced to approximately half of that in the case where three motion detecting units, each with a search area extending horizontally from -32 to +31 and vertically from -32 to +31, are used; accordingly, power consumption can be reduced. For such reconfiguration of search areas as well as adaptive use of motion detecting units, the same configuration as in
Picture characteristic designating signal B/P indicates whether the target picture is a B picture or a P picture. Address calculating circuit 55ca generates a leading address, and pixel data in the search window are read out sequentially in the vertical direction. When the bottom of the search window is reached, pixel data on the next column of the search window are read out.
In the third embodiment, coarse search or fine search (full search) may be performed in motion detecting units MD#1 and MD#3. Horizontal and vertical components are those for the macroblock. When the coarse search is performed, the configuration shown in
As explained above, according to the third embodiment of the present invention, motion detecting units having different search areas are used adaptively according to the characteristics of the target picture. Thus, efficient motion vector detection can be realized without increasing the amount of hardware.
Motion detecting unit MD#B has 128 estimation pixels and has a specific search area of a range horizontally from -48 to +47 and vertically from -16 to +15.
Motion detecting unit MD#B has 128 estimation pixels, and performs motion vector search using a template block consisting of pixels arranged in 8 pixel columns by 16 pixel rows. Motion detecting unit MD#B includes, as seen from
As shown in
The value of the center point (i, j) of the search areas for this unidirectionally predictive picture is determined according to the history of the motion vectors. Final motion determining unit FMD# determines a final motion vector as follows.
If the motion vector is detected in the region where search areas SAA and SAB overlap, i.e., if it is detected within search area SAB, the result of detection through the full search by motion detecting unit MD#B is selected preferentially.
If the motion vector is detected by motion detecting unit MD#A in a region where the two search areas do not overlap, the detection result is compared to that of motion detecting unit MD#B, and according to the result of comparison, the motion vector designating a block with greater correlation is selected. Here, the following equation is used as the criterion.
where MAD(1) is the estimation value of motion detecting unit MD#A, MAD(2) is the estimation value of motion detecting unit MD#B, and a and b are arbitrary constants.
As motion detecting units MD#A and MD#B have different numbers of estimation pixels, constant "a" is applied to normalize the estimation values (so that the numbers of estimation pixels are made effectively equal), and constant "b" is applied as an offset. When the function f is positive, the estimation value of the motion vector obtained by motion detecting unit MD#A is larger, and the motion vector detected by motion detecting unit MD#B is selected. When the function f is negative, the estimation value of the motion vector from motion detecting unit MD#A is smaller, indicating greater correlation, and the motion vector detected by motion detecting unit MD#A is selected.
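The criterion equation itself is not reproduced in this text; a plausible form consistent with the description of the constants is f = a·MAD(1) − MAD(2) + b. The sketch below is an illustration under that assumption (hypothetical names; a = 2 would equalize 64 versus 128 estimation pixels), combining the overlap rule with the sign of f.

```python
def select_final_mv(mv1, mad1, mv2, mad2, in_sab, a=2, b=0):
    """Choose the final motion vector from the two detecting units.

    mv1, mad1 -- vector and estimation value from coarse-search unit MD#A
    mv2, mad2 -- vector and estimation value from full-search unit MD#B
    in_sab    -- True if mv1 lies inside the full-search area SAB
    a, b      -- normalization and offset constants (assumed form of f)
    """
    if in_sab:
        # Overlapping region: the full-search result is preferred.
        return mv2
    f = a * mad1 - mad2 + b
    # f >= 0: MD#A's normalized estimation value is larger (weaker
    # correlation), so MD#B's vector is selected; f < 0: MD#A's vector wins.
    return mv2 if f >= 0 else mv1
```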
Thus, by using two motion detecting units, the efficiency and precision of the motion vector search can be considerably improved over the case where a single motion detecting unit is used. More specifically, compared to the sole use of a motion detecting unit having the same amount of hardware and a similar number of estimation pixels as motion detecting unit MD#B, the search range can be significantly widened through coarse search, improving the efficiency of the motion vector search. In addition, compared to the sole use of a motion detecting unit (or motion predictor) having the same amount of hardware and a similar number of estimation pixels as motion detecting unit MD#A, the full search method is utilized with the search area size substantially maintained, so that a more precise motion vector search becomes possible and the precision of motion vector detection is significantly improved.
Motion detecting unit MD#A has 64 estimation pixels, and motion detecting unit MD#B has 128 estimation pixels. Thus, in order to sub-sample a template block of 16 pixels by 16 pixels in the horizontal direction, sub-sampling circuits 60 and 61 are provided for sub-sampling horizontally aligned pixels in the pixel data from current picture memory 40. Here, sub-sampling circuits 60 and 61 are cascaded. Sub-sampling circuit 60 performs horizontal 2:1 sub-sampling, and applies the sub-sampled template block pixel data to motion detecting unit MD#B.
Receiving the sub-sampled pixel data from sub-sampling circuit 60, sub-sampling circuit 61 performs a further horizontal 2:1 sub-sampling, and applies the resultant sub-sampled template block pixel data to motion detecting unit MD#A.
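The cascaded sub-sampling stages can be sketched as follows. This is a simplified model that keeps every other pixel column; the actual circuits may filter before decimating.

```python
import numpy as np

def subsample_h2(block):
    """Horizontal 2:1 sub-sampling: keep every other pixel column."""
    return block[:, ::2]

# 16x16 macroblock -> 16 rows x 8 columns (128 estimation pixels) for MD#B,
# then -> 16 rows x 4 columns (64 estimation pixels) for MD#A.
macroblock = np.arange(256).reshape(16, 16)
template_b = subsample_h2(macroblock)   # fed to motion detecting unit MD#B
template_a = subsample_h2(template_b)   # fed to motion detecting unit MD#A
```

Cascading the two stages means a single 2:1 circuit design serves both units, which is presumably why the circuits are chained rather than fed in parallel from memory 40.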
Allocation of search areas as well as distribution of search window pixel data are performed under the control of control circuit 70. When the target picture is a unidirectionally predictive picture, the search window pixel data stored in either reference picture memory 41 or 42 are utilized. For example, if the picture data stored in reference picture memory 41 are being utilized, buffer memories 62 and 63 are used. The search window pixel data from buffer memory 62 are horizontally sub-sampled by sub-sampling circuit 66, and coarse search is performed at motion detecting unit MD#A. The search window pixel data read out from buffer memory 63 are sent to motion detecting unit MD#B, where full search is performed.
If the target picture is a bi-directionally predictive picture, both reference picture memories 41 and 42 are used. If a reference picture that is closer in time to the target picture is stored in reference picture memory 41, buffer memories 63 and 64 are used. Specifically, the data of the reference picture that is closer in time are sent via buffer memory 63 to motion detecting unit MD#B and full search is performed therein. The data of the reference picture that is more distant in time are sent via buffer memory 64 and sub-sampling circuit 66 to motion detecting unit MD#A, where coarse search is performed.
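The allocation rule above can be sketched as follows (hypothetical names; a minimal routing table, not the buffer-memory datapath): the temporally closer reference goes to the full-search unit MD#B, the more distant one to the coarse-search unit MD#A.

```python
def allocate_references(picture_type, closer_ref, distant_ref=None):
    """Map reference pictures to motion detecting units.

    Returns {unit: reference}; MD#B performs full search, MD#A coarse search.
    """
    if picture_type == 'P':
        # Unidirectional prediction: both units search the same reference.
        return {'MD#A': closer_ref, 'MD#B': closer_ref}
    # Bi-directional prediction: the closer reference receives the precise
    # full search, the more distant one the wide coarse search.
    return {'MD#A': distant_ref, 'MD#B': closer_ref}
```

This matches the intuition that motion relative to the nearer picture is small (narrow, precise search suffices), while motion relative to the farther picture is large (a wide but coarse search pays off).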
Center position determining circuit 70c calculates, for example, an average value of motion vectors over a prescribed number of operation cycles or over a plurality of pictures, and adds this averaged motion vector to the most recent motion vector to determine the center position. In this calculation, the average value may be obtained by applying appropriate weights to the motion vectors of the prescribed number of operation cycles or of the plurality of pictures, according to their distances from the target template block or the current picture.
Address generating circuit 70d calculates the center position (a, b) of the search area based on address AD received from current picture memory read control circuit 70b, and utilizes the center position (i, j) obtained from center position determining circuit 70c to calculate a corrected center position (a-i, b-j) of the search area. Using this center position as a real rear point, address generating circuit 70d generates addresses for search areas SAA and SAB. When picture characteristic designating signal B/P indicates that the target picture is a bi-directionally predictive picture, address generating circuit 70d ignores the center position information output from center position determining circuit 70c, to generate the addresses for search areas SAA and SAB simply according to template block address AD from current picture memory read control circuit 70b.
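The center-point correction described above can be sketched as follows. This is a hedged model with hypothetical names: a simple unweighted average over the motion-vector history is added to the most recent vector to give offset (i, j), and the corrected center is (a - i, b - j) as stated above.

```python
def search_center(ad, history):
    """Corrected search-area center for template-block address ad = (a, b).

    history -- motion vectors from recent operation cycles or pictures;
               their average plus the most recent vector gives (i, j).
    """
    if not history:
        return ad  # no history: keep the uncorrected center
    avg_i = sum(v[0] for v in history) / len(history)
    avg_j = sum(v[1] for v in history) / len(history)
    i = avg_i + history[-1][0]
    j = avg_j + history[-1][1]
    a, b = ad
    return (a - i, b - j)
```

Weighting the history by temporal distance, as the text also allows, would only change how avg_i and avg_j are computed.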
Memory distance decision circuit 70e determines which of reference picture memories 41 and 42 stores a reference picture having a shorter inter-frame distance, according to the sequence shown in
With the control circuit schematically shown in
Region determination circuit 81 outputs a signal "1" (at an H level) when motion vector MV1 is outside search area SAB. Calculating circuit 82 outputs a signal "1" (at an H level) when the result of calculation is negative. Region determination circuit 81 compares a horizontal component H and a vertical component V of motion vector MV1 with horizontal and vertical components of search area SAB, and determines whether motion vector MV1 is within search area SAB or not according to the result of comparison. Calculating circuit 82 simply performs operations according to the decision function f, and outputs a signal indicating the sign of the result of calculation. If the result of calculation is represented by 2's complement notation, the sign of the result can be determined from the most significant bit.
AND circuit 83 outputs a signal at an H level when the signals from region determination circuit 81 and calculating circuit 82 are both "1" at the H level. Therefore, gate circuit 84 is rendered conductive only when motion vector MV1 detected by motion detecting unit MD#A is outside search area SAB and the result of calculation by calculating circuit 82 is negative. Otherwise, gate circuit 85 is rendered conductive, and motion vector MV2 detected by motion detecting unit MD#B is output as final motion vector FMV.
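The sign test mentioned above can be illustrated in software as follows: in two's complement notation the most significant bit directly gives the sign, so the hardware needs no full magnitude comparison. The helper below is hypothetical and assumes a fixed word width.

```python
def is_negative_2s_complement(value, bits=16):
    """Sign of a two's-complement word, read from its most significant bit."""
    word = value & ((1 << bits) - 1)   # wrap the value into an n-bit word
    return (word >> (bits - 1)) & 1 == 1
```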
The configuration shown in
As described above, according to the fourth embodiment of the present invention, motion detecting units are used adaptively according to the predicting method applied to the current picture as well as the inter-frame distance, and their search areas are allocated accordingly. Therefore, it is possible to perform motion vector detection efficiently.
In addition, the center point of the search area is offset from the position of the real rear point in accordance with the motion of the entire screen image in a common direction. Accordingly, a motion vector can be detected efficiently even in the case where an image on a part of the screen moves only slightly.
For example, SAA center position determining circuit 70ca and SAB center position determining circuit 70cb receive and store motion vectors over a plurality of pictures from motion detecting units MD#A and MD#B, respectively, and determine center positions for respective search areas SAA and SAB according to the histories of the motion vectors for the template blocks at the same position over the plurality of pictures. In this case, they may be configured to receive motion vector information MVA and MVB from motion detecting units MD#A and MD#B, respectively, within one picture, to further modify the center positions initially set.
As described above, two center position determining circuits are provided for determining center positions of respective search areas, and addresses for the search window pixel data are generated separately. Thus, it is possible to differentiate from each other the center positions of search areas for motion detecting units MD#A and MD#B. Accordingly, in the case where two moving objects move in different directions, respective search areas can be allocated for the different directions, resulting in efficient motion vector detection.
In bi-directional prediction, search areas SAA and SAB are set in different reference pictures. In this case, the motion vector search areas for the respective template blocks of these two reference pictures have center points that are set at the same point (including the real rear point) or at different points.
For a bi-directionally predictive picture, the center points of the search areas may also be set at the same position, or alternatively, the center points for respective search areas may be offset from the real rear point (0, 0). The control circuit for the sixth embodiment is readily implemented from that in the fourth embodiment, by simply fixing the center position from the center position determining circuit to the real rear point (0, 0).
As described above, according to the sixth embodiment of the present invention, the center points of two search areas are both set at the real rear point for the unidirectionally predictive picture. Accordingly, it is possible to search a motion vector about the real rear point where motions are likely to concentrate, and thus, efficient motion vector detection is realized without increasing the amount of hardware.
In the seventh embodiment, when the target picture is a bi-directionally predictive picture, the center points of the search areas may be set as desired, since the search areas are set in different reference pictures.
To achieve the search area allocation as shown in
By setting one of the center points of search areas at the real rear point, a configuration for changing the position of the search area becomes unnecessary, and thus, the amount of hardware is reduced.
Motion detecting unit MD#C detects a motion vector in an integer precision, and its specific search area has a range horizontally from -64 to +63 and vertically from -32 to +31. It has 64 estimation pixels, and a template block of a size of 16 pixel rows by 4 pixel columns.
Motion detecting unit MD#D detects a motion vector in a fractional precision (half-pel precision), and its specific search area has a range horizontally from -4 to +3 and vertically from -2 to +1. It has 128 estimation pixels, and utilizes a template block having a size of 8 pixel columns and 16 pixel rows.
In the configuration shown in
Since the reference picture with a long inter-frame distance is allocated to motion detecting unit MD#C of the integer precision, the motion vector can be detected over a wide area, so that efficient motion vector detection is realized for a picture having a large degree of motion. In this search, coarse search (sub-sampling of estimation points) is performed, which reduces the time required for the motion vector detection.
In contrast, for the reference picture having a shorter inter-frame distance, motion detecting unit MD#D of the half-pel precision exhaustively searches through a narrow area, and precisely detects the motion vector for the picture with a small degree of motion.
Accordingly, in motion detecting unit MD#D of the half-pel precision, an interpolation circuit 90 is provided at a preceding stage of the calculating unit, to produce interpolated pixels for interpolation of the search area pixels. The interpolated pixels produced by this interpolation circuit and the original pixels are sent as the search area pixel data to the calculating unit.
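The interpolation stage can be sketched with standard bilinear half-pel filtering. The patent text does not spell out the filter, so the averaging of neighboring integer pixels commonly used for half-pel motion compensation is assumed here.

```python
import numpy as np

def half_pel_plane(pixels):
    """Upsample a pixel block to half-pel resolution by bilinear averaging.

    Output position (2y, 2x) holds the original (integer-position) pixel;
    odd positions hold averages of the neighboring integer pixels.
    """
    h, w = pixels.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=float)
    out[::2, ::2] = pixels
    out[::2, 1::2] = (pixels[:, :-1] + pixels[:, 1:]) / 2          # horizontal
    out[1::2, ::2] = (pixels[:-1, :] + pixels[1:, :]) / 2          # vertical
    out[1::2, 1::2] = (pixels[:-1, :-1] + pixels[:-1, 1:]
                       + pixels[1:, :-1] + pixels[1:, 1:]) / 4     # diagonal
    return out
```

Both the interpolated and original pixels then serve as search area pixel data, as described above for interpolation circuit 90.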
The search area pixel data are shifted out pixel by pixel from calculating unit 2, as explained in conjunction with
After the vector is detected in the integer precision at calculating unit 2, final comparison unit 95 latches the motion vector candidate value output from comparison unit 4 and the corresponding estimation value, and compares the estimation value with that received from comparison unit 94 to determine a final motion vector. Accordingly, in parallel with the calculating operation in the fractional precision at calculating unit 92, calculating unit 2 can perform the calculation of estimation values for the next template block.
When the target picture is a unidirectionally predictive picture, search area pixel data PY sent to calculating unit 2 are also sent to motion detecting unit MD#C in parallel. When the target picture is a bi-directionally predictive picture, the search area for the half-pel search is determined based on the result of motion detection by motion detecting unit MD#C, and corresponding data are read out from the reference picture memory.
The motion vector detecting system for the eighth embodiment has a configuration identical to that shown in
When the buffer memories each store pixel data of the size of a respective search area, buffer memory read control circuit 106 controls reading of the buffer memories such that vertically aligned pixel data are sequentially read out from the pixel data in the search area. When the buffer memories each have a capacity large enough to store the pixel data of the search window and also have a first-in first-out configuration, buffer memory read control circuit 106 simply controls activation/inactivation of the buffer memories.
Reference picture memory read control circuit 105 reads out the pixel data, written in a raster scan order, in the sequence of vertically consecutive pixels in the search area, and stores them sequentially in the buffer memories. This configuration of the buffer memory is merely an example. The buffer memory may be formed of a random access memory, and may have a configuration in which address conversion is performed between the writing and reading of pixel data, such that the pixel data are stored in a raster scan order and then read out sequentially in the vertical direction, column by column of the search window.
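The address conversion between raster-order writing and column-order reading can be sketched as follows, modeling the buffer as a flat array of raster addresses (hypothetical helper name).

```python
def column_read_order(width, height):
    """Raster-buffer addresses in the order a column-wise reader visits them.

    Pixels are written at raster address y * width + x; the reader walks
    each search-window column top to bottom before moving one column right.
    """
    return [y * width + x for x in range(width) for y in range(height)]
```

For a 3-pixel-wide, 2-pixel-high window, the reader visits raster addresses 0, 3, 1, 4, 2, 5, i.e. each column in full before the next.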
Referring to the configuration of the control unit shown in
If picture characteristic designating signal B/P indicates that the target picture is a unidirectionally predictive picture, SAD address generating circuit 103 ignores template block address AD. SAC address generating circuit 102 generates the address for motion detecting unit MD#C according to template block address AD. Reference picture memory read control circuit 105 accesses the reference picture memory written less recently according to picture characteristic designating signal B/P, and sequentially reads out the pixel data for search area SAC and stores them in the corresponding buffer memory. Buffer memory read control circuit 106 activates the corresponding buffer memory according to picture characteristic designating signal B/P, and sequentially reads out the pixel data for search area SAC to motion detecting unit MD#C. When motion detecting unit MD#C completes its motion detecting operation, SAD address generating circuit 103 is activated to generate the address for search area SAD based on motion vector MV detected in an integer precision. Thereafter, reference picture memory read control circuit 105 again accesses the corresponding reference picture memory according to the address received from SAD address generating circuit 103, and reads out the necessary pixel data to the corresponding buffer memory. Buffer memory read control circuit 106 reads the pixel data from the corresponding buffer memory again, and sends the search area pixel data to motion detecting unit MD#D for motion detection in a half-pel precision.
In the eighth embodiment, the center of the search area is set at the real rear point (0, 0) upon processing of a unidirectionally predictive picture. This is because the motion vector is detected within search area SAD in the half-pel precision, and thus, it is unnecessary to offset the center point from the real rear point.
As described above, according to the eighth embodiment of the present invention, motion detecting units of an integer precision and of a half-pel precision are used adaptively depending on the characteristic (or method of predictive coding) of the picture of interest. Therefore, efficient motion vector detection is realized.
In the above configurations, a template block sub-sampled into 4 pixel columns and 16 pixel rows is used when the number of estimation pixels is 64, supposing the case of coding a frame picture. If the target picture is a field picture, however, a template block sub-sampled into 8 pixel rows and 8 pixel columns may be used for the same 64 estimation pixels.
Furthermore, for the configuration of motion detecting unit MD#D of a half-pel precision, a processor disclosed by Ishihara et al. in the above-mentioned ISSCC paper may be employed. The detection precision of motion detecting unit MD#D may be another fractional precision, such as ¼-pel precision.
As described above, according to the present invention, motion detecting units having different numbers of estimation pixels are provided and are adaptively used according to the characteristics of a target picture. This enables efficient motion vector detection, and also restricts an increase in the amount of hardware, and hence, an increase in power consumption.
Although the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims.
Matsumura, Tetsuya, Ishihara, Kazuya, Hanami, Atsuo, Kumaki, Satoshi
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Oct 07 1999 | HANAMI, ATSUO | Mitsubishi Denki Kabushiki Kaisha | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 010345 | /0012 | |
Oct 07 1999 | MATSUMURA, TETSUYA | Mitsubishi Denki Kabushiki Kaisha | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 010345 | /0012 | |
Oct 07 1999 | KUMAKI, SATOSHI | Mitsubishi Denki Kabushiki Kaisha | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 010345 | /0012 | |
Oct 07 1999 | ISHIHARA, KAZUYA | Mitsubishi Denki Kabushiki Kaisha | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 010345 | /0012 | |
Oct 22 1999 | Renesas Technology Corp. | (assignment on the face of the patent) | / | |||
Sep 08 2003 | Mitsubishi Denki Kabushiki Kaisha | Renesas Technology Corp | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 014502 | /0289 | |
Apr 01 2010 | Renesas Technology Corp | Renesas Electronics Corporation | CHANGE OF NAME SEE DOCUMENT FOR DETAILS | 024973 | /0001 |