An image processing method and a device thereof are provided. The device includes a capture device and a processor. The capture device receives a plurality of frames and compares at least two adjacent frames to obtain an area selection signal according to a differential value therebetween. The processor is connected to the capture device for receiving the area selection signal and separating each of the adjacent frames into at least two areas. An image processing step is respectively performed on each of the areas of the adjacent frames, and the images of the areas are then combined to obtain a resulted frame.
1. An image processing method adapted to an image processing device, the image processing method comprising:
receiving a plurality of frames;
comparing at least two adjacent frames to obtain an area selection signal;
separating each of the frames into at least two areas according to the area selection signal, wherein the area selection signal is used for indicating a position of one of the two areas;
respectively performing an image process step to each of the areas; and
combining the corresponding processed areas to obtain a resulted frame.
13. An image processing device, comprising:
a capture device, for receiving a plurality of frames and comparing at least two adjacent frames to obtain an area selection signal according to a distribution situation of saw-tooth images; and
a processor, connected to the capture device, for receiving the area selection signal, separating each frame of at least the two adjacent frames into at least two areas according to the area selection signal, which is used for indicating a position of one of the two areas, respectively performing an image process step to each of the areas, and combining the correspondingly processed areas to obtain a resulted frame.
2. The image processing method of
according to the area selection signal, an inverse telecine (IVTC) process mode is performed to the area when one of the areas is created by a pull down mode.
3. The image processing method of
4. The image processing method of
5. The image processing method of
6. The image processing method of
according to the area selection signal, a motion adaptive process mode is performed to the area when one of the areas is created by an interlace mode.
7. The image processing method of claim 6, wherein the motion adaptive process mode comprises combining each of the two adjacent frames to obtain a complete frame.
8. The image processing method of
9. The image processing method of
scanning each corresponding vertical line of at least the two frames of the adjacent frames;
subtracting each corresponding pixel in at least two of the corresponding vertical lines; and
detecting distribution situation of saw-tooth images.
10. The image processing method of
scanning each corresponding horizontal line of at least the two frames of the adjacent frames;
comparing change of image position of the corresponding horizontal line in at least the two adjacent frames; and
detecting distribution situation of saw-tooth image.
11. The image processing method of
scanning each corresponding specific area block of at least the two frames of the adjacent frames;
subtracting each corresponding pixel in at least two of the corresponding specific area blocks; and
recording distribution situation of saw-tooth image in each of the specific area blocks.
12. The image processing method of
after comparing at least the two frames of the adjacent frames, deciding the area selection signal according to the distribution situation of saw-tooth images.
14. The image processing device of
according to the area selection signal, an inverse telecine (IVTC) process mode is performed to the area when one of the areas is created by a pull down mode.
15. The image processing device of
16. The image processing device of
17. The image processing device of
18. The image processing device of
according to the area selection signal, a motion adaptive process mode is performed to the area when one of the areas is created by an interlace mode.
19. The image processing device of
20. The image processing device of
This application claims the priority benefit of Taiwan application serial no. 94113398, filed on Apr. 27, 2005. All disclosure of the Taiwan application is incorporated herein by reference.
1. Field of the Invention
This invention generally relates to an image processing method and a device thereof, and more particularly, to an image processing method and a device thereof for processing images with running captions.
2. Description of Related Art
A conventional film mode, for example, a common record mode of a movie film, has 24 complete frames per second; that is, the frame rate is 24 frames/s (or the play frequency is 24 Hz). Other film modes record 30 complete frames per second, i.e. a play frequency of 30 Hz. Conventional broadcasting methods for visual signals, such as cable television and wireless television, generally comprise broadcasting modes such as NTSC (National Television System Committee) and PAL (Phase Alternating Line). The broadcasting frequency of NTSC is 60 Hz, which means that 60 interlaced frames per second are received at the end-user terminal from the television station; the broadcasting frequency of PAL is 50 Hz. In the interlaced frames, for example, in the odd-number frames only the scan lines 1, 3, 5, etc. (the so-called odd-number scan lines) display images, and the even-number scan lines do not display any images; in the even-number frames only the scan lines 2, 4, 6, etc. (the so-called even-number scan lines) display images, and the odd-number scan lines do not, and vice versa.
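The interlacing described above can be sketched as follows; this is a minimal illustration, assuming NumPy arrays, and the function name and shapes are illustrative rather than taken from the patent. Row index 0 here corresponds to "scan line 1" in the description; blank scan lines are simply filled with zeros.

```python
import numpy as np

def split_fields(frame):
    """Split a progressive frame into (odd, even) interlaced fields.

    The odd field keeps scan lines 1, 3, 5, ... (row indices 0, 2, 4, ...)
    and the even field keeps scan lines 2, 4, 6, ...; the scan lines that
    do not display images are left as zeros.
    """
    odd = np.zeros_like(frame)
    even = np.zeros_like(frame)
    odd[0::2] = frame[0::2]    # odd-number scan lines
    even[1::2] = frame[1::2]   # even-number scan lines
    return odd, even

frame = np.arange(16).reshape(4, 4)
odd, even = split_fields(frame)
```

Weaving the two fields back together recovers the complete frame, which is the basic idea behind the deinterlacing processes discussed later.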
Therefore, for a film in the 30 Hz or 24 Hz film mode, a 2:2 pull down or a 3:2 pull down process, respectively, should be performed before transmission in order to transmit via the NTSC standard at 60 Hz.
Likewise, for an original film in the 24 Hz film mode, a 2:2 pull down process should be performed before transmission in order to transmit via the PAL standard at 50 Hz.
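The 3:2 pull down mentioned above can be sketched as a cadence that maps each pair of 24 Hz film frames to five fields (three from the first frame, two from the second), giving 60 fields per second. The function name and the (frame, parity) representation are assumptions made for illustration only.

```python
def pull_down_3_2(frames):
    """Map film frames to a list of (frame, parity) field descriptors.

    Each odd-indexed film frame contributes 3 fields and each
    even-indexed one contributes 2, alternating; field parity
    ('o' = odd field, 'e' = even field) alternates throughout.
    """
    fields = []
    counts = [3, 2]  # the repeating 3-2 cadence
    for i, frame in enumerate(frames):
        for _ in range(counts[i % 2]):
            parity = 'o' if len(fields) % 2 == 0 else 'e'
            fields.append((frame, parity))
    return fields

fields = pull_down_3_2(['A', 'B', 'C', 'D'])
# 4 film frames yield 10 fields, matching the 60/24 = 5/2 ratio
```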
Currently, a higher quality playback mode of an image playback device at the end-user terminal, for example, a High Definition TV (HDTV), first detects whether the received frames are interlaced frames processed with 2:2 pull down or 3:2 pull down in order to obtain better display definition; then, by an Inverse Telecine (IVTC) process, the interlaced frames are converted into complete frames before they are played back. For example, after receiving the interlaced frames 201e to 224o shown in
However, in the conventional broadcasting mode of the visual signal, running captions are frequently added. In general, the running captions are added directly to the interlaced broadcasting frames, for example, to the interlaced frames 101e, 101o, 102e, 102o to 130e and 130o as shown in
An object of the present invention is to provide an image processing method for separating the received frame into at least two different areas and respectively performing different image processes on the different areas, so as to obtain a better resulted frame.
Another object of the present invention is to provide an image processing device for separating the received frame into at least two different areas and separately performing different image processes on the different areas, so as to obtain a better resulted frame.
The present invention provides an image processing method, which comprises the following steps. First, a plurality of frames is received. Next, at least two adjacent frames are compared to obtain an area selection signal. Then, each of the frames is separated into at least two areas according to the area selection signal. Next, an image process is respectively performed on each of the areas. Finally, the corresponding processed areas are combined to obtain a resulted frame.
According to an embodiment, the present invention provides an image processing device comprising, for example, a capture device and a processor. The capture device receives a plurality of frames and compares at least two adjacent frames to obtain an area selection signal according to a differential value therebetween. The processor is connected to the capture device for receiving the area selection signal and separating each of the adjacent frames into at least two areas according to the area selection signal. An image processing step is respectively performed on each of the areas. Moreover, the corresponding processed areas are combined to obtain a resulted frame.
In accordance with an embodiment of the present invention, the image processing step comprises, for example, performing an inverse telecine (IVTC) process mode on an area according to the area selection signal when that area is created by a pull down mode. Further, the inverse telecine process mode comprises, for example, performing an inverse process of the pull down mode. Further, an area obtained by the pull down mode comprises a frame which is created from a frame of the film mode by the pull down mode. Further, the pull down mode comprises a 2:2 pull down mode, a 3:2 pull down mode, or other pull down modes of any proportion.
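A minimal sketch of the inverse telecine idea for a 3:2 pull down follows: once the repeating 3-2 cadence is detected, the duplicated fields are discarded and one copy of each original film frame is kept. The (frame, parity) representation and the function name are illustrative assumptions, not the patent's implementation; a real IVTC would also lock onto the cadence phase and merge field pairs pixel-wise.

```python
def inverse_telecine_3_2(fields):
    """Recover film frames from fields produced by a 3:2 pull down.

    `fields` is a list of (frame, parity) pairs; consecutive fields
    originating from the same source frame are collapsed back into a
    single complete frame.
    """
    frames = []
    for frame, _parity in fields:
        if not frames or frames[-1] != frame:
            frames.append(frame)
    return frames

# Fields as a 3:2 pull down would emit them for film frames A, B, C, D
fields = [('A', 'o'), ('A', 'e'), ('A', 'o'), ('B', 'e'), ('B', 'o'),
          ('C', 'e'), ('C', 'o'), ('C', 'e'), ('D', 'o'), ('D', 'e')]
recovered = inverse_telecine_3_2(fields)
```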
In accordance with an embodiment of the present invention, the method of respectively performing the image processing steps on each of the areas comprises, for example, performing a motion adaptive process mode on an area according to the area selection signal when that area is created by an interlace mode. Further, the motion adaptive process mode comprises combining each two of the adjacent frames to obtain a complete frame. Further, the area created by the interlace mode comprises, for example, a running caption.
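One common form of the motion adaptive process mode can be sketched as below: where two adjacent fields agree (a static image), the fields are woven together; where they differ (motion, such as a running caption), the missing scan lines are instead interpolated from the current field. All names, the threshold, and the per-row loop are assumptions made for this illustration and are not taken from the patent.

```python
import numpy as np

def motion_adaptive(prev_field, cur_field, parity, threshold=10):
    """Deinterlace cur_field (only rows of `parity` valid) into a full frame.

    For each missing row, compare the woven line from the previous field
    against a spatial interpolation of the neighboring lines; keep the
    woven line when they agree (static), the interpolation when they differ.
    """
    out = cur_field.astype(float).copy()
    missing = 1 - parity                     # row parity cur_field lacks
    h = cur_field.shape[0]
    for y in range(missing, h, 2):
        above = out[y - 1] if y > 0 else out[y + 1]
        below = out[y + 1] if y + 1 < h else out[y - 1]
        interp = (above + below) / 2.0       # "bob": spatial interpolation
        weave = prev_field[y].astype(float)  # "weave": line from prev field
        motion = np.abs(weave - interp) > threshold
        out[y] = np.where(motion, interp, weave)
    return out
```

On a static area this reproduces the complete frame exactly, preserving vertical detail; on a moving area it avoids the saw-tooth artifacts that naive weaving would introduce.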
In accordance with an embodiment of the present invention, a method of comparing at least two adjacent frames comprises: each corresponding vertical line of the two adjacent frames is scanned; further, the changes of the image positions of the corresponding vertical lines in the two adjacent frames are compared to obtain an area selection signal.
In accordance with another embodiment of the present invention, a method of comparing at least two frames of the adjacent frames comprises: each corresponding specific area block of the two frames of the adjacent frames is scanned; further, the changes of the corresponding image positions of the corresponding specific area blocks are compared to obtain an area selection signal.
In accordance with another embodiment of the present invention, a method of obtaining the area selection signal comprises: after comparing at least the two frames of the adjacent frames, the area selection signal is decided according to a differential value of the frames.
The above is a brief description of some deficiencies in the prior art and advantages of the present invention. Other features, advantages and embodiments of the invention will be apparent to those skilled in the art from the following description, accompanying drawings and appended claims.
In accordance with an embodiment of the present invention, the method of comparing two or more frames of the adjacent frames comprises, for example, a scanning method, which includes, for example, horizontal scanning, vertical scanning, or area block scanning. In an embodiment of the present invention, the vertical scanning method comprises: first, each corresponding vertical line of at least two frames of the adjacent frames is scanned; a subtraction is performed between each corresponding pixel of the corresponding vertical lines; the differential values obtained from the subtraction of each pixel in each vertical line are accumulated; and the accumulated values of the vertical lines are compared.
In accordance with another embodiment of the present invention, the horizontal scanning method comprises: first, each corresponding horizontal line of at least two frames of the adjacent frames is scanned; a subtraction is performed between each corresponding pixel in the corresponding horizontal lines; the differential values obtained from the subtraction of each pixel in each horizontal line are accumulated; and the accumulated values of the horizontal lines are compared.
In accordance with another embodiment of the present invention, the area block scanning method comprises: first, each corresponding specific area block of at least two frames of the adjacent frames is scanned; a subtraction is performed between each corresponding pixel in the corresponding specific area blocks; the differential values obtained from the subtraction of each pixel in each specific area block are accumulated; and the accumulated values of the specific area blocks are compared.
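The vertical scanning comparison described above can be sketched as follows: for each column (vertical line) of two adjacent frames, corresponding pixels are subtracted, the absolute differences are accumulated per column, and each accumulated value is compared against a predetermined value to flag the columns belonging to a changed area (for example, a running caption). The function name and the predetermined value are illustrative assumptions.

```python
import numpy as np

def column_difference(frame_a, frame_b, predetermined=50):
    """Accumulate per-column pixel differences between two adjacent frames.

    Returns a boolean per vertical line: True where the accumulated
    differential value exceeds the predetermined value.
    """
    # Cast to int so uint8 subtraction cannot wrap around
    diff = np.abs(frame_a.astype(int) - frame_b.astype(int))
    per_column = diff.sum(axis=0)        # accumulate along each vertical line
    return per_column > predetermined

a = np.zeros((4, 6), dtype=np.uint8)
b = a.copy()
b[:, 4:] = 200                            # strong change in the last two columns
selection = column_difference(a, b)
```

The horizontal and area-block variants differ only in the axis or window over which the differences are accumulated (`axis=1` for horizontal lines, a blockwise sum for area blocks).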
When the differential value obtained from any of the above-mentioned scanning methods is beyond a predetermined value, an area selection signal 732 can be obtained, for example, as shown in
Further, as shown in
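The separating and combining steps described above can be sketched as a simple pipeline: the frame is split at the boundary indicated by the area selection signal, each area is handed to its own process mode, and the processed areas are recombined into the resulted frame. Here `ivtc` and `motion_adaptive` are hypothetical placeholders standing in for the two process modes; the single horizontal boundary is also an illustrative assumption.

```python
import numpy as np

def process_frame(frame, boundary_row, ivtc, motion_adaptive):
    """Separate a frame at boundary_row, process each area, and recombine."""
    film_area = frame[:boundary_row]     # e.g. area created by a pull down mode
    caption_area = frame[boundary_row:]  # e.g. area created by an interlace mode
    return np.vstack([ivtc(film_area), motion_adaptive(caption_area)])

frame = np.arange(12).reshape(4, 3)
# Identity / +1 lambdas stand in for the two process modes
result = process_frame(frame, 2, lambda a: a, lambda a: a + 1)
```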
Therefore, the image processing method of the present invention not only maintains the high definition of the film mode in the original frame, but also obtains clearly and easily distinguishable running captions. In addition, the saw-tooth phenomenon occurring in each frame or at the intersection areas between the running captions and the frames can be avoided.
It is important to note that although the illustrated embodiment herein refers to the explanation of the present invention, the embodiment is presented by way of example and not by way of limitation. In other embodiments of the present invention, for example, the frame is not necessarily separated into only two areas. Instead, according to the source mode, the received frame can be separated into at least two different areas, for example, into at least an image area formed by the pull down mode and a running caption area formed by the interlace mode; further, different image processes are respectively performed on the areas formed by the different modes.
Further, in accordance with another embodiment of the present invention, for example, when all the received images are formed in one and the same mode, as shown in
Further, in accordance with another embodiment of the present invention, for example, when the received images are formed in other modes, for example, as shown in
Further, the present invention provides an image processing device.
In summary, in the image processing method and device of the present invention, an area selection signal is obtained according to the received frame; the frame is separated into at least two different areas according to the source mode of the received frame, which is detected according to the area selection signal; and different image processes are respectively performed on the different areas. Therefore, the image processing method of the present invention maintains the higher definition of the film mode in the original frame and obtains clearly and easily distinguishable running captions. In addition, the saw-tooth phenomenon occurring in each frame or at the intersection areas between the running captions and the frames is avoided.
The above description provides a full and complete description of the preferred embodiments of the present invention. Various modifications, alternate constructions, and equivalents may be made by those skilled in the art without changing the scope or spirit of the invention. Accordingly, the above description and illustrations should not be construed as limiting the scope of the invention, which is defined by the following claims.
Chen, Chang-Lun, Chiu, Chui-Hsun, Wang, Ho-Lin, Chen, Tsui-Chin, Huang, Hsiao-Ming, Wang, Dze-Chang
Assignment executed Mar 30 2012: Novatek Microelectronics Corp. (assignment on the face of the patent).