A three-dimensional video is appropriately displayed. A frame processing device includes: a frame extraction module configured to sequentially extract image frames for left eye and for right eye from a three-dimensional video signal; a frame group generation module configured to alternately generate frame groups for left eye and for right eye based on the extracted image frames for left eye and for right eye; a frame storage module configured to sequentially store the alternately generated frame groups for left eye and for right eye; a video processing module configured to sequentially extract the image frames from the frame storage module and sequentially rewrite a display screen; and a display invalidation module configured to invalidate the display on the display screen during periods of rewriting the image frame for left eye to that for right eye and the image frame for right eye to that for left eye.
5. A frame processing method, comprising:
extracting a first image frame from a first video signal and a second image frame from a second video signal from a three-dimensional video signal containing the first and the second video signals;
sequentially extracting a plurality of third image frames from a two-dimensional video signal containing the third image frames;
storing the extracted first, second, and third image frames;
generating an interpolation frame from successive ones of the stored third image frames, and inserting the interpolation frame between the successive third image frames to increase a display output frame frequency;
sequentially rewriting a display on a display screen by either: (1) alternating between a first procedure of rewriting the display with the stored first image frame a plural number of times to increase the display output frame frequency, and a second procedure of rewriting the display with the stored second image frame the plural number of times to increase the display output frame frequency, or (2) rewriting the display on the display screen with the third image frames and the interpolation frame;
invalidating the display on the display screen during a first rewriting period in which one of the first image frame and the second image frame is rewritten with a different one of the first image frame and the second image frame; and
not invalidating the display on the display screen during a second rewriting period in which one of the first image frame and the second image frame is rewritten with the same one of the first image frame and the second image frame.
4. A television receiving apparatus, comprising:
a first frame extraction module configured to extract a first image frame from a first video signal and a second image frame from a second video signal from a three-dimensional video signal containing the first and the second video signals;
a second frame extraction module configured to sequentially extract a plurality of third image frames from a two-dimensional video signal containing the third image frames;
a frame storage module configured to store the first, the second, and the third image frames extracted by the first and the second frame extraction modules;
an inserting module configured to generate an interpolation frame from successive third image frames stored in the frame storage module and insert the interpolation frame between the successive third image frames to increase a display output frame frequency;
a display rewrite module configured to sequentially rewrite a display on a display screen by either: (1) alternating between a first procedure of rewriting the display with the first image frame stored in the frame storage module a plural number of times to increase the display output frame frequency, and a second procedure of rewriting the display with the second image frame stored in the frame storage module the plural number of times to increase the display output frame frequency, or (2) rewriting the display on the display screen with the third image frames and the interpolation frame; and
a display module configured to invalidate the display on the display screen during a first rewriting period in which the display rewrite module rewrites one of the first image frame and the second image frame with a different one of the first image frame and the second image frame, and configured to not invalidate the display on the display screen during a second rewriting period in which the display rewrite module rewrites one of the first image frame and the second image frame with the same one of the first image frame and the second image frame.
1. A frame processing device, comprising:
a first frame extraction module configured to extract a first image frame from a first video signal and a second image frame from a second video signal from a three-dimensional video signal containing the first and the second video signals;
a second frame extraction module configured to sequentially extract a plurality of third image frames from a two-dimensional video signal containing the third image frames;
a frame storage module configured to store the first, the second, and the third image frames extracted by the first and the second frame extraction modules;
an inserting module configured to generate an interpolation frame from successive third image frames stored in the frame storage module and insert the interpolation frame between the successive third image frames to increase a display output frame frequency;
a display rewrite module configured to sequentially rewrite a display on a display screen with the extracted image frames by either: (1) alternating between a first procedure of rewriting the display with the first image frame stored in the frame storage module a plural number of times to increase the display output frame frequency, and a second procedure of rewriting the display with the second image frame stored in the frame storage module the plural number of times to increase the display output frame frequency, or (2) rewriting the display on the display screen with the third image frames and the interpolation frame; and
a display module configured to invalidate the display on the display screen during a first rewriting period in which the display rewrite module rewrites one of the first image frame and the second image frame with a different one of the first image frame and the second image frame, and configured to not invalidate the display on the display screen during a second rewriting period in which the display rewrite module rewrites one of the first image frame and the second image frame with the same one of the first image frame and the second image frame.
8. A frame processing device, comprising:
a first frame extraction module configured to sequentially extract image frames for a left eye and image frames for a right eye from a three-dimensional video signal which includes a video signal for the left eye and a video signal for the right eye;
a first frame group generation module configured to alternately generate a frame group for the left eye in which a plurality of image frames for left eye are arranged and a frame group for the right eye in which a plurality of image frames for the right eye are arranged, based on the sequentially extracted image frames for the left eye and for the right eye;
a frame storage module configured to sequentially store the frame group for the left eye and the frame group for the right eye;
a display rewrite module configured to sequentially read the frame group for the left eye and the frame group for the right eye stored in the frame storage module in an order of storing, and to sequentially rewrite a display on a display screen with the frame groups;
a display module configured to display on the display screen during a first period in which the display rewrite module rewrites the image frame for the left eye with a next image frame for the left eye, and during a second period in which the display rewrite module rewrites the image frame for the right eye with a next image frame for the right eye;
a signal input module configured to be capable of inputting a two-dimensional video signal representing two-dimensional video and the three-dimensional video signal individually thereinto;
a signal determination module configured to determine whether the video signal inputted by the signal input module is the three-dimensional video signal or the two-dimensional video signal;
a second frame extraction module configured to sequentially extract image frames for two-dimensional video from the inputted two-dimensional video signal if the signal determination module determines that the video signal inputted by the signal input module is the two-dimensional video signal; and
a second frame group generation module configured to generate an interpolation frame to increase a frame frequency, based on a relationship between successive image frames in the image frames for two-dimensional video stored in the frame storage module, and to insert the interpolation frame between the successive image frames,
wherein the frame storage module is configured to sequentially store the image frame for two-dimensional video sequentially extracted by the second frame extraction module,
wherein the second frame group generation module is configured to generate the interpolation frame based on the image frame for two-dimensional video sequentially stored in the frame storage module, and
wherein the display rewrite module is further configured to rewrite the display on the display screen with the image frames for the two-dimensional video, which are sequentially extracted from the two-dimensional video signal and are stored in the frame storage module, and the interpolation frame.
2. The frame processing device according to
wherein the display screen is a display screen of a display device having a light source and a display element performing display by receiving supply of light from the light source; and
wherein the display module comprises a display invalidation module configured to stop light emission of the light source during a first period in which the display rewrite module rewrites the first image frame with the second image frame and during a second period in which the display rewrite module rewrites the second image frame with the first image frame.
3. The frame processing device according to
a signal input module configured to be capable of inputting the two-dimensional video signal and the three-dimensional video signal thereinto; and
a signal determination module configured to determine whether the video signal inputted by the signal input module is the three-dimensional video signal or the two-dimensional video signal.
6. The frame processing method according to
wherein the display screen is a display screen of a display device having a light source and a display element performing display by receiving supply of light from the light source, and
wherein the invalidating the display comprises stopping light emission of the light source during a first period in which the first image frame is rewritten with the second image frame and during a second period in which the second image frame is rewritten with the first image frame.
7. The frame processing method according to
inputting the two-dimensional video signal and the three-dimensional video signal; and
determining whether the inputted video signal is the three-dimensional video signal or the two-dimensional video signal.
This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2008-335308, filed on Dec. 26, 2008; the entire contents of which are incorporated herein by reference.
1. Field of the Invention
The present invention relates to a frame processing device processing an image frame, a television receiving apparatus including the frame processing device, and a frame processing method.
2. Description of the Related Art
A television receiving apparatus is known which includes a display function for three-dimensional video. Various display modes are proposed to realize the three-dimensional video. In one of them, the images viewed by the left and right eyes of a viewer are transmitted independently. The image for left eye and the image for right eye are then alternately displayed by a display device while changing the plane of polarization. Further, the viewer wears glasses which alternately switch the plane of polarization in cooperation with the display, so that the left and right eyes view the independent images.
Here, a digital broadcast receiver is proposed which determines whether video data is the one corresponding to the three-dimensional broadcast or the one corresponding to an ordinary broadcast, and appropriately displays any of the video data on its monitor (refer to, for example, JP-A 10-257525 (KOKAI)).
Incidentally, to display the above-described three-dimensional video, it is important to alternately display the image for left eye and the image for right eye without mixing them. Generally, when a display device updates an image, the previously displayed image is sequentially rewritten scanning line by scanning line from the top to the bottom of the screen. Accordingly, for example, at the timing of scanning a middle portion of the screen, there is a display state in which the upper half of the display screen displays the image for right eye and the lower half displays the image for left eye. When the viewer views this screen, the viewer recognizes an image without appearance of solidity.
An object of the present invention is to provide a frame processing device, a television receiving apparatus and a frame processing method each capable of appropriately displaying three-dimensional video.
A frame processing device according to an aspect of the present invention includes: a frame extraction module configured to sequentially extract image frames for left eye and image frames for right eye from a three-dimensional video signal containing a video signal for left eye and a video signal for right eye; a frame group generation module configured to alternately generate a frame group for left eye in which a plurality of image frames for left eye are arranged and a frame group for right eye in which a plurality of image frames for right eye are arranged, based on the image frames for left eye and for right eye sequentially extracted by the frame extraction module; a frame storage module configured to sequentially store the frame group for left eye and the frame group for right eye alternately generated by the frame group generation module; a display rewrite module configured to sequentially extract the image frames respectively from the frame group for left eye and the frame group for right eye stored in the frame storage module, and to sequentially rewrite a display on a display screen to the extracted image frames; and a display invalidation module configured to invalidate the display on the display screen during a first period during which the display rewrite module rewrites the image frame for left eye to the image frame for right eye, and during a second period during which the display rewrite module rewrites the image frame for right eye to the image frame for left eye.
More specifically, the display screen is in the non-display state during the period of rewriting the image frame for left eye to the image frame for right eye and during the period of rewriting the image frame for right eye to the image frame for left eye. In other words, the display screen is selectively brought to the display state only during the period of rewriting the image frame for left eye to the subsequent image frame for left eye and during the period of rewriting the image frame for right eye to the subsequent image frame for right eye.
Accordingly, even in the transition period of rewriting the display screen, display of the image frame for left eye and the image frame for right eye in a mixed manner is avoided, thereby making it possible to appropriately display the three-dimensional video. Further, a television receiving apparatus including such a frame processing device can be constructed.
Further, a frame processing method according to an aspect of the present invention includes: sequentially extracting image frames for left eye and image frames for right eye from a three-dimensional video signal containing a video signal for left eye and a video signal for right eye; alternately generating a frame group for left eye in which a plurality of image frames for left eye are arranged and a frame group for right eye in which a plurality of image frames for right eye are arranged, based on the sequentially extracted image frames for left eye and for right eye; sequentially storing the alternately generated frame group for left eye and frame group for right eye; sequentially extracting the image frames respectively from the frame group for left eye and the frame group for right eye stored in the frame storage module, and sequentially rewriting a display on a display screen to the extracted image frames; and invalidating the display on the display screen during a first period of rewriting the image frame for left eye to the image frame for right eye, and during a second period of rewriting the image frame for right eye to the image frame for left eye.
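The selective invalidation described above amounts to blanking the screen exactly on the eye-change rewrite periods and leaving it lit on same-eye rewrites. As an illustrative sketch only (not part of the disclosed embodiment; the string frame labels and list representation are assumptions made for illustration), the rule can be expressed as:

```python
def invalidation_schedule(frames):
    # Frames are hypothetical labels such as 'L1' or 'R1'; the first
    # character identifies the eye. Each rewrite period lies between
    # two consecutive frames; invalidate (blank) the display only
    # when the eye changes across that period.
    eyes = [f[0] for f in frames]
    return [a != b for a, b in zip(eyes, eyes[1:])]

# Display order from the embodiment: L1, L2, R1, R2, L3, L4, R3, R4
seq = ["L1", "L2", "R1", "R2", "L3", "L4", "R3", "R4"]
print(invalidation_schedule(seq))
# [False, True, False, True, False, True, False]
```

Only the L-to-R and R-to-L transitions are flagged, so mixed left/right display never becomes visible.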
Hereinafter, embodiments for carrying out the present invention will be described with reference to the drawings.
The television receiving apparatus 10 of this embodiment is a digital television broadcast receiving apparatus as illustrated in
Further, the DTV apparatus 10 is configured such that a first memory card 19, for example, an SD (Secure Digital) memory card, an MMC (Multimedia Card) and a memory stick can be attached thereto and detached therefrom so that recording and reproduction of information such as programs and photographs is performed to/from the first memory card 19. Furthermore, the DTV apparatus 10 is configured such that a second memory card (IC card) 20 on which, for example, contract information and the like are recorded can be attached thereto and detached therefrom so that recording and reproduction of information is performed to/from the second memory card 20.
Moreover, the DTV apparatus 10 includes a first LAN (Local Area Network) terminal 21, a second LAN terminal 22, a USB (Universal Serial Bus) terminal 23 and an i.LINK® terminal 24. Among them, the first LAN terminal 21 is used as a LAN compatible HDD (Hard Disk Drive) dedicated port. More specifically, the first LAN terminal 21 is used to record and reproduce information by Ethernet® to/from the LAN compatible HDD that is a NAS (Network Attached Storage) connected thereto.
By providing the first LAN terminal 21 as the LAN compatible HDD dedicated port as described above, recording of program information with Hi-Vision (high-definition) image quality can be stably performed on the HDD, independent of other network environments and network usage. Further, the second LAN terminal 22 is used as a general LAN compatible port using Ethernet®, and is connected with devices such as a LAN compatible HDD, a DVD (Digital Versatile Disk) recorder with a built-in HDD, a PC (Personal Computer) and the like via, for example, a hub, and is used for transmitting information to/from such devices.
Note that the above-described PC has a function for operating as a server device of contents in a home network, and can be used as a UPnP (Universal Plug and Play) compatible device having a service of providing URI (Uniform Resource Identifier) information necessary for access to the contents. Further, since the digital information that the above-described DVD recorder communicates via the second LAN terminal 22 is only for a control system, it is necessary to provide a dedicated analog transmission path in order to transmit analog video and audio information to/from the DTV apparatus 10. Furthermore, the second LAN terminal 22 enables connection to the Internet (network) via a router (broadband router) or the like connected to the hub.
The USB terminal 23 is used as a general USB compatible port and connected with USB devices such as a cellular phone, a digital camera, a card reader/writer for a memory card, an HDD, a keyboard and the like, for example, via the hub and used to transmit information to/from the USB devices. The i.LINK® terminal 24 is serially connected with, for example, an AV-HDD, a D (Digital)-VHS (Video Home System) and the like, and used to transmit information to/from these devices.
Here, the DTV apparatus 10 of this embodiment can receive the digital television broadcast (two-dimensional video broadcast) and the three-dimensional video broadcast (digital three-dimensional broadcast), which will be described later in detail, for BS/CS/terrestrial digital broadcasts. Further, the DTV apparatus 10 receives broadcast waves in which two-dimensional or three-dimensional video data and audio data (and text information) are multiplexed, via antennas 25 and 26 as illustrated in
Specifically, as illustrated in
The PSK demodulator 27b demodulates the broadcast signal selected in the tuner 27a by the control signal from the controller 30 to obtain a transport stream containing the desired program, and outputs it to a TS decoder 27c. The TS decoder 27c performs TS decoding processing of the signal multiplexed by the transport stream (TS) based on the control signal from the controller 30, and outputs a PES (Packetized Elementary Stream), obtained by depacketing the digital video signal and audio signal of the desired program, to an STD buffer in a signal processing module 31. Further, the TS decoder 27c outputs section information sent by the digital broadcast to the signal processing module 31.
On the other hand, the terrestrial digital television broadcast signal received by the antenna 26 for receiving the terrestrial broadcast is supplied to a tuner 28a for the terrestrial digital broadcast via an input terminal 28. The tuner 28a selects a broadcast signal of a desired channel by a control signal from the controller 30, and outputs the selected broadcast signal to an OFDM (Orthogonal Frequency Division Multiplexing) demodulator 28b. The OFDM demodulator 28b demodulates the broadcast signal selected in the tuner 28a by the control signal from the controller 30 to obtain a transport stream containing the desired program, and outputs it to a TS decoder 28c.
The TS decoder 28c performs TS decoding processing of the signal multiplexed by the transport stream (TS) based on the control signal from the controller 30, and outputs a PES obtained by depacketing the digital video signal and audio signal of the desired program to the STD buffer in the signal processing module 31. Further, the TS decoder 28c outputs section information sent by the digital broadcast to the signal processing module 31. Further, an analog demodulator 28e demodulates an analog broadcast signal selected in a tuner 28d, and outputs it to the signal processing module 31. Furthermore, the PSK demodulator 27b or the OFDM demodulator 28b demodulates the broadcast wave of the three-dimensional video selected by the tuner 27a or the tuner 28a. Moreover, the TS decoder 27c or the TS decoder 28c decodes the demodulated stream of the three-dimensional video and outputs it to the signal processing module 31.
Here, at the television viewing, the above-described signal processing module 31 selectively performs predetermined digital signal processing on the digital video signal and audio signal individually supplied from the TS decoder 27c and the TS decoder 28c, and outputs them to a graphics processing module 32 and an audio processing module 33. Further, when reproducing contents other than the television broadcast, the signal processing module 31 selects a reproduction signal of contents inputted from the controller 30, performs predetermined digital signal processing on it, and outputs it to the graphics processing module 32 and the audio processing module 33.
Further, the signal processing module 31 outputs to the controller 30 various kinds of data for obtaining a program, electronic program guide (EPG) information, program attribute information (program category and the like), and closed caption information (service information, SI, or PSI) from among the section information inputted from the TS decoder 27c (28c). The controller 30, including a ROM 30a, a RAM 30b, a non-volatile memory 30c and so on, performs image generation processing to display an EPG and a closed caption from the inputted information, and outputs the generated image information to the graphics processing module 32.
The graphics processing module 32 includes a frame rate conversion circuit 59 described later in detail. Further, the DTV apparatus 10 of this embodiment includes a frame synchronization signal generation circuit 36 which gives a timing when the frame rate conversion circuit 59 generates an image frame. The graphics processing module 32 synthesizes a digital video signal supplied from the AV decoder in the signal processing module 31, an OSD signal generated in an OSD (On Screen Display) signal generation module 34, image data by a data broadcast, an EPG generated by the controller 30, and a closed caption signal, and outputs the resulting signal to a video processing module 35. Further, when displaying a closed caption by a closed caption broadcast, the graphics processing module 32 superimposes closed caption information on the video signal based on the control from the controller 30.
The digital video signal outputted from the graphics processing module 32 is supplied to the video processing module 35. The video processing module 35 converts the inputted digital video signal to an analog video signal in a format displayable on the video display device 12. The video processing module 35 then outputs the analog video signal to the video display device 12 to cause the video display device 12 to display video, and derives it to the outside via a video output terminal 38. Further, the audio processing module 33 converts the inputted digital audio signal to an analog audio signal in a format reproducible by the speaker 14. The audio processing module 33 then outputs the analog audio signal to the speaker 14 to cause the speaker 14 to reproduce audio, and derives it to the outside via an audio output terminal 37.
Here, all of the operations of the DTV apparatus 10 including the above-described various kinds of receiving operations are comprehensively controlled by the controller 30. The controller 30 has a CPU (Central Processing Unit) and so on built therein. The controller 30 receives operation information from the operation module 15, or receives operation information sent from the remote controller 16 via the light receiving module 18. Further, the controller 30 controls the modules so that contents of the received operation information are reflected. The controller 30 mainly uses the ROM (Read Only Memory) 30a storing a control program executed by the CPU, the RAM (Random Access Memory) 30b providing a work area to the CPU, and the non-volatile memory 30c in which various kinds of setting information and control information are stored.
Further, the controller 30 is connected to a card holder 40 to which the first memory card 19 can be attached via a card I/F (Interface) 41. Thus, the controller 30 can transmit information to/from the first memory card 19 attached to the card holder 40 via the card I/F 41. Further, the controller 30 is connected to a card holder 42 to which the second memory card 20 can be attached via a card I/F 43. Thus, the controller 30 can transmit information to/from the second memory card 20 attached to the card holder 42 via the card I/F 43.
Further, the controller 30 is connected to the first LAN terminal 21 via a communication I/F 45. Thus, the controller 30 can transmit information to/from the LAN compatible HDD 25 connected to the first LAN terminal 21 via the communication I/F 45. In this case, the controller 30 has a DHCP (Dynamic Host Configuration Protocol) server function, and conducts control by assigning an IP (Internet Protocol) address to the LAN compatible HDD 25 connected to the first LAN terminal 21. Further, the controller 30 is connected to the second LAN terminal 22 via a communication I/F 46. Thus, the controller 30, as illustrated in
Further, the controller 30 is connected to the USB terminal 23 via a USB I/F 47. Thus, the controller 30 transmits information to/from the devices connected to the USB terminal 23 via the USB I/F 47. Further, the controller 30 is connected to the i.LINK® terminal 24 via an i.LINK® I/F 48. Thus, the controller 30 can transmit information to/from the devices connected to the i.LINK® terminal 24 via the i.LINK® I/F 48.
Next, a configuration of the frame processing module 50 according to this embodiment will be described based on
When the DTV apparatus 10 including the above-described frame processing module 50 receives the three-dimensional video broadcast, generally, the video display device 12 alternately displays the image for left eye and the image for right eye while switching them every 1/240 sec. In cooperation with this, a viewer wears three-dimensional video viewing glasses 62 illustrated in
More specifically, the frame processing module 50 includes, as illustrated in
The signal input circuit 51, functioning as a signal input module, determines the kind of the inputted video signal, that is, whether the inputted video signal is the three-dimensional video signal made by multiplexing the video signal for left eye and the video signal for right eye, or an ordinary two-dimensional video signal representing two-dimensional video. Here, the input interface of the signal input circuit 51 having the signal determination circuit 52 is composed of an HDMI (High-Definition Multimedia Interface) receiver. Thus, the signal determination circuit 52 can obtain the kind of the inputted video signal during connection authentication, and thereby obtain the format of the inputted video signal. Further, the signal determination circuit 52 can determine the kind of the video signal based on the information described in the header of the data stream obtained by the TS decoder 27c or the TS decoder 28c.
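The determination step can be sketched as a simple classifier over stream metadata. This is illustrative only: the header key used here is a hypothetical stand-in for the actual HDMI authentication data or transport-stream header fields, which are not detailed in this text:

```python
def determine_signal_kind(header):
    # '3d_format' is a hypothetical metadata key standing in for the
    # real HDMI/transport-stream 3D signaling; any non-empty value
    # marks the input as a three-dimensional video signal.
    return "3D" if header.get("3d_format") else "2D"

print(determine_signal_kind({"3d_format": "frame_packing"}))  # 3D
print(determine_signal_kind({}))                              # 2D
```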
Here, there are various encoding modes for the case of the three-dimensional video signal. For example, as illustrated in
When the signal determination circuit 52 determines that a two-dimensional video signal has been inputted by the signal input circuit 51, the frame extraction module (second frame extraction module) 56 sequentially extracts image frames F1, F2, and so on for the two-dimensional video from the inputted two-dimensional video signal as illustrated in
The frame interpolation circuit 57a of the frame group generation module 57 generates, as illustrated in
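The insertion of interpolation frames to raise the display output frame frequency can be sketched as follows. The per-pixel midpoint used here is a simplification assumed purely for illustration; the frame interpolation circuit 57a of the embodiment would generate the interpolation frame from the relationship between the successive frames (e.g. motion), not a plain average:

```python
def interpolate(a, b):
    # Midpoint of two frames given as flat lists of pixel values;
    # an assumed stand-in for the circuit's actual interpolation.
    return [(x + y) / 2 for x, y in zip(a, b)]

def double_frame_rate(frames):
    # Insert one interpolation frame between each pair of successive
    # frames, roughly doubling the output rate (e.g. 60 -> 120 fps).
    out = []
    for cur, nxt in zip(frames, frames[1:]):
        out.append(cur)
        out.append(interpolate(cur, nxt))
    out.append(frames[-1])
    return out

print(double_frame_rate([[0, 0], [2, 4]]))
# [[0, 0], [1.0, 2.0], [2, 4]]
```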
When the signal determination circuit 52 determines that a two-dimensional video signal has been inputted by the signal input circuit 51, the switch circuit 58 selects the output from the frame group generation module 57 having the frame interpolation circuit 57a as illustrated in
The frame extraction module 53 sequentially extracts, as illustrated in
A case in which the three-dimensional video signal is composed of the image frame E3 exemplified in
A case in which the three-dimensional video signal is composed of an image frame E1 (E2) exemplified in
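As an illustration of separating left-eye and right-eye images from a packed frame, assuming a side-by-side packing (one common 3D frame arrangement, assumed here and not necessarily the format of the frames E1 to E3 referred to above), a packed frame can be split as:

```python
def split_side_by_side(frame):
    # Split one packed frame (a list of pixel rows) into left-eye and
    # right-eye frames, assuming the left image occupies the left
    # half of each row and the right image the right half.
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right

packed = [[1, 2, 3, 4],
          [5, 6, 7, 8]]
print(split_side_by_side(packed))
# ([[1, 2], [5, 6]], [[3, 4], [7, 8]])
```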
The frame group generation module 55 alternately generates, as illustrated in
Here, the three-dimensional video viewing glasses 62 have a so-called liquid crystal shutter or the like which alternately switches between transmission and non-transmission of video on the left eye side and the right eye side. The three-dimensional video viewing glasses 62 switch between transmission and non-transmission on the left eye side and the right eye side every 1/240 sec in a manner to synchronize with the timing of starting rewrite of the image frames for left eye and for right eye on the display screen 12c.
The video processing module 35 includes the video signal output module 35a. The video signal output module 35a outputs a video signal corresponding to the image frame extracted from the frame storage module 55a to the video display device 12 side having the display screen 12c. This makes it possible to drive display pixels of the video display device 12 by the video signal to update the display image. In other words, the video processing module 35 functioning as a display rewrite module extracts image frames one by one in the storage order from the frame group L11 (L12) for left eye and the frame group R11 (R12) for right eye stored in the frame storage module 55a. The video processing module 35 then rewrites the display on the display screen to the extracted image frames in sequence (the order of the image frames L1, L2, R1, R2, L3, L4, R3, R4 and so on).
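The storage-order extraction described above can be reproduced with a short sketch; the group size of two frames per eye follows the stated order (L1, L2, R1, R2, L3, L4, R3, R4) and is an assumption for illustration:

```python
def interleave_groups(left, right, group=2):
    """Alternate groups of `group` frames for left eye and right eye,
    in the order the frame storage module would hold them."""
    out = []
    for i in range(0, min(len(left), len(right)), group):
        out.extend(left[i:i + group])
        out.extend(right[i:i + group])
    return out

print(interleave_groups(["L1", "L2", "L3", "L4"],
                        ["R1", "R2", "R3", "R4"]))
# -> the rewrite order L1, L2, R1, R2, L3, L4, R3, R4
```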
More specifically, as illustrated in
Here, as illustrated in
In other words, the display invalidation module 61 brings the display screen 12c into the non-display state during the periods of rewriting the image frame for left eye to that for right eye and the image frame for right eye to that for left eye. On the other hand, the display invalidation module 61 selectively brings the display screen 12c into the display state only during the period during which the image frame for left eye (for right eye) is being rewritten to the subsequent image frame for left eye (for right eye). This can prevent display of video lacking the appearance of solidity (the stereoscopic effect).
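The rule applied by the display invalidation module can be sketched as a simple predicate; representing each frame only by the eye it belongs to is an illustrative simplification:

```python
def display_enabled(prev_eye, next_eye):
    """The screen stays lit only while rewriting between frames for the
    same eye; it is blanked across left-to-right and right-to-left
    transitions (e.g. by turning off the backlight)."""
    return prev_eye == next_eye

# For the sequence L, L, R, R, L, L the screen is blanked exactly at
# each change of eye:
eyes = ["L", "L", "R", "R", "L", "L"]
print([display_enabled(a, b) for a, b in zip(eyes, eyes[1:])])
```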
Next, processing by the frame processing module 50 configured as described above will be described based on a flowchart illustrated in
As illustrated in
On the other hand, when the video signal inputted into the signal input circuit 51 is a three-dimensional video signal (YES at S2), the frame extraction module 53 sequentially extracts, as illustrated in
Subsequently, the frame storage module 55a sequentially stores the frame group L11 (L12) for left eye and the frame group R11 (R12) for right eye alternately generated by the frame group generation module 55 (S8). Subsequently, the video processing module 35 sequentially extracts the image frames one by one from the frame groups for left eye and for right eye, and outputs a video signal corresponding to the extracted image frame from the video signal output module 35a to sequentially rewrite the display screen 12c (S9). Further, the display invalidation module 61 invalidates the display on the display screen 12c, for example, by turning off the backlight 12a during the periods of rewriting the image frame for left eye to that for right eye and the image frame for right eye to that for left eye (S10).
As has been described, the DTV apparatus 10 including the frame processing module 50 according to this embodiment brings the display screen 12c into the non-display state during the period during which the image frame for left eye is being rewritten to the image frame for right eye and during the period during which the image frame for right eye is being rewritten to the image frame for left eye. In other words, the DTV apparatus 10 selectively brings the display screen 12c into the display state only during the period during which the image frame for left eye is being rewritten to the subsequent image frame for left eye and during the period during which the image frame for right eye is being rewritten to the subsequent image frame for right eye. Accordingly, with the DTV apparatus 10, even in the transition period of rewriting the display screen, display of the image frame for left eye and the image frame for right eye in a mixed manner is avoided, thereby making it possible to appropriately display three-dimensional video. Further, in the DTV apparatus 10 of this embodiment, the number of frames of the moving image is increased by generation of interpolation frames in the two-dimensional video, thus allowing a viewer to view an image with less afterimage effect.
(Other Embodiments)
Embodiments of the present invention are not limited to the above-described embodiment, but can be extended or changed, and the extended and changed embodiments are also included in the technical scope of the present invention. For example, an example in which output is performed at a frequency four times the original frame frequency has been illustrated in the above-described embodiment. In contrast, the frequency may be n times (multiple times), such as 6 times, 8 times, 10 times, 12 times or the like as illustrated in
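As simple arithmetic for the n-times output described above, assuming a 60 Hz source frame frequency (an illustrative value; the actual input rate is not stated in this passage):

```python
def output_frequency(input_hz, n):
    """Display output frame frequency when output is n times the input."""
    return input_hz * n

# The embodiment uses 4x; 6x, 8x, 10x, and 12x are also possible.
for n in (4, 6, 8, 10, 12):
    print(f"{n}x: {output_frequency(60, n)} Hz")
```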
Executed on | Assignor | Assignee | Conveyance | Reel/Frame |
May 20 2009 | YAMADA, MASAHIRO | Kabushiki Kaisha Toshiba | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 022795/0278 |
Jun 08 2009 | Kabushiki Kaisha Toshiba | (assignment on the face of the patent) | | |
Date | Maintenance Fee Events |
Aug 05 2016 | REM: Maintenance Fee Reminder Mailed. |
Dec 25 2016 | EXP: Patent Expired for Failure to Pay Maintenance Fees. |