A video sequence control system that includes an input video frame buffer and an output video frame selection component is described. The input video frame buffer receives input video frames from a video source. The output video frame selection component determines the video frame to be output according to a scheduler that provides timing information and modulation information. The timing information includes information regarding when a video frame will be output. The modulation information varies dependent upon the frame types available to be output, wherein the available frame types include at least image frames from the input video frame buffer and functional frames, and wherein the image frames and functional frames are output according to a pattern defined by the scheduler. In addition, based on the timing information, a synchronization output signal is output corresponding to the output of the functional frames.
6. A video sequence control system comprising:
an input video frame buffer to receive, at a first frame rate, a video stream having a plurality of input video frames from a video source;
a scheduler; and
a modulator to:
generate an output video stream based on the input video stream, wherein the output video stream includes the input video frames from the input video frame buffer and functional frames;
determine a video frame of the output video stream to be output according to timing information and modulation information provided by the scheduler, wherein the timing information includes information regarding when each video frame of the output video stream is to be output, wherein the modulation information provides information on how to modify the color and spatial pattern of the input video frames; and
output, at a second frame rate different from the first frame rate, the output video stream to a projector, wherein the video sequence control system is to output a synchronization output signal to an image capture device, and wherein the synchronization output signal indicates a timing of the output of the functional frames.
10. A method of controlling the output of video frames, comprising:
receiving, at a first frame rate, a sequence of input video frames from a video source at an input video frame buffer of a video sequence control system;
determining when to output the input video frames based on timing information provided by a scheduler of an output selection component of the video sequence control system;
determining whether the input video frames are to be modified before output based on modulation information provided by the scheduler, wherein the modulation information varies dependent upon the frame types available to be output, wherein the available frame types include the input video frames from the input video frame buffer and functional frames;
generating an output video stream based on the input video stream and the functional frames;
outputting, at a second frame rate different from the first frame rate, the output video stream to a projector based on a pattern defined by the scheduler; and
outputting a synchronization output signal to an image capture device, wherein the synchronization output signal corresponds to a timing of the output of the functional frames.
1. A video sequence control system comprising:
an input video frame buffer to receive, at a first frame rate, an input video stream having a plurality of input video frames from a video source;
a scheduler; and
a modulator to:
generate an output video stream based on the input video stream, wherein the output video stream includes the input video frames from the input video frame buffer and functional frames;
determine a video frame of the output video stream to be output according to timing information and modulation information provided by the scheduler, wherein the timing information includes information regarding when the video frame is to be output, wherein the modulation information varies dependent upon the frame types available to be output, wherein the available frame types include the input video frames and the functional frames; and
output, at a second frame rate different from the first frame rate, the output video stream to a projector based on a pattern defined by the scheduler, wherein the video sequence control system is to output a synchronization output signal to an image capture device, and wherein the synchronization output signal indicates a timing of the output of the functional frames.
2. The video sequence control system recited in
3. The video sequence control system recited in
4. The video sequence control system recited in
5. The video sequence control system recited in
7. The video sequence control system recited in
8. The video sequence control system recited in
9. The video sequence control system recited in
11. The method recited in
12. The method recited in
13. The method recited in
This application is a U.S. National Stage Application of and claims priority to International Patent Application No. PCT/US2012/027004, filed on Feb. 28, 2012, and entitled “SYSTEM AND METHOD FOR VIDEO FRAME SEQUENCE CONTROL”.
When displaying a projected image and simultaneously capturing images of the projected scene and the display, there can be crosstalk between the projected images and the captured content. The crosstalk can reduce the image quality (i.e., brightness, color) of the captured image frames and additionally cause distracting flicker on the display. Various attempts have been made to reduce crosstalk in projection/image capture systems. For example, the display may be changed from a passive display screen to an active switchable diffuser to reduce crosstalk. However, although crosstalk may be reduced when using the active switchable diffuser for the display, an active diffuser display screen can limit the useful duty cycle of both the projector and the image capture device used in the system.
The figures depict implementations/embodiments of the invention and not the invention itself. Some embodiments are described, by way of example, with respect to the following Figures.
The drawings referred to in this Brief Description should not be understood as being drawn to scale unless specifically noted.
For simplicity and illustrative purposes, the principles of the embodiments are described by referring mainly to examples thereof. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one of ordinary skill in the art, that the embodiments may be practiced without limitation to these specific details. Also, different embodiments may be used together. In some instances, well known methods and structures have not been described in detail so as not to unnecessarily obscure the description of the embodiments.
The application describes a video sequence control system 100 comprising: an input video frame buffer 102 for receiving a video stream 104 of video frames 106 from a video source 110; and an output video frame selection component 112 for determining a video frame to be output according to a scheduler 116 that provides timing information 118 and modulation information 119, wherein the timing information 118 includes information regarding when each video frame will be output, wherein the modulation information varies dependent upon the frame type to be output, wherein the frame types include at least the image frames from the input video frame buffer and functional frames, wherein the image frames and functional frames are output according to a pattern defined by the scheduler, and wherein, based on the timing information 118, a synchronization output signal 124 is output corresponding to the output of the functional frame.
The video sequence controller takes a video stream input from a video source and outputs (1) a modified video stream and (2) a synchronization output signal. Referring to
Whether an input frame or other frame is modified to create the output video stream, and how frames slotted for modification are changed, is dependent upon frame type. At least two frame types are available for output, including input video frames and functional frames. Input video frames are frames of the input video that are not modified and are directly output in the output video frame sequence. The functional frames are frames other than the input video frames. The functional video frames often have a purpose other than the output of video images. For example, in the implementation described in
Referring to
For the example shown in
Assume for purposes of example, that the number 0 in the schedule is representative of an image frame and that the number 1 is representative of the black crosstalk reduction frame. In the example shown in
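The 0/1 schedule described above can be sketched in a few lines. This is a hypothetical illustration rather than the patented implementation; the four-slot pattern, the function name, and the frame representation (a list of pixel rows) are all assumptions:

```python
# Assumed repeating schedule: 0 selects the unmodified input image frame,
# 1 selects a black crosstalk reduction frame (one black slot in four).
SCHEDULE = [0, 0, 0, 1]

def select_output_frame(slot, image_frame):
    """Return the frame to output for a given schedule slot."""
    if SCHEDULE[slot % len(SCHEDULE)] == 1:
        # Crosstalk reduction frame: every pixel is forced to black (zero).
        return [[0 for _ in row] for row in image_frame]
    # Image frame: the buffered input frame passes through unmodified.
    return image_frame
```

Here a 1 in the schedule forces every pixel to zero, producing the black crosstalk reduction frame, while a 0 passes the buffered image frame through untouched.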
In an alternative example, instead of the modulator modifying each pixel of the corresponding image frame, a software or hardware implemented representation of the crosstalk reduction frame is used. For example, suppose a black (crosstalk reduction) frame requires a pixel value of zero for every pixel in the video frame. The pixel value of zero could be stored and the value repeated for each pixel location in the video frame when a black frame is present on the schedule. In this case, the entire image frame need not be stored; in one case a single pixel value is stored. The timing information in the schedule can be used to determine when the crosstalk reduction frame is to be output. When the crosstalk reduction frame is output, the pixel value of zero is repeated for every pixel in the frame.
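The single-stored-value approach can be illustrated as follows; the function name and frame layout are hypothetical:

```python
def crosstalk_reduction_frame(width, height, stored_value=0):
    # Only one pixel value is stored; it is repeated at every pixel
    # position when the schedule calls for a crosstalk reduction frame,
    # so no full image frame needs to be kept in memory.
    return [[stored_value] * width for _ in range(height)]
```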
The scheduler 116 of the output video frame selection component 112 includes timing information 118 and modulation information 119. In one example, the timing information 118 includes information regarding when each video frame will be output. For example, the timing information 118 includes information about the number of frames to be output within a certain time period and when each frame in the scheduled sequence of video frames will be output. In one example, the scheduler determines the output sequence of the video frames at least in part based on the frame rate of the video input and the required video output. Thus, for example, if the frame rate of the video source is 30 Hz and the required video output is 120 Hz, then for each input video image frame, four video frames are output. In the example shown in
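The 30 Hz to 120 Hz example reduces to a simple rate ratio. A minimal sketch, assuming an integer ratio between the output and input rates (the function name is an assumption):

```python
def frames_per_input(input_rate_hz, output_rate_hz):
    # Number of output slots each input image frame occupies,
    # e.g. a 30 Hz source driving a 120 Hz output yields 4 slots.
    if output_rate_hz % input_rate_hz != 0:
        raise ValueError("this sketch assumes an integer rate ratio")
    return output_rate_hz // input_rate_hz
```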
Referring to
The frame rate and the video sequence pattern (the sequence of frame types) affect whether the input video frames are dropped, inserted, or remain the same in the output video frames 136. For the video sequence pattern shown in the visual representation of the schedule 117 in
In one example, the scheduler 116 includes frame type information 120 that provides information regarding the type of video frame that is currently being processed (the video frame in the visual representation of the schedule that the pointer 126 is pointing to). In one example, the scheduled frame types include at least crosstalk reduction frame types and image frame types. In one example, modification information is associated with the frame type information to provide information on whether the frame type is selected for output with no modification or whether the frame type is modified and, if the frame type is to be modified, how each frame type is modified. For example, if the frame type is a crosstalk reduction frame, the image frame might be modified to be a uniform black frame, where the black frame is output in the output video stream for display.
In one example, the scheduler 116 of the output video frame selection component 112 includes modulation information 119, timing information 118, frame type information 120, and sequence order information 122, which the scheduler 116 uses to determine whether input image frames are modified, how the input frames are modified, when the modified or unmodified frames are output, and the sequence order in which the frame types are output. In one example, the sequence order information 122 provides information to the modification component 114 of the order and frame type in which the video frames are output. In the example shown in
In one example, the timing information 118 provides information regarding when and how often the selected (where schedule pointer 126 is currently pointing to) video frame is output. In one example, the timing information 118 is used to determine when a synchronization output signal 124 is output. The synchronization output signal 124 should correspond to the timing of the output of the crosstalk reduction frame.
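The correspondence between the schedule and the synchronization output can be sketched as follows, reusing the 0/1 schedule notation from the earlier example (an assumption, not the actual signal format):

```python
def sync_signal(schedule):
    # Assert the synchronization output exactly in the slots where a
    # crosstalk reduction frame (1) is scheduled, so the image capture
    # device can open its shutter while the projector shows black.
    return [frame_type == 1 for frame_type in schedule]
```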
The video sequence control system 100 acts as a specialized video processor. It can modify frame rate by inserting or removing frames and can modify image frames. In one example, the video sequence control system is a standalone device positioned between the video source 110 and the devices that it is outputting video frames and/or synchronization signals to. For example, in
In one example, the crosstalk reduction frame 112 is a dimmed or reduced intensity frame; by dimmed or reduced intensity, we mean the crosstalk reduction frame has an intensity lower than the corresponding image frames in the same video sequence. In the example shown in
In the example shown in
In the example shown in
Referring to
In one example, the synchronization output signal 124 is directly coupled to the image capture device 146. In the example shown in
In one example, the synchronization output signal 124 provides both a synchronization signal and a command signal to the image capture device 146. In one example, the synchronization output signal 124 provides information to the image capture device of when the crosstalk reduction frame is being projected. The command signal line can provide information to the image capture device of what camera function is to perform. In one example, the command signal to the image capture device provides a command to the image capture device to open or close its shutter. In the example shown in
In one example, a synchronization output signal 124 is sent to the image capture device 146 to instruct it to open its shutter (take a picture) when the crosstalk reduction frames are projected onto the display screen. For example, assume the black crosstalk reduction frame 142a is projected at a time tproj1. The camera opens its shutter at this time, captures an image, and outputs a corresponding photograph 160a. The corresponding photograph is of the display, the scene in front of the display and, for the case of a see-through screen, the viewer behind the display screen. Similarly, assuming a second black crosstalk reduction frame 142b is projected at time tproj2, the camera opens its shutter at this time, captures an image, and outputs the corresponding photograph 160b.
In one example, instead of the crosstalk reduction frame having the highest possible darkness (100%), i.e., a black frame, a dimmed image may be implemented, for example, a gray image having some percentage of the possible intensity range. For example, the gray crosstalk reduction frame could have an intensity of 50%. In another example, the dimming can be applied at continuously varying levels. For example, the modified output video could consist of modulated versions of the input video frames at 75%, 50%, and 25% of the intensity of the input image.
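The continuous dimming levels described above amount to a per-pixel intensity scale. A minimal sketch with an assumed integer pixel representation:

```python
def dim_frame(frame, level):
    # Scale every pixel by a dimming level in [0.0, 1.0]; level 1.0
    # leaves the frame unchanged, 0.0 yields a black frame, and 0.5
    # gives the 50% gray crosstalk reduction frame described above.
    return [[int(pixel * level) for pixel in row] for row in frame]
```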
In another example, the darkness of the crosstalk reduction frame might be reduced from its darkest possible level (black) to a lighter level. One case where this might be a reasonable option is where the projected image frames are likely to be very bright. Where the projected images alternate between a very bright projected image and a black video frame (the crosstalk reduction frame), the large change between the two projected images that the human eye perceives increases the perception of flashing and flicker. To reduce this effect, in one example, the crosstalk reduction frame color could be changed from black to a gray color (a lower darkness value). In another example, where the display screen is not bright (in some cases where a see-through display is used), it is not necessary for the crosstalk reduction frame to be completely black in order to substantially reduce crosstalk. In one example, fully software-controllable dimming allows crosstalk reduction to be turned on/off or to vary continuously between zero and full. In the example shown in
One potential application of the video sequence control system 100 shown in
In one example, instead of taking video of the weatherman standing in front of a green screen display, the weatherman is photographed standing in front of a weather map. The modulation pattern shown in the schedule in
In image processing, there are advantages to using more than one color channel background, for example, when subtracting out a person standing in front of a background. In one example, the image is modulated to present different color video frames that can be used as different backdrops when performing image processing. For example, if an image is captured of a person against a black background and the person is wearing a black jacket, the black jacket will not show up well in the captured image. However, if an image of the same person is captured against a green background, the black jacket will show up. Alternating image capture against at least two substantially different color backgrounds means that, no matter what color the person is wearing, it will be easy to determine all of the colors in the image.
Although the shape in
For purposes of example, consider the two sequential video frames in the schedule, video frames 128 and 130. Each of the video frames in the schedule has four rectangular regions 128a-128d and 130a-130d. Each of the sequential video frames has two rectangular regions having a red color channel (128b, 128c and 130a, 130d) and two rectangular regions having a combined green+blue color channel (128a, 128d and 130b, 130c). The color patterns of the two sequential video frames alternate: for example, if a color channel is red in video frame 128, it is combined green+blue in video frame 130, and vice versa. Thus, video frame region 128b is red, and in the next (sequential) video frame the corresponding region 130b is combined green+blue. Similarly, video frame region 128a is green+blue, while the corresponding region 130a in the sequential video frame is red. Similar to the alternating color channel video frames in
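The alternating region/channel assignment can be sketched as a simple flip between sequential frames; the region keys and channel labels here are hypothetical stand-ins for regions 128a-128d:

```python
# Hypothetical channel assignment for the four regions of frame 128:
# two red regions and two combined green+blue regions.
FRAME_128 = {"a": "G+B", "b": "R", "c": "R", "d": "G+B"}

def next_pattern(pattern):
    # The next sequential frame swaps each region's channel, so a red
    # region becomes green+blue and vice versa.
    flip = {"R": "G+B", "G+B": "R"}
    return {region: flip[channel] for region, channel in pattern.items()}
```

Applying the flip twice returns the original pattern, matching the alternating two-frame cycle described above.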
Referring to
These methods, functions, and other steps described may be embodied as machine readable instructions stored on one or more computer readable media, which may be non-transitory. Exemplary non-transitory computer readable storage devices that may be used to implement the present invention include, but are not limited to, conventional computer system RAM, ROM, EPROM, EEPROM, and magnetic or optical disks or tapes. Concrete examples of the foregoing include distribution of the programs on a CD ROM or via Internet download. In a sense, the Internet itself is a computer readable medium. The same is true of computer networks in general. It is therefore to be understood that any interfacing device and/or system capable of executing the functions of the above-described examples is encompassed by the present invention.
Although shown stored on main memory 606, any of the memory components described 606, 608, 614 may also store an operating system 630, such as Mac OS, MS Windows, Unix, or Linux; network applications 632; and a display controller component 630. The operating system 630 may be multi-participant, multiprocessing, multitasking, multithreading, real-time, and the like. The operating system 630 may also perform basic tasks such as recognizing input from input devices, such as a keyboard or a keypad; sending output to the display 620; controlling peripheral devices, such as disk drives, printers, and the image capture device; and managing traffic on the one or more buses 604. The network applications 632 include various components for establishing and maintaining network connections, such as software for implementing communication protocols including TCP/IP, HTTP, Ethernet, USB, and FireWire.
The computing apparatus 600 may also include input devices 616, such as a keyboard, a keypad, functional keys, etc.; a pointing device, such as a tracking ball or mouse 618; and a display(s) 620. A display adaptor 622 may interface with the communication bus 604 and the display 620 and may receive display data from the processor 602 and convert the display data into display commands for the display 620.
The processor(s) 602 may communicate over a network, for instance, a cellular network, the Internet, a LAN, etc., through one or more network interfaces 624, such as a Local Area Network (LAN), a wireless 802.11x LAN, a 3G mobile WAN, or a WiMax WAN. In addition, an interface 626 may be used to receive an image or sequence of images from imaging components 628, such as the image capture device.
The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the invention. The foregoing descriptions of specific embodiments of the present invention are presented for purposes of illustration and description. They are not intended to be exhaustive of or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in view of the above teachings. The embodiments are shown and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents:
Tan, Kar-Han, Jam, Mehrban, Sobel, Irwin E
Patent | Priority | Assignee | Title |
10171781, | Nov 13 2015 | Canon Kabushiki Kaisha | Projection apparatus, method for controlling the same, and projection system |
11582422, | Feb 24 2021 | GN AUDIO A S | Conference device with multi-videostream capability |
11889228, | Feb 24 2021 | GN AUDIO A/S | Conference device with multi-videostream capability |
Patent | Priority | Assignee | Title |
7583302, | Nov 16 2005 | BEJING XIAOMI MOBILE SOFTWARE CO ,LTD ; BEIJING XIAOMI MOBILE SOFTWARE CO ,LTD | Image processing device having blur correction function |
20010053275, | |||
20050017939, | |||
20080001881, | |||
20090243995, | |||
20110025726, | |||
20110096146, | |||
20110187840, | |||
20110228048, | |||
20110234777, | |||
20120026286, | |||
20120098825, | |||
20120189212, | |||
20130077057, | |||
CN102196290, | |||
JP2003087601, | |||
WO2011071465, |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Feb 28 2012 | Hewlett-Packard Development Company, L.P. | (assignment on the face of the patent) | / | |||
Feb 29 2012 | TAN, KAR-HAN | HEWLETT-PACKARD DEVELOPMENT COMPANY, L P | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 033509 | /0694 | |
Feb 29 2012 | SOBEL, IRWIN E | HEWLETT-PACKARD DEVELOPMENT COMPANY, L P | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 033509 | /0694 | |
Feb 29 2012 | JAM, MEHRBAN | HEWLETT-PACKARD DEVELOPMENT COMPANY, L P | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 033509 | /0694 |
Date | Maintenance Fee Events |
Sep 04 2019 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity. |
Apr 01 2024 | REM: Maintenance Fee Reminder Mailed. |
Sep 16 2024 | EXP: Patent Expired for Failure to Pay Maintenance Fees. |
Date | Maintenance Schedule |
Aug 09 2019 | 4 years fee payment window open |
Feb 09 2020 | 6 months grace period start (w surcharge) |
Aug 09 2020 | patent expiry (for year 4) |
Aug 09 2022 | 2 years to revive unintentionally abandoned end. (for year 4) |
Aug 09 2023 | 8 years fee payment window open |
Feb 09 2024 | 6 months grace period start (w surcharge) |
Aug 09 2024 | patent expiry (for year 8) |
Aug 09 2026 | 2 years to revive unintentionally abandoned end. (for year 8) |
Aug 09 2027 | 12 years fee payment window open |
Feb 09 2028 | 6 months grace period start (w surcharge) |
Aug 09 2028 | patent expiry (for year 12) |
Aug 09 2030 | 2 years to revive unintentionally abandoned end. (for year 12) |