One of a video source device and a video sink device may: (a) deactivate a video processing function at the one device and send a command for causing the other of the video source device and the video sink device to activate the video processing function; (b) activate the video processing function at the one device and send a command for causing the other device to deactivate the video processing function; and (c) based on user input indicating whether (a) or (b) resulted in a preferred video image, effect (a) or (b). The one device may receive an indication of video processing functions of which the other device is capable, such that (a), (b) and (c) may be performed for each indicated video processing function of which the one device is also capable. A user interface including at least one selectable control for indicating whether a video image resulting from (a) or (b) is preferred may be displayed.
|
1. A method comprising, at one of a video source device and a video sink device:
(a) deactivating a video processing function at the one device and sending a command for causing the other of said video source device and said video sink device to activate the video processing function;
(b) activating the video processing function at the one device and sending a command for causing the other device to deactivate the video processing function; and
(c) based on user input indicating whether (a) or (b) resulted in a preferred video image, effecting (a) or (b).
17. A video sink device comprising a processor and memory interconnected with said processor, said memory storing instructions which, when executed by said processor, cause said video sink device to:
(a) deactivate a video processing function at the video sink device and send a command for causing a video source device to activate the video processing function;
(b) activate the video processing function at the video sink device and send a command for causing the video source device to deactivate the video processing function; and
(c) based on user input indicating whether (a) or (b) resulted in a preferred video image, effect (a) or (b).
20. A video source device comprising a processor and memory interconnected with said processor, said memory storing instructions which, when executed by said processor, cause said video source device to:
(a) deactivate a video processing function at the video source device and send a command for causing a video sink device to activate the video processing function;
(b) activate the video processing function at the video source device and send a command for causing the video sink device to deactivate the video processing function; and
(c) based on user input indicating whether (a) or (b) resulted in a preferred video image, effect (a) or (b).
12. A non-transitory machine-readable medium storing instructions that, when executed by a processor of one of a video source device and a video sink device, cause said one device to:
(a) deactivate a video processing function at the one device and send a command for causing the other of said video source device and said video sink device to activate the video processing function;
(b) activate the video processing function at the one device and send a command for causing the other device to deactivate the video processing function; and
(c) based on user input indicating whether (a) or (b) resulted in a preferred video image, effect (a) or (b).
22. A non-transitory machine-readable medium storing instructions that, when processed, cause the creation of a circuit capable of:
(a) deactivating a video processing function at one of a video source device and a video sink device and sending a command for causing the other of said video source device and said video sink device to activate the video processing function;
(b) activating the video processing function at the one device and sending a command for causing the other device to deactivate the video processing function; and
(c) based on user input indicating whether (a) or (b) resulted in a preferred video image, effecting (a) or (b),
wherein said circuit comprises said one device.
2. The method of
receiving an indication of video processing functions of which the other device is capable; and
performing (a), (b) and (c) for at least one indicated video processing function of which the one device is also capable.
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
9. The method of
10. The method of
11. The method of
13. The machine-readable medium of
receive an indication of video processing functions of which the other device is capable; and
perform (a), (b) and (c) for at least one indicated video processing function of which the one device is also capable.
14. The machine-readable medium of
15. The machine-readable medium of
16. The machine-readable medium of
18. The video sink device of
receive an indication of video processing functions of which the video source device is capable; and
perform (a), (b) and (c) for at least one indicated video processing function of which the video sink device is also capable.
19. The video sink device of
21. The video source device of
receive an indication of video processing functions of which the video sink device is capable; and
perform (a), (b) and (c) for at least one indicated video processing function of which the video source device is also capable.
23. The machine-readable medium of
24. The machine-readable medium of
25. The machine-readable medium of
26. The machine-readable medium of
|
This application claims the benefit of U.S. Provisional Application No. 61/015,322 filed on Dec. 20, 2007, the contents of which are hereby incorporated by reference hereinto.
The present disclosure relates to video processing, and more particularly to adjusting video processing in a system having a video source device and a video sink device.
It is not uncommon for video source devices (i.e. devices capable of outputting a video signal comprising a video image, such as DVD-Video players, HD-DVD players, Blu-ray Disc players, set-top boxes, or PCs) and video sink devices (i.e. devices capable of receiving a video signal and applying further video processing to the signal, and possibly displaying the resulting video images, such as televisions or monitors, which may be analog or digital devices such as Cathode Ray Tubes (CRTs), flat panel displays such as Liquid Crystal Displays (LCDs) or plasma displays, or rear-projection displays such as Digital Light Processing (DLP) or Liquid Crystal on Silicon (LCoS) displays for example) to be purchased separately. For example, a consumer assembling a home entertainment system may purchase the video source device component from one manufacturer and the video sink device component from another manufacturer. The consumer's choice of components may be motivated by such factors as consumer preference, availability, or retailer promotions. The consumer may then interconnect the components within the home entertainment system so that the source device outputs a video signal to the sink device. The interconnection may be by way of a cable and may conform to a known industry standard, such as VGA, composite/S-video or component out, Digital Visual Interface (DVI), High-Definition Multimedia Interface™ (HDMI™) or DisplayPort®, for example, or may be a wireless display interface (e.g. “wireless HDMI”).
Many contemporary video source devices are capable of performing various video processing functions such as frame-rate conversion, interlacing, de-interlacing, de-noise, scaling, color correction, contrast correction, gamma correction and detail enhancement for example. Each video processing function may be performed by a functional block of a video processor, which may be effected in hardware, software, firmware or combinations of these. A functional block may be implemented in different ways in different video source devices. That is, a functional block in one video source device may apply one video processing algorithm to achieve the desired video processing function, while the corresponding functional block in another video source device applies a different video processing algorithm to achieve the same function. For example, some interlacer blocks may apply a scan line decimation algorithm to interlace video while others apply a vertical filtering algorithm. The algorithm that is used by a functional block may be fixed or dynamically configurable. In the latter case, the algorithm that is used at any given time may depend upon such factors as the content of the video signal presently being processed or user preferences for example.
A video sink device may also be capable of applying various video processing functions to a received video signal, including some or all of the same video processing functions that the upstream video source device is capable of performing (referred to as “overlapping video processing functions”). The overlap may be by virtue of the fact that the video sink device is a modular component that is intended to be capable of interconnection with various types of video source devices whose video processing capabilities may vary. The video source device and video sink device may therefore each have different strengths and weaknesses from a video processing standpoint. For example, the source device may be capable of numerous frame-rate conversion functions that the sink device is incapable of executing, while the sink device is capable of numerous de-interlacing functions that the source device is incapable of executing.
It is known to provide consumers with a DVD containing test video images and video clips along with instructions for playing the DVD in a player connected to a television. The instructions may suggest DVD-Video player output settings for testing the DVD-Video player (e.g. 720p, 768p, 1080i or 1080p) as well as DVD-Video player output settings for testing the television (e.g. 480i), for various television types (e.g. 720p DLP, LCD or Plasma; 768p LCD or Plasma; 1024×1024 Plasma; or 1920×1080 DLP, LCD or Plasma). The instructions may also describe how the displayed images or clips should be evaluated for quality. Disadvantageously, it is up to the user to set the DVD-Video player output settings correctly. If settings are not correctly set, the evaluated quality of the displayed images or clips may be attributed to the wrong device (DVD-Video player or television). In view of the complex user interfaces of many DVD-Video players and the inexperience of many users in configuring output settings, the likelihood of an incorrect output setting is high. Moreover, even if the DVD-Video player output settings are correctly set, it is still the responsibility of the user to ultimately configure the DVD-Video player in the proper mode for optimal image quality based on the outcome of the test. Again, the likelihood of an erroneous configuration is relatively high.
A solution which mitigates or obviates at least some of the above-noted disadvantages would be desirable.
In one aspect, there is provided a method comprising, at one of a video source device and a video sink device: (a) deactivating a video processing function at the one device and sending a command for causing the other of the video source device and the video sink device to activate the video processing function; (b) activating the video processing function at the one device and sending a command for causing the other device to deactivate the video processing function; and (c) based on user input indicating whether (a) or (b) resulted in a preferred video image, effecting (a) or (b).
In another aspect, there is provided a machine-readable medium storing instructions that, when executed by a processor of one of a video source device and a video sink device, cause the one device to: (a) deactivate a video processing function at the one device and send a command for causing the other of the video source device and the video sink device to activate the video processing function; (b) activate the video processing function at the one device and send a command for causing the other device to deactivate the video processing function; and (c) based on user input indicating whether (a) or (b) resulted in a preferred video image, effect (a) or (b).
In another aspect, there is provided a video sink device comprising a processor and memory interconnected with the processor, the memory storing instructions which, when executed by the processor, cause the video sink device to: (a) deactivate a video processing function at the video sink device and send a command for causing a video source device to activate the video processing function; (b) activate the video processing function at the video sink device and send a command for causing the video source device to deactivate the video processing function; and (c) based on user input indicating whether (a) or (b) resulted in a preferred video image, effect (a) or (b).
In another aspect, there is provided a video source device comprising a processor and memory interconnected with the processor, the memory storing instructions which, when executed by the processor, cause the video source device to: (a) deactivate a video processing function at the video source device and send a command for causing a video sink device to activate the video processing function; (b) activate the video processing function at the video source device and send a command for causing the video sink device to deactivate the video processing function; and (c) based on user input indicating whether (a) or (b) resulted in a preferred video image, effect (a) or (b).
In another aspect, there is provided a machine-readable medium storing instructions that, when processed, cause the creation of a circuit capable of: (a) deactivating a video processing function at one of a video source device and a video sink device and sending a command for causing the other of the video source device and the video sink device to activate the video processing function; (b) activating the video processing function at the one device and sending a command for causing the other device to deactivate the video processing function; and (c) based on user input indicating whether (a) or (b) resulted in a preferred video image, effecting (a) or (b), wherein the circuit comprises the one device.
Other aspects and features of the present disclosure will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying figures.
In the figures which illustrate an exemplary embodiment of this invention:
Referring to
The video source device 12 is an electronic device that outputs a video signal, comprising frames or fields for example, over interconnection 16. The device 12 may for example be a PC, DVD-Video player, HD-DVD Video player, Blu-Ray disc player, or set-top box receiving video signals from any of a coaxial cable, satellite dish, telephone line, broadband over power line, Ethernet cable, or analog or digital broadcast (e.g. VHF, UHF or HD) for example. As shown in
Buffer 20 stores video data, which may be in the form of frames or fields, upon which one or more of the functional blocks 22 and 24 selectively performs video processing functions. The video data that is stored may have been decoded into frames or fields by a decoder (not illustrated), which may be compliant with any one of a number of video encoding/compression standards, such as MPEG, MPEG-2, MPEG-4, DivX, ITU-T Recommendation H.264, ATSC, PAL or NTSC television, digital television (e.g. ITU-R BT.601) or the like, upstream of the buffer within the video source device 12. The video signal upon which the decoder (if present) operates may be received by video source device 12 from an external source (e.g. cable head-end, satellite or broadcast), read by device 12 from a storage medium (e.g. a hard drive or optical disk such as a DVD), or generated by device 12 (e.g. by a software application such as a video game) for example. Buffer 20 may form part of a larger volatile memory within the video source device 12. Buffer 20 may have the capacity to store multiple contiguous frames or fields at once, in order to facilitate parallel operation of video processing functional blocks 22 and 24. Once video processing functions have been performed upon the video data, the processed video data is output from the buffer 20 over interconnection 16 as a video signal, typically (although not necessarily) by way of a video interface transmitter (not illustrated). The role of the video interface transmitter, if present, is to convert the video data into a video signal that complies with the operative video interface standard that governs interconnection 16 (e.g. the DVI, HDMI™, DisplayPort®, Digital Flat Panel (DFP) Interface, Open LVDS Display Interface (OpenLDI), or Gigabit Video Interface (GVIF) standard, or a wireless display interface).
A frame buffer (not illustrated) may be interposed between the buffer and the video interface transmitter, for storing the processed video data prior to its transmission over interconnection 16.
Each of video processing functional blocks 22, 24 is a hardware, software, or firmware (or a combination of these) block that performs a video processing function. Although only two blocks 22, 24 are illustrated in
For illustration, a number of exemplary video processing functions are identified below, with two or more video processing algorithms that could be performed to achieve the video processing function being briefly described for each function.
Scan-Rate Conversion (i.e. Frame-Rate Conversion)
Dropping/duplicating every N frames/fields—this is a simple form of scan-rate conversion in which one out of every N fields is dropped or duplicated. For example, the conversion of 60-Hz to 50-Hz interlaced operation may drop one out of every six fields. A possible disadvantage of this technique is apparent jerky motion referred to as “judder”.
3:2 Pulldown—this technique is commonly used when converting 24 frames/second content to NTSC (59.94-Hz field rate). The film speed is slowed down by 0.1% to 23.976 (24/1.001) frames/second. Two film frames generate five video fields.
Other Pulldown—other types of pulldown, e.g. 2:2, 24:1, and others, may be performed.
Temporal Interpolation—this technique generates new frames from the original frames as needed to generate the desired frame rate. Information from both past and future input frames may be used to optimally handle appearing and disappearing objects. When converting from 50-Hz to 60-Hz using temporal interpolation, there are six fields of 60-Hz video for every five fields of 50-Hz video. After both sources are aligned, two adjacent 50-Hz fields are mixed together to generate a new 60-Hz field.
Motion Compensation—motion compensation attempts to identify true motion vectors within the video data and to use this information during temporal interpolation to minimize motion artifacts. This can result in smooth and natural motion free from judder.
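By way of illustration only, the 3:2 pulldown pattern described above may be sketched as follows. Frames are treated as opaque labels rather than pixel data, and the function name is illustrative; it forms no part of the described devices.

```python
def pulldown_3_2(frames):
    """Expand film frames into NTSC fields using 3:2 pulldown.

    Each pair of film frames yields five video fields: the first frame
    of the pair contributes three fields, the second two, with field
    parity alternating between top and bottom.
    """
    fields = []
    top = True  # parity of the next field to emit
    for i, frame in enumerate(frames):
        copies = 3 if i % 2 == 0 else 2
        for _ in range(copies):
            fields.append((frame, "top" if top else "bottom"))
            top = not top
    return fields
```

Two film frames thus generate five fields, consistent with the ratio of the 23.976 frames/second film rate to the 59.94-Hz field rate.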
Interlacing
Scan Line Decimation—in this approach, every other active scan line in each noninterlaced frame is discarded.
Vertical De-Flicker Filtering—in this approach, two or more lines of noninterlaced data are used to generate one line of interlaced data. Fast vertical transitions are smoothed out over several interlaced lines.
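The two interlacing approaches above may be sketched as follows, with a frame represented as a list of scan lines, each a list of pixel values. The function names are illustrative.

```python
def interlace_decimate(frame, top_field=True):
    """Scan line decimation: keep every other active scan line."""
    start = 0 if top_field else 1
    return frame[start::2]

def interlace_vertical_filter(frame, top_field=True):
    """Vertical de-flicker filtering: each interlaced line averages two
    adjacent noninterlaced lines, smoothing fast vertical transitions."""
    start = 0 if top_field else 1
    out = []
    for y in range(start, len(frame) - 1, 2):
        out.append([(a + b) / 2 for a, b in zip(frame[y], frame[y + 1])])
    return out
```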
De-Interlacing
Scan Line Duplication—scan line duplication duplicates the previous active scan line. Although the number of active scan lines is doubled, there is no increase in the vertical resolution.
Field Merging—this technique merges two consecutive fields together to produce a frame of video. At each field time, the active scan lines of that field are merged with the active scan lines of the previous field. The result is that for each input field time, a pair of fields combine to generate a frame. Moving objects may have artifacts, also called “combing,” due to the time difference between two fields.
Scan Line Interpolation—scan line interpolation generates interpolated scan lines between the original active scan lines. The number of active scan lines is doubled, but the vertical resolution is not. In a simple implementation, linear interpolation is used to generate a new scan line between two input scan lines. Better results may be achieved by using a Finite Impulse Response (FIR) filter.
Motion Adaptive De-interlacing—in the “per pixel” version of this approach, field merging is used for still areas of the picture and scan line interpolation is used for areas of movement. To accomplish this, motion is detected over the entire picture in real time on a sample-by-sample basis. Several fields of video are thus processed at once. As two fields are combined, full vertical resolution is maintained in still areas of the picture. A choice is made as to whether to use a sample from the previous field (which may be in the “wrong” location due to motion) or to interpolate a new sample from adjacent scan lines in the current field. Crossfading or “soft switching” is used to reduce the visibility of sudden switching between methods. Some solutions may perform “per field” motion adaptive de-interlacing to avoid the need to make decisions for every sample, as is done in “per pixel” motion adaptive de-interlacing.
Motion Compensated De-interlacing—motion compensated (or “motion vector steered”) de-interlacing, which is several orders of magnitude more complex than motion adaptive de-interlacing, requires calculating motion vectors between fields for each sample and interpolating along each sample's motion trajectory. Motion vectors passing through any missing samples must also be found.
Diagonal Edge Interpolation—searches for diagonal lines and attempts to interpolate along those lines in order to remove apparent “staircase” effects.
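Two of the simpler de-interlacing approaches above, field merging (weave) and scan line interpolation (bob), may be sketched as follows. As before, fields and frames are lists of scan lines and the function names are illustrative.

```python
def deinterlace_merge(top_field, bottom_field):
    """Field merging (weave): interleave the active scan lines of two
    consecutive fields into one frame. Moving objects may exhibit
    'combing' because the two fields were sampled at different times."""
    frame = []
    for t, b in zip(top_field, bottom_field):
        frame.append(t)
        frame.append(b)
    return frame

def deinterlace_interpolate(field):
    """Scan line interpolation (bob): generate an interpolated line after
    each field line, doubling the line count without doubling vertical
    resolution. The last interpolated line duplicates the final line."""
    frame = []
    for y in range(len(field)):
        frame.append(field[y])
        nxt = field[min(y + 1, len(field) - 1)]
        frame.append([(a + b) / 2 for a, b in zip(field[y], nxt)])
    return frame
```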
Scaling
Pixel Dropping and Duplication—in this approach, which may be referred to as “nearest neighbor” scaling, only the input sample closest to the output sample is used. In pixel dropping, X out of every Y samples are thrown away both horizontally and vertically. A modified version of the Bresenham line-drawing algorithm is typically used to determine which samples not to discard. In pixel duplication, which can accomplish simple upscaling, X out of every Y samples are duplicated both horizontally and vertically.
Linear Interpolation—in this approach, when an output sample falls between two input samples (horizontally or vertically), the output sample is computed by linearly interpolating between the two input samples.
Anti-Aliased Resampling—this approach may be used to ensure that frequency content scales proportionally with the image size, both horizontally and vertically. In essence, the input data is upsampled and low-pass filtered to remove image frequencies created by the interpolation process. A filter removes frequencies that will alias in the resampling process.
Content-Adaptive Scaling—scaling is based in part on the data being scaled (in contrast to a universally applied scaling algorithm).
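The first two scaling approaches above may be sketched in one dimension as follows, operating on a single scan line of sample values. The function names are illustrative; a real scaler would apply the same operation both horizontally and vertically.

```python
def scale_nearest(line, out_len):
    """Nearest-neighbour scaling: each output sample copies the closest
    input sample (pixel dropping for downscaling, duplication for up)."""
    n = len(line)
    return [line[min(n - 1, int(i * n / out_len))] for i in range(out_len)]

def scale_linear(line, out_len):
    """Linear interpolation: an output sample falling between two input
    samples is blended in proportion to its distance from each."""
    n = len(line)
    out = []
    for i in range(out_len):
        pos = i * (n - 1) / (out_len - 1) if out_len > 1 else 0
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(line[lo] * (1 - frac) + line[hi] * frac)
    return out
```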
Color Correction
Fleshtone correction, white-point correction, and color-saturation enhancement are all examples of different types of color correction algorithms that might be applied, in the alternative or in combination.
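A simple color-saturation enhancement of the kind referred to above may be sketched per pixel as follows; the luma weights are the conventional Rec. 601 coefficients, and the function name is illustrative.

```python
def enhance_saturation(r, g, b, gain):
    """Colour-saturation enhancement for one RGB pixel: scale each
    channel's distance from the pixel's luma (Rec. 601 weights) by
    `gain`, clamping the results to the 8-bit range. A gain of 1.0
    leaves the pixel unchanged; a gain above 1.0 increases saturation."""
    luma = 0.299 * r + 0.587 * g + 0.114 * b

    def clamp(v):
        return max(0, min(255, round(v)))

    return tuple(clamp(luma + gain * (c - luma)) for c in (r, g, b))
```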
Detail Enhancement
Sharpness Enhancement—sharpness is increased through, e.g., examination of the brightness of adjacent pixels and enhancing contrast between them.
Edge Enhancement—detecting angles or edges within an image and amplifying them as a whole.
Super-Resolution—in order to improve the resolution of an image feature, information about the feature is collected over a sequence of frames in which the feature appears. That information may then be used to increase the sharpness of the feature in each of the frames.
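The sharpness enhancement described above, increasing contrast between adjacent pixels, may be sketched in one dimension as a simple unsharp mask; the function name and the three-tap blur are illustrative only.

```python
def sharpen_line(line, amount=0.5):
    """Sharpness enhancement via a 1-D unsharp mask: each interior
    sample is pushed away from the average of its local neighbourhood,
    increasing contrast between adjacent pixels. Edge samples are left
    unchanged."""
    out = list(line)
    for i in range(1, len(line) - 1):
        blur = (line[i - 1] + line[i] + line[i + 1]) / 3
        out[i] = line[i] + amount * (line[i] - blur)
    return out
```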
It should be appreciated that the foregoing video processing algorithms are merely illustrative, and may differ in alternative embodiments.
The controller 30 is a component of video source device 12 that controls the operation of the video processing functional blocks 22, 24. In particular, the controller is capable of independently activating and deactivating each functional block 22, 24. In cases in which the video processing algorithm applied by a functional block is dynamically configurable, the controller 30 is also capable of dynamically configuring the video processing algorithm to be applied by the functional block. The controller 30 is further capable of receiving commands originating from video sink device 14 for activating or deactivating one or more video processing blocks 22, 24. The controller 30 may be implemented in hardware, software, firmware, or a combination of these, with the implementation of the controller 30 possibly varying based upon the nature of the video source device 12. For example, if the video source device 12 is a PC, then the controller 30 may be a graphics processing unit (GPU) within a graphics subsystem card that executes a video player software application or graphics driver. If the video source device 12 is a piece of consumer electronics equipment, such as a DVD-Video player, the controller 30 (as well as buffer 20 and functional blocks 22, 24) may form part of a hardware video processor component.
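The controller's ability to independently activate and deactivate each functional block, including in response to commands originating from the other device, may be sketched as follows. The class, method, and command names are hypothetical and do not reflect any particular implementation of controller 30 or 50.

```python
class VideoProcessorController:
    """Minimal sketch of a controller that independently activates and
    deactivates named video processing functional blocks."""

    def __init__(self, block_names):
        # map of video processing function name -> active flag;
        # all blocks start deactivated
        self.blocks = {name: False for name in block_names}

    def activate(self, name):
        self.blocks[name] = True

    def deactivate(self, name):
        self.blocks[name] = False

    def handle_command(self, cmd, name):
        # commands received from the other device over the interconnect
        if cmd == "ACTIVATE":
            self.activate(name)
        elif cmd == "DEACTIVATE":
            self.deactivate(name)
```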
Operation of the video source device 12 as described herein may be governed by executable instructions loaded from a machine-readable medium 32, such as an optical disk, magnetic disk or read-only memory chip for example, into memory of the video source device 12 (not expressly illustrated) for execution by a processor, which may be controller 30.
Video sink device 14 is an electronic device that receives a video signal comprising video data (e.g. frames or fields) over interconnection 16 and selectively applies further video processing to the video data. In the present embodiment, the video sink device 14 is a display device, such as a CRT, LCD, LCoS, DLP or plasma monitor or television for example, that is also capable of displaying video images based on the received, selectively processed video data. It should be appreciated that some video sink devices, such as intermediate video processors (e.g. DVDO® iScan™ VP50), may not incorporate a display, and that the capacity to display video images is not a required feature of video sink devices. The video sink device 14 of the present embodiment is controlled by a remote control unit 52, which may emit RF signals or infra-red (or near infra-red) beams of light to a receiver (not shown) on the device 14. Other embodiments may be controlled by way of buttons on a front panel of the device.
As illustrated in
Buffer 40 stores video data upon which one or more of the functional blocks 42 and 44 selectively performs video processing functions. The video data may comprise frames or fields for example. The video data in buffer 40 is initially received over interconnection 16 and, in the case where the interconnection 16 conforms to a video interface standard, may be decoded by a video interface receiver (not illustrated), prior to its storage in the buffer. Buffer 40 may form part of a larger volatile memory of the video sink device 14 (such as memory 54, described below), and/or may have the capacity to store multiple contiguous frames or fields at once, in order to facilitate parallel operation of video processing functional blocks 42 and 44. Once video processing functions (if any) have been performed upon the video data by blocks 42 or 44, the video data is displayed on display 41 as video images. A frame buffer (not illustrated) may be interposed between the buffer 40 and the display 41, for storing the video data prior to its display.
The video processing functional blocks 42, 44 are conceptually similar to the video processing functional blocks 22, 24 of video source device 12, although they are not necessarily implemented in the same way. Generally, each block 42, 44 is a hardware, software, or firmware (or a combination of these) block that performs a video processing function. Although only two blocks 42, 44 are illustrated in
The controller 50 is a component of video sink device 14 that controls the operation of the video processing functional blocks 42, 44. In particular, the controller is capable of independently activating and deactivating each functional block 42, 44 and, in cases in which the video processing algorithm applied by a functional block is dynamically configurable, of configuring the video processing algorithm to be applied when the block is active. The controller 50 may be implemented in hardware, software, firmware, or combinations of these. In the case where video sink device 14 is a piece of consumer electronics equipment, such as a television, the controller 50 (as well as buffer 40 and functional blocks 42, 44) may form part of a hardware video processor component of the device 14. The controller 50 is also capable of generating and sending commands to video source device 12 for activating or deactivating one or more video processing blocks 22, 24. Operation of the controller 50 is influenced by user input received from remote control device 52 responsive to presentation of a graphical user interface on display 41, as will be described.
Memory 54 is conventional memory used to store a representation of a graphical user interface (GUI) 56, possibly in addition to other data. The GUI 56 is for obtaining user input for use in determining which video processing functions should be performed by the video source device 12 and which video processing functions should be performed by the video sink device 14 in order to achieve the video processing outcome that is preferred by the user. In the present embodiment, the GUI 56 is a wizard that guides the user through a series of screens (or dialog boxes) as described below. The memory 54 may be a read-only memory, with the GUI 56 being loaded therein at the factory, or it may be another form of memory (e.g. flash memory or volatile memory). The GUI 56 may be loaded into memory 54 from a machine-readable medium 58, such as an optical disk, magnetic disk or read-only memory chip for example. In some embodiments, the medium 58 may also store executable instructions (software) governing the operation of video sink device 14 as described herein, which may be executed by controller 50 or a separate processor.
The video interconnection 16 is an interconnection for carrying a video signal from the video source device 12 to the video sink device 14 and for carrying other information in the same and opposite directions. In the present embodiment, the video signal is carried on a first channel 60 while the other information is carried on a second channel 62 (referred to as the “command channel”). Other embodiments may lack a second channel 62, and the other information may be carried on the same channel 60 as the video signal. The “other information” includes an indication of the video processing functions provided from video source device 12 to video sink device 14 and one or more commands carried in the opposite direction for causing the device 12 to activate or deactivate one or more video processing functions. Physically, the interconnection 16 may be an electrical or optical cable, or it may simply be air between the devices 12 and 14 over which video data is wirelessly transmitted. The interconnection 16 may be governed by a proprietary signalling protocol. Alternatively, the interconnection 16 may conform to a known video interconnect standard, such as the DVI, HDMI™, DisplayPort®, DFP Interface, OpenLDI, or GVIF standard for example. When the interconnection 16 conforms to the HDMI™ standard, the command channel 62 may be the Consumer Electronics Control (CEC) channel, which is a single-wire, bidirectional, serial bus. In that case, the above-noted commands for causing the device 12 to activate or deactivate one or more video processing functions may be an extension to a set of existing, conventional commands sent over the channel, such as commands from one of the source and sink devices for causing the other of the source and sink devices to power up. If the interconnection 16 conforms to the DVI or HDMI™ standard, the command channel 62 may be the Display Data Channel (DDC). 
The DDC is governed by a standard promulgated by the Video Electronics Standards Association (VESA) and governs communication between a sink device and a graphics subsystem. The DDC standard (e.g. the Enhanced Display Data Channel (E-DDC™) Standard, Version 1.1, Mar. 24, 2004) provides a standardized approach whereby a video sink device can inform a video source device about its characteristics, such as maximum resolution and color depth, so as to permit the video source device to cause valid display configuration options to be presented to a user for example. When the interconnection 16 conforms to the DisplayPort® standard, the command channel 62 may be the Auxiliary Channel.
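Once the sink has received the indication of the source's video processing functions over such a channel, it can intersect that list with its own capabilities to find the functions common to both devices. A minimal sketch, assuming the indication has already been parsed into a list of function names:

```python
# Sketch: the sink device compares its own capabilities with the
# indication 70 received from the source device to find the video
# processing functions common to both devices.
def common_functions(source_caps, sink_caps):
    """Return the functions both devices can perform, in source order."""
    sink_set = set(sink_caps)
    return [f for f in source_caps if f in sink_set]
```

The resulting list would then drive the per-function comparison loop described below, one pass per common function.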
Responsive to entry of the command, the video sink device 14 may display a first screen 410 of GUI 56, as shown in
Referring to
Referring to
For each video processing function that is common to the video source device 12 and the video sink device 14 (as detailed at S304 of
More specifically, the controller 50 of the video sink device 14 (
That command is received by device 12 (S206,
The resulting video image, which is based on video data processed only by the active functional block of device 12, is then displayed on display 41 (S310), as shown in
Upon selection of control 426, the controller 50 (
The new video image, which is based on video data processed only by the sole active functional block of device 14, is then displayed on display 41 (S316), as shown in
Upon user selection of GUI control 436 (
If it is determined that more common video processing functions exist (S304), then operation S306 to S320 (
Ultimately, user input indicating a preference for the relevant functional block at either device 12 or device 14 will be received for each common video processing function. At this stage, controller 50 effects the preferences of the user by sending one or more commands to device 12 to cause it to activate and/or deactivate its video processing blocks (S322) and by activating and/or deactivating the same video processing functional blocks of the video sink device 14 in a complementary fashion (i.e. if the block at device 12 is activated, the corresponding block at device 14 is deactivated, and vice-versa), in accordance with the preferences of the user (S324). At the device 12, the command(s) are received (S214) and effected (S216). Operation 200 and 300 is thus concluded.
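The complementary activation described above can be sketched as follows. The function and device names are illustrative; the point is only that the preferred device's block is activated while the corresponding block at the other device is deactivated, and vice-versa:

```python
def effect_preferences(preferences):
    """
    preferences maps each common video processing function to the device
    the user preferred for it ("source" or "sink"). Returns a
    complementary activation plan: each function is activated at the
    preferred device and deactivated at the other.
    """
    plan = {"source": {}, "sink": {}}
    for function, preferred in preferences.items():
        other = "sink" if preferred == "source" else "source"
        plan[preferred][function] = "activate"
        plan[other][function] = "deactivate"
    return plan
```

The "source" half of such a plan corresponds to the command(s) sent to device 12 (S322), and the "sink" half to the local activation/deactivation at device 14 (S324).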
Advantageously, all that is required of the user in order for the devices 12 and 14 to be configured according to the preferences of the user is to answer the questions presented on the screens of GUI 56. The user is not required to manually configure either of devices 12 or 14 to achieve the desired result, nor even to know what video processing functions have been configured in order to achieve the desired result.
It is noted that the earlier-described operation at S202, S302 need not be responsive to the entry of a user command for adjusting video processing. Operation S202, S302 could instead be performed during initialization of the video source device 12, e.g. upon detection of the sink device 14 (and possibly only during that stage, e.g. if indication 70 is unlikely to change during the period of interconnection of the devices 12 and 14).
It should be appreciated that the activation/deactivation of a video processing function at device 12 or 14 may bear upon whether another video processing function may or may not be activated, or indeed must simultaneously be activated, at that device. For example, if a form of de-interlacing is activated that necessarily involves scaling, then it may be necessary to activate the scaling function simultaneously with the de-interlacing function. In another example, edge enhancement performed by the source might affect the contrast correction also performed by the source. In such cases, it may be necessary to modify the operation S206 to S212 and S306 to S320 of
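The co-activation constraint can be modeled as a small dependency closure: activating a function pulls in every function it necessarily involves. The dependency table below is illustrative (it encodes only the de-interlacing-requires-scaling example from the text):

```python
# Sketch: some functions must be simultaneously activated at a device,
# e.g. a form of de-interlacing that necessarily involves scaling.
# The dependency table is illustrative only.
REQUIRES = {"deinterlace": ["scale"]}

def activation_set(requested):
    """Close the requested set of functions under co-activation rules."""
    result = set(requested)
    stack = list(requested)
    while stack:
        for dep in REQUIRES.get(stack.pop(), []):
            if dep not in result:
                result.add(dep)
                stack.append(dep)
    return result
```

A modified operation S206 to S212 / S306 to S320 would apply such a closure before issuing activate commands, so that dependent blocks are switched together.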
Generally, it should also be appreciated that activation/deactivation of video processing functional blocks within system 10 may be influenced by factors other than operation 200 and 300. For example, a user of video source device 12 or video sink device 14 may be able to override any activation/deactivation of video processing resulting from operation 200 and 300. Thus, while operation 200 and 300 can assist in automatically selecting video processing algorithms to be performed within the system 10, it is not necessarily wholly determinative of the video processing functions that shall ultimately be performed there.
As will be appreciated by those skilled in the art, modifications to the above-described embodiment can be made without departing from the essence of the invention. For example, in the above embodiment, the video source device sends an indication of the video processing functions of which it is capable (S202,
The user interface illustrated in
It is not necessary for two separate user interface controls (e.g. GUI controls 442, 444 of
In some embodiments, the video sink device 14 may lack a display (e.g. it may be an intermediate video processor). In such cases, the video sink device 14 may output a video signal to a display device upon which video images are displayed to permit the user to observe the effects of video processing as performed by devices 12 and 14 in turn. In such cases, it may be necessary to deactivate video processing functions at the display device, so that the effects of the currently active functional block of device 12 or 14 upon the video image are not corrupted by local video processing at the display device that might detract from the user's capacity to assess the video image. To the extent that the video processing functions at the display device can be automatically deactivated, e.g. by sending commands over a command channel of an interconnection between video sink device 14 and the display device (in a similar fashion to that described above), this would shield the user from having to configure the display device manually. Moreover, if the user input indicative of a preference for image A or image B is thereafter received from a remote control unit or front panel of the display device, it may be necessary to communicate this user input from the display device to video sink device 14 over the command channel.
In another alternative, the role of the devices 12 and 14 in terms of the operation described in
In such an alternative embodiment, or in the originally described embodiment, the video sink device 14 may be a display device that conforms to a monitor instruction standard, such as the Monitor Control Command Set (MCCS) standard defined by the Video Electronics Standards Association (VESA). As is known in the art, the MCCS standard defines a set of instructions that permits the operation of a monitor to be controlled remotely over a command channel from a video source device. The types of operations that can typically be controlled using MCCS include setting luminance, contrast, picture size, position, and color balance, or other settings that may conventionally be set using an on-screen display control mechanism. In some embodiments, this set of instructions may be extended under the MCCS standard to include the commands referenced in
It will further be appreciated that, in some embodiments, either or both of the video source device and video sink device may comprise a circuit. The circuit may for example be a standalone integrated circuit, or may form part of a larger integrated circuit, or may be subsumed within an electronic device. The circuit may be fabricated using fabrication equipment, such as the type of equipment found in a semiconductor fabrication plant or foundry for example. The equipment may generate circuits based on a set of instructions comprising hardware description language that describes the circuit. The fabrication equipment processes the instructions and, based on that processing, creates the circuit. This approach may be used to fabricate a circuit representative of a video source device or a circuit representative of a video sink device (or both).
This approach is schematically illustrated in
Other modifications will be apparent to those skilled in the art and, therefore, the invention is defined in the claims.
Assignee: ATI Technologies ULC (assignment on the face of the patent, executed Dec. 19, 2008).