A video format detector receives a video signal in one of a plurality of possible video formats and produces a detected format. A cumulative distribution function generator generates a cumulative distribution function curve from a component of the video signal and a feature detector generates one or more feature vectors from the cumulative distribution function curve. A video classifier accepts the one or more feature vectors from the feature detector and generates a prediction of the video format in which the video signal was produced based, at least in part, on the received feature vectors.
1. A video format detector, comprising:
a video input structured to receive a video signal in one of a plurality of possible video formats;
a cumulative distribution function generator structured to generate a cumulative distribution function curve from a component of the video signal;
a feature detector structured to generate one or more feature vectors from the cumulative distribution function curve;
a video classifier structured to accept the one or more feature vectors from the feature detector and to generate a prediction of the video format in which the video signal was produced based, at least in part, on the received feature vectors;
a recursive filter configured to filter the prediction of the video format from the video classifier and output a filtered prediction of the video format; and
a threshold detector configured to receive the filtered prediction of the video format and output a detected video format based on whether the filtered prediction of the video format violates one or more thresholds.
2. The video format detector according to
3. The video format detector according to
4. The video format detector according to
5. The video format detector according to
6. The video format detector according to
7. A method for detecting a video format of an input video, comprising:
receiving a video signal in one of a plurality of possible video formats;
generating a cumulative distribution function curve from a component of the video signal;
generating one or more feature vectors from the cumulative distribution function curve;
generating, by a video classifier, a prediction of the video format in which the video signal was produced based, at least in part, on the one or more feature vectors;
filtering the prediction of the video format from the video classifier and outputting a filtered prediction of the video format;
determining whether the filtered prediction of the video format violates a threshold; and
outputting a detected video format based on whether the filtered prediction of the video format violates the threshold.
8. The method according to
9. The method according to
10. The method according to
11. The method according to
12. The method according to
13. One or more computer-readable storage media comprising instructions, which, when executed by one or more processors of a video format detector, cause the video format detector to:
generate a cumulative distribution function curve from a component of a received video signal in one of a plurality of possible video formats;
produce one or more feature vectors from the cumulative distribution function curve;
determine, by a video classifier, a prediction of the video format in which the video signal was produced based, at least in part, on the one or more feature vectors;
filter the prediction of the video format from the video classifier and output a filtered prediction of the video format;
determine whether the filtered prediction of the video format violates a threshold; and
output a value indicating the video format based on whether the filtered prediction of the video format violates the threshold.
14. The one or more computer-readable storage media according to
This disclosure claims benefit of U.S. Provisional Application No. 62/830,213, titled “HIGH DYNAMIC RANGE VIDEO FORMAT DETECTION USING MACHINE LEARNING,” filed on Apr. 5, 2019, which is incorporated herein by reference in its entirety.
This disclosure is directed to systems and methods related to measuring video signals, and more particularly, to automatically detecting a format of a video signal.
Televisions have rapidly improved in display size and resolution over the original 1920×1080 high definition (HD) format with the advent of 4K and 8K consumer displays, which can support content from streaming data services at 4K resolution. It can be difficult, however, to perceive and fully appreciate these high-resolution improvements on typical living room screen sizes at typical viewing distances, rendering further improvements in image resolution moot.
Instead of improving image resolution, which has diminishing returns, recent advancement in video technology has focused on exploiting the wider color gamut (WCG) and, in particular, the much wider contrast and peak brightness of the High Dynamic Range (HDR) capabilities of modern displays. Unlike image resolution improvements, HDR creates a very noticeable improvement in viewer experience, which can easily be appreciated at typical living room viewing distances and lighting.
Although HDR formats differ from formats of the traditional Standard Dynamic Range (SDR), users of waveform monitors cannot depend solely on metadata to indicate an HDR versus SDR encoding format, since that ancillary metadata may be missing, incorrectly appended, or otherwise missed or misinterpreted. In some instances, viewing the video itself may suggest a particular encoding. Such detection by viewing may be inaccurate, however, since typical video varies with program content, often intentionally failing to utilize the full contrast range available in either an SDR or HDR encoding curve for reasons of artistic intent. Thus, distinguishing HDR content from SDR content by merely viewing the video luminance signal is not only time consuming, but also prone to mistakes in practice.
Embodiments of the disclosure address these and other deficiencies of the prior art.
Aspects, features, and advantages of embodiments of the present disclosure will become apparent from the following description of embodiments in reference to the appended drawings.
As mentioned above, advancement in video technology has focused on exploiting the WCG and the wider contrast and peak brightness of HDR in modern displays. For new HDR formats, the encoding of the red, green, and blue signals (R′, G′, and B′, respectively) and the luminance signal (Y′), which codes the higher dynamic range into 8-bit (consumer) and 10-bit (professional) data samples, differs from the traditional SDR "gamma" power-law function, or gamma curve encoding, which has been used since the early days of black-and-white television. One popular HDR encoding defines a perceptual quantization (PQ) curve that optimizes bit utilization to minimize visible steps (seen as contouring lines) in displayed luminance while extending the peak display brightness from SDR's 200 nits to 10,000 nits. Another type of HDR encoding, called Hybrid Log Gamma (HLG), provides peak display brightness of over 1000 nits.
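For reference, the PQ curve is standardized as SMPTE ST 2084. The following is a minimal sketch of the PQ encoding function, assuming input luminance in absolute nits and using the published ST 2084 constants; it is offered for illustration and is not part of the disclosed detector.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants, as published in the standard.
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_encode(nits: np.ndarray) -> np.ndarray:
    """Map absolute luminance (0 to 10,000 nits) to a normalized PQ code value."""
    y = np.clip(nits, 0.0, 10000.0) / 10000.0
    return ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2

# 100 nits (a common SDR reference white) lands near PQ code value 0.51,
# leaving roughly half of the code range for highlights above SDR white.
print(pq_encode(np.array([100.0, 1000.0, 10000.0])))
```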
Disclosed herein are systems, devices, and methods that use machine learning to classify video content encoding format, such as, but not limited to, PQ HDR, HLG HDR, SDR, or any other encoding format. Embodiments of the disclosure can use any of the many available machine learning approaches, such as, but not limited to, Bayesian classifiers, decision trees, support-vector machines (SVMs), and convolutional neural networks (CNNs), to learn and classify an encoding format. Embodiments of the disclosure may also generate an efficient and effective set of features to drive a machine learning process, both for training and for real-time classification, as well as a filtering process to mitigate irritating false alarm and false positive indications.
A multiplexer 104 selects one Y′ signal from among the input signals. Optional preprocessing 106 may be performed in some embodiments and may include, for example, resizing the input signal to a smaller image format or performing letter-box detection and cropping to the active picture, among many other preprocessing operations.
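As a rough illustration, the optional preprocessing might look like the following sketch, which assumes the frame arrives as a 2-D NumPy array of normalized Y′ values; the black-level threshold and decimation factor are illustrative choices, not values specified by the disclosure.

```python
import numpy as np

def crop_letterbox(frame: np.ndarray, black_level: float = 0.02) -> np.ndarray:
    """Drop top and bottom rows whose mean level is near black (letterbox bars)."""
    active = np.where(frame.mean(axis=1) > black_level)[0]
    if active.size == 0:
        return frame                     # no active picture found; pass through
    return frame[active[0]:active[-1] + 1, :]

def downsize(frame: np.ndarray, factor: int = 4) -> np.ndarray:
    """Reduce the image by simple decimation to cut downstream histogram cost."""
    return frame[::factor, ::factor]

small = downsize(crop_letterbox(np.random.rand(1080, 1920)))
```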
A probability density function (PDF) 108 can be computed to create a one-dimensional normalized histogram for the frame, generating a PDF signal. An optional recursive histogram filter 110 can be applied to the one-dimensional normalized histogram in some embodiments. The recursive histogram filter 110 can mitigate false alarm and false positive indications by allowing the PDF signal to be recursively updated and accumulated (averaged) over many frames, without dependence on the contrast curve of just a few frames. A time constant, such as, for example, several seconds, sets the averaging time; it should be fast enough to adapt to scene changes and detect, for example, an SDR commercial in an otherwise HDR video stream.
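One plausible form for such a recursive filter is a first-order (exponential) average of per-frame histograms, sketched below; the frame rate, time constant, and bin count are illustrative assumptions.

```python
import numpy as np

FRAME_RATE = 60.0                          # assumed frame rate
TIME_CONSTANT_S = 3.0                      # "several seconds" of averaging
ALPHA = 1.0 / (TIME_CONSTANT_S * FRAME_RATE)

def frame_pdf(y: np.ndarray, bins: int = 256) -> np.ndarray:
    """One-dimensional normalized histogram (PDF) of Y' values in [0, 1]."""
    hist, _ = np.histogram(y, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def update_pdf(avg_pdf: np.ndarray, new_pdf: np.ndarray) -> np.ndarray:
    """Recursive update: accumulates (averages) over many frames rather than
    depending on the contrast curve of just a few frames."""
    return (1.0 - ALPHA) * avg_pdf + ALPHA * new_pdf
```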
A cumulative distribution function (CDF) 112 of the luminance signal value histogram is computed at selected video frames as the video content is input in real time, generating a dynamically changing CDF curve. The dynamically changing CDF curve can be temporally filtered and sampled at a pre-determined number of image-area-related sample points, for example, five, to determine a Y′ signal level for each. This is possible since the CDF is a monotonically increasing function relating particular pixel image area thresholds to encoded Y′ pixel values and, over time, follows the HDR or SDR encoding curve.
Feature detection 114 generates a feature vector based on the pre-determined number of image-area-related sample points. The feature detection 114 can determine the amount of luminance per image area from the input code values determined by the CDF, and the resulting feature vector indicates the Y′ levels at the pre-determined sample points of the CDF.
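A minimal sketch of the CDF and feature-detection steps follows; the five area fractions are an illustrative choice consistent with the "for example, five" sample points mentioned above.

```python
import numpy as np

AREA_POINTS = [0.1, 0.3, 0.5, 0.7, 0.9]    # illustrative image-area fractions

def feature_vector(pdf: np.ndarray) -> np.ndarray:
    """Return the normalized Y' level at which the CDF crosses each area
    fraction; the CDF is monotonically increasing, so searchsorted applies."""
    cdf = np.cumsum(pdf)
    levels = np.searchsorted(cdf, AREA_POINTS)
    return levels / (len(pdf) - 1)
```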
The feature vector is sent to a classifier 116, which can be any machine learning device, such as those mentioned above. In the example shown, the classifier 116 is a support vector machine (SVM).
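As an illustration of that option, the classifier could be trained roughly as follows; the random training data stands in for feature vectors extracted from labeled SDR and HDR clips, and the use of scikit-learn is an assumption for the sketch.

```python
import numpy as np
from sklearn.svm import SVC

X_train = np.random.rand(200, 5)           # stand-in 5-point CDF feature vectors
y_train = np.random.randint(0, 2, 200)     # stand-in labels: 0 = SDR, 1 = HDR

clf = SVC(probability=True).fit(X_train, y_train)

features = np.random.rand(1, 5)            # feature vector for one frame
p_hdr = clf.predict_proba(features)[0, 1]  # confidence that the frame is HDR
```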
After classification by the classifier 116, the output may optionally be further filtered by a recursive SDR detection filter 120 and threshold hysteresis 122 to mitigate false detections. If the output of the recursive SDR detection filter 120 violates a threshold, the class from the classifier 116 is deemed incorrect. For example, if the classifier 116 determines the class to be SDR, but the filtered output of the SDR detection filter 120 is greater than a threshold, such as 0.9, then the original classification is determined to be wrong and the output value is set to HDR. The output value stays at HDR until the filtered detection output falls below another threshold, such as 0.5, at which point the output switches to a binary zero, indicating SDR content.
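The post-detection filtering and hysteresis described above can be sketched as follows; the 0.9 and 0.5 thresholds come from the example in the text, while the smoothing constant is an illustrative assumption.

```python
HDR_ON, HDR_OFF, ALPHA = 0.9, 0.5, 0.05    # thresholds per the text; alpha assumed

class HysteresisDetector:
    def __init__(self):
        self.filtered = 0.0                # recursively filtered class output
        self.is_hdr = False                # binary output: False = SDR, True = HDR

    def update(self, p_hdr: float) -> bool:
        # First-order recursive filter over the classifier's HDR confidence.
        self.filtered += ALPHA * (p_hdr - self.filtered)
        if not self.is_hdr and self.filtered > HDR_ON:
            self.is_hdr = True             # switch to HDR only above the high threshold
        elif self.is_hdr and self.filtered < HDR_OFF:
            self.is_hdr = False            # switch back to SDR only below the low threshold
        return self.is_hdr
```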
The filtered output of the SDR threshold with hysteresis 122 can be received at a multiplexer 124, whose inputs carry the output from the preprocessor 106, which may be either SDR content or HDR content. The output of the classifier 116 is also sent to a look-up table 126 or other process to convert HDR content to SDR based on the class determined by the classifier 116.
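A look-up-table conversion in the spirit of block 126 might be built as below, mapping 10-bit PQ code values to SDR gamma code values; the 2.4 gamma, 100-nit SDR white point, and hard clip are simplifying assumptions rather than the tone mapping of the disclosure.

```python
import numpy as np

def pq_decode(code: np.ndarray) -> np.ndarray:
    """Inverse of the ST 2084 PQ curve: code value back to absolute nits."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    p = np.maximum(code, 0.0) ** (1.0 / m2)
    return 10000.0 * (np.maximum(p - c1, 0.0) / (c2 - c3 * p)) ** (1.0 / m1)

codes = np.arange(1024) / 1023.0                 # all 10-bit PQ input codes
nits = np.minimum(pq_decode(codes), 100.0)       # hard clip at assumed SDR white
pq_to_sdr_lut = (nits / 100.0) ** (1.0 / 2.4)    # simple gamma re-encode
```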
The output of the SDR threshold with hysteresis 122, shown as SDRflt, controls which input of the multiplexer 124 is selected.
Trace 204 illustrates the filtered HDR class determined by the system 100 through the recursive SDR detection filter 120. As can be seen in trace 204, the beginning of the video clip includes a short segment of several seconds that mimics HDR formatting sufficiently to mislead the classifier 116. However, the output of the post-detection filter 120 did not exceed the HDR threshold of the hysteresis 122, so the binary detection result did not switch to HDR.
Trace 206 illustrates the confidence factor output by the classifier 116 to indicate the most likely class and can be referred to as the confidence factor trace 206. The confidence factor trace 206 shows that the original output of the classifier 116 initially indicated that the video clip was in an HDR format, since the confidence exceeded 0.9. Since the threshold was not violated, however, as shown by the filtered class trace 204, the binary output of the system 100 did not switch to HDR, even though the initial output of the classifier 116 indicated HDR content.
The R′G′B′C′Y′ inputs are forwarded to block 406 to create a one-dimensional, normalized histogram for each component of each frame. That is, block 406 will output five one-dimensional, normalized histograms for each frame. Similar to system 100, an optional recursive histogram filter 408 may be applied to each histogram output.
In block 410, a CDF is generated as an accumulated sum for each histogram output of block 406 (which may be optionally filtered at block 408). The feature detection block 412 performs the same operations as the feature detection block 114 of the system 100.
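A sketch of the per-component histogram and CDF steps in system 400 follows; the random frame planes are stand-ins, and the component list mirrors the five inputs named above.

```python
import numpy as np

COMPONENTS = ["R'", "G'", "B'", "C'", "Y'"]      # five inputs per the text

# Stand-in component planes; real planes would come from the preprocessed frame.
frame = {name: np.random.rand(270, 480) for name in COMPONENTS}

cdfs = {}
for name, plane in frame.items():
    hist, _ = np.histogram(plane, bins=256, range=(0.0, 1.0))
    pdf = hist / hist.sum()              # normalized histogram (block 406)
    cdfs[name] = np.cumsum(pdf)          # CDF as an accumulated sum (block 410)
```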
In the system 400, a WCG classifier 418 may also be provided to classify whether the frame contains WCG. The WCG classifier 418 may contain two classes in some embodiments: one class for BT.709 color and the other for BT.2020 color, or WCG. The WCG classifier 418 also receives a trained model 420, similar to the classifier 414.
The output of the classifier 414 and the output of the classifier 418 can both be received at the look-up table 422 or other process to convert HDR content to SDR and the color space from BT.2020 to BT.709 based on the determined classes.
Each application module or sub-module may correspond to a particular function, method, process, or operation that is implemented by the module or sub-module. Such function, method, process, or operation may include those used to implement one or more aspects of the system and methods described herein.
The application modules and/or sub-modules may include any suitable computer-executable code or set of instructions (e.g., as would be executed by a suitably programmed processor, microprocessor, or CPU), such as computer-executable code corresponding to a programming language. For example, programming language source code may be compiled into computer-executable code. Alternatively, or in addition, the programming language may be an interpreted programming language such as a scripting language. The computer-executable code or set of instructions may be stored in (or on) any suitable non-transitory computer-readable medium. In general, with regard to the embodiments described herein, a non-transitory computer-readable medium may include almost any structure, technology, or method apart from a transitory waveform or similar medium.
As described, the system, apparatus, methods, processes, functions, and/or operations for implementing an embodiment of the disclosure may be wholly or partially implemented in the form of a set of instructions executed by one or more programmed computer processors such as a central processing unit (CPU) or microprocessor. Such processors may be incorporated in an apparatus, server, client or other computing or data processing device operated by, or in communication with, other components of the system.
Any of the software components, processes or functions described in this application may be implemented as software code to be executed by a processor using any suitable computer language such as, for example, Java, JavaScript, C++ or Perl using, for example, conventional or object-oriented techniques. The software code may be stored as a series of instructions, or commands in (or on) a non-transitory computer-readable medium, such as a random-access memory (RAM), a read only memory (ROM), a magnetic medium such as a hard-drive or a floppy disk, or an optical medium such as a CD-ROM. In this context, a non-transitory computer-readable medium is almost any medium suitable for the storage of data or an instruction set, aside from a transitory waveform. Any such computer readable medium may reside on or within a single computational apparatus, and may be present on or within different computational apparatuses within a system or network.
According to one example implementation, the term processing element or processor, as used herein, may be a central processing unit (CPU), or conceptualized as a CPU (such as a virtual machine). In this example implementation, the CPU or a device in which the CPU is incorporated may be coupled, connected, and/or in communication with one or more peripheral devices, such as display.
The non-transitory computer-readable storage medium referred to herein may include a number of physical drive units, such as a redundant array of independent disks (RAID), a floppy disk drive, a flash memory, a USB flash drive, an external hard disk drive, thumb drive, pen drive, key drive, a High-Density Digital Versatile Disc (HD-DVD) optical disc drive, an internal hard disk drive, a Blu-Ray optical disc drive, or a Holographic Digital Data Storage (HDDS) optical disc drive, synchronous dynamic random access memory (SDRAM), or similar devices or other forms of memories based on similar technologies. As mentioned, with regard to the embodiments described herein, a non-transitory computer-readable medium may include almost any structure, technology or method apart from a transitory waveform or similar medium.
Certain implementations of the disclosed technology are described herein with reference to block diagrams of systems, and/or to flowcharts or flow diagrams of functions, operations, processes, or methods. It will be understood that one or more blocks of the block diagrams, or one or more stages or steps of the flowcharts or flow diagrams, and combinations of blocks in the block diagrams and stages or steps of the flowcharts or flow diagrams, respectively, can be implemented by computer-executable program instructions. Note that in some embodiments, one or more of the blocks, or stages or steps may not necessarily need to be performed in the order presented, or may not necessarily need to be performed at all.
These computer-executable program instructions may be loaded onto a general-purpose computer, a special purpose computer, a processor, or other programmable data processing apparatus to produce a specific example of a machine, such that the instructions that are executed by the computer, processor, or other programmable data processing apparatus create means for implementing one or more of the functions, operations, processes, or methods described herein. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement one or more of the functions, operations, processes, or methods described herein.
Aspects of the disclosure may operate on particularly created hardware, firmware, digital signal processors, or on a specially programmed computer including a processor operating according to programmed instructions. The terms controller or processor as used herein are intended to include microprocessors, microcomputers, Application Specific Integrated Circuits (ASICs), and dedicated hardware controllers. One or more aspects of the disclosure may be embodied in computer-usable data and computer-executable instructions, such as in one or more program modules, executed by one or more computers, or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The computer executable instructions may be stored on a computer readable storage medium such as a hard disk, optical disk, removable storage media, solid state memory, Random Access Memory (RAM), etc. As will be appreciated by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various aspects. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, FPGA, and the like. Particular data structures may be used to more effectively implement one or more aspects of the disclosure, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein.
The disclosed aspects may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed aspects may also be implemented as instructions carried by or stored on one or more computer-readable storage media, which may be read and executed by one or more processors. Such instructions may be referred to as a computer program product. Computer-readable media, as discussed herein, means any media that can be accessed by a computing device. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media.
Computer storage media means any medium that can be used to store computer-readable information. By way of example, and not limitation, computer storage media may include RAM, ROM, Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Video Disc (DVD), or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, and any other volatile or nonvolatile, removable or non-removable media implemented in any technology. Computer storage media excludes signals per se and transitory forms of signal transmission.
Communication media means any media that can be used for the communication of computer-readable information. By way of example, and not limitation, communication media may include coaxial cables, fiber-optic cables, air, or any other media suitable for the communication of electrical, optical, Radio Frequency (RF), infrared, acoustic or other types of signals.
Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.
Example 1 is a video format detector, comprising a video input structured to receive a video signal in one of a plurality of possible video formats; a cumulative distribution function generator structured to generate a cumulative distribution function curve from a component of the video signal; a feature detector structured to generate one or more feature vectors from the cumulative distribution function curve; and a video classifier structured to accept the one or more feature vectors from the feature detector and to generate a prediction of the video format in which the video signal was produced based, at least in part, on the received feature vectors.
Example 2 is the video format detector of example 1 in which the video classifier is structured to generate the prediction of the video format based, at least in part, on a model generated for the video classifier.
Example 3 is the video format detector of either one of examples 1 or 2 in which the classifier is a support vector machine.
Example 4 is the video format detector of any one of examples 1 through 3 further comprising a recursive filter configured to filter the prediction of the video format from the video classifier and output a filtered prediction of the video format; and a threshold detector configured to receive the filtered prediction of the video format and output a detected video format based on whether the filtered prediction of the video format violates one or more thresholds.
Example 5 is the video format detector of example 4 further comprising a multiplexer configured to receive the detected video format from the threshold detector and select an input of the multiplexer based on the detected video format.
Example 6 is the video format detector of example 5, further comprising a converter configured to receive the prediction of the video format from the video classifier and convert the component of the signal to a converted component of the signal, wherein the multiplexer includes a first input to receive the component of the signal and a second input to receive the converted component of the signal.
Example 7 is the video format detector of example 6, wherein the component of the signal is a luminance component of the video signal, a color space component of the video signal, or both a luminance component of the video signal and a color space component of the video signal.
Example 8 is the video format detector of any one of examples 1, 2, and 4 through 7, wherein the video classifier is a Bayesian device, a decision tree, or a convolutional neural network.
Example 9 is the video format detector of any one of examples 1 through 8 further comprising a color space classifier structured to accept the one or more feature vectors from the feature detector and to generate a prediction of the color space in which the video signal was produced based, at least in part, on the received feature vectors.
Example 10 is a method for detecting a video format of an input video, comprising receiving a video signal in one of a plurality of possible video formats; generating a cumulative distribution function curve from a component of the video signal; generating one or more feature vectors from the cumulative distribution function curve; and generating, by a video classifier, a prediction of the video format in which the video signal was produced based, at least in part, on the one or more feature vectors.
Example 11 is the method of example 10, wherein generating the prediction of the video format includes generating the prediction based, at least in part, on a model generated for the video classifier.
Example 12 is the method of either one of examples 10 or 11, further comprising filtering the prediction of the video format from the video classifier and outputting a filtered prediction of the video format; determining whether the filtered prediction of the video format violates a threshold; and outputting a detected video format based on whether the filtered prediction of the video format violates the threshold.
Example 13 is the method of example 12 further comprising receiving the detected video format from the threshold detector at a multiplexer and selecting an input of the multiplexer based on the detected video format.
Example 14 is the method of example 13, further comprising converting the component of the signal to a converted component of the signal, wherein the multiplexer includes a first input to receive the component of the signal and a second input to receive the converted component of the signal.
Example 15 is the method of example 14, wherein the component of the signal is a luminance component of the video signal, a color space component of the video signal, or both a luminance component of the video signal and a color space component of the video signal.
Example 16 is the method of any one of examples 10 through 15, wherein the video classifier is a support vector machine, a Bayesian device, a decision tree, or a convolutional neural network.
Example 17 is the method of any one of examples 10 through 16, further comprising generating a prediction of the color space in which the video signal was produced based, at least in part, on the one or more feature vectors.
Example 18 is one or more computer-readable storage media comprising instructions, which, when executed by one or more processors of a video format detector, cause the video format detector to generate a cumulative distribution function curve from a component of a received video signal in one of a plurality of possible video formats; produce one or more feature vectors from the cumulative distribution function curve; and determine, by a video classifier, a prediction of the video format in which the video signal was produced based, at least in part, on the one or more feature vectors.
Example 19 is the one or more computer-readable storage media of example 18, further comprising instructions to cause the video format detector to filter the prediction of the video format from the video classifier and output a filtered prediction of the video format; determine whether the filtered prediction of the video format violates a threshold; and output a value indicating the video format based on whether the filtered prediction of the video format violates the threshold.
Example 20 is the one or more computer-readable storage media of example 18, further comprising instructions to cause the video format detector to generate a prediction of the color space in which the video signal was produced based, at least in part, on the one or more feature vectors.
The previously described versions of the disclosed subject matter have many advantages that were either described or would be apparent to a person of ordinary skill. Even so, these advantages or features are not required in all versions of the disclosed apparatus, systems, or methods.
Additionally, this written description makes reference to particular features. It is to be understood that the disclosure in this specification includes all possible combinations of those particular features. Where a particular feature is disclosed in the context of a particular aspect or example, that feature can also be used, to the extent possible, in the context of other aspects and examples.
Also, when reference is made in this application to a method having two or more defined steps or operations, the defined steps or operations can be carried out in any order or simultaneously, unless the context excludes those possibilities.
Although specific examples of the invention have been illustrated and described for purposes of illustration, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. Accordingly, the invention should not be limited except as by the appended claims.