A method and system for determining the likelihood or similarity ratio that a selected media file of interest is related to one or more predetermined media files is provided that utilizes, combines, analyzes, and evaluates different categories of data and metadata extracted from each media file to generate a media file identifier for each media file that can then be used as a basis to compare any two media files to each other.
1. A method for determining a similarity ratio that a selected media file of interest is a derivative work of or is derived from one or more predetermined media files comprising:
providing a selected media file of interest and one or more predetermined media files to be compared with the selected media file of interest,
obtaining media type classifications for the selected media file of interest and for each of the one or more predetermined media files,
extracting data or metadata from the selected media file of interest and the one or more predetermined files, wherein at least two different categories of data or metadata are extracted from the selected media file of interest and the one or more predetermined media files by engaging at least two extraction engines,
harvesting the data or metadata extracted from the selected media file of interest and the one or more predetermined media files,
storing the data or metadata extracted from the selected media file of interest and the one or more predetermined media files,
selecting, based on the media type classifications, two or more ranked categories of data or metadata to be used for generating media file identifiers for the selected media file of interest and for each of the one or more predetermined media files,
generating, based on the selected two or more ranked categories of data or metadata, the media file identifiers for the selected media file of interest and for each of the one or more predetermined media files,
storing the media file identifier generated for the selected media file of interest and the media file identifiers generated for each of the one or more predetermined media files,
comparing the media file identifier generated for the selected media file of interest to the media file identifier generated for each of the one or more predetermined media files, and
determining a similarity ratio that the selected media file of interest is a derivative work of or is derived from each of the one or more predetermined media files based on comparing the media file identifier generated for the selected media file of interest to the media file identifiers generated for each of the one or more predetermined media files, said similarity ratio indicative of whether the selected media file of interest is derived from one or more predetermined media files without regard to the media type classifications of the selected media file of interest and the one or more predetermined media files.
17. Non-transitory computer-readable storage media encoded with a computer program including instructions executable by a processor for determining a similarity ratio that a selected media file of interest is a derivative work or is derived from one or more predetermined media files, the media comprising:
a database, recorded on the media, comprising different types of data and metadata extracted from each of the selected media file of interest and the one or more predetermined media files;
an evaluation software module comprising instructions for:
obtaining media type classifications for the selected media file of interest and for each of the one or more predetermined media files,
extracting data or metadata from the selected media file of interest and the one or more predetermined files, wherein at least two different categories of data or metadata are extracted from the selected media file of interest and the one or more predetermined media files by engaging at least two extraction engines,
harvesting the data or metadata extracted from the selected media file of interest and the one or more predetermined media files,
storing the data or metadata extracted from the selected media file of interest and the one or more predetermined media files,
selecting, based on the media type classifications, two or more ranked categories of data or metadata to be used for generating media file identifiers for the selected media file of interest and for each of the one or more predetermined media files,
generating, based on the selected two or more ranked categories of data or metadata, the media file identifiers for the selected media file of interest and for each of the one or more predetermined media files,
storing the media file identifiers generated for the selected media file of interest and for each of the one or more predetermined media files,
comparing the media file identifier generated for the selected media file of interest to each of the media file identifiers generated for each of the one or more predetermined media files, and
determining a similarity ratio that the selected media file of interest is a derivative work of or is derived from each of the one or more predetermined media files based on comparing the media file identifier generated for the selected media file of interest to each of the media file identifiers generated for each of the one or more predetermined media files, said similarity ratio indicative of whether the selected media file of interest is derived from one or more predetermined media files without regard to the media type classifications of the selected media file of interest and the one or more predetermined media files.
7. A system for determining a similarity ratio that a selected media file of interest is a derivative work of or is derived from one or more predetermined media files comprising:
a data receiving and input device for receiving a selected media file of interest and one or more predetermined media files to be compared with the selected media file of interest;
a data receiving and output device for providing similarity ratios quantifying how similar a selected media file of interest is to each of the one or more predetermined media files; and
a data and metadata harvesting, extraction, analysis, evaluation, and storage system, the data and metadata harvesting, extraction, analysis, evaluation, and storage system further comprising:
at least two data extraction engines configured to extract different categories of data or metadata from the selected media file of interest and from each of the one or more predetermined media files;
a data and metadata harvesting engine configured to manage the at least two data extraction engines and to collect and harvest data or metadata extracted from the at least two data extraction engines,
wherein the harvested data or metadata is stored in a data store as data or metadata subsets within each category of data or metadata extracted from the selected media file of interest and each of the one or more predetermined media files;
an analysis engine configured to:
obtain media type classifications for the selected media file of interest and for each of the one or more predetermined media files;
select, based on the media type classifications, at least one category of data or metadata to be used for generating media file identifiers for the selected media file of interest and for each of the one or more predetermined media files;
generate, based on the selected at least one category of data or metadata, the media file identifiers for the selected media file of interest and for each of the one or more predetermined media files;
compare the generated media file identifier for the selected media file of interest with each of the generated media file identifiers for each of the one or more predetermined media files; and
determine similarity ratios that a selected media file of interest is a derivative work of or is derived from one or more predetermined media files based on the comparison of the media file identifier for the selected media file of interest with each of the generated media file identifiers for each of the one or more predetermined media files, said similarity ratio indicative of whether the selected media file of interest is derived from one or more predetermined media files without regard to the media type classifications of the selected media file of interest and the one or more predetermined media files; and
a user interface configured to provide a user access to a set of features and functionality of the system and to enable the user to select and rank one or more harvested subsets of data or metadata.
2. The method of
3. The method of
4. The method of
6. The method of
8. The system of
the data and metadata extracted from the selected media file of interest and each of the one or more predetermined media files;
the media file identifiers generated for the selected media file of interest and each of the one or more predetermined media files; and
the similarity ratios determined for each comparison made between the media file identifier of the selected media file of interest and the media file identifiers of each of the one or more predetermined media files.
9. The system of
10. The system of
11. The system of
12. The system of
13. The system of
14. The system of
16. The system of
18. The media of
storing the data or metadata extracted from the selected media file of interest and the one or more predetermined media files in the database;
storing the media file identifiers generated for the selected media file of interest and for each of the one or more predetermined media files in the database; and
storing the similarity ratio in the database.
19. The media of
20. The media of
The present application is a non-provisional of, and claims the benefit of U.S. Provisional Patent Application 62/281,711, filed on Jan. 21, 2016, the entire contents of which are incorporated herein by reference.
The present application relates to a method and system for determining media file identifiers and for determining the likelihood that a media file is related to, a derivative work of, or derived from one or more other media files that are not necessarily of the same type. More specifically, the present application relates to a method and system of extracting different types or categories of data or metadata from media files, analyzing or evaluating the different types or categories of extracted data and metadata to determine media file identifiers, and comparing the media file identifiers to determine a similarity ratio that captures the similarity between two media files that are not necessarily of the same type.
Digital content describes any file or set of data present on any digital system. Digital content may be any of a variety of digital media, including but not limited to voice (i.e. speech), music, video, audio, two-dimensional still images, two-dimensional moving images, including film and video, and text in various formats, including documents, subtitles, scripts, and the like.
The proliferation of digital media coupled with the ease of duplicating digital media files and of converting analog to digital media files has created growing concerns for copyright-owning individuals and organizations seeking to protect their copyrighted original works from unauthorized use or distribution. These concerns have spurred the development and use of digital rights management (DRM) schemes for controlling access and restricting usage of proprietary and copyrighted works including for example software and multimedia content.
Identifying exact duplicates of a work where the original and the copy are the same type of media file, for example—two video files—is generally a straightforward comparison of apples to apples. The challenge arises when digital content has been copied from one type of media file to a different type of media file. For example, an image in a video file might be copied to a digital photo in a JPEG format, or to a document in a PDF file format. An individual or organization that owns the copyright to the video file might want to know whether such an image has been copied to other media file formats. But due to the vast and increasing variety of different types of digital media, each having their own different types or categories of data and metadata, determining what or how much digital content of an original work has been copied to another digital media file poses a real technical challenge. What is needed is a method and system that can help to provide protection for owners of copyrighted original works from unauthorized use or distribution due to the copying of digital content from one type of media file to a different type of media file.
Accordingly, it would be advantageous to provide a method and system that can compare apples to oranges—namely, digital media that are not the same type and that may have different types or categories of data or metadata but where protected digital content has been copied from one media file to the other. But to compare apples to oranges in this case necessitates a method and system that can extract meaningful information from different types or categories of data or metadata taken from different types of media files in a form normalized to enable a meaningful comparison. The extraction of the different types or categories of data or metadata and the conversion of such data into a normalized form is a critical piece to addressing this problem. To accomplish this requires an understanding of how to evaluate the various data and metadata.
Simply put, metadata is data about data. For example, metadata may describe details about a file or a data stream. Metadata may include certain information about a set of data, such as the data format, where the data came from, when it was created or last edited, who created the data, and who has accessed the data.
Metadata may be generated by various means, for example, by taking an original data set and extrapolating certain information from it or interpreting it to generate new information. Metadata may be generated as a result of various processes including face detection, speech-to-text creation, object recognition, and other processes that result in generating details about data, which may comprise video or a still image data file.
Various algorithms may be used for extracting and generating metadata from data sources. Some of these algorithms may be found in the public domain and may be available on the Internet on websites from various universities, commercial entities, and from individuals through personal websites, while other algorithms or tools are proprietary. Representative examples of algorithms relating to moving image files, such as film and video, include: a) speech-to-text algorithms; b) optical character recognition (OCR) or text recognition algorithms; c) face detection algorithms; d) object recognition algorithms; e) picture, frame, and audio similarity algorithms.
There are a variety of known standards for metadata as it relates to different data sources. In particular, metadata may be embedded with the data source or file and may provide various types of information, including for example: a) the frame size; b) the length or duration of data content; c) the format of data content; d) the name of the data source; and e) information related to the context of the data source (e.g. permissions).
There are also various systems that extract metadata from data sources and store the metadata in a data store. In a software environment where media files are harvested for embedded data and metadata, and that data and/or metadata is collected and stored for various uses and presentations, individual extraction engines making use of both proprietary and open-sourced or licensed extraction libraries and/or algorithms may be used to extract different types or categories of data and metadata from the same data source or from different types of media files. It would therefore be desirable to have a method and system that can utilize, combine, analyze, and evaluate these different types or categories of data and metadata in order to transform the data into a normalized form such as a characteristic identifier, to provide a basis for a meaningful comparison between different types of media files. A characteristic identifier basically acts to transform an orange into an apple for purposes of comparison. Thus, while the original media files may not be directly compared to provide a meaningful similarity metric, their characteristic identifiers can be compared to each other to provide a similarity ratio that quantifies the similarity between the two different types of media files.
The challenge posed here is how to determine whether a media file is a derivative work of or is derived from another media file, where the two files being compared are not necessarily of the same file format type. What is needed to address this challenge is a method and system that can not only compare different media file types to determine whether a media file of interest contains digital content copied from an original work but that can also provide a tangible measure of similarity to quantify how much the media file of interest is considered to resemble, be a derivative work of, or be derived from an original work. At least some of these advantages and benefits are addressed by the system and method disclosed herein.
Disclosed herein is a method and system for extracting different types, categories, sets, or subsets of data and metadata from a given data set or media file and utilizing the different types, categories, sets, or subsets of data and metadata to determine how closely a given data set or media file is related to or is derived from another data set or media file, wherein the media files are not necessarily the same type. By extracting and utilizing different types, categories, sets, or subsets of data and metadata from a given data set or media file, the method and system as disclosed herein provides a way of comparing different types of media files to determine whether one is, for example, a derivative work of or derived from another. In this way, the method and system disclosed herein can provide a tangible metric in the form of a similarity ratio that quantifies the similarity between disparate types of files.
In a first aspect, a system for determining a similarity ratio that a selected media file of interest is a derivative work of or is derived from one or more predetermined media files comprises a data receiving and input device for receiving a selected media file of interest and one or more predetermined media files to be compared with the selected media file of interest and a data receiving and output device for providing similarity ratios quantifying how similar a selected media file of interest is to each of the one or more predetermined media files. The system also comprises a data and metadata harvesting, extraction, analysis, evaluation, and storage system. The data and metadata harvesting, extraction, analysis, evaluation, and storage system can comprise at least two data extraction engines configured to extract different categories of data or metadata from the selected media file of interest and from each of the one or more predetermined media files and a data and metadata harvesting engine configured to manage the at least two data extraction engines and to collect and harvest data or metadata extracted from the at least two data extraction engines. The harvested data or metadata can be stored in a data store as data or metadata subsets within each category of data or metadata extracted from the selected media file of interest and each of the one or more predetermined media files. The data and metadata harvesting, extraction, analysis, evaluation, and storage system can further comprise an analysis engine configured to combine and evaluate two or more ranked subsets of data or metadata; generate media file identifiers for the selected media file of interest and for each of the one or more predetermined media files; compare the generated media file identifier for the selected media file of interest with each of the generated media file identifiers for each of the one or more predetermined media files; and determine similarity ratios that a selected media file of interest is a derivative work of or is derived from one or more predetermined media files based on the comparison of the media file identifier for the selected media file of interest with each of the generated media file identifiers for each of the one or more predetermined media files. Finally, the system can comprise a user interface configured to provide a user access to a set of features and functionality of the system and to enable the user to select and rank one or more harvested subsets of data or metadata.
In addition, the system as disclosed herein can comprise a data store configured to store: the data and metadata extracted from the selected media file of interest and each of the one or more predetermined media files; the media file identifiers generated for the selected media file of interest and each of the one or more predetermined media files; and the similarity ratios determined for each comparison made between the media file identifier of the selected media file of interest and the media file identifiers of each of the one or more predetermined media files. The data store can be configured to store media file identifiers that have been generated for each of the one or more predetermined media files to form a library of media file identifiers for a set of predetermined media files prior to receiving a selected media file of interest. In some embodiments, the one or more harvested subsets of data or metadata can be ranked according to their importance by the user. Each ranking can be used to assign a weight to the one or more harvested subsets of data or metadata. The weight assigned to the one or more harvested subsets of data or metadata can be used in generating media file identifiers for the selected media file of interest and for each of the one or more predetermined media files.
In some embodiments, at least two data extraction engines can be configured to use the same data extraction process to extract different categories of data or metadata from the selected media file of interest and from each of the one or more predetermined media files. As another example, the selected media file of interest can be selected by an automated selection system. Additionally, the data and metadata harvesting engine can comprise an extraction engine manager configured to engage a first extraction engine and one or more other extraction engines in parallel or in sequence. The similarity ratios can be determined as a percentage. In a preferable embodiment, at least one of the one or more predetermined media files is not the same type of media file as the selected media file of interest.
In another aspect, a method for determining a similarity ratio that a selected media file of interest is a derivative work of or is derived from one or more predetermined media files comprises providing a selected media file of interest and one or more predetermined media files to be compared with the selected media file of interest and extracting data or metadata from the selected media file of interest and the one or more predetermined files. At least two different categories of data or metadata can be extracted from the selected media file of interest and the one or more predetermined media files by engaging at least two extraction engines. The method further comprises harvesting the data or metadata extracted from the selected media file of interest and the one or more predetermined media files, storing the data or metadata extracted from the selected media file of interest and the one or more predetermined media files, selecting two or more ranked categories for the data or metadata extracted from the selected media file of interest and for each of the one or more predetermined media files, generating a media file identifier for the selected media file of interest and for each of the one or more predetermined media files based on the two or more ranked categories for the data or metadata extracted from the selected media file of interest and the one or more predetermined media files, storing the media file identifier generated for the selected media file of interest and the media file identifiers generated for each of the one or more predetermined media files, comparing the media file identifier generated for the selected media file of interest to the media file identifier generated for each of the one or more predetermined media files, and determining a similarity ratio that the selected media file of interest is a derivative work of or is derived from each of the one or more predetermined media files based on comparing the media file identifier generated for the selected media file of interest to the media file identifiers generated for each of the one or more predetermined media files.
In some embodiments, the method disclosed herein can comprise storing media file identifiers that have been generated for each of the one or more predetermined media files to form a library of media file identifiers and retrieving the stored media file identifiers that have been generated for each of the one or more predetermined media files from the library of media file identifiers to compare the media file identifier generated for the selected media file of interest to the media file identifier generated for each of the one or more predetermined media files. In addition, the method can comprise assigning a weight to the one or more harvested subsets of data or metadata, wherein the weight assigned to the one or more harvested subsets of data or metadata is based on the ranking of the categories for the data or metadata. The assigned weight can be used to generate the media file identifiers for the selected media file of interest and for each of the one or more predetermined media files. In other examples, at least two data extraction engines can be configured to use the same data extraction process to extract different categories of data or metadata from the selected media file of interest and from each of the one or more predetermined media files. Moreover, the similarity ratios can be determined as a percentage. In a preferable embodiment, at least one of the one or more predetermined media files is not the same type of media file as the selected media file of interest.
In still another aspect, non-transitory computer-readable storage media encoded with a computer program including instructions executable by a processor for determining a similarity ratio that a selected media file of interest is a derivative work or is derived from one or more predetermined media files comprises: a database, recorded on the media, comprising different types of data and metadata extracted from each of the selected media file of interest and the one or more predetermined media files and an evaluation software module.
The evaluation software module can comprise instructions for: extracting data or metadata from the selected media file of interest and the one or more predetermined files, wherein at least two different categories of data or metadata are extracted from the selected media file of interest and the one or more predetermined media files by engaging at least two extraction engines, harvesting the data or metadata extracted from the selected media file of interest and the one or more predetermined media files, storing the data or metadata extracted from the selected media file of interest and the one or more predetermined media files, selecting two or more ranked categories for the data or metadata extracted from the selected media file of interest and the one or more predetermined media files, generating a media file identifier for the selected media file of interest and for each of the one or more predetermined media files based on the two or more ranked categories for the data or metadata extracted from the selected media file of interest and the one or more predetermined media files, storing the media file identifiers generated for the selected media file of interest and for each of the one or more predetermined media files, comparing the media file identifier generated for the selected media file of interest to each of the media file identifiers generated for each of the one or more predetermined media files, and determining a similarity ratio that the selected media file of interest is a derivative work of or is derived from each of the one or more predetermined media files based on comparing the media file identifier generated for the selected media file of interest to each of the media file identifiers generated for each of the one or more predetermined media files.
In some embodiments, the evaluation software module can comprise instructions for: storing the data or metadata extracted from the selected media file of interest and the one or more predetermined media files in the database; storing the media file identifiers generated for the selected media file of interest and for each of the one or more predetermined media files in the database; and storing the similarity ratio in the database. In some examples, the one or more harvested subsets of data or metadata can be ranked according to their importance. Each ranking can be used to assign a weight to the one or more harvested subsets of data or metadata. Additionally, the weight assigned to the one or more harvested subsets of data or metadata can be used in generating media file identifiers for the selected media file of interest and for each of the one or more predetermined media files. In a preferable embodiment, at least one of the one or more predetermined media files is not the same type of media file as the selected media file of interest.
These and other embodiments are described in further detail in the following description related to the appended drawing figures.
The novel features of the invention are set forth with particularity in the appended claims. A better understanding of the features and advantages of the present invention will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the invention are utilized, and the accompanying drawings of which:
Specific embodiments of the disclosed method and system will now be described with reference to the drawings. Nothing in this detailed description is intended to imply that any particular step, component, or feature is essential to the method and system as disclosed herein.
For clarity of presentation, the disclosed method, system, and underlying metadata extraction are exemplified and described with focus on the analysis of a two-dimensional digital source or “document,” although the disclosed method and system are not limited to such documents and may be applied to other sources of data. A document may be obtained from an image acquisition device such as a video camera or it may be read into memory from a data storage device, for example, in the form of a media file.
In a first aspect, a method for determining the similarity ratio that a selected media file of interest is a derivative work or is derived from one or more predetermined media files comprises: providing a selected media file of interest and one or more predetermined media files to be compared with the selected media file of interest; extracting data or metadata from the selected media file of interest and the one or more predetermined files, wherein at least two different categories of data or metadata are extracted from the selected media file of interest and the one or more predetermined media files by engaging at least two extraction engines; harvesting the data or metadata extracted from the selected media file of interest and the one or more predetermined media files; storing the data or metadata extracted from the selected media file of interest and the one or more predetermined media files; selecting two or more ranked categories for the data or metadata extracted from the selected media file of interest and the one or more predetermined media files; generating a media file identifier for each of the selected media file of interest and the one or more predetermined media files; storing the media file identifier generated from each of the selected media file of interest and the one or more predetermined media files; comparing the media file identifier generated from the selected media file of interest to the media file identifier generated from each of the one or more predetermined media files; and determining a similarity ratio that the selected media file of interest is a derivative work of or is derived from each of the one or more predetermined media files based on comparing the media file identifier generated from the selected media file of interest to the media file identifier generated from each of the one or more predetermined media files.
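As a concrete illustration of this flow, the following Python sketch shows one possible implementation under stated assumptions; the engine functions, category names, and the weighted-overlap comparison are hypothetical placeholders rather than the disclosed algorithms.

```python
import os
from typing import Callable, Dict, List, Set

# Hypothetical extraction engines: each returns one category of data or metadata
# as a set of normalized feature tokens. Real engines might run face detection,
# speech-to-text, OCR, waveform analysis, and so on.
def extract_file_attributes(path: str) -> Set[str]:
    return {f"size:{os.path.getsize(path)}", f"ext:{os.path.splitext(path)[1]}"}

def extract_text_tokens(path: str) -> Set[str]:
    with open(path, "rb") as fh:
        return set(fh.read(4096).decode("utf-8", errors="ignore").lower().split())

EXTRACTION_ENGINES: Dict[str, Callable[[str], Set[str]]] = {
    "file_attributes": extract_file_attributes,
    "text_content": extract_text_tokens,
}

def harvest(path: str) -> Dict[str, Set[str]]:
    """Engage every engine and collect each category of extracted data or metadata."""
    return {category: engine(path) for category, engine in EXTRACTION_ENGINES.items()}

def generate_identifier(harvested: Dict[str, Set[str]],
                        ranked_categories: List[str]) -> Dict[str, Set[str]]:
    """A media file identifier here is simply the normalized features of the
    selected, ranked categories -- a form that two different file types can share."""
    return {c: harvested.get(c, set()) for c in ranked_categories}

def similarity_ratio(id_a: Dict[str, Set[str]], id_b: Dict[str, Set[str]],
                     weights: Dict[str, float]) -> float:
    """Weighted overlap of the shared categories, expressed as a percentage."""
    total = score = 0.0
    for category, weight in weights.items():
        a, b = id_a.get(category, set()), id_b.get(category, set())
        if a or b:
            total += weight
            score += weight * len(a & b) / len(a | b)
    return 100.0 * score / total if total else 0.0
```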
In another aspect, a VideoDNA™ system for determining the likelihood or similarity ratio that a selected media file of interest is related to one or more predetermined media files comprises: a data receiving and input device for receiving a selected media file of interest and one or more predetermined media files to be compared with the selected media file of interest; a data receiving and output device for providing likelihood values or similarity ratios representing the likelihood that the selected media file of interest is related to each of the one or more predetermined media files; and a data and metadata harvesting, extraction, analysis, evaluation, and storage system, the data and metadata harvesting, extraction, analysis, evaluation, and storage system further comprising: at least two data extraction engines configured to extract different categories of data or metadata from the selected media file of interest and each of the one or more predetermined media files; a data and metadata harvesting engine configured to manage the at least two data extraction engines and to collect and harvest data or metadata extracted from the at least two data extraction engines, wherein the harvested data or metadata is stored in a data store as data or metadata subsets within each category of data or metadata extracted from the selected media file of interest and each of the one or more predetermined media files; a user interface configured to provide a user access to a set of features and functionality of the VideoDNA™ system and to enable the user to select and rank one or more harvested subsets of data or metadata; a VideoDNA™ Analysis Engine configured to combine, analyze and/or evaluate two or more ranked subsets of data or metadata in order to generate media file identifiers for the selected media file of interest and each of the one or more predetermined media files and to determine likelihood values or similarity ratios representing similarity between the selected media file of interest and each of the one or more predetermined media files; and a data store configured to store: the data and metadata extracted and/or harvested from the selected media file of interest and each of the one or more predetermined media files; the media file identifiers generated for the selected media file of interest and each of the one or more predetermined media files; and the likelihood values or similarity ratios determined for each comparison made between the selected media file of interest and each of the one or more predetermined media files.
In some embodiments, two or more ranked types, categories, sets, or subsets of data or metadata can be selected at 211 and the different categories, sets, or subsets of data or metadata can be combined and analyzed or evaluated at 212 to generate a media file identifier for the media file at 213, which can then be stored at 214. This process, beginning with the presentation of media files at 200 and ending with the storing of media file identifiers at 214 may be repeated for any number of media files to form a library of media file identifiers at 214 for a set of predetermined media files that may later be used as a basis of comparison with respect to a selected media file of interest.
Once a library of media file identifiers for a set of predetermined media files has been established, a selected media file of interest can be provided at 200 and the selected media file of interest can be processed according to the flowchart in
The selection of a media file of interest may be performed by a user or by an automated selection system. In addition, after the extraction engine manager 202 has programmatically engaged at least a first extraction engine at 204 and a second extraction engine at 206, it may thereafter, engage any number of additional extraction engines at 208 in parallel or in sequence. Alternatively, the extraction engine manager may programmatically engage any two or all extraction engines in parallel or in sequence. Each extraction engine may be configured to extract a different category of data or metadata from the media file. The extracted data may be combined before or after storage, or it may be harvested, packaged, reformatted or restructured before or after being stored, analyzed, or evaluated. Additionally, the likelihood or similarity ratios or values returned at 217 may be computed as a percentage or using some other similarity metric known in the art or yet to be developed to represent how likely the selected media file of interest is derived from, is related to, or is otherwise similar to a predetermined media file stored in the library, where the library is comprised of media file identifiers generated from previously processed media files.
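One way an extraction engine manager might engage multiple engines in parallel or in sequence is sketched below; the engine interface and names are assumptions for illustration only.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict

def engage_engines(path: str,
                   engines: Dict[str, Callable[[str], dict]],
                   parallel: bool = True) -> Dict[str, dict]:
    """Run each extraction engine on the media file, either in sequence or in
    parallel, and return the harvested output keyed by engine name."""
    if not parallel:
        return {name: engine(path) for name, engine in engines.items()}
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(engine, path) for name, engine in engines.items()}
        return {name: future.result() for name, future in futures.items()}
```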
Because the generated media file identifier characterizes, corresponds to, or may in some cases, uniquely represent a particular media file, the media file identifier generated by this invention may serve as a signature, fingerprint, or VideoDNA™ of a media file. Accordingly, while determining the likelihood of a relationship or degree of similarity between media files is one application of the media file identifiers generated by this invention, the media file identifiers may also be used in other applications requiring accurate and reliable identification, characterization, or comparison of media files.
Content Source 302 may be any device or system capable of storing or hosting files or data represented in some other fashion. Notably, the performance of the method and implementation of the system as disclosed herein are independent of data format. In particular, the relevant file formats that are returned as a result of searching for similar content in the data store using the media file identifiers generated by an exemplary system as disclosed herein may be of any data format known in the art or yet to be developed.
LAN/WAN 304 may be any local area network (LAN) or wide area network (WAN). When LAN/WAN 304 is configured as a LAN, the LAN may be configured as a ring network, a bus network, a wireless local network and/or any other network configuration. When LAN/WAN 304 is configured as a WAN, the WAN may be the public-switched telephone network, a proprietary network, the public access WAN commonly known as the Internet, and/or any other WAN configuration.
Regardless of the actual network used in a particular embodiment, data may be exchanged over LAN/WAN 304 using various communication protocols. For example, transmission control protocol/Internet protocol (TCP/IP) may be used if LAN/WAN 304 is the Internet. Proprietary image data communication protocols may be used when LAN/WAN 304 is a proprietary LAN or WAN. Although
The various portions of VideoDNA™ Analysis Engine 308 as well as the underlying metadata harvesting mechanism, Data Harvesting Engine 300, may be implemented in hardware, software, firmware, or combinations thereof. In a preferable embodiment, VideoDNA™ Analysis Engine 308 is implemented using a combination of hardware and software or firmware that is stored in memory and executed by a suitable instruction execution system. If implemented solely in hardware, as in an alternative embodiment, for example, the VideoDNA™ Analysis Engine 308 may be implemented with any or a combination of technologies that are well known in the field, including discrete logic circuits, application-specific integrated circuits (ASICs), programmable gate arrays (PGAs), field programmable gate arrays (FPGAs), or other technologies known in the art or yet to be developed.
In a preferable embodiment, the user can provide a media file to the system by interacting with User Interface 400 via Data Input 404. Data Harvesting Engine 416 can manage the data extraction process performed on the media file received via Data Input 404. Data Extraction Engines 420, 422, and 424 may each be configured to generate a different type or category of data or metadata based on the same data source or media file. An extraction engine may be configured to extract a particular type of data or metadata. For example, an extraction engine may be configured to extract the size of a file. Since all files have a size attribute, an extraction engine configured to extract the size of a file can be applied to any type of file. Other extraction engines can be specific to certain file types. For example, an extraction engine configured to extract facial recognition attributes in order to, for example, recognize a celebrity, can be applied to a video file or a photo, but cannot be applied to a text file.
In addition, a media file can have a MIME type that enables identification of the type of file it is: for example, whether it is an audio file, a document, a text file, an application, or a video file. The type of file can be used to determine which extraction engines apply. Thus, as described above, an extraction engine configured to extract file size can apply to all of the aforementioned examples, but one configured for facial recognition might apply to a video file, but not to, for example, a text file or an audio file.
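A minimal sketch of such MIME-type gating follows; the applicability table and engine names are illustrative assumptions, not an exhaustive or prescribed mapping.

```python
import mimetypes

# Illustrative applicability table: which hypothetical engines run on which
# top-level MIME types. An engine keyed to "*" applies to every file.
ENGINE_APPLICABILITY = {
    "file_size": {"*"},
    "facial_recognition": {"video", "image"},
    "speech_to_text": {"video", "audio"},
    "text_extraction": {"text", "application"},  # e.g. plain text, PDF
}

def applicable_engines(path: str):
    """Return the names of engines that apply to the file's guessed MIME type."""
    mime, _ = mimetypes.guess_type(path)
    top_level = (mime or "application/octet-stream").split("/")[0]
    return [name for name, kinds in ENGINE_APPLICABILITY.items()
            if "*" in kinds or top_level in kinds]

# e.g. applicable_engines("trailer.mp4")
#   -> ["file_size", "facial_recognition", "speech_to_text"]
```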
The data and metadata extracted by Data Extraction Engines 420, 422, and 424 can be stored in Data Store 408 as Harvested Metadata 410 and may reside in any form alongside any other data relevant to the system or environment, shown in
To determine the likelihood or similarity ratio that a selected media file of interest is related to one or more of the predetermined media files previously processed by an embodiment of the system as described above, a user may provide the selected media file of interest to the system by interacting with User Interface 400 via Data Input 404. Data Harvesting Engine 416 manages the data extraction process performed on the selected media file of interest received via Data Input 404. Using the same data extraction process performed on the predetermined media files, Data Extraction Engines 420, 422, and 424 extract data or metadata from the selected media file of interest. The data and metadata extracted by Data Extraction Engines 420, 422, and 424 are stored in Data Store 408 as Harvested Metadata 410 and may reside in any form alongside any other data relevant to the system or environment, shown in
To effectively return a meaningful result, VideoDNA™ Analysis Engine 418 may consider one or more types, categories, sets, or subsets of Harvested Metadata 410. In particular, VideoDNA™ Analysis Engine 418 may compare a particular subset of metadata to some or all of the other different subsets of metadata accessible to the Data/Metadata Harvesting and Analysis System 406. Moreover, VideoDNA™ Analysis Engine 418 may determine to what extent a selected media file of interest is related to one or more predetermined media files by comparing various aspects of some or all of the harvested data or metadata, including but not limited to data extracted from video picture identification, object identification, audio waveform analysis, face recognition, face detection, voice recognition, spoken language detection, or other technologies known in the art or later developed. VideoDNA™ Analysis Engine 418 may also compare certain aspects of other extracted data or metadata such as file creation information or embedded metadata sets such as the Exchangeable Image File Format (EXIF).
VideoDNA™ Analysis Engine 418 may determine the specific types, categories, sets, or subsets of data or metadata to be compared, analyzed, evaluated, or utilized for generating media file identifiers using various criteria about the data type of the data source file. For example, data type may include a classification of the media upon which the acquired digital data originated. In particular, digital documents may have been recorded via a video camera or otherwise acquired from various media devices corresponding to different media classifications (e.g. “video tape”, “digital video disc,” and “file system”). Information reflective of the media classification may be used in selecting the type or category of data or metadata to be used in generating the media file identifiers. In other cases, it may be possible to fine tune or to otherwise adjust the algorithm used by VideoDNA™ Analysis Engine 418 in order to achieve more accurate results.
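For illustration, category selection driven by a media type classification might look like the following sketch; the classifications and category names are assumed examples, not the disclosed selection criteria.

```python
# Illustrative mapping from a media type classification to the ranked categories
# of data or metadata used when generating that file's identifier.
CATEGORY_SELECTION = {
    "video": ["speech_text", "detected_faces", "detected_objects", "duration"],
    "audio": ["speech_text", "waveform_signature", "duration"],
    "image": ["detected_faces", "detected_objects", "embedded_exif"],
    "document": ["text_content", "creation_info"],
}

def select_ranked_categories(media_classification: str) -> list:
    """Pick the ranked categories for a classification; fall back to
    format-independent categories when the classification is unknown."""
    return CATEGORY_SELECTION.get(media_classification, ["file_attributes"])
```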
The data input device, Data Input 404, may be a mounted file system via a connection technology such as the Common Internet File System (CIFS), Network File System (NFS), or any technology known in the art or yet to be developed. The data storage device, Data Store 408, may be any structured database, unstructured database, relational database, operational database, data warehouse, distributed database, end-user database, or any database or data storage device known in the art or yet to be developed. In addition, the various other elements of Data/Metadata Harvesting and Analysis System 406, including VideoDNA™ Analysis Engine 418, may be stored and operative on a single computing device or, alternatively, may be distributed among several memory devices under the coordination of one or more computing devices.
Moreover, various information, including but not limited to VideoDNA™ Data 412 that is generated by VideoDNA™ Analysis Engine 418, may be used to form a knowledge base that may reside in Data Store 408 or as a separate functional component. Regardless of the actual implementation, the knowledge base may contain information that VideoDNA™ Analysis Engine 418 may use in an unlimited fashion to improve its own accuracy, or to evaluate new data sources or files to be analyzed or evaluated.
Video standards may include embedded metadata such as, for example, high dynamic range (HDR) metadata standards, Exchangeable Image File Format (EXIF), Dublin Core, Encoded Archival Description (EAD), IEEE LOM (Learning Object Metadata), Machine-Readable Cataloguing (MARC), or other standards known in the art. VideoDNA™ Analysis Engine 418 may use an abstract set of standards that are independent of any particular format. By using an abstract set of data-interchange standards, VideoDNA™ Analysis Engine 418 may implement any algorithm configured to determine an identifier or characterization of a file based on its extracted data or metadata, including, for example, algorithms configured to determine a fingerprint, signature, VideoDNA™, or other media file identifier of a media file.
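The sketch below illustrates one way standard-specific fields might be folded into an abstract, format-independent key set before identifier generation; the alias table is a hypothetical example, not a mapping prescribed by the disclosure.

```python
# Sketch: fold fields from different embedded-metadata standards into one
# abstract, format-independent key set before identifier generation.
# The alias table is illustrative; real mappings would be far more complete.
FIELD_ALIASES = {
    "exif:DateTimeOriginal": "created",   # EXIF
    "dcterms:created": "created",         # Dublin Core
    "exif:ImageWidth": "width",
    "hdr:MaxCLL": "peak_luminance",       # HDR metadata
}

def normalize_metadata(raw: dict) -> dict:
    """Rename known standard-specific fields to their abstract equivalents."""
    return {FIELD_ALIASES.get(key, key): value for key, value in raw.items()}
```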
In an alternative embodiment, for example, in the case where the source data takes the form of an audio file, the data-interchange standards described above may be replaced in part or in their entirety by a set of appropriate data-interchange standards suited for characterizing digital audio data as opposed to digital video data. Additionally, other data-interchange standards may be selected to accommodate other types of data, for example, photos, film, graphics, text documents, formatted data-feeds, formatted database exports, or other types of data known in the art or yet to be developed.
While the functional block diagram in
Note that the method as described by the flowchart of
The similarity between any two files may be captured by a likelihood or similarity ratio or value that may be returned either in percentage form or in the form of some other similarity metric known in the art or yet to be developed. The results of the comparison performed in 606 may be presented at 608 programmatically through a data-interchange format (e.g. JSON), visually through a user interface, a combination of the two, or by any other means known in the art or yet to be developed.
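As an example of such programmatic presentation, results might be serialized to JSON as in the following sketch; every field name and value shown here is an illustrative assumption.

```python
import json

# Illustrative presentation of comparison results as JSON.
results = {
    "selected_file": "title_original.mp4",
    "matches": [
        {"file": "title_fr_dub.mov", "similarity_percent": 87},
        {"file": "poster_still.jpg", "similarity_percent": 77},
    ],
}
print(json.dumps(results, indent=2))
```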
An exemplary system as disclosed herein may be implemented via a combination of a computing device with a local data storage device. This device may be an internal hard-disk drive, a magnetic tape drive, a compact-disk drive, and/or other data storage devices known in the art or yet to be developed that can be made operable with a computing device. In addition, software instructions and/or data associated with the VideoDNA™ Analysis Engine may be distributed across several of the above-mentioned data storage devices.
The VideoDNA™ Analysis Engine may be implemented via a combination of software and data executed and stored under the control of a computing processor. It should be noted, however, that an exemplary system as disclosed herein is not dependent upon the nature of the underlying computer in order to accomplish the designated functions.
Processor 700 may be a hardware device for executing software that may be stored in Memory 702. Processor 700 may be any custom-made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the computing device, a semiconductor-based microprocessor (in the form of a microchip), or a macroprocessor.
Memory 702 may include any one or a combination of volatile memory elements (e.g. random access memory, such as dynamic RAM (DRAM) or static RAM (SRAM)), nonvolatile memory elements (e.g. read-only memory (ROM), hard drives, tape drives, compact discs (CD-ROM)), or other memory elements known in the art or yet to be developed. In particular, Memory 702 may incorporate electronic, magnetic, optical, and/or other types of storage media known in the art or yet to be developed. Note that Memory 702 may have a distributed architecture, wherein various components are situated remotely from one another but can still be accessed by Processor 700.
The software in Memory 702 may include one or more separate programs, each of which comprises an ordered listing of executable instructions for implementing logical functions. As depicted in
A preferable embodiment may comprise one or more source programs, executable programs (object code), scripts, or other collections each comprising a set of instructions to be performed. In addition, VideoDNA™ Analysis Engine 708 may be written in a number of programming languages known in the art or yet to be developed. Operating System 704 may control the execution of other computer programs, such as Data/Metadata Harvesting and Analysis System 706, and may provide scheduling, input-output control, file and data management, memory management, system configuration, and communication control and related services.
Input/Output Device Interface(s) 714 may take the form of human/machine device interfaces for communicating via various devices, such as for example, a keyboard, a mouse or other suitable pointing device, a microphone, or other devices known in the art or yet to be developed. Input/Output Device Interface(s) 714 may also comprise various output devices, for example, a printer, a monitor, an external speaker, or other output devices known in the art or yet to be developed.
LAN/WAN Interface(s) 712 may include a host of devices that may establish one or more communication sessions between, for example, the computing device as depicted in
When the computing device of
In a preferable embodiment, a user may access a page of the interface as depicted in
For example, a user may interact with the Configuration Dashboard 801 shown in
For an example,
Typical video content contains several different sources of metadata that can be used to better distinguish and more uniquely identify the video content when the data or metadata are extracted, combined, and evaluated to provide a media file identifier as described herein. For example, most major motion picture studios distribute tens of thousands of different versions of the same title or work; these can include, for instance, an airline version with Chinese subtitles, a European version dubbed in French, a version with different product placement, and various other alternative versions. These versions of multiple titles can be scanned, and the data and metadata of each version of each title can be harvested, extracted, analyzed, and evaluated by the method and system as disclosed herein. In a preferable embodiment, a set or subset of metadata that might be selected for use or ranked at a higher priority (for example, with a ranking of 1) would include the actors who appear in each title, unique songs that are in each title, scenes or objects detected in each title, the duration of each title, and landmarks or locations detected in each title that relate it to a similar version of the desired title or original work. In this example, irrelevant titles and versions can be filtered out by using or ranking these particular sets or subsets of metadata. This filtering of irrelevant titles and versions allows the user to locate a desired version based on relating the desired version to another similar version saved in the system. Accordingly, the use of the aforementioned sets or subsets of metadata can provide for a more unique or distinguishing identifier, which can subsequently result in more accurate similarity ratios.
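A simple sketch of turning such user-assigned ranks into weights is shown below; the reciprocal-rank rule is an assumption chosen for illustration, not the disclosed weighting scheme.

```python
# Sketch: convert user-assigned ranks (1 = most important) into weights used
# when generating and comparing identifiers. The reciprocal-rank rule is an
# assumption for illustration only.
def weights_from_ranks(ranked_subsets: dict) -> dict:
    """ranked_subsets maps a metadata subset name to its rank (1, 2, 3, ...)."""
    return {name: 1.0 / rank for name, rank in ranked_subsets.items()}

ranks = {"actors": 1, "unique_songs": 1, "detected_scenes": 2, "duration": 3}
print(weights_from_ranks(ranks))
# actors and unique_songs weigh 1.0, detected_scenes 0.5, duration about 0.33
```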
In addition, one might start with several types of media files including raw video files, raw audio files, and a script. The goal might be to search for a particular phrase contained in any or all of these media files. An extraction engine can be configured to extract speech-to-text and would thus convert the speech in the raw video and raw audio files into text. Another extraction engine might be configured to extract text from a pdf file or other type of text file. The text extracted from the different types of files can then be compared using the method and system disclosed herein to search for the particular phrase of interest and a similarity ratio can be determined based on comparing a selected media file of interest to other media files of different types. Similarly, other types of metadata, including for example certain actors or certain objects in a scene, can be extracted in this manner using a corresponding extraction engine (e.g. for facial or object recognition) configured to extract that particular type of metadata.
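The following sketch illustrates this cross-format text comparison; the phrase check and token-overlap metric are simple stand-ins for whatever similarity algorithm an implementation might actually use, and the sample strings are invented.

```python
# Sketch: compare text harvested from disparate file types (speech-to-text from
# video or audio, text extracted from a script PDF) for a phrase of interest,
# and report a simple token-overlap similarity as a percentage.
def contains_phrase(harvested_text: str, phrase: str) -> bool:
    return phrase.lower() in harvested_text.lower()

def text_similarity_percent(text_a: str, text_b: str) -> float:
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    return 100.0 * len(a & b) / len(a | b) if (a | b) else 0.0

video_transcript = "we will always have paris"     # from a speech-to-text engine
script_text = "ilsa we will always have paris"     # extracted from a script PDF
print(contains_phrase(video_transcript, "always have Paris"))            # True
print(round(text_similarity_percent(video_transcript, script_text), 1))  # 83.3
```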
In a preferable embodiment, users may add metadata subsets to the ranked groups of
In some embodiments, the ranking process can be performed automatically by the Analysis Engine, which can be configured to assign a ranking to the extracted data or metadata, or to the subsets of data or metadata according to a set of rules. The rankings can subsequently be adjusted or changed by a user as described above.
Each result 1104 corresponding to a selected media file of interest may be displayed alongside a thumbnail 1120 of the selected media file of interest. Additionally, each result 1104 may be displayed as a direct link to the file that is represented, thus allowing a user to select the link and have direct access to the desired file. The link may be a universal naming convention (UNC) path, a hypertext markup language (HTML) path, a uniform resource locator (URL), or other type of link to the current or previous location of the relevant file known in the art or yet to be developed.
An expandable dropdown menu 1106 may provide access to a graphical display of the results of applying an exemplary method as disclosed herein. The results may be listed by decreasing likelihood or similarity value as a percentage or other similarity metric, or by any other sorting or ranking method known in the art or yet to be developed. In this example, the media file indicated at 1108 with associated thumbnail 1110 has a likelihood value of 87% of being related to the selected media file of interest 1104, while the media file indicated at 1112 with associated thumbnail 1114 has a likelihood value of 77%, and the media file indicated at 1116 with associated thumbnail 1118 has a likelihood value of 72%. Note that in this example, the results are listed in decreasing order by likelihood value or in order of closest similarity to the selected media file of interest, with the most similar file listed first. Similarity may be represented as a percentage, rank, description, score, or by other similarity metrics known in the art or yet to be developed. In addition, the similarity value may be displayed alongside the link to the media file and/or its associated thumbnail representation.
Moreover, there is no limit to the file format types that may be considered similar or related. For instance, the media file at 1112 with associated thumbnail 1114 may be a still photo taken from the same location or featuring the same background, objects, or scene as the selected media file of interest 1104. Similarly, the media file at 1116 and associated thumbnail 1118 may be a text document determined to be similar to the selected media file of interest 1104 by an exemplary system as disclosed herein upon analysis, evaluation, and comparison of the text or other metadata extracted from the two files. The method and system disclosed herein are able to compare disparate file format types and provide a meaningful metric in the form of a similarity ratio that quantifies how similar one media file is to another even if they have different file format types.
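As an illustration of a format-agnostic comparison, the sketch below computes a Jaccard ratio over the sets of metadata values extracted from two files of different types. This is only one simple stand-in metric, not necessarily the comparison performed by the disclosed system, and the example values are hypothetical.

```python
# Sketch: a similarity ratio between two media file identifiers expressed as sets
# of extracted metadata values, independent of the underlying file format types.
def similarity_ratio(identifier_a: set, identifier_b: set) -> float:
    """Jaccard similarity between two media file identifiers."""
    if not identifier_a and not identifier_b:
        return 0.0
    return len(identifier_a & identifier_b) / len(identifier_a | identifier_b)

# A still photo and a text document can share extracted values such as detected
# landmarks, named people, or recurring phrases:
photo_id = {"golden gate bridge", "fog", "jane doe", "sunrise"}
doc_id = {"golden gate bridge", "jane doe", "chapter 3", "sunrise", "dialogue"}
print(f"similarity ratio: {similarity_ratio(photo_id, doc_id):.0%}")  # 50%
```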
The media files along with their determined likelihood values or similarity ratios returned as a result of applying an exemplary method and system as described herein and as represented in this example by 1104, 1108, 1112, and 1116 and associated thumbnails 1120, 1110, 1114, and 1118, respectively, or any other associated visual or non-visual representation, may be displayed in the graphical user interface to link directly to the actual file, asset, data, or content therein represented. For example, clicking, tapping, selecting, engaging, or otherwise interacting with the link 1108 may direct the user to an interface for accessing, playing, downloading, acquiring, restoring, or interacting with the actual media file represented or with the data itself.
An exemplary system as disclosed herein may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device, and execute the instructions. A computer-readable medium may be any device or apparatus that can store, communicate, propagate, or transport a program for use by or in connection with the instruction execution system, apparatus, or device. For example, the computer-readable medium may be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or other propagation medium known in the art or yet to be developed. The computer-readable medium may even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in an appropriate manner if necessary, and then stored in a computer memory.
The process descriptions or blocks in the flowcharts presented in the accompanying figures should be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps in the process.
While preferable embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby.
Inventors: Aaron Edell, Sek Chai, Mat Ryer