A method for comparing a query video and a target video includes partitioning frames of the query video and frames of the target video into blocks and calculating the mean intensity value for each block. A plurality of query time series is produced for the query video, each query time series representing temporal variation in mean intensity value for blocks from the same location in different frames of the query video. A plurality of target time series is produced for the target video, each target time series representing temporal variation in mean intensity value for blocks from the same location in different frames of the target video. The query time series and the target time series are used in determining if alignment exists between the query video and the target video.
1. A method for comparing a query video and a target video, the method including:
partitioning frames of the query video and frames of the target video into blocks;
calculating the mean intensity values for the blocks;
producing a plurality of query time series for the query video, the query time series representing temporal variation in mean intensity value for blocks from the same location in different frames of the query video;
producing a plurality of target time series for the target video, the target time series representing temporal variation in mean intensity value for blocks from the same location in different frames of the target video;
determining if alignment exists between the query video and the target video using the query time series and the target time series;
segmenting the query time series and the target time series into a respective set of discrete linear segments; and
performing local sequence alignment of those linear segments.
27. A non-transitory computer-readable medium storing instructions that, when executed by a computer, cause the computer to:
partition frames of the query video and frames of the target video into blocks;
calculate the mean intensity values for the blocks;
produce a plurality of query time series for the query video, the query time series representing temporal variation in mean intensity value for blocks from the same location in different frames of the query video;
produce a plurality of target time series for the target video, the target time series representing temporal variation in mean intensity value for blocks from the same location in different frames of the target video;
determine if alignment exists between the query video and the target video using the query time series and the target time series;
segment the query time series and the target time series into a respective set of discrete linear segments; and
perform local sequence alignment of those linear segments.
14. A device comprising:
a processor; and
memory storing instructions that, when executed, cause the device to perform a method of:
partitioning frames of the query video and frames of the target video into blocks;
calculating the mean intensity values for the blocks;
producing a plurality of query time series for the query video, the query time series representing temporal variation in mean intensity value for blocks from the same location in different frames of the query video;
producing a plurality of target time series for the target video, the target time series representing temporal variation in mean intensity value for blocks from the same location in different frames of the target video;
determining if alignment exists between the query video and the target video using the query time series and the target time series;
segmenting the query time series and the target time series into a respective set of discrete linear segments; and
performing local sequence alignment of those linear segments.
The present invention relates to a method and apparatus for comparing videos.
In a video hosting website, such as, for example, YouTube, Google Video and Yahoo! Video, video content may be uploaded by users to the site and made available to others via search engines. It is believed that current web video search engines provide a list of search results ranked according to their relevance scores based on a particular text query entered by a user. The user must then review the results to find the video or videos of interest.
Since it is easy for users to upload videos to a host, obtain videos and redistribute them with some modifications, there is potentially a large amount of duplicate, or near duplicate, content in video search results. Such duplicates would be considered by a user to be “essentially the same”, based on their overall content and subjective impression. For example, duplicate video content may include video sequences with identical or approximately identical content but which are in different file formats, have different encoding parameters, and/or are of different lengths. Other differences may be photometric variations, such as color and/or lighting changes, and/or minor editing operations in the spatial and/or temporal domain, such as the addition or alteration of captions, logos and/or borders. These examples are not intended to be an exhaustive list and other types of difference may also occur in duplicate videos.
The proliferation of duplicate videos can make it difficult or inconvenient for a user to find the content he or she actually wants. As an example, based on sample queries from YouTube, Google Video and Yahoo! Video, on average more than 27% of the videos listed in search results were found to be near-duplicates, with popular videos being the most heavily duplicated. Given such a high percentage of duplicates in search results, users must spend significant time sifting through them to find the videos they need and must repeatedly watch copies of videos they have already viewed. The duplicate results degrade the user experience of video search, retrieval and browsing. In addition, duplicated video content increases network overhead, because the duplicated data must be stored and transferred across networks.
One type of video copy detection technique is sequence matching. In sequence matching, an interval of time with multiple frames provides a basis for comparing the similarity of a query video and a target video. Typically, this involves extracting a sequence of features, which may be, for example, ordinal, motion, color and centroid-based features, from both the query video frames and the target video frames. The extracted feature sequences are then compared to determine the similarity distance between the videos. For example, where ordinal signatures are used, each video frame is first partitioned into N1×N2 blocks and the average intensity of each block is calculated. Then, for each frame, the blocks are ranked according to their average intensities. The ranking order is considered to be that frame's ordinal measure. The sequence of ordinal measures for one video is compared with that of the other to assess their similarity.
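To make the ordinal-signature computation concrete, the following is a minimal sketch in Python, assuming 8-bit grayscale frames supplied as NumPy arrays; the function names block_means and ordinal_measure are illustrative and not taken from the patent.

import numpy as np

def block_means(frame, n1, n2):
    # Partition the frame into n1 x n2 blocks and average each block.
    h, w = frame.shape
    bh, bw = h // n1, w // n2
    cropped = frame[:bh * n1, :bw * n2].astype(np.float64)
    return cropped.reshape(n1, bh, n2, bw).mean(axis=(1, 3))

def ordinal_measure(frame, n1=2, n2=2):
    # Rank the blocks by mean intensity; the rank vector is the frame's
    # ordinal measure (4-dimensional for a 2 x 2 partition).
    means = block_means(frame, n1, n2).ravel()
    ranks = np.empty(means.size, dtype=np.int64)
    ranks[np.argsort(means)] = np.arange(means.size)
    return ranks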
Sequence matching enables the start of the overlapping position between duplicate videos to be determined. Sequence matching approaches are suitable for identifying almost identical videos and copies of videos with format modifications, such as coding and frame resolution changes, and those with minor editing in the spatial and temporal domains. In particular, using spatial and temporal ordinal signatures allows detection of video distortions introduced by video digitization/encoding processes (for example, changes in color, brightness and histogram equalization, or encoding parameters), display format conversions (for example, conversion to letter-box or pillar-box) and modification of partial content (for example, cropping and zooming in).
Sequence matching techniques involve relatively simple calculations and provide a compact representation of each frame, particularly when ordinal measures are used. Sequence matching tends to be computationally efficient, and real-time computation may be carried out for processing live video. For example, an ordinal measure with 2×2 partitioning of a frame needs only 4 dimensions to represent each frame, requiring fewer comparison points between two frames.
However, existing sequence matching based techniques are unable to detect duplicate video clips where there are changes in frame sequences, such as insertions, deletions or substitutions of frames. Such changes are introduced, for example, by user editing or by video hosting websites inserting commercials into a video. Since it is not feasible to assume the type of user modification beforehand, the inability to detect frame sequence changes limits the applicability of sequence matching techniques to real-life problems.
Existing solutions for detecting duplicate videos with frame sequence alterations such as insertions, deletions or substitutions of frames, are based on keyframe matching techniques.
Keyframe matching techniques usually segment videos into a series of keyframes to represent the videos. Each keyframe is then partitioned into regions and features are extracted from salient local regions. The features may be, for example, color, texture, corner, or shape features for each region. Keyframe matching is capable of detecting approximate copies that have undergone a substantial degree of editing, such as changes in temporal order or insertion/deletion of frames. However, since there are simply too many local features in a keyframe, it is computationally expensive to identify keyframes, extract local features from each keyframe and conduct metric distance comparisons between them in order to match a video clip against a large number of videos in a database.
Recent research has been aimed at improving the speed of keyframe matching methods by fast indexing the feature vectors or by using statistical information to reduce the dimension of feature vectors. However, for online analysis, both the cost of segmenting videos into keyframes and the cost of extracting local features from a query video are still unavoidable. It becomes a real challenge to provide online real-time video duplication detection in a Web 2.0 video hosting environment. Keyframe matching approaches are more suitable for offline video redundancy detection with fine-grain analysis to aggregate and classify database videos.
According to a first aspect of the invention, a method for comparing a query video and a target video includes partitioning frames of the query video and frames of the target video into blocks and calculating the mean intensity value for each block. A plurality of query time series is produced for the query video, each query time series representing temporal variation in mean intensity value for blocks from the same location in different frames of the query video. A plurality of target time series is produced for the target video, each target time series representing temporal variation in mean intensity value for blocks from the same location in different frames of the target video. The query time series and the target time series are used in determining if alignment exists between the query video and the target video. By using the invention, time series may be produced which can be compared for similarities. Duplicate videos show similarities in their respective time series, which may be used to identify that they are related. A method in accordance with the invention offers efficient video duplication detection by reducing the comparison space between two videos.
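As an illustration of the time-series construction just described, the sketch below (assuming a video supplied as a sequence of grayscale NumPy frames; all names are hypothetical) produces one mean-intensity time series per block location.

import numpy as np

def intensity_time_series(frames, n1, n2):
    # Collect the n1 x n2 block means of every frame, then transpose so
    # that each row traces one block location over time.
    rows = []
    for frame in frames:
        h, w = frame.shape
        bh, bw = h // n1, w // n2
        cropped = frame[:bh * n1, :bw * n2].astype(np.float64)
        rows.append(cropped.reshape(n1, bh, n2, bw).mean(axis=(1, 3)).ravel())
    return np.asarray(rows).T  # shape: (n1 * n2, number_of_frames)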
An embodiment includes segmenting the query time series and the target time series into a respective set of discrete linear segments and performing local sequence alignment of those linear segments. Linear segmentation enables mean video intensities to be compressed into a discrete list of linear inclines/declines which may then be compared for alignment.
In duplicate videos, the overlapping video regions usually do not span the entire length of video sequences and similar regions could be isolated. Therefore, local alignment of linear segments is needed. In bioinformatics, the Smith-Waterman algorithm is well-known for determining similar regions between two nucleotide or protein sequences. The Smith-Waterman algorithm compares string segments of all possible lengths and optimizes the similarity measure. The present inventors have realized that the Smith-Waterman algorithm may be extended to perform local alignment for video intensity segments. Instead of comparing strings, intensity linear segments are compared to find local optimal alignment between videos.
The Smith-Waterman algorithm is a dynamic programming algorithm to provide optimized search. It is fairly demanding of time and memory resources: the computational complexity is O(MN) and the storage is O(min(M, N)), where M and N are the lengths of the sequences under comparison.
To accelerate the search process, instead of aligning all intensity segments, in an embodiment a sequence of major inclines/declines is selected as representing the key signatures of the compared videos. A heuristic method provides fast alignment of those major inclines/declines by excluding alignments that are unlikely to result in a successful alignment before the more time-consuming Smith-Waterman algorithm is performed. This reduces computational cost. The heuristic method expedites the execution of the matching algorithm by filtering out very dissimilar videos and by narrowing down the potential matched regions for similar videos.
An embodiment in accordance with the invention may be advantageous where the types of user modification cannot be known before video duplication detection is applied, since it allows sequence matching techniques to be used nonetheless. In addition, it retains the key advantage of sequence matching approaches, namely efficient detection.
Detecting duplicate videos with frame changes using an embodiment in accordance with the invention may be used by video hosting websites as a user feature; by video content providers to keep track of royalty payments and to detect possible copyright infringements; or by communication “pipes” (e.g. Internet Service Providers (ISPs), peer-to-peer (P2P) system providers and content distribution networks (CDNs)) to reduce network traffic and to manage the storage of video content. It could assist video hosting websites in removing or aggregating near-duplicate videos so as to provide better search, retrieval and browsing services for users. It could also facilitate video content-based searching by finding similar videos, for example high-definition (HD) or 3D versions.
A pre-existing video duplication system may be modified to include an embodiment in accordance with the invention, to enhance the ability to handle user modifications, such as frame insertions, deletions, or substitutions.
According to a second aspect of the invention, a device is programmed or configured to perform a method in accordance with the first aspect.
According to a third aspect of the invention, a data storage medium is provided for storing a machine-executable program for performing a method in accordance with the first aspect.
Some embodiments of the present invention will now be described by way of example only, and with reference to the accompanying drawings, in which:
With reference to the accompanying drawings, frames of a query video are partitioned into blocks and the mean intensity value for each block is calculated, producing a plurality of time series, each representing the temporal variation in mean intensity for blocks at the same location in different frames.
For comparison, a target video 5, shown in the accompanying drawings, is processed in the same way.
The next step, at 7, is to segment each time series into a set of discrete linear segments.
A Bottom-Up algorithm is used to segment the time series. The Bottom-Up approach is a well-known approximation algorithm for time series. It starts from the finest possible approximation and iteratively merges segments until a stopping criterion is met. In this case, linear interpolation is used rather than linear regression to find the approximating line, since linear interpolation can be computed in constant time with low computational complexity. The quality of fit for a potential segment is evaluated using its residual error, which is calculated by taking the vertical differences between the best-fit line and the actual data points, squaring them and summing them together.
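A minimal sketch of such Bottom-Up segmentation, using endpoint linear interpolation and a squared-residual stopping criterion; max_error is an assumed tuning parameter, and the quadratic-time merge loop is kept deliberately simple.

import numpy as np

def interp_error(ts, start, end):
    # Residual error of approximating ts[start:end+1] by the straight line
    # interpolating its endpoints: sum of squared vertical differences.
    line = np.linspace(ts[start], ts[end], end - start + 1)
    diff = ts[start:end + 1] - line
    return float(np.dot(diff, diff))

def bottom_up_segment(ts, max_error):
    # Start from the finest possible approximation (two-point segments)
    # and iteratively merge the cheapest adjacent pair until every
    # further merge would exceed max_error.
    segments = [(i, i + 1) for i in range(len(ts) - 1)]
    while len(segments) > 1:
        costs = [interp_error(ts, segments[i][0], segments[i + 1][1])
                 for i in range(len(segments) - 1)]
        i = int(np.argmin(costs))
        if costs[i] > max_error:
            break
        segments[i] = (segments[i][0], segments[i + 1][1])
        del segments[i + 1]
    return segments  # list of (start_index, end_index) pairs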
In another embodiment, the fast linear segmentation of the time series is achieved by an interpolation method using extraction of major maxima and major minima points as extrema points.
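The description does not detail this extraction; one plausible sketch, with delta as an assumed significance threshold, keeps only extrema whose swing from the previously kept point is large enough.

def major_extrema(ts, delta):
    # Keep a point if it is a local maximum/minimum and differs from the
    # last kept point by at least delta; consecutive kept points are then
    # the endpoints of the interpolated linear segments.
    kept = [0]
    for i in range(1, len(ts) - 1):
        is_extremum = (ts[i] - ts[i - 1]) * (ts[i + 1] - ts[i]) <= 0
        if is_extremum and abs(ts[i] - ts[kept[-1]]) >= delta:
            kept.append(i)
    kept.append(len(ts) - 1)
    return kept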
Following linear segmentation of the time series, major inclines/declines in the time series are selected at 9 as providing significant video signatures. This enables the search space for aligning linear segments to be reduced.
Linear segments with longer distances (lengths) and deeper heights (larger intensity changes) usually represent conspicuous changes in a scene. They are therefore chosen as major inclines. Matching consecutive major inclines indicates video copies following similar behavior, with the same sequence of major scene changes. In contrast, linear segments with deep heights but very short lengths are typically associated with shot boundaries, such as hard cuts or fades. Such segments often contain less information than those representing changes within a scene. A shot boundary can be identified when the linear segments from all partitioned blocks have deep heights within the same short distance occurring at the same time (i.e. the same starting frame IDs). Linear segments representing shot boundaries are ignored when selecting major inclines.
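A sketch of this selection step, with segments represented as (start_frame, end_frame, start_value, end_value) tuples and min_len, min_height and cut_len as assumed thresholds.

from collections import Counter

def select_major_inclines(segments_per_block, min_len, min_height, cut_len):
    # A shot boundary is flagged where every block has a deep, very short
    # segment starting at the same frame ID; such segments are ignored.
    num_blocks = len(segments_per_block)
    short_deep = Counter()
    for segs in segments_per_block:
        for s, e, v0, v1 in segs:
            if e - s <= cut_len and abs(v1 - v0) >= min_height:
                short_deep[s] += 1
    boundaries = {s for s, c in short_deep.items() if c == num_blocks}
    major = []
    for block_id, segs in enumerate(segments_per_block):
        for s, e, v0, v1 in segs:
            if s in boundaries:
                continue  # hard cut or fade, not a within-scene change
            if e - s >= min_len and abs(v1 - v0) >= min_height:
                major.append((block_id, s, e, v0, v1))
    return sorted(major, key=lambda t: t[1])  # sort by starting frame ID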
At 12, the major inclines/declines of the query video and the target video are compared, as illustrated in the accompanying drawings.
Given two frame sequences F1 and F2, the ordinal signature measurement calculates the distance D(p) between them based on the rank order of the block intensities in corresponding frames.
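The exact formula for D(p) is not reproduced here; a commonly used normalized ordinal distance, offered as an assumption, averages the rank differences of corresponding frames.

import numpy as np

def ordinal_distance(sig1, sig2):
    # sig1, sig2: (L, N) arrays of per-frame rank vectors for two aligned
    # frame sequences of equal length L, with N blocks per frame.
    l, n = sig1.shape
    # Maximum possible L1 distance between two rank permutations of size n.
    d_max = (n * n) // 2 if n % 2 == 0 else (n * n - 1) // 2
    per_frame = np.abs(sig1 - sig2).sum(axis=1) / d_max
    return float(per_frame.mean())  # 0 = identical ordering, 1 = reversed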
Since user modification and video processing techniques, such as histogram equalization, frame resizing or cropping, changes in brightness/color/hue, and other added noise, can cause differences in video intensity values, the heights of similar intensity linear segments may differ. The distances of similar linear segments may also differ due to linear segment approximation error or other user-introduced noise. The parameters ratioH and ratioL allow such noise to be tolerated to a certain degree. Although the ordinal signature based measurement D(p) is used here to calculate the distance between two frame sequences, the matching of video frames may instead be based on other global descriptors, or even local descriptors, using sequence matching or keyframe based matching algorithms.
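One plausible reading of the ratioH/ratioL tolerance, expressed as a sketch; the exact comparison rule is not given in this description.

def inclines_match(seg_a, seg_b, ratio_h, ratio_l):
    # Segments are (start_frame, end_frame, start_value, end_value).
    (sa, ea, va0, va1), (sb, eb, vb0, vb1) = seg_a, seg_b
    ha, hb = va1 - va0, vb1 - vb0
    la, lb = ea - sa, eb - sb
    if ha * hb <= 0:
        return False  # an incline cannot match a decline
    height_ok = min(abs(ha), abs(hb)) >= ratio_h * max(abs(ha), abs(hb))
    length_ok = min(la, lb) >= ratio_l * max(la, lb)
    return height_ok and length_ok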
After aligning major inclines, the potential major incline alignments are extended to neighboring non-major inclines to find more aligned linear segments, as shown in the accompanying drawings.
At the next step, to find the key approximate alignments, the inventors have realized that alignment can be carried out using an approach similar to that of FASTA, a fast search algorithm used for finding similar DNA and protein sequences. All diagonal lines of consecutive value “1”s in the match matrix are identified.
Reward scores are assigned to diagonal matched lines and penalty scores to gaps, that is, mismatches, when connecting neighboring diagonal lines. A score is obtained by adding the reward scores of each of the connected diagonals and subtracting the gap penalties. If the score of a linked approximate alignment exceeds a given threshold, a check is made to determine whether the previously ignored initial short aligned segments around the linked segments can be joined to form an approximate alignment with gaps.
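The sketch below illustrates the FASTA-like step on a binary match matrix: diagonal_runs follows the description above, while chain_score is only a crude stand-in for the reward/penalty linking scheme, with reward and gap_penalty as assumed parameters.

import numpy as np

def diagonal_runs(match, min_len=2):
    # Find runs of consecutive 1s along every diagonal of the binary
    # match matrix; very short runs are ignored at this stage.
    m, n = match.shape
    runs = []
    for d in range(-(m - 1), n):
        diag = np.diagonal(match, offset=d)
        i = 0
        while i < len(diag):
            if diag[i]:
                j = i
                while j < len(diag) and diag[j]:
                    j += 1
                if j - i >= min_len:
                    row = i if d >= 0 else i - d
                    runs.append((d, row, j - i))  # (diagonal, start row, length)
                i = j
            else:
                i += 1
    return runs

def chain_score(runs, reward=2.0, gap_penalty=1.0):
    # Crudely link runs in order of appearance: reward matched length,
    # penalize the diagonal shift (gap) between consecutive runs.
    score, prev_d = 0.0, None
    for d, row, length in sorted(runs, key=lambda r: r[1]):
        score += reward * length
        if prev_d is not None:
            score -= gap_penalty * abs(d - prev_d)
        prev_d = d
    return score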
The next stage at 15 is to conduct fine-grain alignment of all intensity linear segments of compared videos by applying the Smith-Waterman algorithm. Based on the approximate alignments of major inclines/declines found previously, lists of linear intensity segments that could lead to successful alignment can be determined. The Smith-Waterman algorithm only needs to examine a restricted range of linear segments.
The Smith-Waterman algorithm uses edit distance to find the optimal alignment. It constructs a scoring matrix H as follows: $H(i,0) = 0$ for $0 \le i \le M$, $H(0,j) = 0$ for $0 \le j \le N$, and

$$H(i,j) = \max\{\, 0,\; H(i-1,j-1) + \omega(x_i, y_j),\; H(i-1,j) + \omega(x_i, -),\; H(i,j-1) + \omega(-, y_j) \,\}, \qquad 1 \le i \le M,\; 1 \le j \le N,$$
where x and y are the lists of linear segments that are potentially aligned, M and N are the lengths of the x and y sequences, and ω(xi, yj) is a scoring scheme. If xi and yj match, ω(xi, yj) is positive; if they do not match, it is negative. For insertion and deletion, ω(xi, −) and ω(−, yj) are negative.
The Smith-Waterman algorithm finds the local alignment by searching for the maximal score in matrix H and then tracing back the optimal path according to the direction of movement used to construct the matrix. This process continues until a score of 0 is reached. Once the local optimal alignment is obtained, the video similarity distance is calculated at 16 by applying existing sequence matching techniques to the matched linear segments. In this embodiment, ordinal measurement with 2×2 partitioning is used to determine the video similarity distance. If the distance is found to be less than a threshold at 17, the two compared videos are considered to be duplicates.
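A direct sketch of the recurrence and traceback just described, with score_fn standing in for the scoring scheme ω and a single assumed gap penalty for insertions and deletions.

import numpy as np

def smith_waterman(x, y, score_fn, gap=-1.0):
    # Fill the (M+1) x (N+1) scoring matrix H with the standard
    # Smith-Waterman recurrence; the first row and column stay zero.
    m, n = len(x), len(y)
    H = np.zeros((m + 1, n + 1))
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            H[i, j] = max(0.0,
                          H[i - 1, j - 1] + score_fn(x[i - 1], y[j - 1]),
                          H[i - 1, j] + gap,   # deletion
                          H[i, j - 1] + gap)   # insertion
    return H

def traceback(H, x, y, score_fn, gap=-1.0):
    # Start at the maximal score and walk back until a 0 is reached,
    # collecting the aligned (i, j) segment index pairs.
    i, j = np.unravel_index(int(np.argmax(H)), H.shape)
    pairs = []
    while i > 0 and j > 0 and H[i, j] > 0:
        if H[i, j] == H[i - 1, j - 1] + score_fn(x[i - 1], y[j - 1]):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif H[i, j] == H[i - 1, j] + gap:
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))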
Next, at 18, alignment is examined at the video frame level, rather than at the linear segment level, for unmatched segments. Since the optimal local alignment is based on intensity linear segments, if frame changes occur inside a segment, the entire segment is considered not to match under the Smith-Waterman algorithm, as discussed above. To find potential matching positions inside the unmatched segments, a frame-to-frame comparison is conducted to calculate the frame-level similarity distance. If a frame similarity distance is less than the video similarity distance obtained using the Smith-Waterman algorithm, those frames are considered to be matched. This ensures that the similarity distance of the matched frames inside the unmatched segments does not exceed the average video similarity distance obtained from the rest of the matched segments. Frame comparisons are initiated from both the beginning and the end of the unmatched segments, working towards the middle. Matching continues until a frame similarity distance is larger than the video similarity distance. The video overlapping positions are then updated.
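A sketch of this two-ended extension, where frame_dist is an assumed callable returning the similarity distance between a query frame and a target frame.

def extend_into_unmatched(frame_dist, q_range, t_range, video_dist):
    # q_range/t_range: inclusive (first, last) frame indices of the
    # unmatched query and target segments. Extend from both ends toward
    # the middle while frames stay closer than the video distance.
    (q0, q1), (t0, t1) = q_range, t_range
    matched = []
    while q0 <= q1 and t0 <= t1 and frame_dist(q0, t0) < video_dist:
        matched.append((q0, t0))
        q0, t0 = q0 + 1, t0 + 1
    while q1 >= q0 and t1 >= t0 and frame_dist(q1, t1) < video_dist:
        matched.append((q1, t1))
        q1, t1 = q1 - 1, t1 - 1
    return matched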
Thus, in this embodiment, the changes in intensity values of the partitioned blocks are first treated as time series. The time series are then segmented into lists of discrete linear representations. Local sequence alignment of those linear segments is performed to find the optimal matching position. The video similarity distance is then calculated based on the potential alignment position. If the best matching similarity distance is less than a given threshold, the two videos are considered to be duplicates. To handle changes of frames, gaps resulting from frame insertions, deletions and substitutions are permitted when comparing linear segment sequences.
With reference to the accompanying drawings, a video management apparatus embodying the invention includes a video database 19 and operates as follows.
A user transmits a video Q that he or she wants to add to the database 19 by submitting the video Q via a user interface 20. The video Q is sent to the video database 19 and also to a partitioner 21. At Stage 1 of the operation, the partitioner 21 partitions each frame of the video Q into N1×N2 blocks. A calculator 22 calculates the mean intensity values for each of the blocks.
At Stage 2, mean intensity value data is received by a segmenter 23 from the calculator 22. The segmenter 23 segments the changes in mean intensity of each block into linear segments. A sorter 24 then sorts the linear segments from all blocks, based on the segment starting frame IDs, into a sorted list. A selector 25 receives the sorted list and selects major inclines/declines from it.
In the next stage, Stage 3, an aligner 26 attempts to find an approximate match between the selected major inclines and declines of the query video and those of one or more target videos that have undergone similar processing. The results are tested by a first comparator 27. If there is no similarity, judged against a given threshold parameter, the query video and the target video or videos are deemed not to be duplicates and the duplication detection process stops at 28.
If the comparator 27 detects approximate alignment, then at Stage 4 a banded Smith-Waterman algorithm is applied by a processor 29 and the results are passed to a similarity distance calculator 30. The output of the similarity distance calculator 30 is checked against a given threshold by a second comparator 31. If there is insufficient similarity, the compared videos are deemed not to be duplicates and the process stops at 32.
If there is sufficient similarity, at Stage 5, a frame matcher 33 checks unmatched frame positions for video insertions, deletions or substitutions.
The results of the duplicate detection process are sent to the video database 19 to be used in managing the stored videos. If the query video is not found to be a duplicate, the video database 19 accepts it for storage. If the query video is found to be a duplicate, then in one embodiment the video database 19 rejects it, with or without a message informing the user.
In an alternative embodiment, or mode, if the query video is found to be a duplicate, it is accepted into the video database 19 but denoted as a duplicate, preferably with a reference to the target video that it matches. Duplicate videos may be collected together in a group. When a search performed on the database calls up one member of the group, the other group members may be suppressed from the search results or given a lower ranking than they would otherwise merit, so that duplicates tend to be presented after non-duplicates.
The video management apparatus described above may be implemented in dedicated hardware, in software, or in a combination of the two.
The functions of the various elements shown in the figures, including any functional blocks labeled as “processors”, may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non volatile storage. Other hardware, conventional and/or custom, may also be included.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Inventors: Wood, Thomas L.; Chang, Fangzhe; Ren, Yansong