System and method for partitioning a video into a series of semantic units where each semantic unit relates to a generally complete thematic topic. A computer-implemented method for partitioning a video into a series of semantic units, wherein each semantic unit relates to a theme or a topic, comprises dividing a video into a plurality of homogeneous segments, analyzing audio and visual content of the video, extracting a plurality of keywords from the speech content of each of the plurality of homogeneous segments of the video, and detecting and merging a plurality of groups of semantically related and temporally adjacent homogeneous segments into a series of semantic units in accordance with the results of both the audio and visual analysis and the keyword extraction. The present invention can be applied to generate a table-of-contents as well as index tables for videos to facilitate efficient video topic searching and browsing.
6. A method for partitioning a video sequence, comprising:
dividing a video sequence into a plurality of segments;
selecting a group of segments from the plurality of segments, wherein the segments in the group of segments are temporally adjacent;
forming a partition of the video sequence from the group of segments;
denoting an end segment, wherein the end segment is a one of the plurality of segments in the group of segments that is located at an end of the group of segments;
determining whether an audio content of any of the plurality of segments around the end segment includes only music or only silence;
responsive to a determination that the audio content of the any of the plurality of segments around the end segment includes only music or only silence, selecting a one of the any of the plurality of segments around the end segment that includes only music or only silence as a boundary for the partition; and
responsive to a determination that the audio content of the any of the plurality of segments around the end segment does not include only music or only silence, locating a one of the plurality of segments around the end segment having visual content including a narrator shot and selecting the one of the plurality of segments having visual content including a narrator shot as the boundary for the partition.
11. An apparatus, comprising:
a processing unit, wherein the processing unit is configured to:
divide a video sequence into a plurality of segments;
select a group of segments from the plurality of segments, wherein the segments in the group of segments are temporally adjacent;
form a partition of the video sequence from the group of segments;
denote an end segment, wherein the end segment is a one of the plurality of segments in the group of segments that is located at an end of the group of segments;
determine whether an audio content of any of the plurality of segments around the end segment includes only music or only silence;
select a one of the any of the plurality of segments around the end segment that includes only music or only silence as a boundary for the partition, responsive to a determination that the audio content of the any of the plurality of segments around the end segment includes only music or only silence;
locate a one of the plurality of segments around the end segment having visual content including a narrator shot; and
select the one of the plurality of segments having visual content including a narrator shot as the boundary for the partition, responsive to a determination that the audio content of the any of the plurality of segments around the end segment does not include only music or only silence.
7. An apparatus, comprising:
a processing unit, wherein the processing unit is configured to:
divide a video sequence into a plurality of segments;
generate a transcript of speech content of the video sequence, wherein the transcript comprises a plurality of words and identifies temporal locations of the words in the video sequence;
select a plurality of keywords from the plurality of words in the transcript;
select a set of keywords from the plurality of keywords, wherein the keywords in the set of keywords are related to each other by meanings of the keywords;
determine a distribution of occurrences across the plurality of segments of the keywords in the set of keywords;
select a group of segments from the plurality of segments using the distribution, wherein the segments in the group of segments are temporally adjacent and the group of segments corresponds to a peak of the occurrences across the plurality of segments of the keywords in the set of keywords;
form a partition of the video sequence from the group of segments;
generate the transcript of speech content of the video sequence from audio content of the video sequence using automatic speech recognition;
determine whether the transcript generated from the audio content is satisfactory;
determine whether the video sequence has closed caption, responsive to a determination that the transcript generated from the audio content is not satisfactory; and
generate the transcript from the closed caption, responsive to a determination that the video sequence has closed caption.
1. A method for partitioning a video sequence, comprising:
dividing a video sequence into a plurality of segments;
generating a transcript of speech content of the video sequence, wherein the transcript comprises a plurality of words and identifies temporal locations of the words in the video sequence;
selecting a plurality of keywords from the plurality of words in the transcript;
selecting a set of keywords from the plurality of keywords, wherein the keywords in the set of keywords are related to each other by meanings of the keywords;
determining a distribution of occurrences across the plurality of segments of the keywords in the set of keywords;
selecting a group of segments from the plurality of segments using the distribution, wherein the segments in the group of segments are temporally adjacent and the group of segments corresponds to a peak of the occurrences across the plurality of segments of the keywords in the set of keywords; and
forming a partition of the video sequence from the group of segments; wherein generating the transcript of speech content of the video sequence comprises
generating the transcript from audio content of the video sequence using automatic speech recognition,
determining whether the transcript generated from the audio content is satisfactory,
responsive to a determination that the transcript generated from the audio content is not satisfactory, determining whether the video sequence has closed caption, and
responsive to a determination that the video sequence has closed caption, generating the transcript from the closed caption.
16. A computer program product for partitioning a video sequence, comprising:
a non-transitory computer readable storage medium;
first program instructions to divide a video sequence into a plurality of segments;
second program instructions to select a group of segments from the plurality of segments, wherein the segments in the group of segments are temporally adjacent;
third program instructions to form a partition of the video sequence from the group of segments;
fourth program instructions to denote an end segment, wherein the end segment is a one of the plurality of segments in the group of segments that is located at an end of the group of segments;
fifth program instructions to determine whether an audio content of any of the plurality of segments around the end segment includes only music or only silence;
sixth program instructions to select a one of the any of the plurality of segments around the end segment that includes only music or only silence as a boundary for the partition, responsive to a determination that the audio content of the any of the plurality of segments around the end segment includes only music or only silence;
seventh program instructions to locate a one of the plurality of segments around the end segment having visual content including a narrator shot;
eighth program instructions to select the one of the plurality of segments having visual content including a narrator shot as the boundary for the partition, responsive to a determination that the audio content of the any of the plurality of segments around the end segment does not include only music or only silence; and
wherein the first, second, third, fourth, fifth, sixth, seventh, and eighth program instructions are stored on the non-transitory computer readable storage medium.
12. A computer program product for partitioning a video sequence, comprising:
a non-transitory computer readable storage medium;
first program instructions to divide a video sequence into a plurality of segments;
second program instructions to generate a transcript of speech content of the video sequence, wherein the transcript comprises a plurality of words and identifies temporal locations of the words in the video sequence;
third program instructions to select a plurality of keywords from the plurality of words in the transcript;
fourth program instructions to select a set of keywords from the plurality of keywords, wherein the keywords in the set of keywords are related to each other by meanings of the keywords;
fifth program instructions to determine a distribution of occurrences across the plurality of segments of the keywords in the set of keywords;
sixth program instructions to select a group of segments from the plurality of segments using the distribution, wherein the segments in the group of segments are temporally adjacent and the group of segments corresponds to a peak of the occurrences across the plurality of segments of the keywords in the set of keywords;
seventh program instructions to form a partition of the video sequence from the group of segments; and
wherein the first, second, third, fourth, fifth, sixth, and seventh program instructions are stored on the non-transitory computer readable storage medium,
wherein the second program instructions comprise program instructions to:
generate the transcript of speech content of the video sequence from audio content of the video sequence using automatic speech recognition;
determine whether the transcript generated from the audio content is satisfactory;
determine whether the video sequence has closed caption, responsive to a determination that the transcript generated from the audio content is not satisfactory; and
generate the transcript from the closed caption, responsive to a determination that the video sequence has closed caption.
2. The method of claim 1, wherein the video sequence comprises a plurality of frames and wherein dividing the video sequence into the plurality of segments comprises:
obtaining color data for the frames;
identifying from the color data temporal locations of abrupt color changes in the video sequence, wherein the locations of abrupt color changes correspond to abrupt color changes between adjacent ones of the frames; and
dividing the video sequence into the plurality of segments at the locations of abrupt color changes.
3. The method of claim 1, wherein generating the transcript of speech content of the video sequence further comprises:
responsive to a determination that the video sequence does not have closed caption, manually generating the transcript from the audio content.
4. The method of
5. The method of claim 1 further comprising:
generating a sound label for each of the plurality of segments, wherein the sound label indicates a class of sound in audio content of a corresponding segment;
generating a visual label for each of the plurality of segments, wherein the visual label indicates a class of visual content of the corresponding segment; and
selecting a one of the plurality of segments as a boundary for the partition using a one of the sound label or the visual label for the selected one of the plurality of segments.
8. The apparatus of claim 7, wherein the video sequence comprises a plurality of frames and wherein the processing unit is further configured to:
obtain color data for the frames;
identify from the color data temporal locations of abrupt color changes in the video sequence, wherein the locations of abrupt color changes correspond to abrupt color changes between adjacent ones of the frames; and
divide the video sequence into the plurality of segments at the locations of abrupt color changes.
9. The apparatus of
10. The apparatus of claim 7, wherein the processing unit is further configured to:
generate a sound label for each of the plurality of segments, wherein the sound label indicates a class of sound in audio content of a corresponding segment;
generate a visual label for each of the plurality of segments, wherein the visual label indicates a class of visual content of the corresponding segment; and
select a one of the plurality of segments as a boundary for the partition using a one of the sound label or the visual label for the selected one of the plurality of segments.
13. The computer program product of claim 12, wherein the video sequence comprises a plurality of frames and wherein the first program instructions comprise program instructions to:
obtain color data for the frames;
identify from the color data temporal locations of abrupt color changes in the video sequence, wherein the locations of abrupt color changes correspond to abrupt color changes between adjacent ones of the frames; and
divide the video sequence into the plurality of segments at the locations of abrupt color changes.
14. The computer program product of
15. The computer program product of claim 12 further comprising:
eighth program instructions to generate a sound label for each of the plurality of segments, wherein the sound label indicates a class of sound in audio content of a corresponding segment;
ninth program instructions to generate a visual label for each of the plurality of segments, wherein the visual label indicates a class of visual content of the corresponding segment;
tenth program instructions to select a one of the plurality of segments as a boundary for the partition using a one of the sound label or the visual label for the selected one of the plurality of segments; and
wherein the eighth, ninth, and tenth program instructions are stored on the non-transitory computer readable storage medium.
This application is a continuation of application Ser. No. 11/210,305, filed Aug. 24, 2005, now U.S. Pat. No. 7,382,933.
This invention was made with Government support under Contract No.: W91CRB-04-C-0056 awarded by the U.S. Army. The Government has certain rights in this invention.
1. Field of the Invention
The present invention relates generally to the field of multimedia content analysis, and more particularly, to a system and method for segmenting a video into semantic units using joint audio, visual and text information.
2. Description of the Related Art
Advances in modern multimedia technologies have led to huge and ever-growing archives of videos in various application areas including entertainment, education, training, and online information services. On one hand, this has made digital videos available and accessible to the general public; on the other hand, it poses great challenges to the task of efficient content access, browsing, and retrieval.
Consider, as an example, a video currently available at the website of the CDC (Centers for Disease Control and Prevention). The video is approximately 26 minutes long and describes the history of bioterrorism. Specifically, the content of the video consists of the following seven parts (in temporal order): overview, anthrax, plague, smallpox, botulism, viral hemorrhagic fevers, and tularemia. Meanwhile, this website also contains seven other short video clips, with each clip focusing on one particular content part belonging to the above seven categories.
The availability of these individual video segments allows them to be assembled according to a particular course objective, and is further useful in that a viewer who is interested in only one particular type of disease can directly watch the relevant video clip instead of searching for it in the original long video using the fast-forward or rewind controls of a video player. Nevertheless, this convenience does not come free. With the current state of technology, it can only be achieved by either manual video segmentation or costly video reproduction.
Automatic video segmentation has been a popular research topic for a decade, and many approaches have been proposed. Among them, a common solution is to segment a video into shots, where a shot contains a set of contiguously recorded frames. However, while a shot forms the building block of a video sequence in many domains, this low-level structure in itself often does not directly correspond to the meaning of the video. Consequently, most recent work proposes to segment a video into scenes, where a scene depicts a higher-level concept. Various approaches have been reported to achieve acceptable results. Nevertheless, a scene is still vaguely defined and only applies to certain domains of video, such as movies. In general, semantic understanding of scene content by jointly exploiting the various cues available in a video, in the form of audio, visual information, and text, has not been well attempted by previous efforts in the video analysis domain.
It would, accordingly, be advantageous to provide a system and method for segmenting a video sequence into a series of semantic units, with each semantic unit containing a generally complete and definite thematic topic.
The present invention provides a system and method for partitioning a video into a series of semantic units where each semantic unit relates to a generally complete thematic topic. A computer implemented method for partitioning a video sequence into a series of semantic units wherein each semantic unit relates to a thematic topic, comprises dividing a video into a plurality of homogeneous segments, analyzing audio and visual content of the video, extracting a plurality of keywords from speech content of each of the plurality of homogeneous segments of the video, and detecting and merging a plurality of groups of semantically related and temporally adjacent homogeneous segments into a series of semantic units in accordance with results of both the audio and visual analysis and the keyword extraction.
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
With reference now to the figures, and in particular with reference to FIG. 2, a block diagram of a data processing system in which aspects of the present invention may be implemented is depicted.
In the depicted example, data processing system 200 employs a hub architecture including north bridge and memory controller hub (MCH) 208 and south bridge and input/output (I/O) controller hub (ICH) 210. Processing unit 202, main memory 204, and graphics processor 218 are connected to north bridge and memory controller hub 208. Graphics processor 218 may be connected to north bridge and memory controller hub 208 through an accelerated graphics port (AGP).
In the depicted example, local area network (LAN) adapter 212, audio adapter 216, keyboard and mouse adapter 220, modem 222, read only memory (ROM) 224, hard disk drive (HDD) 226, CD-ROM drive 230, universal serial bus (USB) ports and other communications ports 232, and PCI/PCIe devices 234 connect to south bridge and I/O controller hub 210 through bus 238. PCI/PCIe devices may include, for example, Ethernet adapters, add-in cards and PC cards for notebook computers. PCI uses a card bus controller, while PCIe does not. ROM 224 may be, for example, a flash binary input/output system (BIOS).
Hard disk drive 226 and CD-ROM drive 230 connect to south bridge and I/O controller hub 210 through bus 240. Hard disk drive 226 and CD-ROM drive 230 may use, for example, an integrated drive electronics (IDE) or serial advanced technology attachment (SATA) interface. Super I/O (SIO) device 236 may be connected to south bridge and I/O controller hub 210.
An operating system runs on processing unit 202 and coordinates and provides control of various components within data processing system 200 in FIG. 2.
As a server, data processing system 200 may be, for example, an IBM eServer™ pSeries® computer system, running the Advanced Interactive Executive (AIX®) operating system or LINUX operating system (eserver, pSeries and AIX are trademarks of International Business Machines Corporation in the United States, other countries, or both while Linux is a trademark of Linus Torvalds in the United States, other countries, or both). Data processing system 200 may be a symmetric multiprocessor (SMP) system including a plurality of processors in processing unit 202. Alternatively, a single processor system may be employed.
Instructions for the operating system, the object-oriented programming system, and applications or programs are located on storage devices, such as hard disk drive 226, and may be loaded into main memory 204 for execution by processing unit 202. The processes for embodiments of the present invention are performed by processing unit 202 using computer usable program code, which may be located in a memory such as, for example, main memory 204, read only memory 224, or in one or more peripheral devices 226 and 230.
Those of ordinary skill in the art will appreciate that the hardware in FIG. 2 may vary depending on the implementation.
In some illustrative examples, data processing system 200 may be a personal digital assistant (PDA), which is configured with flash memory to provide non-volatile memory for storing operating system files and/or user-generated data.
A bus system may be comprised of one or more buses, such as bus 238 or bus 240 as shown in FIG. 2.
The present invention provides a system and method for partitioning a video into a series of semantic units where each semantic unit relates to a generally complete thematic topic. According to an exemplary embodiment of the present invention, this goal is achieved by exploiting multiple media cues including audio and visual information and text cues.
As shown in FIG. 4, the method begins by dividing an incoming video sequence 402 into a plurality of homogeneous micro-segments (Step 404).
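By way of illustration only, the micro-segmentation can be sketched in a few lines of Python. The sketch below divides a sequence of frames at abrupt color changes between adjacent frames, in the spirit of the criterion recited in the claims; the histogram size, the difference threshold, and the assumption that frames are available as RGB arrays are illustrative choices rather than details taken from the patent.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Per-channel color histogram of an RGB frame, normalized to sum to 1."""
    hist = [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0]
            for c in range(frame.shape[-1])]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def segment_at_color_changes(frames, threshold=0.4):
    """Return indices of frames where a new micro-segment starts, i.e. where
    the histogram difference to the previous frame exceeds the threshold."""
    boundaries = [0]
    prev = color_histogram(frames[0])
    for i in range(1, len(frames)):
        cur = color_histogram(frames[i])
        if np.abs(cur - prev).sum() > threshold:  # abrupt color change
            boundaries.append(i)
        prev = cur
    return boundaries
```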
The method then proceeds with the parallel steps of analyzing the audio/visual content of incoming video sequence 402 and integrating the results of the analysis (Step 406), and recognizing speech content from the video sequence and generating a transcript of the speech content (Step 408).
As shown in FIG. 5, the visual content of the segmented video sequence 506, i.e., each micro-segment, is analyzed based on detected human faces (Step 508) and on extracted video overlay text in the frames of the micro-segments (Step 510). As a result of this visual content analysis, each micro-segment is classified into one of three classes or semantic visual labels (Step 512). These three classes include narrator, informative text (such as a segment showing presentation slides), and linkage scene (such as an outdoor scene, an indoor demo, etc.).
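A simplified form of this three-way labeling might look like the following sketch. The detector outputs (a face count and the fraction of the frame covered by overlay text) and the threshold are illustrative assumptions; any face detector and overlay-text detector could supply them.

```python
def classify_visual(face_count, text_area_ratio, min_text_ratio=0.15):
    """Assign one of the three semantic visual labels to a micro-segment.

    face_count:      number of detected human faces in the segment's keyframe
    text_area_ratio: fraction of the keyframe covered by overlay text
    """
    if face_count >= 1:
        return "narrator"           # a person appearing on camera
    if text_area_ratio >= min_text_ratio:
        return "informative_text"   # e.g., a segment showing presentation slides
    return "linkage_scene"          # outdoor scene, indoor demo, etc.
```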
The results of the audio and visual analysis are then integrated together (Step 514), such that each micro-segment is tagged with a distinct semantic audio/visual label as shown at 516. According to an exemplary embodiment of the present invention, the semantic audio/visual labels are one of the following fifteen types: narrator presentation, narrator with environmental audio background, narrator with audio silence, narrator with music playing, narrator presentation with music playing, informative text with voice-over, informative text with environmental audio background, informative text with audio silence, informative text with music playing, informative text with voice-over and music, linkage scene with voice-over, linkage scene with environmental audio background, linkage scene with audio silence, linkage scene with music playing, and linkage scene with voice-over and music.
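Since the fifteen labels listed above are the cross product of the three visual classes and five audio classes, the integration step can be sketched as a simple combination. The audio class names below (speech, environmental sound, silence, music, speech with music) are inferred from that list and are illustrative.

```python
VISUAL_CLASSES = ("narrator", "informative_text", "linkage_scene")
AUDIO_CLASSES = ("speech", "environmental", "silence", "music", "speech_and_music")

def joint_av_label(visual_label, audio_label):
    """Combine one visual class and one audio class into a single semantic
    audio/visual label (3 visual classes x 5 audio classes = 15 labels)."""
    if visual_label not in VISUAL_CLASSES or audio_label not in AUDIO_CLASSES:
        raise ValueError("unknown visual or audio class")
    return f"{visual_label}/{audio_label}"

# For example, joint_av_label("narrator", "music") yields "narrator/music",
# corresponding to "narrator with music playing" in the list above.
```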
Returning to FIG. 4, a transcript of the speech content of the video sequence is generated in Step 408, first by applying automatic speech recognition to the audio content of the video. If the transcript obtained in this manner is not satisfactory, a determination is made as to whether the video sequence has closed caption and, if so, the transcript is generated from the closed caption.
In general, a manual speech transcribing service is used to generate the transcript only in a worst case scenario, when satisfactory transcripts cannot be obtained by either speech recognition or closed caption extraction.
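The fallback order described above, speech recognition first, closed caption next, and manual transcription only in the worst case, can be captured in a short sketch. The recognizer, caption extractor, quality check, and manual-transcription hook are placeholder callables supplied by the caller, not components defined by the patent.

```python
def generate_transcript(video, recognize_speech, extract_closed_caption,
                        is_satisfactory, transcribe_manually):
    """Produce a transcript using automatic speech recognition, falling back
    to closed caption and, in the worst case, to manual transcription."""
    transcript = recognize_speech(video)          # automatic speech recognition
    if is_satisfactory(transcript):
        return transcript
    caption = extract_closed_caption(video)       # None if no closed caption
    if caption is not None:
        return caption
    return transcribe_manually(video)             # worst-case scenario
```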
Returning again to FIG. 4, a plurality of keywords is then extracted from the transcript of the speech content of each of the homogeneous micro-segments (Step 410).
The audiovisual analysis results from Step 406 and the extracted keywords from Step 410 are then combined together, and semantically related micro-segments are detected and merged into macro-segments (Step 412).
Step 702: Group all extracted keywords into a collection of synonym sets. Each synonym set contains words of identical or similar meanings, although their morphologies may vary. Words of identical meaning include abbreviations (e.g., "WMD" for "weapons of mass destruction"), alternative spellings (e.g., "anaesthesia" for "anesthesia"), and orthographical variations (e.g., "audio/visual input", "audiovisual input" and "audio-visual input"). Abbreviations can be identified by using a pre-compiled lexicon or matched on the fly by applying natural language processing (NLP) techniques. Alternative spellings and orthographic variants can be recognized by lexical pattern processing.
Words of similar meanings include synonyms. For instance, words such as contagious, epidemiology, epidemic, infectious, infect, infection, plague and infest qualify to be grouped into a single synonym set. This captures the word correlation effect. The formation of synonym sets can be achieved using various existing techniques. One approach is to use a lexical taxonomy such as WordNet, which provides word meanings and other semantically related words. Another approach is to apply natural language processing and machine learning techniques, such as support vector machines (SVM) or latent semantic indexing (LSI), to find semantically related words and to form word clusters of similar meaning.
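As one concrete possibility for the lexical-taxonomy approach mentioned above, WordNet synsets (accessed here through NLTK) can seed the synonym sets. This is only a sketch under that assumption: it requires the NLTK WordNet corpus to be installed, and it ignores the abbreviation and spelling-variant handling described in Step 702.

```python
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

def group_into_synonym_sets(keywords):
    """Group keywords whose WordNet synsets overlap into synonym sets."""
    groups = []   # each entry: (set of synset names, set of member keywords)
    for word in keywords:
        names = {s.name() for s in wn.synsets(word)}
        for synset_names, members in groups:
            if names & synset_names:        # shares at least one meaning
                synset_names |= names
                members.add(word)
                break
        else:                               # no overlap: start a new group
            groups.append((set(names), {word}))
    return [members for _, members in groups]

# Example: group_into_synonym_sets(["infection", "infectious", "epidemic", "plague"])
```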
Step 704: For each synonym set S whose cardinality exceeds a certain threshold (i.e., it contains a sufficient number of words), find its distribution pattern across all micro-segments. In other words, find the micro-segments which have one or more keywords belonging to set S. Then, build set S's keyword occurrence histogram H, whose x-axis indicates the micro-segments and whose y-axis specifies the keyword occurrence frequency.
It is observed that keywords belonging to a single topic tend to occur frequently within the period in which that topic is discussed and only rarely elsewhere. Based on this observation, it can be concluded that, if set S's keyword occurrence histogram H displays a distinct peak restricted to a certain temporal period, then the micro-segments within that period are semantically related and likely discuss the same thematic topic.
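Step 704 and the peak test can be sketched as follows. The histogram is simply a count of set-S keywords per micro-segment; the notion of a distinct peak used here, the shortest contiguous run of micro-segments covering most of the occurrences provided that run is compact, is an illustrative criterion rather than the patent's own test.

```python
def keyword_histogram(segment_keywords, synonym_set):
    """Occurrences of keywords from synonym_set, counted per micro-segment.

    segment_keywords: one list of extracted keywords per micro-segment.
    """
    return [sum(1 for w in words if w in synonym_set)
            for words in segment_keywords]

def find_peak_group(histogram, coverage=0.8):
    """Return (start, end) indices of the shortest contiguous run of
    micro-segments holding at least `coverage` of all occurrences, or
    None when the occurrences are too spread out to form a distinct peak."""
    total = sum(histogram)
    if total == 0:
        return None
    best = None
    for start in range(len(histogram)):
        running = 0
        for end in range(start, len(histogram)):
            running += histogram[end]
            if running >= coverage * total:
                if best is None or (end - start) < (best[1] - best[0]):
                    best = (start, end)
                break
    # Accept the run as a distinct peak only if it is reasonably compact.
    if best is not None and (best[1] - best[0] + 1) <= max(1, len(histogram) // 2):
        return best
    return None
```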
Step 706: Group all micro-segments, which are temporally adjacent and semantically related, into a macro-segment based on the decision made in Step 704. The challenge of this step is to locate the precise macro-segment boundaries where the thematic topics change. This can be solved by incorporating the pre-obtained audiovisual analysis results based on the following two observations.
Observation 1): it is noticed that, for most professionally produced videos, there is a relatively long period of silence and/or music between two macro-segments covering different topics.
An examination of the example video described above confirms this observation.
Observation 2): it is also observed that narrator shots usually appear when a new topic is initiated or when the previous topic ends. This is also confirmed in the example video.
Based on these two observations, the macro-segment boundaries are determined as follows. The micro-segments around the end of each group of semantically related micro-segments are examined. If the audio content of any of these micro-segments includes only music or only silence, that micro-segment is selected as the boundary of the macro-segment; otherwise, a micro-segment whose visual content includes a narrator shot is located and selected as the boundary.
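Following the boundary procedure just described (and recited in the claims), the refinement can be sketched as below. The per-micro-segment audio and visual labels are assumed to come from the earlier audio/visual analysis, and the size of the search window around the end segment is an illustrative assumption.

```python
def refine_boundary(av_labels, end_index, window=3):
    """Pick a macro-segment boundary near the end segment at `end_index`.

    av_labels: one dict per micro-segment, e.g. {"audio": "music",
               "visual": "narrator"}, produced by the audio/visual analysis.
    Prefers a segment whose audio is only music or only silence; otherwise
    falls back to a segment whose visual content includes a narrator shot.
    """
    lo = max(0, end_index - window)
    hi = min(len(av_labels), end_index + window + 1)
    candidates = range(lo, hi)

    for i in candidates:                    # observation 1: silence/music gap
        if av_labels[i]["audio"] in ("music", "silence"):
            return i
    for i in candidates:                    # observation 2: narrator shot
        if av_labels[i]["visual"] == "narrator":
            return i
    return end_index                        # otherwise keep the original end
```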
Referring back to FIG. 4, the resulting series of macro-segments, each relating to a generally complete thematic topic, can then be used to generate a table-of-contents and index tables for the video to facilitate efficient topic searching and browsing.
The present invention thus provides a system and method for partitioning a video into a series of semantic units where each semantic unit relates to a generally complete thematic topic. A computer implemented method for partitioning a video sequence into a series of semantic units wherein each semantic unit relates to a thematic topic, comprises dividing a video into a plurality of homogeneous segments, analyzing audio and visual content of the video, extracting a plurality of keywords from speech content of each of the plurality of homogeneous segments of the video, and detecting and merging a plurality of groups of semantically related and temporally adjacent homogeneous segments into a series of semantic units in accordance with results of both the audio and visual analysis and the keyword extraction.
The invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. In a preferred embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
Furthermore, the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention, the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
Li, Ying, Dorai, Chitra, Park, Youngja