Methods, systems, and computer-readable storage media for receiving metadata indicating one or more data sources, automatically retrieving data from a data source indicated in the metadata based on analytical data, generating a visualization based on the data, the visualization including an updated version of a previous visualization, generating an image of the visualization, and providing a video including the image, the image replacing a previous image within the video.
1. A computer-implemented method for providing dynamic content in videos between plays of the videos, the method being executed by one or more processors and comprising:
receiving metadata indicating one or more data sources and analytical data used to query the one or more data sources;
in response to a request to play a video, automatically retrieving data from a data source indicated in the metadata based on the analytical data;
generating a visualization based on a change of the data, the visualization comprising an updated version of a previous visualization, the visualization comprising a graph, a chart, or a table;
generating an image of the visualization; and
providing the video comprising the image, the image replacing a previous image within the video.
8. A non-transitory computer-readable storage medium coupled to one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations for providing dynamic content in videos between plays of the videos, the operations comprising:
receiving metadata indicating one or more data sources and analytical data used to query the one or more data sources;
in response to a request to play a video, automatically retrieving data from a data source indicated in the metadata based on the analytical data;
generating a visualization based on a change of the data, the visualization comprising an updated version of a previous visualization;
generating an image of the visualization, the visualization comprising a graph, a chart, or a table; and
providing the video comprising the image, the image replacing a previous image within the video.
15. A system, comprising:
a computing device; and
a computer-readable storage device coupled to the computing device and having instructions stored thereon which, when executed by the computing device, cause the computing device to perform operations for providing dynamic content in videos between plays of the videos, the operations comprising:
receiving metadata indicating one or more data sources and analytical data used to query the one or more data sources;
in response to a request to play a video, automatically retrieving data from a data source indicated in the metadata based on the analytical data;
generating a visualization based on a change of the data, the visualization comprising an updated version of a previous visualization, the visualization comprising a graph, a chart, or a table;
generating an image of the visualization; and
providing the video comprising the image, the image replacing a previous image within the video.
Dependent claims 2, 3, 5, and 7 (depending from the method of claim 1), claims 9 through 14 (depending from the computer-readable storage medium of claim 8), and claims 16, 17, and 19 (depending from the system of claim 15) are truncated in the source.
Video is a compelling medium for communication. For example, advertisements often use video to engage consumers more effectively than other types of media (e.g., print, radio). Videos often include content that is used to provide information, which enables viewers to make decisions. For example, videos can be used in presentations to effectively engage an audience and inform the audience on particular topics. However, videos are static in nature. Once a video is created, it must be edited again to change content. Video editing is not a skill that everyone has by default, and it can be difficult to perform effectively. Even for users with video editing skills, editing videos can be time consuming and cumbersome, particularly for relatively dynamic content.
Implementations of the present disclosure are directed to automated updating of video content. More particularly, implementations of the present disclosure are directed to a video editing platform that automatically updates content in videos to provide a dynamic video that changes as content changes.
In some implementations, actions include receiving metadata indicating one or more data sources, automatically retrieving data from a data source indicated in the metadata based on analytical data, generating a visualization based on the data, the visualization including an updated version of a previous visualization, generating an image of the visualization, and providing a video including the image, the image replacing a previous image within the video. Other implementations of this aspect include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices.
These and other implementations can each optionally include one or more of the following features: the video is generated based on a set of images including one or more static images and one or more dynamic images, the image including a dynamic image; the visualization is generated as a hypertext markup language (HTML) content division element (<div>); the image is generated as a screenshot of the visualization; the analytical data includes at least one measure and at least one dimension; the data source includes one of a database and a data file; and retrieving the data includes querying the data source using a query and receiving the data responsive to the query.
The present disclosure also provides a computer-readable storage medium coupled to one or more processors and having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations in accordance with implementations of the methods provided herein.
The present disclosure further provides a system for implementing the methods provided herein. The system includes one or more processors, and a computer-readable storage medium coupled to the one or more processors having instructions stored thereon which, when executed by the one or more processors, cause the one or more processors to perform operations in accordance with implementations of the methods provided herein.
It is appreciated that methods in accordance with the present disclosure can include any combination of the aspects and features described herein. That is, methods in accordance with the present disclosure are not limited to the combinations of aspects and features specifically described herein, but also include any combination of the aspects and features provided.
The details of one or more implementations of the present disclosure are set forth in the accompanying drawings and the description below. Other features and advantages of the present disclosure will be apparent from the description and drawings, and from the claims.
Like reference symbols in the various drawings indicate like elements.
Implementations of the present disclosure are directed to automated updating of video content. More particularly, implementations of the present disclosure are directed to a video editing platform that automatically updates content in videos to provide a dynamic video that changes as content changes. Implementations can include actions of receiving metadata indicating one or more data sources, automatically retrieving data from a data source indicated in the metadata based on analytical data, generating a visualization based on the data, the visualization including an updated version of a previous visualization, generating an image of the visualization, and providing a video including the image, the image replacing a previous image within the video.
To provide further context for implementations of the present disclosure, and as introduced above, video is a compelling medium for communication. For example, advertisements often use video to engage consumers more effectively than other types of media (e.g., print, radio). Videos often include content that is used to provide information, which enables viewers to make decisions. For example, videos can be used in presentations to effectively engage an audience and inform the audience on particular topics. However, videos are static in nature, which means that, once a video is created, the video must be edited again to change content. Video editing is not a skill that everyone has by default, and it can be difficult to perform effectively. Even for users with video editing skills, editing videos can be time consuming and cumbersome, particularly for relatively dynamic content (e.g., content that changes hourly, daily, weekly).
In an example use case, and without limitation, videos can be used to convey information regarding operations of an enterprise (e.g., sales figures, revenue figures), which information enables users to make decisions on enterprise operations. For example, videos can include embedded visualizations (e.g., in the form of charts, graphs, and the like) that graphically depict information (content) relevant to an audience. In many cases, the information is dynamic, changing over time (e.g., hourly, daily, weekly, quarterly, yearly). For example, an example video can include visualizations based on the revenue of an enterprise, which revenue changes daily. In some examples, user interfaces (UIs) can include embedded videos that convey information to users on-demand. For example, a UI can include a periodic feed (e.g., daily feed) that conveys information to users using at least one embedded video. In some examples, the information that is to-be-conveyed in the video can change between a first time a user views the UI and a second time the user views the UI.
In view of the above context, implementations of the present disclosure provide a video editing platform that automatically updates content in videos to provide dynamic videos that change as content changes. More particularly, and as described in further detail herein, the video editing platform includes a video composer application that dynamically embeds content (e.g., analytical content) into a video. In some implementations, the video composer application automatically creates dynamic videos by fetching the content from a data source and updating the video based on the content, the content changing over time. In this manner, the video editing platform of the present disclosure enables a video to be composed once, but effectively be played “live” as content changes over time.
Implementations of the present disclosure are described in further detail with reference to an example use case that includes videos that convey information representative of enterprise operations. It is contemplated, however, that implementations of the present disclosure can be realized in any appropriate use case.
In some examples, the client device 102 can communicate with the server system 104 over the network 106. In some examples, the client device 102 includes any appropriate type of computing device such as a desktop computer, a laptop computer, a handheld computer, a tablet computer, a personal digital assistant (PDA), a cellular telephone, a network appliance, a camera, a smart phone, an enhanced general packet radio service (EGPRS) mobile phone, a media player, a navigation device, an email device, a game console, or an appropriate combination of any two or more of these devices or other data processing devices. In some implementations, the network 106 can include a large computer network, such as a local area network (LAN), a wide area network (WAN), the Internet, a cellular network, a telephone network (e.g., PSTN) or an appropriate combination thereof connecting any number of communication devices, mobile computing devices, fixed computing devices and server systems.
In some implementations, the server system 104 includes at least one server and at least one data store. In the example of
In some implementations, and as described in further detail herein, content that changes within a video can be provided based on data stored within a data source. In some examples, the data source can be hosted by the server system 104. Example data sources can include, without limitation, a data file (e.g., a comma-separated values (CSV) file) and a database (e.g., an in-memory database). In some examples, data is stored in a data object, which can be provided as a data cube (e.g., an online analytical processing (OLAP) data cube). In some examples, a data cube is provided as an array of data categorized into one or more dimensions. For example, a data cube can be a representation of a multi-dimensional spreadsheet (e.g., a multi-dimensional dataset including a plurality of data tables). In some examples, a data cube includes a plurality of cells, where cells are populated with respective values (e.g., number, text). In some examples, each value represents some measure (e.g., sales, revenue, profits, expenses, budget, forecast).
In some implementations, a data cube can enable manipulation and/or analysis of data stored in the data cube from multiple perspectives (e.g., by dimensions, measures, and/or elements of the data cube). In some examples, a dimension of a data cube defines a category of stored data. Example dimensions can include, without limitation, time, location, product. In some examples, each dimension can have one or more sub-dimensions. For example, the time dimension can include sub-dimensions of year, each sub-dimension of year can include sub-dimensions of quarter, each sub-dimension of quarter can include sub-dimensions of month, each sub-dimension of month can include sub-dimensions of week, and so on. As another example, the product dimension can include sub-dimensions of category, and each sub-dimension of category can include sub-dimensions of line. As another example, the location dimension can include sub-dimensions of country, each sub-dimension of country can include sub-dimensions of region (e.g., north, east, west, south, mid-west), each sub-dimension of region can include sub-dimensions of sub-region (e.g., state, province), and each sub-dimension of sub-region can include sub-dimensions of city. In some examples, a data cube can include three dimensions. In some examples, a data cube having more than three dimensions is referred to as a hypercube.
As noted above, data stored in the data object includes one or more measures. In some examples, each measure is a fact (e.g., a numerical fact, a textual fact). In some examples, each measure can be categorized into one or more dimensions. Example measures can include specific product sales data (e.g., quantity sold, revenue, and/or profit margin), categorized by dimension. In short, measures can include any appropriate data that may be manipulated according to logic to assist or support the enterprise.
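To make the structure described above concrete, the following is a minimal TypeScript sketch of a data object with dimensions, sub-dimensions, and measures. The type names and fields are illustrative assumptions of our own; the disclosure does not prescribe a schema:

```typescript
// Illustrative types only; names are assumptions, not from the disclosure.

// A dimension categorizes stored data and may nest into sub-dimensions,
// e.g., time -> year -> quarter -> month -> week.
interface Dimension {
  name: string;                 // e.g., "time", "location", "product"
  subDimensions?: Dimension[];  // e.g., year under time, country under location
}

// A measure is a fact (numeric or textual) categorized by one or more dimensions.
interface Measure {
  name: string;                         // e.g., "sales", "revenue", "profit"
  value: number | string;               // e.g., 42000
  coordinates: Record<string, string>;  // dimension -> element, e.g., { city: "Mumbai" }
}
```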
In accordance with implementations of the present disclosure, and as noted above, the server system 104 can host a video editing platform that automatically updates content in videos to provide dynamic videos that change as content changes. In some implementations, the video editing platform includes a video composer application that dynamically embeds content (e.g., analytical content) into a video. In some implementations, the video composer application automatically creates dynamic videos by fetching the content from a data source and updating the video based on the content, the content changing over time. In this manner, the video editing platform of the present disclosure enables a video to be composed once, but effectively be played “live” as content changes over time. In some examples, the video is displayed to the user 112 within the client device 102. For example, the video can be displayed within a web browser executed by the client device 102.
In some implementations, it can be specified when and for how long a particular input is to be present within the video 214 that is to be created by the LiveVideo creator 204. For example, for each of the visualization image file 212 and the image file 218, it can be specified when and for how long the respective image file 212, 218 is to be displayed within the video 214. As another example, for the audio file 220, it can be specified when the audio file 220 is to be played during the video 214.
In accordance with implementations of the present disclosure, the analytical data 210 is provided from a so-called analytical smart tile (AST), which stores analytical content. In some implementations, the AST provides the analytical data 210 to the offline engine and image generator 202. The AST can be provided as a data structure that contains user input defining content that is used to provide a visualization to be displayed in the video 214, as described in further detail herein. In some examples, the analytical data 210 indicates one or more measures and one or more dimensions for determining a set of data that is to be used to generate content displayed in the visualization image file 212. In some examples, the analytical data 210 includes one or more filters that are to be applied to generate the visualization displayed in the visualization image file 212. For example, and as described in further detail herein, the analytical data 210 defines what data is to be retrieved from the data source as content that is to be displayed in the video 214 (e.g., sales revenue per each city in a set of cities over a particular period of time). In the example use case, example analytical data 210 can include:
Data Source: operationsdata.csv
Visualization Type: Bar Chart
Dimensions: City
Measures: Sales Revenue
Duration: 5
Filter Dimension: City
Filter Values: Bangalore, Chennai, Mumbai
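Expressed as a data structure, the example above might look as follows. This is a minimal sketch; the AnalyticalData interface and its field names are assumptions, not the disclosure's actual schema:

```typescript
// Hypothetical shape for AST analytical data; field names are illustrative.
interface AnalyticalData {
  dataSource: string;        // file or database the offline engine queries
  visualizationType: "bar" | "column" | "line" | "pie" | "table";
  dimensions: string[];      // categories plotted, e.g., ["City"]
  measures: string[];        // facts plotted, e.g., ["Sales Revenue"]
  durationSeconds: number;   // how long the visualization is shown in the video
  filters: { dimension: string; values: string[] }[];
}

const exampleAst: AnalyticalData = {
  dataSource: "operationsdata.csv",
  visualizationType: "bar",
  dimensions: ["City"],
  measures: ["Sales Revenue"],
  durationSeconds: 5,
  filters: [{ dimension: "City", values: ["Bangalore", "Chennai", "Mumbai"] }],
};
```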
In accordance with implementations of the present disclosure, the video editing platform provides a UI (e.g., displayed on the client device 102), through which a user can create a video. In some examples, the UI enables the user to select media to be used in creating the video. Example media includes, without limitation, images, video, audio, and dynamic visualizations. For example, the user can begin the video creation process by selecting a UI element to create a video. In response, a video editing UI is displayed, which enables the user to select media to be included in the video.
In a non-limiting example, the video editing UI enables the user to select images, dynamic visualizations, and audio. For example, in response to user selection of images, an add image UI is displayed, through which the user is able to select an image file that is to be displayed in the video and to define a duration (e.g., in seconds) that the image is to be displayed. For example, the add image UI enables the user to select an image file from a local drive, a network drive, and/or the Internet. After the user has selected the image file and the duration, the video editing UI is again displayed and depicts the image represented in the image file (e.g., as a thumbnail image).
In response to user selection of dynamic visualizations, an AST UI is displayed, through which the user can define a dynamic visualization that is to be displayed in the video and to define a duration (e.g., in seconds) that the dynamic visualization is to be displayed. In some examples, the AST UI includes UI elements to select analytical data including a data source, a visualization type, one or more dimensions, one or more measures, and the duration. The AST UI also enables one or more filters to be selected (e.g., filter based on dimension). In some examples, and as described herein, the data source stores the data that is to be used to update the dynamic visualization when the resulting video is played. Example visualization types include, without limitation, column chart, bar chart, line chart, pie chart, and table. After the user has defined the visualization and the duration, an image file representing a visualization image depicting the visualization is provided, as described in further detail herein. The video editing UI is again displayed and depicts the visualization image represented in the image file (e.g., as a thumbnail image).
Referring again to
In accordance with implementations of the present disclosure, the metadata 216 provides information for generation of the video 214. In some examples, the metadata 216 indicates the data source, from which data is to be retrieved (e.g., one or more data sources indicated by the user, when originally creating the video 214). In some examples, the metadata 216 indicates media, order of media, and duration of each medium provided in the video 214. In some examples, the metadata 216 indicates analytical data that is used to query each of the one or more data sources indicated in the metadata 216. In some implementations, and as described in further detail herein, the metadata 216 can be used to automatically regenerate the video 214 (e.g., as a video 214′) to include an updated visualization, in response to a request to play the video 214.
In further detail, the offline engine and image generator 202 includes an offline engine 304 and a visualization renderer 306. In some implementations, the offline engine 304 receives the analytical data 210, which includes one or more measures 310, one or more dimensions 312, and one or more filters 314. In some examples, the offline engine 304 retrieves raw data from a data source 316 based on the one or more measures 310, one or more dimensions 312, and one or more filters 314. By way of non-limiting example, a query for gross margin versus product with three filters applied can be provided as:
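The query text itself does not appear here. As a hedged illustration, the following TypeScript sketch assembles a SQL-style query of the kind described, with the table name, column names, and filter values as assumptions of our own:

```typescript
// Hypothetical query construction; the actual query is not reproduced in the
// disclosure, and all table/column names here are illustrative.
const dimension = "product";
const measure = "gross_margin";
const filters = [
  { column: "year", value: "2019" },
  { column: "region", value: "south" },
  { column: "category", value: "electronics" },
];

const where = filters.map((f) => `${f.column} = '${f.value}'`).join(" AND ");
const query =
  `SELECT ${dimension}, SUM(${measure}) AS ${measure} ` +
  `FROM operations_data WHERE ${where} GROUP BY ${dimension}`;
// Produces:
// SELECT product, SUM(gross_margin) AS gross_margin FROM operations_data
// WHERE year = '2019' AND region = 'south' AND category = 'electronics'
// GROUP BY product
```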
In some examples, the offline engine 304 provides visualization data based on the raw data retrieved from the data source. In some examples, the visualization renderer 306 renders a visualization based on the visualization data. Example visualizations include, without limitation, graphs, charts, and tables.
In some examples, the visualization is rendered in a hypertext markup language (HTML) content division element (<div>), which can be described as a generic container for flow content. In some examples, flow content includes text and embedded content. An example <div> can be provided as:
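The markup itself is elided here. As a hedged sketch, such a container might look like the following, where the id, class, and inner chart markup are illustrative assumptions:

```typescript
// Hypothetical markup for the visualization container; attribute values and
// inner content are illustrative, not taken from the disclosure.
const vizDiv = `
  <div id="viz-container" class="chart" style="width: 640px; height: 360px;">
    <svg><!-- rendered bar chart: Sales Revenue by City --></svg>
    <span>Sales Revenue by City</span>
  </div>`;
```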
In some implementations, the screenshot taker 302 receives the visualization from the visualization renderer 306 and generates the image data 212. For example, the screenshot taker 302 receives the <div> from the visualization renderer 306 and generates the image data 212 as a screenshot of the <div>.
For example, the <div> containing the visualization is placed behind a video player by setting its z-index to −1. In this manner, the visualization is not visible. By using the querySelector method of the document object model (DOM), the non-visible <div> can be obtained and passed to a library, which converts the <div> to an image represented in the image data 212. The image data 212 is provided to the LiveVideo creator 204, which generates the video 214 and the metadata 216.
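One way to realize this flow in a browser is sketched below: look up the hidden container via document.querySelector and hand it to a DOM-to-image library. The disclosure does not name the library it uses; html2canvas is an assumption chosen here because it performs exactly this conversion:

```typescript
import html2canvas from "html2canvas"; // assumed library; the disclosure names none

async function captureVisualization(selector: string): Promise<string> {
  // The <div> sits behind the video player (z-index: -1), so it is not
  // visible on screen but is still present in the DOM.
  const div = document.querySelector<HTMLDivElement>(selector);
  if (!div) throw new Error(`No element matches ${selector}`);

  // Rasterize the hidden <div> and return the screenshot as a PNG data URL.
  const canvas = await html2canvas(div);
  return canvas.toDataURL("image/png");
}
```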
In some implementations, the input to the LiveVideo creator 204 can be the image file 218, the visualization image file 212, and/or the audio file 220. Using this incoming data, a metadata file is provided, which defines what media the video is made of. A sample metadata file can be provided as:
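The sample file itself is elided from this text. As a hedged sketch, such a metadata file might contain entries like the following; apart from the "LIVEFILE" type named in the disclosure, the structure and field names are assumptions:

```typescript
// Hypothetical metadata describing the media the video is composed of.
const videoMetadata = {
  media: [
    { type: "IMAGEFILE", path: "/media/intro.png", durationSeconds: 5 },
    {
      type: "LIVEFILE",                            // dynamic analytical content
      path: "/media/revenue-chart.png",
      durationSeconds: 5,
      metafile: "/media/revenue-chart.meta.json",  // query info for re-runs
    },
    { type: "AUDIOFILE", path: "/media/voiceover.mp3", startSeconds: 0 },
  ],
};
```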
Accordingly, the metadata provides details about the type of artifact, the duration of the content in the video, and where the image/audio file is stored in the system. Further, the “LIVEFILE” type, which has the analytical content, also has a path to the metafile, which contains the information required to re-run a query on a designated data source. An example metafile can be provided as:
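The metafile example is likewise elided. As a sketch, it would carry the analytical data needed to re-run the query on each play; the field names here mirror the AnalyticalData sketch above and are assumptions:

```typescript
// Hypothetical metafile contents for a LIVEFILE entry.
const liveFileMeta = {
  dataSource: "operationsdata.csv",
  visualizationType: "bar",
  dimensions: ["City"],
  measures: ["Sales Revenue"],
  filters: [{ dimension: "City", values: ["Bangalore", "Chennai", "Mumbai"] }],
};
```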
In some implementations, the video is generated using an open-source framework (e.g., ffmpeg). The LiveVideo creator reads the metadata file it has created and converts it into a format that can be given as input to ffmpeg. The created video is sent to the player so that it can be played, and the metadata is stored for future use to recreate the video, as described herein.
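As a sketch of this conversion step, the per-image durations can be translated into an ffmpeg concat-demuxer list and fed to a standard ffmpeg invocation. The file layout and Node.js wrapper are assumptions; the demuxer format and flags shown are standard ffmpeg usage:

```typescript
import { writeFileSync } from "node:fs";
import { execFileSync } from "node:child_process";

// Translate per-image durations into ffmpeg's concat demuxer format.
const entries = [
  { path: "intro.png", durationSeconds: 5 },
  { path: "revenue-chart.png", durationSeconds: 5 },
];
const list = entries
  .map((e) => `file '${e.path}'\nduration ${e.durationSeconds}`)
  .join("\n");
// With the concat demuxer, the last file is conventionally listed a second
// time so that its duration is honored.
writeFileSync("inputs.txt", `${list}\nfile '${entries[entries.length - 1].path}'\n`);

// Standard concat-demuxer invocation; audio muxing is omitted for brevity.
execFileSync("ffmpeg", [
  "-f", "concat", "-safe", "0",
  "-i", "inputs.txt",
  "-pix_fmt", "yuv420p",
  "out.mp4",
]);
```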
In some implementations, in response to the input 402, the LiveVideo creator 204 references the metadata 216 to determine whether any data sources are to be queried to provide an updated visualization. If the metadata 216 indicates one or more data sources, the LiveVideo creator 204 requests one or more updated images from the offline engine and image generator 202. In some examples, the LiveVideo creator 204 transmits a request that indicates the data source that is to be queried and the analytical data that is to be used to query the data source (e.g., both provided from the metadata 216). In some examples, in response to the request, the offline engine and image generator 202 generates an image 212′ that depicts a visualization. In some examples, the image 212′ is generated in the same manner as described above with respect to the image 212 and reference to
In some implementations, before generating the video 214′, it can be determined whether the data underlying the image 212 has changed within the data source 316 since the image 212 was generated. In some examples, if the data has not changed, generation of the video 214′ can be forgone and the video 214 (the originally created video) can be played in response to the input 402. That is, because the data is unchanged, the visualization depicted in the image 212 is unchanged, and there is no need to regenerate the image 212 as the image 212′. In this manner, time and computing resources can be conserved. In some examples, an indicator can be provided, which indicates when the data was last changed. For example, in response to a change in the data, an indicator can be provided to the offline engine and image generator 202 and/or the LiveVideo creator 204 (e.g., from the data source). In some examples, the data retrieved in response to the request from the LiveVideo creator 204 can be compared to the data originally retrieved in generating the image 212 to determine whether the data has changed since the image 212 was generated.
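A lightweight way to implement this comparison is sketched below. The disclosure describes the check but not a mechanism; hashing the retrieved rows is our assumption:

```typescript
import { createHash } from "node:crypto";

// Hash the retrieved rows; if the digest matches the one recorded when the
// image was first generated, the underlying data is unchanged and the
// original video can be played without regeneration.
function dataDigest(rows: unknown[]): string {
  return createHash("sha256").update(JSON.stringify(rows)).digest("hex");
}

function needsRegeneration(freshRows: unknown[], storedDigest: string): boolean {
  return dataDigest(freshRows) !== storedDigest;
}
```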
In accordance with implementations of the present disclosure, the video 504 can be played at a first time t1 as a video 504′ and can be played at a second time t2 as a video 504″. By way of non-limiting example, the first time t1 can be a day and the second time t2 can be a subsequent day. In the depicted example, the video 504′ includes content 510 that is displayed beginning at a time t′ within the video 504′. In the depicted non-limiting example, the content 510 includes a bar graph (e.g., representing revenue by city). In some implementations, at the second time t2, the video 504 is updated to include updated content that is to be presented within the video 504″.
A video and metadata are provided (602). For example, and as described herein (e.g., with reference to
A request to play the video is received (604). For example, and as described herein with reference to
If it is determined that the video is to be updated, one or more visualizations are provided (610). For example, and as described herein with reference to
Referring now to
The memory 720 stores information within the system 700. In some implementations, the memory 720 is a computer-readable medium. In some implementations, the memory 720 is a volatile memory unit. In some implementations, the memory 720 is a non-volatile memory unit. The storage device 730 is capable of providing mass storage for the system 700. In some implementations, the storage device 730 is a computer-readable medium. In some implementations, the storage device 730 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device. The input/output device 740 provides input/output operations for the system 700. In some implementations, the input/output device 740 includes a keyboard and/or pointing device. In some implementations, the input/output device 740 includes a display unit for displaying graphical user interfaces.
The features described can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The apparatus can be implemented in a computer program product tangibly embodied in an information carrier (e.g., in a machine-readable storage device, for execution by a programmable processor), and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output. The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Elements of a computer can include a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer can also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
To provide for interaction with a user, the features can be implemented on a computer having a display device, such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user, and a keyboard and a pointing device, such as a mouse or a trackball, by which the user can provide input to the computer.
The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, for example, a LAN, a WAN, and the computers and networks forming the Internet.
The computer system can include clients and servers. A client and server are generally remote from each other and typically interact through a network, such as the described one. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. Further, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims.
A number of implementations of the present disclosure have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the present disclosure. Accordingly, other implementations are within the scope of the following claims.
Inventors: Pavan Kowshik Santebidanur Nagaraja; Tapan Prakash T