The present invention relates to a system for transferring an interactive movie located at a server via a network and displaying the same at a user station, which comprises: (a) a display unit at the user station for displaying movie frames; (b) a control device at the user station for enabling the user to navigate within the movie; (c) a map of the movie describing the links between the individual movie frames and the index of each movie frame, wherein said map also maintains at any given time the present virtual location of the user within the map; and (d) a prediction unit for receiving inputs from said map and from a user control device, and based on said inputs predicting those future frames that may be required for view by the user, and instructing the server to convey said predicted future frames to the user station.
1. A computer-implemented method for transferring an interactive movie from a remote server via a network, the interactive movie including a plurality of frames, each frame including an image of a scene at a particular location along a path, the method comprising:
determining whether the interactive movie is playing, wherein playing of the interactive movie includes rapidly displaying images associated with at least a portion of a plurality of frames that make up the interactive movie;
while the interactive movie is playing, generating lower resolution future frames, wherein a particular lower resolution future frame corresponds to a particular frame of the interactive movie and includes a lower resolution version of the image associated with the particular frame;
determining whether the interactive movie is paused, wherein pausing of the interactive movie includes displaying the image associated with a single frame;
while the interactive movie is paused, generating a higher resolution frame, wherein the higher resolution frame corresponds to the single frame associated with the image currently displayed and includes an image at a higher resolution than the image associated with a corresponding lower resolution frame;
determining an indicator's location on a map of the interactive movie, the map representing at least a portion of the path, wherein the map describes a sequence for displaying the plurality of frames, and wherein the indicator shows a location on the map that represents a point along the path where the image associated with a frame currently selected for display was captured; and
determining, based on either the location of the indicator on the map or input received from a user, one or more future frames that may be displayed.
2. The computer-implemented method of
generating a map of the interactive movie that represents the path; and
updating an indicator on the map, wherein the indicator shows a location on the map that represents a point along the path where an image associated with a currently displayed frame was captured.
3. The computer-implemented method of
predicting, based on the indicator's location on the map, an optional near future frame that may be required for display; and
generating the optional near future frame from the remote server.
4. The computer-implemented method of
requesting an image associated with a frame at a specific resolution level, wherein the specific resolution level is based, at least in part, on user input.
5. The computer-implemented method of
6. The computer-implemented method of
7. The computer-implemented method of
8. The computer-implemented method of
9. A computer-implemented method for transferring an interactive movie to a user device via a network, the interactive movie including a plurality of frames, each frame including an image captured while traversing a path, comprising:
generating, by a computer processor, at least a portion of a map of the interactive movie, the map representing at least a portion of the path, wherein the map describes a sequence for displaying a frame, and wherein the map includes an indicator that shows a location on the map that represents a point along the path where the image associated with the currently displayed frame was captured;
predicting, by the computer processor, based on either the location of the indicator on the map or control data received from a user, a frame that may be displayed in the future;
determining whether the interactive movie is playing, wherein playing of the interactive movie includes rapidly displaying images associated with at least a portion of the plurality of frames that make up the interactive movie;
while the interactive movie is playing, generating, by the computer processor, the plurality of frames, wherein each frame in the plurality of frames includes a lower resolution version of the image associated with the frame;
determining whether the interactive movie is paused, wherein pausing of the interactive movie includes displaying the image associated with a single frame; and
while the interactive movie is paused, generating, by the computer processor, the single frame, wherein the image of the single frame is at a higher resolution than the resolution used while rapidly displaying the images.
10. The computer-implemented method of
11. The computer-implemented method of
12. The computer-implemented method of
13. The computer-implemented method of
14. A computer-implemented method for displaying an interactive movie at a user device, the interactive movie including a plurality of frames, each frame including an image captured while traversing a path, the method comprising:
determining whether the interactive movie is playing, wherein playing of the interactive movie includes rapidly displaying images associated with at least a portion of the plurality of frames that make up the interactive movie;
while the interactive movie is playing, receiving, by a computer processor, the plurality of frames, wherein each frame in the plurality of frames includes a lower resolution version of the image associated with the frame;
determining whether the interactive movie is paused, wherein pausing of the interactive movie includes displaying the image associated with a single frame;
while the interactive movie is paused, receiving, by the computer processor, the single frame, wherein the image of the single frame is at a higher resolution than the resolution used while rapidly displaying the images;
receiving, by the computer processor, at least a portion of a map of the interactive movie, the map representing at least a portion of the path, wherein the map describes a sequence for displaying each frame;
receiving, by the computer processor, an indicator that shows a location on the map that represents a point along the path where the image associated with the currently displayed frame was captured;
displaying, by the computer processor, the map at the user device;
predicting, by the computer processor, based on either the location of the indicator on the map or control data input by a user, a frame of the interactive movie that may be displayed at the user device in the future; and
requesting the frame that may be displayed in the future.
15. The computer-implemented method of
16. The computer-implemented method of
17. The computer-implemented method of
This application is a Continuation of U.S. patent application Ser. No. 11/915,646, with a §371(c) date of Nov. 27, 2007. This application claims priority to and the benefit of PCT/IL2006/000629, filed on May 29, 2006. This application also claims priority to and the benefit of Israeli Patent Application No. 168903, filed May 31, 2005. Each of these applications is hereby incorporated by reference in its entirety.
The field of the invention relates to systems and methods for displaying movies via the Internet. More particularly, the invention relates to a system and method for displaying an interactive movie via a network, such as the Internet, and most preferably to a system and method for displaying via the Internet an interactive movie that enables a user to virtually travel within a physical site, such as a building or office.
Virtual movies are widely used for a variety of purposes, such as exhibitions, remote guidance and touring of places, education, training, electronic games, etc. Some such movies are animated, others are real, in the sense of comprising real frames that are photographed within a real, physical site, and still others combine a mixture of real and animated frames. (It should be noted that throughout this application the term “frame” is used to indicate a unitary view that is presented to the user at a specific time unit. No distinction is made as to how the frame is produced, whether by acquiring it fully from the multimedia content or by producing it from a previous frame by completing only the differences from that previous frame, in the manner in which some compression tools operate.) The present invention relates to all such cases, but mostly to the latter case, i.e., to a virtual and interactive movie that enables a user to interactively explore a real site, in which the user wishes to watch a movie of the site and navigate interactively within it.
In recent years there have been extensive efforts to streamline the playing of multimedia over computer networks without the need for a full movie preload prior to continuous playback. Throughout this application, the discussion refers to video movies (or clips), although some aspects of the invention may be relevant to other types of multimedia. Most existing solutions take into account the fact that the network bandwidth is limited and that the speed of transfer varies dynamically. Some solutions suggest reducing the amount of data transferred (e.g., by compromising the media quality or by compressing the data) and adjusting the amount of data transferred to the network speed. The main objective is to obtain a smooth experience with maximum image quality for a given bandwidth. One common solution suggests performing a preload of a multimedia sequence (such as a movie clip) to balance the stream of information loading during playback, in order to improve the availability of the clip portions at the user's station and to prevent delays and suspensions. Throughout this application, the term “station” relates to any type of computer, processor, terminal, workstation, PDA, cellular phone, etc. which has a player for playing content. Other factors that may affect the proper manner of handling multimedia information at the client side are the size of the data file and the amount of free storage at the user's end. Because the multimedia file at the remote (server) side is usually large, the fact that it cannot always be fully stored or fully preloaded at the client side must be taken into account. In such a case, the user's experience may be hindered by the need for a relatively long wait for the data he wishes to play.
The above prior art solutions are relatively satisfactory when playing a continuous conventional multimedia clip. However, such solutions cannot meet the stringent requirements of playing an interactive movie via a network of limited bandwidth. In a conventional movie the sequence of the movie frames is well defined, and therefore maintaining a FIFO-type register at the user station can assist in the availability and integrity of the currently played sequence. However, this is not the case when playing an interactive virtual movie in which, during many stages of the movie, the user is given several options from which to select and continue with different sequences of the movie. This requirement to provide several video sequences to the user at any given time imposes a burden on the effective playing of an interactive virtual movie via the Internet (or any other bandwidth-limited network).
WO 2004/049263 by the same inventors discloses a method and system for the preparation of an interactive movie, which simulates the walking of a user within a real site and which enables a user to explore the site. Such a virtual movie can be streamed and displayed to a user relatively smoothly, in a high resolution and fast manner. However, any attempt to transfer and simultaneously display this interactive movie via a network, such as the Internet, at a station remote from the location in which the movie is stored, faces difficulties, particularly in satisfying the interactivity and smoothness requirements.
It is an object of the present invention to provide a method and system for playing a virtual movie via a bandwidth-limited (sometimes very low bandwidth) network, such as the Internet.
It is another object of the invention to enable playing of such a movie in a manner that minimally affects the quality that the user experiences.
It is a particular object of the invention to enable the viewing via the Internet (or any other network) of a virtual movie for exploring a site.
Other objects and advantages of the invention will become apparent as the description proceeds.
The present invention relates to a system for transferring an interactive movie located at a server via a network and displaying the same at a user station, which comprises: (a) a display unit at the user station for displaying movie frames; (b) a control device at the user station for enabling the user to navigate within the movie; (c) a map of the movie describing the links between the individual movie frames and the index of each movie frame, wherein said map also maintains at any given time the present virtual location of the user within the map; and (d) a prediction unit for receiving inputs from said map and from a user control device, and based on said inputs predicting those future frames that may be required for view by the user, and instructing the server to convey said predicted future frames to the user station.
Preferably, the system of the invention further comprises a cache unit at the user station for temporarily storing movie frames.
Preferably, the prediction unit orders a first level of resolution frames when the user is in a virtual movement within the interactive movie, and a second level of resolution frames when the user is stationary within the virtual movie. Throughout this application the term “resolution” of a frame is used in its broader interpretation to provide an indication either of the number of pixels in a frame or of the quality of the frame, wherein the “quality” indicates the amount of data or rate of distortion of the frame (for example due to compression-decompression).
Preferably, said first level is lower than said second level.
Preferably, the frames are stored in the server in at least two resolution levels.
Preferably, the frames are stored in the server in at least one resolution level, and the server comprises transform means for converting the resolution level of the frames to at least one other resolution level.
Preferably, the prediction unit instructs the server to convey a third level of resolution frames upon a special request by the user.
Preferably, said third level of resolution is the highest resolution level.
In one embodiment, the prediction unit is positioned at the user station.
In another embodiment, the prediction unit is positioned at the server.
In one embodiment, the map is positioned at the server.
In another embodiment, the map is positioned at the user station.
In still another embodiment, both the prediction unit and the map are positioned at the server.
In still another embodiment, both the prediction unit and map are positioned at the user station.
Preferably, the system further comprises a coordinator at the user station for communicating with said cache and coordinating the displaying of frames on the user display.
In an embodiment of the invention, the prediction unit and map are positioned at the server. Preferably, the system further comprises a coordinator at the user station for communicating with said cache and coordinating the display of frames on the user display.
In an embodiment of the invention, each frame is conveyed from the server to the user station associated with one or more next frame indices, and wherein said coordinator at the user station determines from said next frame indices the order of the frames to be displayed.
Preferably, said coordinator at the user station determines said order of frames from said one or more next frame indices and from inputs it receives from said user control device.
Preferably, the system further comprises a transfer coordinator at the server for coordinating the flow of frames from the server to the user station.
The present invention further relates to a method for transferring a movie from a remote server via a network and displaying the movie at a user station, which comprises: (a) transferring and displaying to the user lower resolution frames as long as the user is in movement within the movie; and (b) transferring and displaying to the user medium or higher resolution frames as long as the user is stationary within the movie.
Preferably, the method further comprises the steps of: (a) providing a map of the movie; (b) continuously updating the map with the present location of the user within the movie; and (c) looking at the map and, from the present location of the user, predicting those optional near future frames that may be required for display to the user, and ordering said future frames from the server.
A virtual movie is a movie which enables a user to view video (or sequences of frames), in which the user has, in at least some places within the movie, the option of selecting which video segment will be displayed to him. Therefore, in contrast to the case of a regular movie in which the whole sequence and order of the movie is fixed, predefined and known, in a virtual movie the order of the movie segments is left to the user's discretion. Therefore, the transfer of the movie over a network for playing in a user's station (which requires real time availability of the movie frames at the user station) becomes much more complicated, especially when the bandwidth of the network is relatively low. Of course, the higher the resolution of the movie and the larger the content of the whole movie are, the more complicated this real time availability problem of the movie frames at the user station becomes.
As previously said, at any given time at least a portion (generally made from a sequence of frames) of the future movie segments that the user wishes to view has to be available at his station. However, in view of the interactivity, and as there are points in the movie at which the user has several movie segments to select from, at least a portion of each of said optional segments has to be available for display at his station at the time the selection is made, in order to assure smooth and continuous viewing. This interactivity requirement further increases the challenge that the present invention faces.
In any virtual movie, there are times in which the user navigates while being in movement, and there are other times in which he stops and focuses on a specific view. One aspect of the present invention is the observation that during the “movement” of the user, in which the movie frames are rapidly changed, the resolution can be reduced, as it is assumed that during movement the user does not have a high interest in any specific view included in the rapidly changing sequence of frames.
Moreover, during said “movement” periods, the rate of change of the frames ensures that such a reduction of resolution does not severely disturb the user experience. However, when the user “stops” and focuses on a specific view, it is assumed that he has a high interest in said view, and therefore at such “stopping” times a frame with an increased resolution is provided to the user and displayed to him. Furthermore, according to a preferred aspect of the invention, at all such stopping points the user is given the option of requesting and immediately receiving a highest resolution frame.
Every movie comprises one or more sequences of frames. A conventional movie is essentially one long sequence of frames, while a virtual movie is comprised of several segments, each segment comprising a sequence of frames. The virtual movie also includes at least several junction locations in which a user can select from two or more optional segments to continue. The rate of virtuality of the movie is generally limited, as the user cannot go from each point to all other points in the movie, but he can do so only from some junctions. Therefore, the sequence of frames within a segment is generally fixed, but the transfer from one segment to another is flexible.
The present invention assumes the availability of a digital scheme (also referred to herein as “map”) of the movie, indicating all possible routes of the movie, including all those junctions and optional segments given to the user in each junction. Such a scheme contains at least several segments of frames. In each segment, each movie frame is connected to at least one, and generally to more than one next and/or previous frames of the segment. A connection of a frame to a previous frame may be referred to as an “entry” and each connection of a frame to a next frame may be referred to as “exit”. An example of such a scheme and a virtual movie is provided in WO 2004/049263.
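Purely by way of illustration, such a scheme could be represented at the user station as a small linked data structure. The sketch below is not taken from the patent or from WO 2004/049263; the class and field names (Frame, Segment, MovieMap, entries, exits, current_location) are hypothetical and are chosen only to mirror the description above.

```python
# Illustrative sketch only: one possible in-memory representation of the movie
# "map" (scheme) described above. All names here are hypothetical.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class Frame:
    segment_id: int          # reference number of the segment this frame belongs to
    index: int               # sequential index of the frame within its segment
    entries: List[Tuple[int, int]] = field(default_factory=list)  # (segment_id, index) of previous frames
    exits: List[Tuple[int, int]] = field(default_factory=list)    # (segment_id, index) of next frames


@dataclass
class Segment:
    segment_id: int
    frames: List[Frame]


@dataclass
class MovieMap:
    segments: Dict[int, Segment]
    # current virtual location of the user within the map: (segment_id, frame_index)
    current_location: Tuple[int, int]

    def current_frame(self) -> Frame:
        seg_id, idx = self.current_location
        return self.segments[seg_id].frames[idx]

    def is_junction(self, frame: Frame) -> bool:
        # A frame with more than one exit is a junction: the user may continue
        # along two or more optional segments from here.
        return len(frame.exits) > 1
```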
The scheme of
WO 2004/049263 describes one way of producing a virtual movie which can be described in the form of a scheme, which is suitable for carrying out the invention. An office is filmed by means of a scanning apparatus which captures frames in up to a 360° field of view. The scanning apparatus is moved forward along the route, while capturing photographs in a relevant (predefined) field of view, from a plurality of filming spots (nodes).
The plurality of points along the route in which the scanning (filming) is performed is indicated in
It should be noted that the interactivity of the movie enables the user in some, generally junction, locations to decide between two optional courses. For example, assume that the user “walks” from node point 15 toward junction J1. When he arrives at junction J1, he can either continue in the corridor C1, or turn right to room B. Therefore, prior to the user arriving at junction J1 (for example, at point 17), the display at the user station should already include both the frames of room B (nodes 20, 27, . . . ) and the frames of the continuing portion of the corridor C1 (nodes 21, 28, . . . ).
Before the filming stage according to WO 2004/049263, a plan is made regarding the routes along which the filming will take place. Along the planned routes, the nodes are created whenever the scanning apparatus passes a predefined distance D, which may be, for example, every 5-50 cm, or changes its angular orientation by a predefined angle Δ° (for example 5°). For each node, a specific camera unit field of view is created. The term “camera unit field of view” defines, for each node, the cumulative angular field of view that is captured by the individual cameras forming the camera unit. Then, the scanning apparatus progresses along the planned routes and captures frames.
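The node-creation rule just described (a new node whenever the apparatus has moved a predefined distance D or turned by a predefined angle Δ°) can be summarized by a short check of the following kind. This is only a sketch; the threshold values and the function name are illustrative assumptions, not part of the disclosure.

```python
import math

# Illustrative only: decide whether a new node should be created, following the
# rule described above (a node every predefined distance D or angular change delta).

def should_create_node(prev_pos, cur_pos, prev_angle_deg, cur_angle_deg,
                       d_threshold_m=0.25, angle_threshold_deg=5.0):
    distance = math.dist(prev_pos, cur_pos)                      # metres moved since the last node
    angle_change = abs(cur_angle_deg - prev_angle_deg) % 360.0   # change in orientation
    angle_change = min(angle_change, 360.0 - angle_change)       # handle wrap-around, e.g. 359° -> 1°
    return distance >= d_threshold_m or angle_change >= angle_threshold_deg
```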
WO 2004/049263 therefore assumes that the interactive virtual movie is a collection of image data frames that are captured along routes within the physical site. This is translated to a linked scheme of nodes such as of
It should be noted that each segment of the movie preferably has its own unique reference number, and all the frames within each segment also receive reference numbers, preferably according to their sequential order within the segment.
As said, the invention of WO 2004/049263 enables a user to virtually navigate in the filmed site, by moving in various directions and by looking in selected directions. The user is provided with a player, and the movement is made by means of a pointing device, by the arrow keys of the keyboard, by a joystick, etc. The movie contains a very large amount of data, which does not enable a smooth view by a user via a narrow bandwidth network, such as the Internet. The fact that the movie is interactive, which gives the user the option of deciding in which direction to look and, at junctions such as junction J1, in which direction to move, multiplies the amount of data that has to be handled in real time by several orders of magnitude. When trying to enable a user to view such a movie via the Internet, this problem becomes much more complicated.
The PFOC 108 continuously receives updates 122 relating to the location of the user on the map 109, predicts the next movements of the user, and accordingly orders the necessary frames from the server 150 (it only orders those frames that are not already in the cache), in order to assure their availability at the display unit for maintaining smooth movie playing. It should be noted that, within continuous segments where the user's next movement is clearly defined, all or some of the frames up to the next junction may be ordered; but if there is an approaching junction, the PFOC 108 orders frames relating to the routes that follow the junction. The number of frames that are ordered from said routes may vary. If at some stage one of the said several optional routes becomes evident, the frames corresponding to this route are displayed to the user. The frames that have not been used for the longest time may be deleted from the cache and replaced by newly arriving frames. The PFOC 108 may operate in such a manner as to optimize the availability of frames at the cache when they are needed. If ordering only a few future frames in each order is found to be enough to ensure smoothness, this is preferable, as in this manner fewer frames are expected to be ordered and ultimately left unused; in that case, however, more ordering operations from the server 150 will be performed. The ordering by the PFOC 108 generally involves two steps: in the first step the PFOC checks the availability (line 112) of the necessary frames at the cache unit 105, and the second step involves ordering (line 113) from the server only those frames that are not available in the cache unit 105. The cache, in turn, reports the availability of frames over line 114 to both the PFOC 108 and the exploration coordinator 103. The frames ordered from the server are conveyed to the user end 100 over channel 116. As said, the exploration coordinator 103 also receives updates 122 relating to the present location of the user on the map 109, as it needs this information in order to determine at any given moment what the next virtual location of the user in the map will be and the exact frame that has to be fetched from the cache and conveyed to the display unit 101.
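One way to picture this prediction-and-ordering step is the sketch below. It is not the patented implementation; the lookahead depth and the cache/server interfaces (cache.contains, server.request_frames) are assumptions made only for illustration, and it builds on the hypothetical MovieMap structure sketched earlier.

```python
# Illustrative sketch of a prediction-and-ordering step such as the one performed
# by the PFOC: look ahead from the user's current map location, skip frames that
# are already cached, and order only the missing ones from the server.
# The cache and server interfaces used here are hypothetical.

def order_future_frames(movie_map, cache, server, lookahead=5):
    wanted = []

    # Walk forward along the map. Within a continuous segment there is a single
    # exit; at a junction, frames of every optional route must be made available.
    frontier = [movie_map.current_frame()]
    for _ in range(lookahead):
        next_frontier = []
        for f in frontier:
            for seg_id, idx in f.exits:
                wanted.append((seg_id, idx))
                next_frontier.append(movie_map.segments[seg_id].frames[idx])
        frontier = next_frontier

    # Step 1: check availability in the local cache (line 112 in the description).
    missing = [key for key in wanted if not cache.contains(key)]

    # Step 2: order from the server only the frames not already cached (line 113).
    if missing:
        server.request_frames(missing)
```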
Preferably, the operation of the PFOC is enhanced to also continuously check the streaming of the frames from the server to the user station, in order to eliminate the transfer of frames from the server that are found to be no longer necessary at the user station. For example, there may be cases in which a sequence of future frames has been ordered from the server, but while a portion of said sequence has not yet been transferred it is determined that this portion is no longer required at the user station (such a case may arise, for example, from a prediction by the PFOC that has subsequently been found to be non-relevant). In such a case, the PFOC 108 issues a signal to the server to cancel the transfer of those frames that are found to be no longer necessary.
The system as described, and particularly the use of a prediction unit and map, ensures that the future frames are available at the display unit 101 whenever they are required to ensure smooth play. The use of cache further enhances the system by ensuring that frames that are available at the user end (in cache unit 105) are not reordered, but they are fetched from the cache.
As previously said, still another aspect of the invention involves the use of two or more resolution levels of the frames that are ordered and displayed to the user.
According to a preferred embodiment of the invention, still another task of the PFOC 108 is to determine the movement type of the user. As said, the PFOC 108 receives indications relating to the user's movement from the control device 107 over line 111a. Therefore, and for the reasons explained above, when the PFOC 108 determines that the user is in movement, it orders low resolution frames from the server. Whenever the PFOC 108 determines that the user has stopped and is looking intently at a specific view, the PFOC 108 orders a medium resolution frame. However, when the user wishes to closely examine a specific view, he is provided with means, such as a button in the user interface, for ordering a high resolution frame. This indication is conveyed over line 120 to the PFOC 108, which in turn orders the highest resolution frame from the server. It should be noted that whenever a medium resolution frame or a highest resolution frame is ordered, the ongoing transfer of frames is halted (or reduced to low priority), and channel 116 is solely (or mostly) dedicated to the transfer of said medium or highest resolution frame. It should be noted, however, that this halt of the low resolution frame transfer is very short, as only one frame is transferred. When said ordered medium resolution or high resolution frame arrives in the cache, the regular process of low resolution frame transfer resumes.
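The resolution-selection behaviour just described can be sketched roughly as follows. The level names and the transfer-control calls (pause_low_resolution_stream, request_frame, resume_low_resolution_stream) are assumptions for illustration only, and the sketch simplifies the timing: in the description above the low resolution stream resumes only once the ordered frame has arrived in the cache.

```python
# Rough sketch of the resolution policy described above: low resolution while the
# user moves, medium when the user stops, high only on an explicit user request.
# Ordering a medium or high resolution frame temporarily halts (or deprioritizes)
# the ongoing low-resolution stream. All interfaces here are hypothetical.

LOW, MEDIUM, HIGH = "low", "medium", "high"

def choose_resolution(user_is_moving: bool, high_res_requested: bool) -> str:
    if high_res_requested:
        return HIGH
    return LOW if user_is_moving else MEDIUM

def order_frame(server, frame_key, user_is_moving, high_res_requested):
    level = choose_resolution(user_is_moving, high_res_requested)
    if level in (MEDIUM, HIGH):
        server.pause_low_resolution_stream()   # halt or deprioritize the bulk transfer
        server.request_frame(frame_key, resolution=level)
        server.resume_low_resolution_stream()  # simplified: resume after the single frame is ordered
    else:
        server.request_frame(frame_key, resolution=LOW)
```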
The exploration coordinator 103 uses the movement information 111b in order to determine at any moment the frame required by the client. It further checks the availability of this frame in cache 105, and assuming it has already been ordered by the PFOC 108 and exists in cache 105, the exploration coordinator fetches the frame, and conveys this frame over channel 102 to the display unit 101.
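A minimal sketch of this fetch-and-display step might look as follows; it reuses the hypothetical MovieMap structure from earlier, and the cache, display, and movement-input conventions (cache.get, display.show, movement_input as the index of the chosen exit) are assumptions, not the actual interfaces.

```python
# Minimal sketch: determine the frame currently required from the user's movement
# input, fetch it from the local cache if available, and hand it to the display.
# All interfaces here (cache.get, display.show) are hypothetical.

def display_step(movie_map, cache, display, movement_input):
    # Advance the virtual location on the map: follow the exit selected by the
    # user's movement controls (or the single exit within a continuous segment).
    frame = movie_map.current_frame()
    if frame.exits:
        choice = movement_input if movement_input < len(frame.exits) else 0
        movie_map.current_location = frame.exits[choice]

    frame_image = cache.get(movie_map.current_location)  # normally pre-ordered by the prediction unit
    if frame_image is not None:
        display.show(frame_image)
    # If the frame is not yet cached, the player may keep showing the previous
    # frame until the ordered frame arrives from the server.
```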
With reference to
It should be noted that the three storages for the three levels of resolution have been given herein as an example only. Instead of three separate storages, there may be only one, high resolution storage, and whenever another resolution level is ordered, the high resolution frame may be transformed by the transfer coordinator 157 into the desired resolution before the transfer.
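As a sketch of this single-storage alternative, a server could keep only the high resolution images and downscale them on demand before transfer. The example below uses the Pillow imaging library purely as an illustration of such a transform; the scale factors, quality setting, and function name are assumptions, not values taken from the disclosure.

```python
from io import BytesIO
from PIL import Image  # Pillow, used here only to illustrate an on-demand transform

# Illustrative scale factors; the actual resolution levels are a design choice.
SCALE = {"high": 1.0, "medium": 0.5, "low": 0.25}

def frame_at_resolution(high_res_path: str, level: str) -> bytes:
    """Return the stored high-resolution frame, downscaled to the requested level."""
    img = Image.open(high_res_path).convert("RGB")
    factor = SCALE[level]
    if factor < 1.0:
        new_size = (max(1, int(img.width * factor)), max(1, int(img.height * factor)))
        img = img.resize(new_size, Image.LANCZOS)
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=85)  # compress before sending over the network
    return buf.getvalue()
```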
As shown, in the first embodiment, the map must be located at the user end. Therefore, in the first embodiment, when a user wishes to view a specific interactive movie, the map of that movie is first transferred to the user side.
The PFOC 208 continuously receives updates 222 relating to the location of the user on the map 209, predicts the next movements of the user, and accordingly orders the necessary frames from one of the three frame storages (low resolution storage 251, medium resolution storage 252, or high resolution storage 253). As will be described hereinafter, in most cases the order will be performed from the low resolution storage 251. This is done in order to assure the availability of the necessary frames at the display unit for maintaining smooth movie playing. It should be noted that, within continuous segments where the user's next movement is clearly defined, all or some of the frames up to the next junction may be ordered. However, if there is an approaching junction, the PFOC 208 orders frames relating to the routes that follow the junction. The number of frames that are ordered from said routes may vary. The PFOC 208 may operate in such a manner as to optimize the availability of frames at the cache 205 (at the user side) when they are needed. The cache 205 at the user side may generally operate in such a manner that those frames that have gone unused for the longest time are replaced by new ones. The ordered frames from the relevant storage are conveyed to the transfer coordinator 257, which in turn transfers them over channel 216 of the network to the user side for display.
As previously said with respect to the embodiment of
The system as described, and particularly the use of a prediction coordinator and map 209, ensures that the future frames are available at the display unit 201 whenever they are required to ensure smooth play. The use of cache further enhances the system by ensuring that frames that are available at the user end (in cache unit 205) are not reordered but are fetched from the cache.
As in the first embodiment, the operation of the second embodiment also involves the use of several resolution levels of the frames that are displayed to the user. As long as the PFOC 208 determines from the signal over line 211a (the movement controls from the control device 207) that the user is in movement, the PFOC 208 orders and causes lower resolution frames (from storage 251) to be sent to the user. Whenever the prediction coordinator 208 determines that the user stops moving, it immediately orders the transfer coordinator to halt the transfer. It then orders a medium resolution frame from storage 252 and sends this medium resolution frame to the user end. If, on the other hand, the PFOC 208 determines that the user wishes to view a high resolution frame, the order is made and conveyed in a similar manner from the high resolution storage 253.
In the embodiment of
In another aspect of the invention, it has been found that, simultaneously with the playing of the movie, it is preferable to display to the user a small version of the site map, on which the present user location is shown. Therefore, according to a preferred embodiment, before playing the movie the site map is initially transferred from the server to the user. Thereafter, each time a frame is transferred from the server to the user station, the x:y location from where the frame was captured and the view direction are also transferred in association with the frame. This additional data is also stored in the cache at the user station. When the virtual movie is played at the user station, said small map is also displayed, and, using said additional data, the current user location and view direction are displayed over the map.
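The per-frame data described above (the x:y capture location and the view direction transferred along with each frame) could be represented and used to update the mini-map indicator roughly as follows. The structure and the drawing call are illustrative assumptions only.

```python
from dataclasses import dataclass

# Illustrative sketch: per-frame metadata of the kind described above, transferred
# together with each frame and used to place the user's indicator on the mini-map.

@dataclass
class FrameMetadata:
    x: float                   # x coordinate on the site map where the frame was captured
    y: float                   # y coordinate on the site map
    view_direction_deg: float  # direction the camera faced when the frame was captured

def update_minimap(minimap, metadata: FrameMetadata):
    # "minimap.draw_indicator" is a hypothetical rendering call; a real player would
    # draw an arrow or marker at (x, y), rotated to the view direction.
    minimap.draw_indicator(metadata.x, metadata.y, metadata.view_direction_deg)
```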
It should also be noted that the three storages for the three levels of resolution have been given herein as an example only. Instead of three separate storages, there may be only one, high resolution storage, and whenever another resolution level is ordered, the high resolution frame may be transformed by the transfer coordinator into the desired resolution before the transfer. Furthermore, while in the first embodiment the map has to be transferred to the user side, according to the second embodiment this transfer of the map to the user side is not required. It should further be noted that the present invention does not preclude preloading of a portion of the movie prior to playing of the interactive movie. As in the prior art, the use of preload may be advantageous, particularly in slow networks.
While some embodiments of the invention have been described by way of illustration, it will be apparent that the invention can be put into practice with many modifications, variations and adaptations, and with the use of numerous equivalents or alternative solutions that are within the scope of persons skilled in the art, without departing from the spirit of the invention or exceeding the scope of the claims.
References Cited:
U.S. Pat. No. 5,446,834, Apr. 28, 1992, Sun Microsystems, Inc., "Method and apparatus for high resolution virtual reality systems using head tracked display."
U.S. Patent Application Publication No. 2002/0144276.
U.S. Patent Application Publication No. 2003/0172131.
U.S. Patent Application Publication No. 2004/0239699.
U.S. Patent Application Publication No. 2005/0024362.
U.S. Patent Application Publication No. 2005/0117055.
U.S. Patent Application Publication No. 2005/0125150.
WO 2004/049263.