A cache management system and method and a content distribution system. In one embodiment, the cache management system includes: (1) a content request receiver configured to receive content requests, (2) a popularity lifetime prediction modeler coupled to the content request receiver and configured to generate popularity lifetime prediction models for content that can be cached based on at least some of the content requests, (3) a database coupled to the popularity lifetime prediction modeler and configured to contain the popularity lifetime prediction models and (4) a popularity lifetime prediction model matcher coupled to the content request receiver and the database and configured to match at least one content request to the popularity lifetime prediction models and control a cache based thereon.
1. A cache management system, comprising:
a content request receiver configured to receive requests for a unit of content;
a popularity lifetime prediction modeler configured to generate popularity lifetime prediction models for content that can be cached based on at least some of said requests, wherein at least one of said popularity lifetime prediction models for predicting a future popularity of said unit of content is determined based on a peak rate of requesting said unit of content after an introduction of said unit of content, a decay factor, and a base determined for said unit of content; and
a popularity lifetime prediction model processor configured to associate at least one request for the unit of content with said popularity lifetime prediction models, and to control a cache based on the at least one popularity lifetime prediction model and at least one of the following stimuli:
notification of upcoming content availability, and
notification of upcoming promotions of content.
2. The cache management system as recited in
3. The cache management system as recited in
p_{i+1} = c + p_i * d_i, where p_0 is said peak request rate after said content item is added, d_i is said decay factor and c is said base determined for said unit of content.
4. The cache management system as recited in
5. The cache management system as recited in
6. The cache management system as recited in
7. A cache management method, comprising:
receiving requests for a unit of content;
generating popularity lifetime prediction models for content that can be cached based on at least some of said requests, wherein at least one of said popularity lifetime prediction models for predicting a future popularity of said unit of content is determined based on a peak rate of requesting said unit of content after an introduction of said unit of content, a decay factor, and a base determined for said unit of content; and
associating at least one request for the unit of content with said popularity lifetime prediction models, and controlling a cache based on the at least one popularity lifetime prediction model and at least one of the following stimuli:
notification of upcoming content availability, and
notification of upcoming promotions of content.
8. The cache management method as recited in
9. The cache management method as recited in
p_{i+1} = c + p_i * d_i, where p_0 is said peak request rate after said content item is added, d_i is said decay factor and c is said base determined for said unit of content.
10. The cache management method as recited in
11. The cache management method as recited in
explicit assertion/characterization about said unit of content.
12. The cache management method as recited in
13. A content distribution system, comprising:
mass storage;
a cache coupled to said mass storage; and
a cache management system associated with said cache and including:
a content request receiver configured to receive requests for a unit of content,
a popularity lifetime prediction modeler configured to generate popularity lifetime prediction models for content contained in said mass storage based on at least some of said requests, wherein at least one of said popularity lifetime prediction models for predicting a future popularity of said unit of content is determined based on a peak rate of requesting said unit of content after an introduction of said unit of content, a decay factor, and a base determined for said unit of content, and
a popularity lifetime prediction model processor configured to associate at least one request for the unit of content with said popularity lifetime prediction models, and to control said cache based on the at least one popularity lifetime prediction model and at least one of the following stimuli:
notification of upcoming content availability, and
notification of upcoming promotions of content.
14. The content distribution system as recited in
15. The content distribution system as recited in
p_{i+1} = c + p_i * d_i, where p_0 is said peak request rate after said content item is added to said mass storage, d_i is said decay factor and c is said base determined for said unit of content.
16. The content distribution system as recited in
17. The content distribution system as recited in
The invention is directed to a cache management system and method.
Several techniques exist for managing data caches. These techniques track requests for data from the cache and track which data are within the cache. They use this tracking information to determine whether data are in the cache and, when necessary, to determine which cached data should be removed to make room for new data. These techniques are distinguished primarily by the functions they use to select which data to move into or out of a cache.
Existing techniques use their tracking of data requests to manage cache content. To the extent that such techniques predict which data will be requested in the future, those predictions rest entirely on request tracking information; that is, their predictions of future requests are based simply on past requests.
To address the above-discussed deficiencies of the prior art, the invention provides a cache management system. In one embodiment, the cache management system includes: (1) a content request receiver configured to receive content requests, (2) a popularity lifetime prediction modeler coupled to the content request receiver and configured to generate popularity lifetime prediction models for content that can be cached based on at least some of the content requests, (3) a database coupled to the popularity lifetime prediction modeler and configured to contain the popularity lifetime prediction models and (4) a popularity lifetime prediction model matcher coupled to the content request receiver and the database and configured to match at least one content request to the popularity lifetime prediction models and control a cache based thereon.
Another aspect of the invention provides a cache management method. In one embodiment, the cache management method includes: (1) receiving content requests, (2) generating popularity lifetime prediction models for content that can be cached based on at least some of the content requests, (3) storing the popularity lifetime prediction models in a database and (4) matching at least one content request to the popularity lifetime prediction models and controlling a cache based thereon.
Yet another aspect of the invention provides a content distribution system. In one embodiment, the content distribution system includes: (1) mass storage, (2) a cache coupled to the mass storage and (3) a cache management system associated with the cache and including: (3a) a content request receiver configured to receive content requests, (3b) a popularity lifetime prediction modeler coupled to the content request receiver and configured to generate popularity lifetime prediction models for content contained in the mass storage based on at least some of the content requests, (3c) a database coupled to the popularity lifetime prediction modeler and configured to contain the popularity lifetime prediction models and (3d) a popularity lifetime prediction model matcher coupled to the content request receiver and the database and configured to match at least one content request to the popularity lifetime prediction models and control the cache based thereon.
For a more complete understanding of the invention, reference is now made to the following descriptions taken in conjunction with the accompanying drawings.
The content distribution system includes mass storage 110, which may take the form of an array of disk drives, configured to store content. The mass storage 110 is assumed to be of sufficient capacity to contain all content that can possibly be provided to users. A cache 120 is architecturally located between the mass storage 110 and content demand 130. As is true with caches in general, the cache 120 is capable of responding to content requests faster than the mass storage 110. However, it is assumed that the cache 120 is more expensive per unit of storage (e.g., terabyte) and therefore of substantially less capacity than the mass storage 110.
The cache 120 can fulfill content demand 130 to the extent that the cache 120 already contains requested content. The mass storage 110 must fulfill content demand 130 to the extent that the cache 120 does not already contain requested content. In the latter case, the cache 120 is typically updated with the requested content as it is retrieved from the mass storage 110. Updating the cache 120 makes the requested content more readily available for at least near-term future content requests. Fulfilling content requests with the cache 120 is typically far faster than fulfilling content requests with the mass storage 110, so it is beneficial to manage the cache 120 to increase the likelihood that it already contains requested content when a request for it is received. A cache management system 140 is provided to perform this function.
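By way of illustration only, the read-through behavior just described might be sketched as follows in Python; the class and identifier names are assumptions for exposition, and the eviction step is a placeholder for whatever policy the cache management system 140 applies.

```python
# A minimal read-through sketch, assuming a dict-like backing store.
# ReadThroughCache and fetch are illustrative names, not from the patent.
class ReadThroughCache:
    def __init__(self, backing_store, capacity):
        self.backing_store = backing_store  # stands in for mass storage 110
        self.capacity = capacity
        self.entries = {}                   # content_id -> content

    def fetch(self, content_id):
        # The cache fulfills the request to the extent it already holds the content.
        if content_id in self.entries:
            return self.entries[content_id]
        # Otherwise the mass storage must fulfill the request ...
        content = self.backing_store[content_id]
        # ... and the cache is updated, making the content more readily
        # available for near-term future requests.
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))  # placeholder eviction
        self.entries[content_id] = content
        return content
```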
The cache management system 140 is responsible for determining the optimal subset of content for the cache 120 to store, often continually loading content into the cache 120 and replacing content that was previously loaded. Conventional cache management systems base their determinations on the timing or number of past content requests. Some cache content that has been most recently requested; others cache content that has been most often demanded; still others cache content based on some combination of demand recency and demand frequency. These conventional cache management systems are reactive by nature: they adjust cache content only in response to past content requests, with the expectation that future content requests will bear some relationship to past ones. Unfortunately, the popularity statistics of future requests are not identical to those of past requests; they vary over time. This has been found particularly to be the case when the content in question includes newly introduced content, such as feature motion pictures or music, or existing ("library") content whose popularity has been temporarily revived by a promotion or recommendation. As a result, reactive cache management systems prove undesirable.
In contrast to the conventional reactive cache management systems described above, the invention provides, among other things, various embodiments of cache management systems and methods that are capable of predicting future content requests and adjusting cached content based on: (1) one or more explicit stimuli for future content popularity, (2) one or more models of popularity lifetime or (3) both. Various embodiments of the systems and methods described herein may be employed to select appropriate explicit stimuli for future content popularity, generate appropriate models of popularity lifetime, seed content in a cache, control cache data updates (replacements), and transmit content to or from a cache. The general goals of various embodiments of the cache management system are to cache content so as to increase, and perhaps maximize, the rate at which future content requests are served from the cache, and to cache content based on relevant explicit stimuli.
Before describing certain of the embodiments in detail, some general aspects of demand characteristics will be described to lay a foundation for understanding those embodiments.
It has been found that a relatively small set of distinct patterns describes the popularity of various content items over their lifetime. Patterns can be modified by defining values for their anchor points. Patterns can be assigned to a content item in two ways: (1) assignment by provider, e.g., based on experience, market data or marketing efforts (e.g., "The Pirates of the Caribbean") or (2) automatic assignment by tracking popularity over an initial period of time.
A provider can actively influence the popularity of a content item through, for example, recommender systems or marketing events (e.g., a “Casablanca weekend”). However, it has also been found that not all events are significant. For example, an Oscar® nomination has been found to have little or no immediate impact on popularity. This influence can be reflected in the popularity lifetime by creating insertions that modify the standard pattern. The cache management system may monitor actual popularity and make adjustments, e.g., by modifying anchor points or changing a pattern.
In one embodiment, a cache management system employs a caching technique in which caching is based on a time t_{i+1} instead of a time t_i. The issue to be resolved is how large i should be. The prediction for the popularity at time t_{i+1} may be calculated as follows:
p_{i+1} = c + p_i * d_i,   (1)
where p_0 is the peak request rate after the content item is added to the mass storage, d_i is a decay factor and c is a base determined for each content item. The decay factor is time-dependent so as to model an initial increase followed by a decline; d_i is likely to be constant after the initial peak, e.g., d_0 = 2, d_1 = 1, and d_i = 0.8 for i > 1. In one embodiment, values for d_i and c are determined algorithmically based on past requests. In another embodiment, d_i and c are adapted over time based on ongoing content requests.
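A minimal sketch of the recursion in equation (1), assuming the example decay schedule above (d_0 = 2, d_1 = 1, d_i = 0.8 for i > 1); the function name and the example values of p_0 and c are illustrative.

```python
# A sketch of equation (1): p_{i+1} = c + p_i * d_i, with the time-dependent
# decay schedule d_0 = 2, d_1 = 1, d_i = 0.8 for i > 1 given in the text.
def predict_popularity(p0, c, horizon):
    decay = lambda i: 2.0 if i == 0 else (1.0 if i == 1 else 0.8)
    p = [p0]
    for i in range(horizon):
        p.append(c + p[i] * decay(i))  # models an initial increase, then a decline
    return p

# Example (illustrative numbers): peak rate 100 requests/period, base c = 5.
print(predict_popularity(100, 5, 6))  # [100, 205.0, 210.0, 173.0, 143.4, ...]
```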
In another embodiment, a cache management system employs a caching technique in which caching is based on a defined border area (e.g., the least popular x % items in a given cache space). Content items that are increasing in popularity are preferred for caching over content items for which popularity is flat. Likewise, content items for which popularity is flat are preferred for caching over content items that are decreasing in popularity. The issue to be resolved is how large x should be.
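The border-area idea might be sketched as follows; the tuple layout and the trend encoding are assumptions for exposition, and the patent leaves the choice of x open.

```python
# A sketch of border-area selection: the least popular x% of cached items form
# the replacement candidate set, ordered so that declining items are replaced
# before flat ones, and flat before rising (+1 rising, 0 flat, -1 declining).
def replacement_candidates(items, x_percent):
    """items: list of (content_id, popularity, trend) tuples."""
    by_popularity = sorted(items, key=lambda it: it[1])  # least popular first
    border_size = max(1, len(items) * x_percent // 100)  # the x% border area
    border = by_popularity[:border_size]
    return sorted(border, key=lambda it: it[2])          # declining items first
```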
Prediction-based cache replacement involves caching a changing population. It is based on the observation that the popularity of content changes over its lifetime following a few specific patterns. For example, the popularity of blockbusters follows a geometric decay.
The decay factor, d_i, may be determined, for example, by examining reviews. One publicly available source that aggregates reviews for titles and presents them on the Internet is Rotten Tomatoes™. Using Rotten Tomatoes™ to form predictions on a per-title basis, d_i may lie between about 0.72 and about 0.86 for individual titles; the average d_i may be about 0.8. Given a per-title d_i, the average prediction error may be about 4%. With a global value of d_i = 0.8, the average error may be about 18%.
Two popularity lifetime prediction techniques will now be described. The first considers an observed popularity to correct the prediction. Observed popularity could be measured, e.g., using a Least Recently/Frequently Used (LRFU) paging technique; thus r_i is the LRFU popularity at time i. This popularity lifetime prediction technique might require an LRFU technique that determines the absolute popularity of titles (i.e., independent of other titles). A distance-based technique may alternatively be used, e.g., r_i = 1/(time_last …). The corrected prediction is then:
p_{i+1} = (c + p_i * d_i) * α + r_{i+1} * (1 − α).   (2)
For cache replacement, a rank may be determined based on the number of times the item is predicted to be accessed in the future. The cache rank at time i, cr_i, may be determined by accumulating the predicted request rates over a lookahead window, where k is the length of the lookahead window. An item in the cache would typically be replaced if a new item has a higher rank than the item in the cache with the lowest rank.
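As a concrete illustration, the following sketch combines the corrected prediction of equation (2) with such a lookahead rank. The summation form of the rank is an assumption inferred from the description above, and the function names are illustrative.

```python
# A sketch, not the patent's literal method: corrected_prediction implements
# equation (2); cache_rank accumulates predicted request rates over a
# lookahead window of length k (the summation form is an assumption).
def corrected_prediction(p_i, r_next, c, d_i, alpha):
    """p_{i+1} = (c + p_i * d_i) * alpha + r_{i+1} * (1 - alpha)."""
    return (c + p_i * d_i) * alpha + r_next * (1 - alpha)

def cache_rank(predictions, i, k):
    """cr_i: predicted accesses of an item over the k periods after time i."""
    return sum(predictions[i + 1:i + 1 + k])

# Replacement rule from the text: admit a new item when its rank exceeds the
# rank of the lowest-ranked cached item.
def should_replace(new_item_rank, cached_ranks):
    return bool(cached_ranks) and new_item_rank > min(cached_ranks)
```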
The second popularity lifetime prediction technique captures popularity trends. Equations employed in this technique are as follows:
L(t) = (1 − θ) * D(t) + θ * (L(t − 1) + T(t − 1)), and   (4)
T(t) = (1 − β) * (L(t) − L(t − 1)) + β * T(t − 1),   (5)
where D is an observation or a measurement, L is the smoothed (level) estimate and T is a trend (a slope). The technique involves forecasting k periods into the future, F(t + k):
F(t + k) = L(t) + k * T(t).   (6)
Smoothing parameters θ and β (each between 0 and 1) control the degree of forgetfulness of older measurements.
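Equations (4) through (6) have the form of Holt's linear exponential smoothing, so a short sketch is straightforward; the initializations of L and T below are assumptions, since the text does not specify them.

```python
# A sketch of equations (4)-(6), in the form of Holt's linear exponential
# smoothing. The initializations L(0) = D(0) and T(0) = 0 are assumptions.
def forecast_popularity(observations, theta, beta, k):
    """Return the k-period-ahead forecast F(t + k) = L(t) + k * T(t)."""
    level = float(observations[0])   # assumed: L(0) = D(0)
    trend = 0.0                      # assumed: T(0) = 0
    for d in observations[1:]:
        prev_level = level
        level = (1 - theta) * d + theta * (level + trend)          # eq. (4)
        trend = (1 - beta) * (level - prev_level) + beta * trend   # eq. (5)
    return level + k * trend                                       # eq. (6)

# Example with illustrative measurements and smoothing parameters:
print(forecast_popularity([10, 14, 19, 23, 30], theta=0.5, beta=0.5, k=3))
```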
Having set forth various explicit stimuli and popularity lifetime prediction techniques that may be employed to improve caching, an example of a cache management system will now be set forth.
The cache management system 140 is configured to receive content requests 810 into a request receiver 820. The cache management system 140 is further configured to receive explicit stimuli 830 into a stimulus receiver 840. A popularity lifetime prediction modeler 850 is coupled to the request receiver 820, and perhaps also to the stimulus receiver 840, and is configured to generate popularity lifetime prediction models for the content that the mass storage 110 contains.
In the illustrated embodiment, the popularity lifetime prediction modeler 850 generates a popularity lifetime prediction model for each content item that the mass storage 110 contains.
These techniques use predictions of future data requests and input from cache management routines to reduce network transmissions of data to and from caches. Similar techniques lower peak traffic loads, or the loads on specific network links, by scheduling and routing data to and from caches according to predicted values. For example, content that is predicted to become popular, but is not yet popular, may be loaded into the cache during appropriate times to avoid traffic load spikes.
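A minimal sketch of that load-smoothing idea, assuming a simple load-threshold test; the names and thresholds are illustrative, not from the patent.

```python
# A sketch: prefetch content that is predicted to become popular only while
# network load is below a threshold, deferring otherwise.
def schedule_prefetch(predicted_popularity, current_load, load_threshold, pop_threshold):
    """predicted_popularity: dict of content_id -> predicted request rate."""
    if current_load >= load_threshold:
        return []  # defer prefetching so as not to add to a traffic spike
    return [cid for cid, pop in predicted_popularity.items() if pop >= pop_threshold]
```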
Those skilled in the art to which the invention relates will appreciate that other and further additions, deletions, substitutions and modifications may be made to the described embodiments without departing from the scope of the invention.
Inventors: Borst, Simon C.; Rimac, Ivica; Hofmann, Markus A.; Ensor, James R.; Walid, Anwar I.; Hilt, Volker F.