Disclosed is a system for improving server efficiency by caching intermediate states encountered in generating responses to requests. The results of a mapping from an external name for a resource to an internal name for the resource may be cached, as may the response header lines or the body of the response message. In another disclosed aspect, candidates for intermediate-state caching are selected from plain, small files. When the resource involves the product of an executable, another aspect involves delaying the parsing of request headers until necessary, and then parsing only the headers required for generating the response.

Patent: RE42169
Priority: May 25, 1999
Filed: Dec 22, 2005
Issued: Feb 22, 2011
Expiry: May 25, 2019
0. 16. A computer-implemented method for efficiently generating responses for repeated resource requests comprising:
receiving a first request for a first resource, said first request including a resource identifier and one or more parameters;
determining whether generating a response for said first request requires parsing said one or more parameters; and, if not,
generating said response without parsing said one or more parameters.
1. A computer-implemented method for efficiently generating responses for repeated resource requests comprising:
receiving a first request for a first resource, said first request comprising a resource identifier and request modifying information;
determining whether generating a response for said first request requires parsing said request modifying information; and, if not,
generating said response without parsing said request modifying information.
0. 58. An apparatus configured to generate responses for repeated resource requests, said apparatus comprising:
a network interface configured to receive a first request for a first resource, said first request including a resource identifier and one or more parameters;
wherein said apparatus is configured to determine whether generating a response to said first request requires parsing said one or more parameters, and, if not, said apparatus is configured to generate said response to said first request without parsing said one or more parameters.
0. 67. A server configured to concurrently execute a plurality of request handling processes, wherein each of said plurality of request handling processes is configured to receive requests for resources located on said server, each of said received requests including a resource identifier and one or more associated parameters;
wherein each of said plurality of request handling processes is configured to determine whether generating responses for each of said received requests requires parsing said one or more associated parameters, and, if not, each of said plurality of request handling processes is configured to generate responses to said received requests without parsing said one or more associated parameters.
0. 84. A computer program product comprising a non-transitory computer-readable medium having computer readable instructions encoded thereon, comprising:
computer program instructions configured to cause a computer to receive a first request for a first resource, said first request comprising a resource identifier and one or more parameters;
computer program instructions configured to cause a computer to determine whether generating a response for said first request requires parsing said one or more parameters; and
computer program instructions configured to cause a computer to generate said response without parsing said one or more parameters if generating said response does not require parsing said one or more parameters.
15. A computer program product comprising a non-transitory computer-readable medium having computer readable instructions encoded thereon for reducing parsing of request modifying information, comprising:
computer program instructions configured to cause a computer to receive a first request for a first resource, said first request comprising a resource identifier and request modifying information;
computer program instructions configured to cause a computer to determine whether generating a response for said first request requires parsing said request modifying information; and
computer program instructions configured to cause a computer to generate said response without parsing said request modifying information if generating said response does not require parsing said request modifying information.
8. An apparatus comprising a processor, a memory, a network interface, and a file system, programmed instructions configuring said apparatus to accept connections in order to service requests by sending responses thereto, said apparatus further configured with programmed instructions comprising:
a request receiver configured for receiving a first request for a first resource, said first request comprising a resource identifier and request modifying information;
a response generator configured for generating a response to said first request and further configured for determining whether generating said response requires parsing said request modifying information; and
a request modifying information parser configured to parse request modifying information only when said response generator determines generating said response requires parsing said request modifying information.
2. The computer-implemented method of claim 1 wherein the request modifying information includes request headers.
3. The computer-implemented method of claim 1 wherein generating the response for the first request requires parsing said request modifying information if the response is generated by executing instructions.
4. The computer-implemented method of claim 3 wherein parsing the request modifying information includes parsing only a subset of the request modifying information necessary for generating the response.
5. The computer-implemented method of claim 3 wherein executing the instructions comprises executing a servlet.
6. The computer-implemented method of claim 3 wherein executing the instructions comprises executing a script.
7. The computer-implemented method of claim 3 wherein executing the instructions comprises executing a database query.
9. The apparatus of claim 8 wherein the response generator is configured to determine that generating the response requires parsing if the response generator generates the response by executing instructions.
10. The apparatus of claim 8 wherein the request modifying information parser is configured to parse only a subset of the request modifying information necessary for generating the response.
11. The apparatus of claim 8 wherein the request modifying information includes request headers.
12. The apparatus of claim 9 wherein the instructions comprise a servlet.
13. The apparatus of claim 9 wherein the instructions comprise a database query.
14. The apparatus of claim 9 wherein the instructions comprise a script.
0. 17. The method of claim 16, wherein said first request is an HTTP request.
0. 18. The method of claim 16, wherein said first request corresponds to a request for an enhanced television resource.
0. 19. The method of claim 16, wherein said resource identifier is an URI.
0. 20. The method of claim 16, wherein said one or more parameters include one or more request headers.
0. 21. The method of claim 16, wherein said determining is based at least in part on whether generating a response for said first request requires running executable code.
0. 22. The method of claim 16, wherein said determining is based at least in part upon cached information indicative of a file type of said first resource.
0. 23. The method of claim 16, further comprising parsing said one or more parameters if said parsing is required to generate said response for said first request.
0. 24. The method of claim 16, further comprising executing code as a part of generating said response for said first request.
0. 25. The method of claim 24, wherein said code is a servlet.
0. 26. The method of claim 24, wherein said code is a script.
0. 27. The method of claim 24, wherein said code is a database query.
0. 28. The method of claim 24, wherein said method further comprises parsing one or more request headers in said first request if responding to said first request requires running executable code.
0. 29. The method of claim 28, wherein only those request headers in said first request necessary to generate said response for said first request are parsed.
0. 30. The method of claim 16, wherein said generating includes using cached information.
0. 31. The method of claim 16, wherein said generating includes using said resource identifier to determine that information responsive to said first request is cached.
0. 32. The method of claim 31, wherein said determining whether information responsive to said first request is cached includes using said resource identifier to access a hash table.
0. 33. The method of claim 31, wherein said generating includes accessing a cached mapping of said resource identifier to a second identifier.
0. 34. The method of claim 33, wherein said second identifier is indicative of a location of said first resource.
0. 35. The method of claim 31, wherein said generating includes accessing cached debugging information associated with said first resource.
0. 36. The method of claim 31, wherein said generating includes accessing cached directory listing information corresponding to said first resource.
0. 37. The method of claim 31, wherein said generating includes accessing a cached body of said first resource.
0. 38. The method of claim 37, wherein said generating includes dynamically generating headers corresponding to said first resource.
0. 39. The method of claim 31, wherein said generating includes accessing cached headers corresponding to said first resource.
0. 40. The method of claim 39, wherein said generating includes accessing a non-cached body of said first resource.
0. 41. The method of claim 16, wherein said generating includes determining that information responsive to said first request for said first resource is not cached.
0. 42. The method of claim 41, wherein said determining that information responsive to said first request is not cached includes using said resource identifier to access a hash table.
0. 43. The method of claim 41, wherein said generating includes mapping said resource identifier to a second identifier.
0. 44. The method of claim 43, wherein said second identifier is indicative of a location of said first resource.
0. 45. The method of claim 44, further comprising caching said second identifier.
0. 46. The method of claim 41, wherein said generating includes accessing debugging information associated with said first resource.
0. 47. The method of claim 46, further comprising caching said debugging information.
0. 48. The method of claim 41, wherein said generating includes accessing directory listing information corresponding to said first resource.
0. 49. The method of claim 48, further comprising caching said directory listing information.
0. 50. The method of claim 41, wherein said generating includes accessing a body of said first resource.
0. 51. The method of claim 50, wherein said generating includes generating one or more response headers.
0. 52. The method of claim 50, further comprising caching said body of said first resource.
0. 53. The method of claim 51, further comprising caching one or more of said response headers.
0. 54. The method of claim 41, further comprising caching information used in generating said response for said first request.
0. 55. The method of claim 41, further comprising determining whether to cache information used in generating said response for said first request.
0. 56. The method of claim 55, wherein said caching is performed according to a caching policy that is based at least in part on a file type of said first resource.
0. 57. The method of claim 55, wherein said caching is performed according to a caching policy that is based at least in part on the size of said first resource.
0. 59. The apparatus of claim 58, wherein said first request is an HTTP request.
0. 60. The apparatus of claim 58, wherein said first request corresponds to a request for an enhanced television resource.
0. 61. The apparatus of claim 58, wherein said resource identifier is an URI.
0. 62. The apparatus of claim 58, wherein said one or more parameters include one or more request headers.
0. 63. The apparatus of claim 58, further comprising a cache, and wherein said apparatus is configured to determine whether information responsive to said first request is located in said cache.
0. 64. The apparatus of claim 63, wherein said cache is configured to store a mapping of said resource identifier to a second identifier.
0. 65. The apparatus of claim 64, wherein said second identifier is indicative of a location of said first resource.
0. 66. The apparatus of claim 64, wherein said apparatus is a server, and wherein said second identifier is indicative of an internal name of said first resource on said server.
0. 68. The server of claim 67, wherein each of said plurality of request handling processes has an independent address space including a cache.
0. 69. The server of claim 67, wherein said received requests include an HTTP request.
0. 70. The server of claim 67, wherein said received requests include a request for an enhanced television resource.
0. 71. The server of claim 67, wherein said received requests include a resource identifier that is an URI.
0. 72. The server of claim 67, wherein said received requests include one or more parameters that are request headers.
0. 73. The server of claim 67, wherein each cache corresponding to said plurality of request handling processes is configured to store information responsive to received resource requests.
0. 74. The server of claim 73, wherein said stored information includes mappings of first identifiers to secondary identifiers, wherein said first identifiers are included in said resource requests received by said server.
0. 75. The server of claim 74, wherein said secondary identifiers are indicative of locations of resources on said server.
0. 76. The server of claim 67, wherein said plurality of request handling processes share a common independent address space including a cache.
0. 77. The server of claim 76, wherein said received requests include an HTTP request.
0. 78. The server of claim 76, wherein said received requests include a request for an enhanced television resource.
0. 79. The server of claim 76, wherein said received requests include a resource identifier that is an URI.
0. 80. The server of claim 76, wherein said received requests include one or more parameters that are request headers.
0. 81. The server of claim 76, wherein the cache is configured to store information responsive to received resource requests.
0. 82. The server of claim 81, wherein said stored information includes mappings of first identifiers to secondary identifiers, wherein said first identifiers are included in said resource requests received by said server.
0. 83. The server of claim 82, wherein said secondary identifiers are indicative of locations of resources on said server.
0. 85. The computer program product of claim 84, wherein said first request is an HTTP request.
0. 86. The computer program product of claim 84, wherein said first request is a request for an enhanced television resource.
0. 87. The computer program product of claim 84, wherein said resource identifier is an URI.
0. 88. The computer program product of claim 84, wherein said one or more parameters include request headers.

This application is a divisional of U.S. patent application Ser. No. 09/318,493, filed May 25, 1999, now U.S. Pat. No. 6,513,062 issued Jan. 28, 2003.

Features of the invention relate generally to server performance improvements and, more particularly, to performance improvements based on elimination of repeated processing.

A server receiving numerous requests for resources within a brief period of time must be highly efficient in generating responses if the server is going to fulfill the requests within an acceptable period of time. One illustrative context where this problem arises is in connection with Enhanced Television (“ETV”). In the ETV context, typically a video production is distributed to numerous client applications. The video production has associated with it one or more enhancing resources that may be selected by a viewer of the video production. Conventionally, the enhancing resources are made available to the viewer by including an identifier of the resource in the video production. The viewer's client platform, e.g., a set-top box or computer, extracts the resource identifier and provides an indication to the viewer that enhancing resources are available. If the viewer selects the resource, a request is sent by the client application resident in the viewer's client platform. Frequently in the ETV context, numerous client applications each send requests contemporaneously. This aspect is typically present when, for instance, the video production is broadcast and each viewer becomes aware of the availability of the enhancing resource from the broadcast video production virtually simultaneously. It would thus be desirable to improve the efficiency of servers that receive numerous simultaneous requests.

Unfortunately, conventional servers are not highly efficient. One illustrative server is the Apache HTTP server (available from <http://www.apache.org> and found in many commercial products). The particular server application is not fundamental, and others may be used without limitation on variants of POSIX-like operating systems, WINDOWS operating systems from Microsoft Corp. of Redmond, Wash., or other operating systems.

Process flow initiates at a ‘start’ terminal 2010 and continues to receive a ‘request’ data block 2020. In this illustrative embodiment, the ‘request’ data block 2020 is a Request Message in accordance with the Hypertext Transfer Protocol (“HTTP”). However, as one of skill in the art will appreciate, other embodiments of the invention could work with other communication protocols and the particular protocol is not fundamental. In accordance with the draft HTTP/1.1 (available from the World Wide Web Consortium at <http://www.w3c.org> and the MIT Laboratory for Computer Science in Cambridge, Mass.), a Request Message comprises a Request Line and zero or more Message Headers. A compliant Request Line comprises the URI and, in practice, typically several Message Headers are included in a Request Message that provide request modifying information, for instance as set forth in the HTTP protocol.
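As a concrete illustration, the split of a Request Message into its Request Line and Message Headers can be sketched in Python; the function name and the dictionary representation are illustrative assumptions, not part of the protocol or of this embodiment:

```python
def parse_request_message(raw: str):
    # Split the message head from any body at the blank line.
    head, _, _body = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    # The Request Line carries the method, the URI, and the version.
    method, uri, version = lines[0].split(" ", 2)
    # Remaining lines are Message Headers (request modifying information).
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return method, uri, version, headers
```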

Next, a ‘request URI extraction’ process 2030 extracts the URI from the Request Line and process flow continues to a ‘URI hashes to descriptor’ decision process 2040. Using a conventional case-insensitive hash function, the ‘URI hashes to descriptor’ decision process 2040 hashes the URI received from the ‘request’ data block 2020 for a lookup operation in the hash table 1100. (One of skill will appreciate that use of a hash table is not fundamental; other data models could be used; preferably, the data model provides O(1) speed for lookup independent of the size of the data set.) If the URI is not found in the hash table 1100, the ‘URI hashes to descriptor’ decision process 2040 exits through its ‘no’ branch and process flow continues to a ‘URI rewrite mapping’ process 2050.
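The lookup performed by the ‘URI hashes to descriptor’ decision process can be sketched with a Python dictionary, which provides the amortized O(1) lookup noted above; lowercasing the key stands in for the case-insensitive hash function, and the names are illustrative:

```python
# Stand-in for the hash table 1100: URI -> URI Descriptor.
uri_cache = {}

def lookup_descriptor(uri: str):
    # Case-insensitive lookup; None models the 'no' (cache miss) branch.
    return uri_cache.get(uri.lower())

def store_descriptor(uri: str, descriptor):
    uri_cache[uri.lower()] = descriptor
```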

The ‘URI rewrite mapping’ process 2050 performs a translation from the URI to an internal name for the resource associated with the URI. Typically the internal name is a location in the filesystem of the hardware running the server process. However, the URI may also map to, for instance, debugging information, a directory listing, or one of several default internal names of the server process. When Apache is used, the mod_rewrite uniform resource identifier rewriting engine may be used, and analogously functioning modules may be used with other servers, if desired. Typically, the flexible mapping from the URI to an internal name involves relatively computationally expensive parsing and extraction. Appreciable efficiencies may be obtained by caching the results of this mapping so that it need not be repeated for succeeding requests for the same resource. When the internal name for the resource has been determined, process flow then continues to a ‘URI descriptor creation’ process 2060.
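A minimal sketch of such a mapping, assuming a hypothetical document root and only a couple of rewrite rules; real engines such as mod_rewrite apply far richer, configurable rule sets:

```python
DOC_ROOT = "/var/www/htdocs"  # hypothetical document root

def rewrite_uri(uri: str) -> str:
    # Hypothetical translation from an external URI to an internal
    # filesystem name for the resource.
    path = uri.split("?", 1)[0]   # drop any query string
    if path.endswith("/"):
        path += "index.html"      # one of several default internal names
    return DOC_ROOT + path
```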

The ‘URI descriptor creation’ process 2060 uses the URI and the internal name to create a URI Descriptor data structure that will, in part, cache the mapping performed by the ‘URI rewrite mapping’ process 2050. The ‘URI descriptor creation’ process 2060 creates a copy of the URI Descriptor data structure in the hash table 1100, sets the first variable 1455 indicating the resource is cached, sets the second variable 1460 indicating the type of file, and the third variable 1465 indicating the internal name for the resource.
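The URI Descriptor data structure might be modeled as follows; the field names are illustrative stand-ins for the numbered variables (1455 through 1485) discussed here and below:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class URIDescriptor:
    cached: bool = False              # first variable: is the resource cached?
    file_type: str = ""               # second variable: type of file
    internal_name: str = ""           # third variable: internal name
    body_length: int = 0              # fourth variable: length of the body
    header_length: int = 0            # fifth variable: length of the headers
    body: Optional[bytes] = None      # sixth variable: cached response body
    headers: Optional[bytes] = None   # seventh variable: cached response headers
```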

Process flow continues to a ‘plainfile’ decision process 2070 and a ‘small’ decision process 2075 that determine whether the resource is a candidate for caching. In some variations of this illustrative embodiment, two criteria must be met for the resource to be cached. First, the resource must be a plain file, and second, the file must be ‘small.’ In variations, response headers can be cached even when the resource is not, for instance, if the resource is not ‘small’.

In some illustrative embodiments, a resource is a plain file if it does not require running executable code to generate a response, although other criteria could also be used. If the ‘plainfile’ decision process 2070 determines the resource is a plainfile, it exits through its ‘yes’ branch and process flow continues to the ‘small’ decision process 2075. Whether a file is ‘small’ for the purposes of this illustrative embodiment is a function of the caching policy, the server architecture, and the memory architecture of the hardware running the server process. First, the caching policy determines the number of files to cache. Some preferred embodiments use a FIFO cache with a fixed size of 20 files, although many other caching policies are within the level of ordinary skill in the art and could be used as well. It will be appreciated that the more complex the collection of resources frequently requested from the server, the more desirable it becomes to have a cache with a greater fixed size; analogously, the less complex, the smaller the fixed size may be set. Second, the server architecture determines the number of caches that need to be stored. A typical server running in a POSIX-like environment, e.g., Apache v.1.3.6 running under SOLARIS v. 2.7, will have several concurrently-executing request handling processes, each with an independent address space. If each request handling process stores its own cache in its own address space, then there are as many caches as there are request handling processes. Another typical situation is where a server runs in a multithreaded environment. In this instance, several concurrently executing request handling processes can share a common address space. A single cache can be stored in the common address space and accessed by all request handling processes (suitably synchronized). Third, the amount of physical memory available for cache(s) on the machine executing the server process provides an upper bound.
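The fixed-size FIFO policy mentioned above can be sketched as follows; the class name and its API are assumptions, with the preferred capacity of 20 files as the default:

```python
from collections import OrderedDict

class FIFOCache:
    """Fixed-size FIFO cache: when full, the oldest entry is evicted."""

    def __init__(self, capacity: int = 20):
        self.capacity = capacity
        self._items = OrderedDict()

    def put(self, key, value):
        if key in self._items:
            self._items[key] = value      # refresh value, keep insertion order
            return
        if len(self._items) >= self.capacity:
            self._items.popitem(last=False)  # evict the oldest entry
        self._items[key] = value

    def get(self, key):
        return self._items.get(key)
```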
Considering these factors, a size for a ‘small’ file may be determined as follows:

f = M / (N · C)

where:
f = size of a ‘small’ file
M = memory available for cache(s)
N = the number of independent caches
C = number of files per cache
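Plugging in representative (assumed) numbers — 64 MiB available for caches, 16 request handling processes each with its own cache, and 20 files per cache — the formula gives roughly a 205 KiB threshold:

```python
def small_file_threshold(memory_bytes: int, num_caches: int,
                         files_per_cache: int) -> float:
    # f = M / (N * C), per the sizing formula above.
    return memory_bytes / (num_caches * files_per_cache)
```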

It will be apparent to one skilled in the art that other cache policies will give rise to differing ways to make similar determinations, and for conventional cache policies it is within the ordinary skill in the art to suitably ascertain which files are desirable to cache.

If the ‘small’ decision process 2075 determines the resource is ‘small’, it exits through its ‘yes’ branch and process flow continues to an ‘open file’ process 2080 that opens the file associated with the resource. Next a ‘read file into d.body’ process 2090 reads the file associated with the resource into the buffer referenced by the sixth variable 1480, e.g., it creates the cached response body 1500, and assigns a value to the fourth variable 1470 of the length of the file associated with the resource. Process flow continues to a ‘d.headers building’ process 2100 that constructs the response headers. In embodiments that use the HTTP protocol, the response is a Response Message in accordance with the HTTP protocol and the response headers generally provide information about the server and about further access to the resource identified by the request. The ‘d.headers building’ process 2100 also reads the constructed response headers into the buffer referenced by the seventh variable 1485, e.g., the cached response header 1600, and assigns a value to the fifth variable 1475 of the length of the response headers. This substantially completes generation of the response and storage of intermediate state information in the URI Descriptor data structure.
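The ‘read file into d.body’ and ‘d.headers building’ steps might be sketched as follows; the header format shown is a minimal illustration, and the Date header is deliberately omitted because it must be created at transmission time:

```python
import types

def build_headers(body_length: int) -> bytes:
    # Hypothetical construction of cacheable response headers.
    return (b"HTTP/1.1 200 OK\r\n"
            b"Content-Length: " + str(body_length).encode() + b"\r\n")

def fill_descriptor(internal_name: str):
    # Populate a descriptor with the cached body and cached headers.
    d = types.SimpleNamespace()
    d.internal_name = internal_name
    with open(internal_name, "rb") as fh:
        d.body = fh.read()                 # 'read file into d.body'
    d.body_length = len(d.body)
    d.headers = build_headers(d.body_length)  # 'd.headers building'
    d.header_length = len(d.headers)
    d.cached = True
    return d
```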

This done, the response can be transmitted to the client and a ‘d.headers writing’ process 2110 begins communicating the response by writing the response headers from the cached response header 1600 referenced by the seventh variable 1485. In some instances, not all headers for the response can be cached and must be created dynamically at the time of transmitting the response, for instance the current date and time. A ‘dynamic headers writing’ process 2120 continues communicating the response to the client by writing any response headers that need to be created at the time of generating the response.

Next, a ‘body cached’ decision process 2125 determines whether the body of the resource is cached. In some variations, response headers are cached while the body of the resource is not. This may occur, for instance, when the resource is not a small file. In other instances, it may be desirable only to cache response headers. If the body is cached, the ‘body cached’ decision process 2125 exits through its ‘yes’ branch and a ‘d.body writing’ process 2130 completes communicating the response by writing the response body from the cached response body 1500 referenced by the sixth variable 1480 in the URI Descriptor data structure. Process flow completes through an ‘end’ terminal 2140.
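Transmission of a cached response — cached headers first, then any dynamically created headers such as Date, then the cached body — can be sketched as follows; the send callback is an illustrative stand-in for a socket write:

```python
from email.utils import formatdate

def write_response(send, d):
    # Cached headers first (the 'd.headers writing' step) ...
    send(d.headers)
    # ... then headers that cannot be cached, such as the current
    # date and time (the 'dynamic headers writing' step) ...
    send(("Date: %s\r\n\r\n" % formatdate(usegmt=True)).encode())
    # ... then the cached body, if present (the 'd.body writing' step).
    if d.body is not None:
        send(d.body)
```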

If the ‘body cached’ decision process 2125 determines the body is not cached, it exits through its ‘no’ branch and process flow continues to an ‘open file’ process 2127 that opens the file associated with the resource for reading, a ‘read file’ process 2180 that reads the resource, and a ‘write file’ process 2190 that completes the response by writing the resource. Process flow then completes through the ‘end’ terminal 2140.

Returning to the ‘small’ decision process 2075, when the resource is not ‘small’, the ‘small’ decision process 2075 exits through its ‘no’ branch and the response is generated. Response generation begins with an ‘open file’ process 2160 that opens the file associated with the resource for reading. Next, a ‘headers writing’ process 2170 generates and writes the response headers and the ‘read file’ process 2180 reads the resource. Then, the ‘write file’ process 2190 completes the response by writing the resource. Process flow then completes through the ‘end’ terminal 2140.

In variations where response headers are cached even when the resource itself is not, for instance when the resource is not ‘small’, the ‘headers writing’ process 2170 may also perform the function of caching the response headers. This may occur as described above with reference to the ‘d.headers building’ process 2100.

Returning to the ‘plainfile’ decision process 2070, if it determines the resource is not a plain file, the ‘plainfile’ decision process 2070 exits through its ‘no’ branch and process flow continues to a ‘run executable’ process 2200. The ‘run executable’ process 2200 executes the instructions, e.g., a servlet, script, database query, etc., responsible for generating the response. The ‘run executable’ process 2200 interacts with a ‘header parsing’ process 2210 that parses those headers from the request received in the ‘request’ data block 2020 that are necessary for generating the response. It will be appreciated that the request headers are only parsed when the ‘run executable’ process 2200 is entered, and then only those headers are parsed that are required by the particular instructions responsible for generating the response. Eliminating unnecessary parsing of headers from the request appreciably reduces the average computational overhead necessary for response generation. When the ‘run executable’ process 2200 completes generation of the response, it outputs the response for transmission to the client and process flow completes through the ‘end’ terminal 2140.
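The deferred parsing can be sketched with a wrapper that keeps the raw header lines unparsed until an executable actually requests one; the class name and parse counter are illustrative assumptions:

```python
class LazyHeaders:
    """Parse a request header only when (and if) it is actually needed."""

    def __init__(self, raw_lines):
        self._raw = raw_lines    # unparsed Message Header lines
        self._parsed = {}        # headers parsed on demand
        self.parse_count = 0     # how many headers have been parsed

    def get(self, name: str):
        key = name.lower()
        if key not in self._parsed:
            self.parse_count += 1
            for line in self._raw:
                field, _, value = line.partition(":")
                if field.strip().lower() == key:
                    self._parsed[key] = value.strip()
                    break
            else:
                self._parsed[key] = None   # header absent from the request
        return self._parsed[key]
```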

Now returning to the ‘URI hashes to descriptor’ decision process 2040, if the URI from the ‘request’ data block 2020 is found when a lookup is performed in the hash table 1100, the ‘URI hashes to descriptor’ decision process 2040 exits through its ‘yes’ branch. This occurs when a second (or later) request for a given resource occurs (and the URI Descriptor data structure associated with the resource has not already been displaced from the cache). Process flow continues to a ‘d.cached’ decision process 2220. The ‘d.cached’ decision process 2220 consults the first variable 1455 in the URI Descriptor data structure to determine whether the resource associated with the request URI is cached. If the resource is cached, the ‘d.cached’ decision process 2220 exits through its ‘yes’ branch to the ‘d.headers writing’ process 2110 and the ‘dynamic headers writing’ process 2120 that write the cached headers, as well as any dynamically-created headers for the response. Next, the ‘body cached’ decision process 2125 determines whether the body of the response is cached and process flow continues to the ‘open file’ process 2127 or the ‘d.body writing’ process 2130, to generate the body of the response, as previously described. Process flow then completes through the ‘end’ terminal 2140.

If the ‘d.cached’ decision process 2220 determines that the resource associated with the request URI is not cached, it exits through its ‘no’ branch and process flow continues to a ‘plain file’ decision process 2150. If the ‘plain file’ decision process 2150 determines the resource is not a plain file, it exits through its ‘no’ branch and process flow continues to the ‘run executable’ process 2200 and continues as was previously described. If the ‘plain file’ decision process 2150 determines the resource is a plain file, it exits through its ‘yes’ branch and process flow continues to the ‘open file’ process 2160 and the response is generated as was previously described.

FIG. 3 depicts a computer system 3000 capable of embodying aspects of the invention. The computer system 3000 comprises a microprocessor 3010, a memory 3020 and an input/output system 3030. The memory 3020 is capable of being configured to provide a data structure 3040, such as the cache data architecture 1000, which may contain data manipulated by the computer system 3000 when embodying aspects of the invention. Further illustrated is a media drive 3070, such as a disk drive, CD-ROM drive, or the like. The media drive 3070 may operate with a computer-usable storage medium 3075 capable of storing computer-readable program code able to configure the computer system 3000 to embody aspects of the invention. The input/output system 3030 may also operate with a keyboard 3050, a display 3060, a data network 3080 such as the internet or the like (through an appropriate network interface), a mass data storage 3085, and a pointing device 3090. As illustrated, the computer system 3000 is general-purpose computing machinery. As one of skill recognizes, programmed instructions may configure general purpose computing machinery to embody structures capable of performing functions in accordance with aspects of the invention. Special purpose computing machinery comprising, for example, an application specific integrated circuit (ASIC) may also be used. One skilled in the art will recognize numerous structures of programmed or programmable logic capable of being configured to embody aspects of the invention. In some illustrative embodiments, the computer system 3000 is an UltraSPARC workstation from Sun Microsystems of Mountain View, Calif., that runs the SOLARIS operating system (also from Sun) and the Apache HTTP (web) server application.

All documents, standards, protocols, and draft protocols referred to herein are incorporated herein by this reference in their entirety.

The present invention has been described in terms of the features of illustrative embodiments. Embodiments were selected that fully illustrate the features of the invention. However, one skilled in the art will understand that various modifications, alterations, and eliminations of elements may be made without departing from the scope of the invention. Accordingly, the scope of the invention is not to be limited to the particular embodiments discussed herein, but should be defined only by the allowed claims and equivalents thereof.

Weber, Jay C.

Patent | Priority | Assignee | Title
5452447 | Dec 21 1992 | Sun Microsystems, Inc.; Sun Microsystems, Inc | Method and apparatus for a caching file server
5511208 | Mar 23 1993 | IBM Corporation | Locating resources in computer networks having cache server nodes
5682514 | Nov 30 1995 | Comtech EF Data Corporation | Apparatus and method for increased data access in a network file oriented caching system
5737523 | Mar 04 1996 | Oracle America, Inc | Methods and apparatus for providing dynamic network file system client authentication
5740370 | Mar 27 1996 | Clinton, Battersby | System for opening cache file associated with designated file of file server only if the file is not subject to being modified by different program
5787470 | Oct 18 1996 | AT&T Corp | Inter-cache protocol for improved WEB performance
5793966 | Dec 01 1995 | Microsoft Technology Licensing, LLC | Computer system and computer-implemented process for creation and maintenance of online services
5826253 | Jul 26 1995 | Borland Software Corporation | Database system with methodology for notifying clients of any additions, deletions, or modifications occurring at the database server which affect validity of a range of data records cached in local memory buffers of clients
5852717 | Nov 20 1996 | Intel Corporation | Performance optimizations for computer networks utilizing HTTP
5864852 | Apr 26 1996 | Meta Platforms, Inc | Proxy server caching mechanism that provides a file directory structure and a mapping mechanism within the file directory structure
5892914 | Nov 28 1994 | RPX Corporation | System for accessing distributed data cache at each network node to pass requests and data
5933849 | Apr 10 1997 | AT&T Properties, LLC; AT&T INTELLECTUAL PROPERTY II, L P | Scalable distributed caching system and method
6003082 | Apr 22 1998 | International Business Machines Corporation | Selective internet request caching and execution system
6023726 | Jan 20 1998 | Meta Platforms, Inc | User configurable prefetch control system for enabling client to prefetch documents from a network server
6078929 | Jun 07 1996 | DROPBOX, INC | Internet file system
6085234 | Nov 28 1994 | RPX Corporation | Remote file services network-infrastructure cache
6128655 | Jul 10 1998 | UNILOC 2017 LLC | Distribution mechanism for filtering, formatting and reuse of web based content
6128701 | Oct 28 1997 | CA, INC | Adaptive and predictive cache refresh policy
6182127 | Feb 12 1997 | EPLUS CAPITAL, INC | Network image view server using efficient client-server tiling and caching architecture
6185598 | Feb 10 1998 | MOUNT SHASTA ACQUISITION LLC; Level 3 Communications, LLC | Optimized network resource location
6185608 | Jun 12 1998 | IBM Corporation | Caching dynamic web pages
6209048 | Feb 09 1996 | RICOH COMPANY, LTD (a corp. of Japan having a place of business in Tokyo, Japan) and RICOH CORP; RICOH COMPANY, LTD (a corp. of Japan); Ricoh Corporation | Peripheral with integrated HTTP server for remote access using URL's
6212565 | Aug 26 1998 | Oracle America, Inc | Apparatus and method for improving performance of proxy server arrays that use persistent connections
6240461 | Sep 25 1997 | Cisco Technology, Inc | Methods and apparatus for caching network data traffic
6243719 | Oct 20 1997 | Fujitsu Limited | Data caching apparatus, data caching method and medium recorded with data caching program in client/server distributed system
6243760 | Jun 24 1997 | Transcore Link Logistics Corporation | Information dissemination system with central and distributed caches
6286043 | Aug 26 1998 | International Business Machines Corp. | User profile management in the presence of dynamic pages using content templates
6298356 | Jan 16 1998 | Wilmington Trust, National Association, as Administrative Agent | Methods and apparatus for enabling dynamic resource collaboration
6324685 | Mar 18 1998 | Implicit, LLC | Applet server that provides applets in various forms
6330561 | Jun 26 1998 | AT&T Corp | Method and apparatus for improving end to end performance of a data network
6330606 | Jun 03 1996 | Rovi Technologies Corporation | Method and apparatus for dispatching document requests in a proxy
6393422 | Nov 13 1998 | International Business Machines Corporation | Navigation method for dynamically generated HTML pages
6397246 | Nov 13 1998 | GLOBALFOUNDRIES Inc | Method and system for processing document requests in a network system
6442601 | Mar 25 1999 | Comcast IP Holdings I, LLC | System, method and program for migrating files retrieved from over a network to secondary storage
6490625 | Nov 26 1997 | IBM Corporation | Powerful and flexible server architecture
6505241 | Jun 03 1992 | RPX Corporation | Network intermediate node cache serving as proxy to client node to request missing data from server
6507867 | Dec 22 1998 | International Business Machines Corporation | Constructing, downloading, and accessing page bundles on a portable client having intermittent network connectivity
6519646 | Sep 01 1998 | Oracle America, Inc | Method and apparatus for encoding content characteristics
WO9903047,
WO9905619,
WO9917227,
WO9820,
Executed on | Assignor | Assignee | Conveyance | Reel/Frame | Doc
Jul 14 1999 | WEBER, JAY C | B3TV, INC | Assignment of assignors interest (see document for details) | 023938/0214 | pdf
Jul 19 2000 | B3TV, INC | RESPONDTV, INC | Change of name (see document for details) | 023938/0114 | pdf
Jun 28 2002 | RESPONDTV, INC | GRISCHA CORPORATION, INC | Assignment of assignors interest (see document for details) | 023938/0172 | pdf
May 09 2005 | GRISCHA CORPORATION, INC | Rehle Visual Communications LLC | Assignment of assignors interest (see document for details) | 023938/0200 | pdf
Dec 22 2005 | Rehle Visual Communications LLC (assignment on the face of the patent)
Dec 30 2013 | Rehle Visual Communications LLC | Intellectual Ventures I LLC | Merger (see document for details) | 031881/0511 | pdf
Sep 11 2017 | Intellectual Ventures I LLC | INTELLECTUAL VENTURES ASSETS 57 LLC | Assignment of assignors interest (see document for details) | 043707/0337 | pdf
Sep 22 2017 | INTELLECTUAL VENTURES ASSETS 57 LLC | IQ HOLDINGS, LLC | Assignment of assignors interest (see document for details) | 044669/0326 | pdf
Feb 19 2018 | IQ HOLDINGS, LLC | ACCELERATED MEMORY TECH, LLC | Assignment of assignors interest (see document for details) | 044966/0918 | pdf
Date Maintenance Fee Events
May 23 2011 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity.
May 26 2015 | M1553: Payment of Maintenance Fee, 12th Year, Large Entity.


Date Maintenance Schedule
Feb 22 2014 | 4 years fee payment window open
Aug 22 2014 | 6 months grace period start (w surcharge)
Feb 22 2015 | patent expiry (for year 4)
Feb 22 2017 | 2 years to revive unintentionally abandoned end (for year 4)
Feb 22 2018 | 8 years fee payment window open
Aug 22 2018 | 6 months grace period start (w surcharge)
Feb 22 2019 | patent expiry (for year 8)
Feb 22 2021 | 2 years to revive unintentionally abandoned end (for year 8)
Feb 22 2022 | 12 years fee payment window open
Aug 22 2022 | 6 months grace period start (w surcharge)
Feb 22 2023 | patent expiry (for year 12)
Feb 22 2025 | 2 years to revive unintentionally abandoned end (for year 12)