Methods and systems for managing statistics in an I/O system are disclosed. Embodiments of the present technology may include a method for managing statistical data at an I/O system, the method including reading a statistic record from an array of statistic records according to a write request that is held in a register of a register interface. In some embodiments, the write request includes a data element. In some embodiments, the array of statistic records is stored in random access memory (RAM).
|
1. A method for managing statistical data at an input/output (I/O) system, the method comprising:
reading a statistic record from an array of statistic records according to a write request that is held in a register of a register interface, wherein the write request includes a data element and wherein the array of statistic records is stored in RAM;
performing a statistic update operation in response to the write request to generate an updated statistic, wherein performing the statistic update operation involves executing an operation using the data element included with the write request that is held in the register interface and the statistic record that is read from the RAM; and
writing the updated statistic to the statistic record in the array of statistic records that is stored in the RAM;
further comprising processing multiple write requests corresponding to the same statistic record in the array serially in a register interface in control hardware space of the I/O system.
9. A system for statistic management, the system comprising:
random access memory (RAM) that stores an array of statistic records; and
a register interface including:
a register to receive a write request, wherein the write request includes an index and a data element; and
update logic configured to:
read a statistic record from the array of statistic records that is stored in the RAM according to the index in the register interface;
perform a statistic update operation in response to the write request to generate an updated statistic, wherein performing the statistic update operation involves executing an operation using the data element in the register interface and the statistic read from the RAM; and
write the updated statistic to the statistic record in the array of statistic records that is stored in the RAM;
wherein the register interface includes a write request buffer to buffer multiple write requests, and wherein the register interface is configured to process multiple write requests corresponding to the same statistic record in the array serially.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
10. The system of
11. The system of
12. The system of
13. The system of
14. The system of
15. The system of
16. The system of
17. The system of
|
In data networks, input/output (I/O) systems such as switches, routers, and network interface cards receive data (e.g., packets) at input interfaces, process the received data, and then forward the data to one or more output interfaces. It is important that such I/O systems operate as quickly as possible in order to keep pace with a high rate of incoming data. Additionally, I/O systems typically track certain statistics related to processing incoming data, including for example, statistics such as packet count and total bytes count, histograms such as packet distribution, and timestamps such as packet arrival time and request latency. Tracking such statistics in a high speed I/O system can be a resource intensive operation that can become a drag on system performance.
Methods and systems for managing statistics in an I/O system are disclosed. Embodiments of the present technology may include a method for managing statistical data at an I/O system, the method including reading a statistic record from an array of statistic records according to a write request that is held in a register of a register interface. In some embodiments, the write request includes a data element. In some embodiments, the array of statistic records is stored in random access memory (RAM).
Embodiments may also include performing a statistic update operation in response to the write request to generate an updated statistic. In some embodiments, performing the statistic update operation involves executing an operation using the data element included with the write request that is held in the register interface and the statistic record that is read from the RAM. Embodiments may also include writing the updated statistic to the statistic record in the array of statistic records that is stored in the RAM.
In some embodiments, reading the statistic record from the array of statistic records may include generating a physical memory address from an index that is included in the write request and held in the register interface. In some embodiments, performing the statistic update operation involves update logic in the control hardware space of the I/O system adding a value of the data element to a value in the statistic record.
In some embodiments, the write request includes multiple data elements that correspond to different statistics. In some embodiments, multiple statistic records in the array of statistic records are updated in response to the multiple data elements in the write request. In some embodiments, reading the statistic record according to the write request may include reading multiple adjacent statistic records in the array in the same read.
In some embodiments, the write request includes multiple data elements that correspond to different statistics. In some embodiments, the multiple adjacent statistic records in the array of statistic records are updated in response to the multiple data elements in the write request. In some embodiments, writing the updated statistic to the statistic record may include writing updated statistics for the multiple adjacent statistic records to the array in the same write.
Embodiments may also include processing multiple write requests corresponding to the same statistic record in the array serially in a register interface in control hardware space of the I/O system. Embodiments may also include receiving multiple write requests directed to the same statistic record at the register interface and performing the statistic updates in a batch process before the updated statistic is written back to the array of statistic records that are stored in the RAM.
Embodiments of the present technology may also include a method for managing statistical data at an I/O system, the method including reading a statistic record from an array of statistic records that is stored in RAM according to an index of a write request that is held in an address register of a register interface. In some embodiments, the write request also includes a data element that is held in a data register of the register interface.
Embodiments may also include performing a statistic update operation in response to the write request to generate an updated statistic. In some embodiments, performing the statistic update operation involves adding a value of the data element from the write request that is held in the register interface to a value in the statistic record that was read from the RAM. Embodiments may also include writing the updated statistic back to the statistic record that is stored in the RAM.
Embodiments of the present technology may also include a system for statistic management, the system including RAM that stores an array of statistic records. Embodiments may also include a register interface including a register to receive a write request. In some embodiments, the write request includes an index and a data element. Embodiments may also include update logic configured to read a statistic record from the array of statistic records that is stored in the RAM according to the index in the register interface, perform a statistic update operation in response to the write request to generate an updated statistic, and write the updated statistic to the statistic record in the array of statistic records that is stored in the RAM. In some embodiments, performing the statistic update operation involves executing an operation using the data element in the register interface and the statistic read from the RAM.
In some embodiments, the register interface is configured to generate a physical memory address from the index stored in the register in order to read the statistic record from the array. In some embodiments, the write request held in the register of the register interface includes multiple data elements that correspond to different statistics. In some embodiments, multiple statistic records in the array of statistic records are updated in response to the multiple data elements in the write request.
In some embodiments, reading the statistic record according to the write request may include reading multiple adjacent statistic records in the array in the same read. In some embodiments, the write request includes multiple data elements that correspond to different statistics. In some embodiments, multiple statistic records in the array of statistic records are updated in response to the multiple data elements in the write request.
In some embodiments, writing the updated statistic to the statistic record may include writing updated statistics for multiple adjacent statistic records to the array in the same write. Embodiments may also include a processing element and a bus that connects the processing element to the register interface. In some embodiments, the write request is received on the bus from the processing element. In some embodiments, the update logic is configured to add a value of the data element to a value in the statistic record.
In some embodiments, the register interface includes a write request buffer to buffer multiple write requests. In some embodiments, the register interface is configured to process multiple write requests corresponding to the same statistic record in the array serially. In some embodiments, the register interface includes a write request buffer to buffer multiple write requests directed to the same statistic record. In some embodiments, the update logic is further configured to perform the statistic updates in a batch process before the updated statistic is written back to the array of statistic records that are stored in the RAM.
Other aspects in accordance with the invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrated by way of example of the principles of the invention.
Throughout the description, similar reference numbers may be used to identify similar elements.
It will be readily understood that the components of the embodiments as generally described herein and illustrated in the appended figures could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of various embodiments, as represented in the figures, is not intended to limit the scope of the present disclosure, but is merely representative of various embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by this detailed description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussions of the features and advantages, and similar language, throughout this specification may, but do not necessarily, refer to the same embodiment.
Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, in light of the description herein, that the invention can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention.
Reference throughout this specification to “one embodiment”, “an embodiment”, or similar language means that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment of the present invention. Thus, the phrases “in one embodiment”, “in an embodiment”, and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
Network processing hardware (e.g., I/O systems) and processing elements such as CPUs frequently need to increment statistic counters, which track statistics such as packet count and total bytes count, event count (e.g., packet drop events or error events), histograms such as packet distribution, and timestamps such as packet arrival time and request latency. Typically, a statistics update requires a processing element to read a statistic record from memory, to apply an update to the statistic according to an algorithm, and to write the updated statistic back to the memory. Statistics, such as fine grained counters on packets and/or bytes, may be used to charge customers on a per usage basis, or may be used to implement service level agreements (SLAs). Such a statistic update process consumes considerable resources of the processing element, as reading from and writing to the memory can be time and resource intensive operations, which can become a drag on the performance of the processing element, especially in a high bandwidth I/O device. An additional challenge to statistics updates may occur when multiple processing elements attempt to update the same statistic record at the same time, which can create a race condition in the system. Some conventional approaches to managing statistics in high bandwidth I/O devices have relied on using customized memory circuits to speed the update process; however, such customized memory circuits tend to be expensive and difficult to scale.
In accordance with an embodiment of the invention, a technique for managing statistics in an I/O system utilizes a register interface and an array of statistic records that is stored in memory (e.g., general purpose memory such as DDR) to implement hardware-based statistics management that is both highly flexible and massively scalable while conserving processor/CPU cycles as compared to conventional statistics management techniques. In particular, a method for statistic management involves reading a statistic record from an array of statistic records according to a write request that is held in a register of a register interface, wherein the write request includes a data element and wherein the array of statistic records is stored in RAM, performing a statistic update operation in response to the write request to generate an updated statistic, wherein performing the statistic update operation involves executing an operation using the data element included with the write request that is held in the register interface and the statistic record that is read from the RAM, and writing the updated statistic to the statistic record in the array of statistic records that is stored in the RAM. Using such a technique, a statistic update can be implemented in response to the issuance of a single write instruction from a processing element. Therefore, the processing element does not have to read a statistic record from a memory, update the statistic record (e.g., execute an add operation), and then write the updated statistic back to the memory. Rather, the processing element simply issues a single write request and thus, from the perspective of the processing element, a statistic update involves issuing only a single write request, thereby achieving an “atomic” update of a statistic record. Additionally, the processing element does not need physical resources such as read response buffers since the processing element does not implement a read operation. 
Because the array of statistic records is accessed and updated through a hardware register interface, the statistic update process does not consume CPU resources and a large number of statistic records can be uniquely managed in general purpose memory such as DDR, with a wide range of feature availability. Additionally, statistics updates can be initiated by multiple different processing elements within an I/O system using the same technique.
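As an illustrative software model of this flow (the class and method names are hypothetical, not from the specification), the register interface performs the read-modify-write on behalf of the processing element, which only issues a single write request:

```python
# Illustrative software model of the hardware flow: the processing element
# issues a single write request (index, data_element); the register interface
# then performs the read-modify-write against the array in RAM on its behalf.

class StatsRegisterInterface:
    def __init__(self, num_records):
        self.records = [0] * num_records   # models the array of statistic records in RAM

    def write_request(self, index, data_element):
        current = self.records[index]                         # read the statistic record
        updated = (current + data_element) & ((1 << 64) - 1)  # add, with 64-bit wraparound
        self.records[index] = updated                         # write the updated statistic back

iface = StatsRegisterInterface(1024)
iface.write_request(7, 1500)   # e.g., count a 1500-byte packet
iface.write_request(7, 64)     # e.g., count a 64-byte packet
```

From the processing element's perspective, each update is the single `write_request` call; the read, add, and write-back all happen inside the interface, which is what makes the update "atomic" from the requester's point of view.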
The memory 110 of the NIC 102 can include memory for running Linux or some other operating system, memory for storing data structures such as flow tables, statistics, and other analytics, and memory for providing buffering resources for advanced features including TCP termination and proxy, deep packet inspection, and storage offloads. The memory may include a high bandwidth module (HBM) that may support, for example, 4 GB capacity, 8 GB capacity, or some other capacity depending on package and HBM.
In an embodiment, the CPU cores 118 are general purpose processor cores, such as ARM processor cores, Microprocessor without Interlocked Pipeline Stages (MIPS) processor cores, and/or x86 processor cores, as is known in the field. In an embodiment, each CPU core includes a memory interface, an ALU, a register bank, an instruction fetch unit, and an instruction decoder, which are configured to execute instructions independently of the other CPU cores. In an embodiment, the CPU cores are Reduced Instruction Set Computer (RISC) CPU cores that are programmable using a general-purpose programming language such as C. The service processing offloads 120 are specialized hardware modules purposely optimized to handle specific tasks at wire speed, such as cryptographic functions, compression/decompression, etc. The packet buffer 122 can act as a central on-chip packet switch that delivers packets from the network interfaces 124 to packet processing elements of the NIC 102 and vice-versa.
Memory transactions in the NIC 102, including host memory, on board memory, and registers may be connected via the coherent interconnect 112. In one non-limiting example, the coherent interconnect can be provided by a network on a chip (NOC) “IP core”. Semiconductor chip designers may license and use prequalified IP cores within their designs. Prequalified IP cores may be available from third parties for inclusion in IC devices produced using certain semiconductor fabrication processes. A number of vendors provide NOC IP cores. The NOC may provide cache coherent interconnect between NOC masters and slaves, including the packet processing circuit implementing P4 pipelines 114, the pipeline circuit implementing extended packet processing pipelines 116, the CPU cores 118, and the PCIe 108. The coherent interface provides a bridge between the control hardware (referred to generally herein as the control hardware space) and the memory (referred to generally herein as the memory space) of the I/O system 100. The coherent interconnect may distribute memory transactions across a plurality of memory interfaces using a programmable hash algorithm. Traffic targeting the memory may be stored in a NOC cache (e.g., 1 MB cache). The NOC cache may be kept coherent with the CPU core caches. The NOC cache may be used to aggregate memory write transactions which may be smaller than the cache line (e.g., size of 64 bytes) of an HBM.
The register interface 232 in the control hardware space 234 of the I/O system 200 is configured to implement operations related to statistics management as is described below. The register interface includes an address register 250, a data register 252, and update logic 254. In an embodiment, the register interface is integrated onto the same IC device as the processing element 230 and the address register 250 and data register 252 are used to hold components of write requests. For example, with regard to the write requests, the address register 250 holds an index (e.g., atomic_add_index) that is used to identify a statistic record that is stored in the memory and the data register 252 holds a data element (e.g., data_element) that is used to update the identified statistic record that is stored in the memory 236. In an embodiment, the address and data registers, 250 and 252, are 64-bit hardware registers that are implemented in hardware circuits such as flip-flop circuits as is known in the field. Although the address and data registers are described as 64-bit registers, the address and data registers could be of a different size, such as, for example, 32-bit registers. In an embodiment, the address and data registers of the register interface are embodied as a small amount of fast storage that is external to the processing element 230 and that is distinct from the user-accessible address and data registers 240 and 242, which are incorporated into the processing element 230, e.g., as part of an intellectual property (IP) block that is often provided by third-party CPU providers. Although not shown in
In an embodiment, the address and data registers 250 and 252 of the register interface 232 are connected to the corresponding address and data registers, 240 and 242, within the processing element 230 via a bus 212 (e.g., the coherent interconnect 112 as described above with reference to
In an embodiment, the update logic 254 of the register interface 232 is implemented in hardware circuits that interact with the address register 250 and the data register 252 and with data from an array of statistic records 260 that is stored in the memory 236 (e.g., DDR) to service write requests received from the processing element 230. For example, the update logic 254 includes hardware circuits configured to implement finite state machines (FSMs) that perform statistic update operations that include reading statistic records from the memory, updating statistics to generate updated statistics, and then writing the updated statistics back to the memory. In an embodiment, the update logic is implemented as a pipeline machine, which includes a stage for decoding write requests, a stage for reading statistic records from the array of statistic records that are stored in the memory, a stage for updating statistics (e.g., executing add operations), and a stage for writing updated statistics back to the statistic records in the array of statistic records that are stored in the memory. Operations of the register interface are described in more detail below with reference to
Turning now to the memory 236 of the I/O system 200, in an embodiment, the memory is general purpose memory such as RAM. For example, the memory is double-data rate synchronous dynamic RAM (DDR-SDRAM or simply DDR), although the RAM may be static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), or a combination thereof. As illustrated in
An example of implementing a statistic update via a single write request from a processing element is now described with reference to
Thus, as described above, from the perspective of the processing element 230, a statistic update is implemented with the issuance of a single write instruction from the processing element. That is, the processing element does not have to read a statistic record from the memory 236, update the statistic record (e.g., execute an add operation), and then write the updated statistic back to the memory. Thus, from the perspective of the processing element, a statistic update involves issuing only a single write request, thereby achieving an “atomic” update of a statistic record from the perspective of the processing element.
Current high-speed memory systems include a high-speed data bus and have a multi-byte burst length. For example, some current high-speed memory systems have a 64-byte burst length, in which 64 bytes are read from the memory in a single read operation and written to the memory in a single write operation. Given a 64-byte burst length, eight 64-bit statistic records can be read from an array of statistic records in a single read. In fact, in a system with a 64-byte burst length, at least 64 bytes will typically be read from the memory in each read operation regardless of the number of bytes that are desired. With reference back to
As described above with reference to
The second write request 678-2 is similar to the first write request except that the size field is set to “01,” which indicates that there are two different data elements included. As shown in
The third write request 678-3 is similar to the first and second write requests except that the size field is set to “10,” which indicates that there are four different data elements included. As shown in
The fourth write request 678-4 is similar to the first, second, and third write requests except that the size field is set to “11,” which indicates that there are seven different data elements included. As shown in
As explained above, a single memory read can include up to eight different statistic records, e.g., eight 64-bit statistic records for a total read of 64 bytes. Although eight different statistic records can be read in a single read operation, in the example described herein, a maximum of seven unique data elements are carried in a single write request. In other embodiments, the number of data elements carried in the same 56-bit field could be increased by decreasing the number of bits per data element. In the example described herein, the minimum size of the data element was selected to be 8 bits and therefore, the number of unique data elements was limited to seven. Additional variations can be envisioned if the write requests comprise more or fewer bits than described herein.
As described above, the register interface 232 generates a physical memory address for a statistic record from the write request. In an embodiment, in order to address a large memory space, the register interface uses a field from the address register 250 in conjunction with a field from the data register 252 to generate the address in physical memory at which the desired statistic record is stored.
Address = {atomic_add[63:58], atomic_add_index[23:0], 3'd0} + (atomic_base_address << 28)
As expressed above, the upper 6 bits (ADDR_HI or atomic_add[63:58]) of the physical memory address come from the data register and the lower address bits of the physical memory address come from the address register and from a base address (e.g., atomic_base_address, a 9-bit value shifted left by 28 bits) that is added to allow the array of statistic records to start at any 256 MB boundary within the memory, e.g., within the DDR. In an embodiment, the DDR starts at 2 GB in the system memory map and therefore, the atomic_base_address is set to reflect the 2 GB start address.
In an embodiment, example update operations (e.g., add operations) corresponding to the four different data field formats are represented as:
atomic_add[57:56] (SZ[2]), the size of the atomic add operation:
00: Single counter add
*Address += atomic_add[55:0]
01: Dual counter add
*Address += atomic_add[31:0]
*(Address + 8) += atomic_add[55:32]
10: Quad counter add
*Address += atomic_add[15:0]
*(Address + 8) += atomic_add[31:16]
*(Address + 16) += atomic_add[47:32]
*(Address + 24) += atomic_add[55:48]
11: Seven counter add
*Address += atomic_add[7:0]
*(Address + 8) += atomic_add[15:8]
*(Address + 16) += atomic_add[23:16]
*(Address + 24) += atomic_add[31:24]
*(Address + 32) += atomic_add[39:32]
*(Address + 40) += atomic_add[47:40]
*(Address + 48) += atomic_add[55:48]
In an embodiment, a value of “0” in a data element field is valid and will leave the corresponding statistic unchanged. Thus, while using a write request format with multiple different data elements, not all of the corresponding statistic records may be updated in response to the write request.
In an embodiment, write requests that include multiple data elements must be naturally aligned in address. In other words, for atomic_add[57:56] (size field):
00: write to any counter OK;
01: write to any even counter (index LSB is 0);
10: write to any fourth counter (index LSBs are 2'b0); and
11: write to any eighth counter (index LSBs are 3'b0).
As described herein, the generation of physical memory addresses is implemented in the register interface 232 of the I/O system 200. Implementing the physical memory address generation in the register interface centralizes the address generation function to a single place in the system, which makes the address generation process efficient to implement, control, and/or modify. Additionally, centralizing the address generation function to the register interface removes the burden of physical memory address generation from the processing elements, which may be heterogeneous processing elements distributed throughout the I/O system, and which may require custom configuration for each instance.
In some cases, multiple write requests directed to the same statistic record may be sent to the register interface in rapid succession, e.g., in a burst of write requests. Thus, in some embodiments, the register interface includes a request buffer (see request buffer 256,
In one embodiment, multiple write requests to the same statistic record are buffered in the request buffer 256 and processed serially by the register interface. For example,
In another embodiment, multiple write requests to the same statistic record are buffered in the request buffer 256 and processed as a “batch” or in a “batch process.” For example,
add1 = (record1 + write_requestA);
add2 = write_requestB + (add1); and
add3 = write_requestC + (add2).
The result of the three consecutive add operations (e.g., add3) is then written back to the memory. Although the processing of the three write requests involves three add operations, the processing of the three write requests requires only one read of statistic record 1 from the memory and only one write of the updated statistic back to the memory. Additionally, the batch processing of write requests as described herein ensures that a race condition is avoided amongst the three write requests. In an embodiment, the batch processing of write requests is triggered when there is more than one write request corresponding to the same statistic record buffered in the register interface.
Although the update logic is described above as implementing add operations, in other embodiments, the update logic may be configured to implement other statistic update operations in addition to, or instead of, add operations. For example, other update operations that may be implemented by the update logic may include:
1) an increment plus timestamp update operation: read one or more statistic records from the generated physical memory address; add one or more data elements that are included in the write request; load the current timestamp from a time keeping resource; and store all results plus the timestamp back to the memory.
2) a histogram update operation: determine which bucket the data is bound for based on a programmable profile; determine bucket statistics address based on bucket number plus calculated address; read current bucket value from memory; add write data; store back to memory.
3) a histogram plus timestamp update operation: a combination of operations 1) and 2), above.
4) an increment plus transaction latency update operation: read one or more statistic records from the calculated address; add one or more data elements from the write request; store the current timestamp for a request transaction, or read the stored timestamp and calculate the time difference for a response transaction; store the calculated latency in the memory.
5) a moving average update operation: read statistic record from the memory; calculate an updated moving average using the data element in the write request; write the updated moving average back to the memory.
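As one concrete illustration of operation 2), the bucket selection and read-modify-write can be sketched as follows. The bucket boundaries (the "programmable profile") and the addressing scheme here are assumptions for illustration only.

```python
# Hedged sketch of the histogram update operation: pick a bucket from a
# programmable profile, form the bucket's address, and increment its count.
import bisect

def histogram_update(ram, base_address, profile, data):
    """Return the bucket number after incrementing that bucket's count."""
    bucket = bisect.bisect_right(profile, data)  # which bucket the data is bound for
    address = base_address + bucket              # bucket number plus calculated address
    ram[address] = ram.get(address, 0) + 1       # read current value, add, store back
    return bucket

ram = {}
profile = [10, 100, 1000]                 # boundaries: <=10, <=100, <=1000, >1000
histogram_update(ram, 0x40, profile, 42)  # 42 falls in the second bucket
```

A hardware implementation would compare against the profile with comparators rather than a search, but the address formation and read-modify-write sequence are the same.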
In an embodiment, the particular update operation that is executed by the update logic is determined in response to the index (e.g., atomic_add_index) that is included in the request. For example, the update logic is configured to associate particular indexes with particular update operations.
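The index-to-operation association described above can be pictured as a small dispatch table. The specific index ranges and operations below are purely illustrative assumptions, not the actual mapping used by the update logic.

```python
# Sketch: the update logic associates particular indexes with particular
# update operations; here a table of hypothetical index ranges does the same.

def add_op(record, data):
    return record + data

def max_op(record, data):
    return max(record, data)

OP_TABLE = [
    (range(0, 1024), add_op),     # e.g., an atomic_add_index range -> add
    (range(1024, 2048), max_op),  # another range -> a different operation
]

def dispatch(index, record, data):
    """Select and execute the update operation associated with an index."""
    for index_range, op in OP_TABLE:
        if index in index_range:
            return op(record, data)
    raise ValueError("no operation associated with index %d" % index)

dispatch(5, 10, 3)     # add operation -> 13
dispatch(1500, 10, 3)  # max operation -> 10
```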
Although the examples described herein involve separate write address and write data channels (264 and 266), the techniques described herein are not limited to such a channel configuration.
Some advantages of the hardware-based statistics management technique described herein may include, for example:
1) Processing elements update statistics with a posted write, which does not require read response buffers or tracking of multiple transactions.
2) Multiple statistics updates to the same or different records can be in flight at the same time from one or more processing elements. The atomic update mechanism will serialize the updates and update memory accurately, without race conditions. For performance reasons, the atomic update system can detect multiple write requests to the same data index and collapse them into a single memory read plus memory write transaction.
3) The atomic update mechanism can employ acceleration devices, such as a transaction buffer or memory cache, that are difficult and expensive to implement at the processing elements.
4) The internal transaction network of the system can be used to update many statistics in a single transaction, reducing the overall traffic on the internal bus resource.
5) Processing elements write to the atomic update register array using only the statistics index as an addressing parameter. The atomic update mechanism can be programmed with the statistic data structure's offset in memory, centralizing this parameter and removing the burden of more complex addressing from individual processing elements.
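The serialization property of advantage 2) can be sketched with a software analogy: many producers issue posted writes that funnel through a single serial updater, so every update is applied exactly once with no races. This is an illustration of the property, not a model of the hardware; all names here are hypothetical.

```python
# Sketch: concurrent posted writes from multiple producers, serialized by a
# single update engine, update memory accurately without race conditions.
import threading
import queue

def run_updates(num_producers=4, writes_per_producer=100):
    ram = {0: 0}
    requests = queue.Queue()

    def producer():
        for _ in range(writes_per_producer):
            requests.put((0, 1))           # posted write: index 0, data 1

    def updater():                         # the single serial update engine
        while True:
            item = requests.get()
            if item is None:
                break
            index, data = item
            ram[index] = ram[index] + data  # read-modify-write, one at a time

    u = threading.Thread(target=updater)
    u.start()
    producers = [threading.Thread(target=producer) for _ in range(num_producers)]
    for t in producers:
        t.start()
    for t in producers:
        t.join()
    requests.put(None)                     # stop the updater
    u.join()
    return ram[0]

run_updates()  # -> 400: every one of the 4 x 100 posted writes is applied once
```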
Although the operations of the method(s) herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In another embodiment, instructions or sub-operations of distinct operations may be implemented in an intermittent and/or alternating manner.
It should also be noted that at least some of the operations for the methods described herein may be implemented using software instructions stored on a computer useable storage medium for execution by a computer. As an example, an embodiment of a computer program product includes a computer useable storage medium to store a computer readable program.
The computer-useable or computer-readable storage medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device). Examples of non-transitory computer-useable and computer-readable storage media include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and an optical disk. Current examples of optical disks include a compact disk with read only memory (CD-ROM), a compact disk with read/write (CD-R/W), and a digital video disk (DVD).
Although specific embodiments of the invention have been described and illustrated, the invention is not to be limited to the specific forms or arrangements of parts so described and illustrated. The scope of the invention is to be defined by the claims appended hereto and their equivalents.
Assignment: On Jul 09 2020, Michael B. Galles assigned his interest in this patent to Pensando Systems, Inc. (Reel/Frame 053194/0877).