The present subject disclosure provides a switch architecture with a data and control path systolic array that can be used for real-time data analysis or Artificial Intelligence (AI) learning. A systolic array is described which analyzes the TLPs received by an uplink port and processes them according to pre-programmed rules. Each TLP is then forwarded to a destination port. The reverse operation is described as well.
1. A method for switching, comprising:
receiving a transaction layer packet (TLP) at an uplink port of a switch;
at the switch:
determining a nature of the TLP by evaluating a parameter of the TLP at a filter coupled to the uplink port;
determining if the TLP should be routed to a systolic array at the switch based on the nature of the TLP determined by the filter;
routing the TLP to the systolic array at the switch based on a determination that the TLP should be routed to the systolic array at the switch;
analyzing, at the switch, the TLP in the systolic array; and
forwarding the TLP to a destination port of the switch based on the analysis of the TLP performed by the systolic array of the switch.
7. A switch, comprising:
an uplink port for receiving a transaction layer packet (TLP) at the switch;
a filter coupled to the uplink port, the filter for determining a nature of the TLP received at the uplink port by evaluating a parameter of the received TLP, determining if the TLP should be routed to a systolic array at the switch based on the nature of the TLP determined by the filter, and routing the TLP to the systolic array at the switch based on a determination that the TLP should be routed to the systolic array at the switch;
the systolic array for analyzing the TLP at the switch and forwarding the TLP to a destination port of the switch based on the analysis of the TLP performed by the systolic array of the switch; and
a destination port for receiving the TLP.
4. The method of
5. The method of
9. The switch of
10. The switch of
12. The switch of
14. The switch of
This application is a continuation of, and claims a benefit of priority under 35 U.S.C. 120 of the filing date of U.S. patent application Ser. No. 15/494,606 filed on Apr. 24, 2017, issued as U.S. Pat. No. 10,261,936, entitled “PCIe SWITCH WITH DATA AND CONTROL PATH SYSTOLIC ARRAY”, the entire contents of which are hereby expressly incorporated by reference for all purposes.
The subject disclosure relates generally to computer software and hardware design and architecture. In particular, the subject disclosure relates to a PCIe switch with a data and control path systolic array.
Peripheral Component Interconnect Express (PCIe) is a modern, high-speed standard for a serial computer expansion bus. It operates more effectively and efficiently than older, conventional buses in part because of its bus topology. While standard buses (such as PCI) use a shared parallel bus architecture, in which the PCI host and all devices share a common set of address, data, and control lines, PCIe is based on a point-to-point topology, with separate serial links connecting every device to the root complex (host). The conventional PCI clocking scheme limits the bus clock to the slowest peripheral on the bus (regardless of the devices involved in the bus transaction). In contrast, a PCIe bus link supports full-duplex communication between any two endpoints, with no inherent limitation on concurrent access across multiple endpoints.
Typically, a PCIe bus allows one device at each endpoint of each connection. PCIe switches can create multiple endpoints out of one endpoint, allowing a single endpoint to be shared with multiple devices.
A traditional PCIe switch can switch transaction layer packets (TLPs) from an uplink port to a downlink port based only on address or requester identifier (Req ID). However, as fields advance and require more robust communication, the conventional techniques for relaying information become increasingly inefficient and require improvement.
Today's Artificial Intelligence (AI) learning and data analytics workloads need more than simple switching of PCIe TLPs to improve their performance.
The present subject disclosure provides a PCIe switch architecture with a data and control path systolic array that can be used for real-time data analysis or Artificial Intelligence (AI) learning.
In one exemplary embodiment, the present subject matter is a method for Peripheral Component Interconnect Express (PCIe) switching. The method includes receiving a transaction layer packet (TLP) at an uplink port; determining the nature of the TLP; routing the TLP to a systolic array; analyzing the TLP in the systolic array; and forwarding the TLP to a destination port.
In another exemplary embodiment, the present subject matter is a Peripheral Component Interconnect Express (PCIe) switch. The switch includes an uplink port for receiving a transaction layer packet (TLP); a programmable filter for determining the nature of the TLP; a systolic array for analyzing the TLP; and a destination port for receiving the TLP.
In yet another exemplary embodiment, the present subject matter is a Peripheral Component Interconnect Express (PCIe) switch. The switch includes an uplink port for receiving a transaction layer packet (TLP); a first programmable filter for determining the nature of the TLP; a systolic array for analyzing the TLP; a plurality of destination ports for receiving the TLP; and a plurality of second programmable filters, one associated with each of the destination ports.
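To make the summarized method concrete, the following is a minimal sketch in C of the receive/classify/route/analyze/forward flow. All types and function names (tlp_t, filter_classify, systolic_analyze, default_route, forward) are illustrative assumptions rather than part of the disclosure, and the hardware behavior is stubbed out.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t header[4];   /* the header DWORDs of a PCIe TLP */
    uint32_t *payload;    /* optional data payload */
    uint16_t  len;        /* payload length in DWORDs */
} tlp_t;

/* Stubs standing in for the filter, systolic array, and port hardware. */
static bool filter_classify(const tlp_t *t) { (void)t; return true; }  /* route to array? */
static int  systolic_analyze(const tlp_t *t) { (void)t; return 2; }    /* array picks a port */
static int  default_route(const tlp_t *t) { (void)t; return 0; }       /* address/Req ID switching */
static void forward(const tlp_t *t, int port)
{
    (void)t;
    printf("forwarding TLP to destination port %d\n", port);
}

/* One pass of the summarized method: receive a TLP, determine its nature,
 * route it to the systolic array if warranted, analyze, and forward. */
static void switch_tlp(const tlp_t *tlp)
{
    if (filter_classify(tlp))
        forward(tlp, systolic_analyze(tlp));  /* array selects destination */
    else
        forward(tlp, default_route(tlp));     /* conventional switching */
}

int main(void)
{
    tlp_t tlp = { .header = { 0x40000001u, 0, 0, 0 } }; /* illustrative header DWORDs */
    switch_tlp(&tlp);
    return 0;
}
```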
Various exemplary embodiments of this disclosure will be described in detail with reference to the figures, wherein like reference numerals refer to identical or similar components or steps.
Particular embodiments of the present subject disclosure will now be described in greater detail with reference to the figures.
Having a systolic array in both the data and control paths, and the ability to use the array in a programmable fashion, opens up a variety of options for real-time data analysis and learning.
In a particular exemplary embodiment, the present subject matter describes a PCIe switch with a data path and control path systolic array.
The programmable filter 120 can guide the routing of received TLPs from an uplink port 110 to the systolic array 130 through path 121. Alternatively, programmable filter 120 can guide the routing of received TLPs from an uplink port 110 through path 122 which can lead to various path options 140. Path 141 leads to destination port 161. Path 142 leads to destination port 162. Path 143 leads to destination port 163. Path 144 leads to destination port 164.
The filter 120 can provide a wide range of options for routing the TLPs. It can provide address-based or Requester ID (Req ID)-based filtering for various types of PCIe TLPs. These include, but are not limited to, Memory Reads, Memory Writes, Completions, Configuration Reads, Configuration Writes, I/O Writes, I/O Reads, Messages, etc. Other options are also possible and within the scope of the present disclosure.
The output of the filter 120 also specifies whether just the TLP header or the complete TLP is to be sent to the systolic array 130. Programmable filter 120 can also be programmed to make systolic array 130 a snooper only, in which case TLPs are replicated both to systolic array 130 and to the destination ports 161, 162, 163, 164. Filter 120 can be programmed to target a particular Data Processing Unit 131 in the systolic array 130. Further, there can be an array of systolic arrays 130.
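As an illustration of the filtering options just described, the following C sketch shows what one programmable filter rule might contain: address- or Req ID-based match criteria, the TLP types listed above, and the output options (header-only delivery, snoop mode, target DPU). The struct layout and all names are assumptions made for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>

/* TLP types the filter can match, per the list above. */
typedef enum {
    TLP_MEM_READ, TLP_MEM_WRITE, TLP_COMPLETION, TLP_CFG_READ,
    TLP_CFG_WRITE, TLP_IO_READ, TLP_IO_WRITE, TLP_MESSAGE
} tlp_type_t;

/* One programmable filter rule (hypothetical layout). */
typedef struct {
    tlp_type_t type;         /* TLP type this rule applies to           */
    bool       match_addr;   /* enable address-based matching           */
    uint64_t   addr_lo, addr_hi;
    bool       match_req_id; /* enable Requester-ID-based matching      */
    uint16_t   req_id;
    /* Output options, mirroring the filter behavior described above: */
    bool       header_only;  /* send only the TLP header to the array   */
    bool       snoop;        /* replicate to array AND destination port */
    uint8_t    target_dpu;   /* which DPU in the array receives the TLP */
} filter_rule_t;

/* True when the TLP's type, address, and Requester ID satisfy the rule. */
static bool rule_matches(const filter_rule_t *r, tlp_type_t type,
                         uint64_t addr, uint16_t req_id)
{
    if (r->type != type)
        return false;
    if (r->match_addr && (addr < r->addr_lo || addr > r->addr_hi))
        return false;
    if (r->match_req_id && r->req_id != req_id)
        return false;
    return true;
}
```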
The systolic array 130 includes a homogeneous array of Data Processing Units (DPUs) 131. Systolic array 130 can be programmed to analyze just TLP headers, or both TLP headers and TLP Data. After analysis, the systolic array 130 can choose to forward the TLP to any destination port 161, 162, 163, 164. For example, path 136 leads from the systolic array 130 to downlink port 161. Path 132 leads from the systolic array 130 to downlink port 162. Path 133 leads from the systolic array 130 to downlink port 163. Path 134 leads from the systolic array 130 to downlink port 164.
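The following sketch models, in software, how a token (here, a TLP header plus an accumulated verdict) might flow through a homogeneous array of DPUs, each applying the same programmable step before handing the token to its neighbor. The sequential loop is only a stand-in for what the hardware would do in lock-step; all names and the array depth are illustrative.

```c
#include <stdint.h>

#define NUM_DPUS 8   /* illustrative array depth */

typedef struct {
    uint32_t header[4];  /* TLP header under analysis */
    int      verdict;    /* accumulated result, e.g. the chosen destination port */
} dpu_token_t;

/* Every DPU in the homogeneous array runs the same programmable step. */
typedef void (*dpu_step_fn)(dpu_token_t *tok, int dpu_index);

/* Pass one token through the array: DPU i hands its output to DPU i+1.
 * In hardware the DPUs operate in lock-step on a stream of such tokens;
 * this loop is merely a software model of that pipeline. */
static void systolic_pass(dpu_token_t *tok, dpu_step_fn step)
{
    for (int i = 0; i < NUM_DPUS; i++)
        step(tok, i);
}
```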
Programmable filters 151, 152, 153, and 154 function similarly to programmable filter 120 but are positioned adjacent to downlink ports 161, 162, 163, and 164, respectively. These filters 151, 152, 153, 154 may be programmed to review TLPs and supply information back to the systolic array 130 and/or uplink port 110. Thus, the system described can work equally effectively providing data and information from uplink port 110 to downlink ports 161, 162, 163, and 164, as well as from downlink ports 161, 162, 163, 164 back up to uplink port 110.
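A hypothetical record that a downlink filter could supply back to the systolic array or uplink port might look like the following; the fields and the reporting function are assumptions for illustration, not part of the disclosure.

```c
#include <stdint.h>

/* Hypothetical record a downlink filter (151-154) could supply back to
 * the systolic array 130 and/or uplink port 110 after reviewing a TLP. */
typedef struct {
    uint8_t  port;    /* downlink port that observed the TLP */
    uint16_t req_id;  /* Requester ID carried by the TLP     */
    uint64_t addr;    /* address carried by the TLP          */
} tlp_feedback_t;

/* Provided by the array/uplink side; declaration only in this sketch. */
void report_to_array(const tlp_feedback_t *fb);
```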
There are numerous applications of the present subject disclosure, as would be appreciated by one having ordinary skill in the art. Examples include, but are not limited to, real-time data analysis of PCIe traffic, and use in Artificial Intelligence learning algorithms to help Graphics Processing Units (GPUs). Other examples are also possible, and are within the scope of the present subject disclosure.
In the example of GPUs, the present system and method may be used to allow GPUs to communicate directly with each other. This may be accomplished by direct provisioning of GPUs, or by acting as a switch between GPUs. Control processing may also be possible. Many possible applications and uses of the present subject disclosure are evident; a few such non-limiting examples are presented herein.
In a typical NVME RAID controller, data is moved from the host to NVME devices and vice versa, with the host posting the addresses for moving data, XORing/duplicating data, and reconstructing lost data. In the present example, control software running in PSSA 200 can be used to perform regular RAID, or to operate in a mode for moving data, XORing/duplicating data, and reconstructing lost data. PSSA 200 can also support a hybrid mode in which host 201 can directly transfer data to NVME devices 205 or can use the control software for managing data transfers.
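The XORing and reconstruction mentioned here is standard RAID parity arithmetic: the parity block is the XOR of the data members, and because XOR is its own inverse, XORing the parity with the surviving members regenerates a lost member. A minimal sketch, with buffer layout and names assumed:

```c
#include <stddef.h>
#include <stdint.h>

/* Compute XOR parity across a stripe of nmembers data buffers. The same
 * routine reconstructs a lost member: pass the parity buffer among the
 * surviving members and the output is the missing data. */
static void xor_stripe(uint8_t *out, const uint8_t *const *members,
                       size_t nmembers, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t acc = 0;
        for (size_t m = 0; m < nmembers; m++)
            acc ^= members[m][i];
        out[i] = acc;
    }
}
```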
In a typical NVMEoF device, data is transferred from NIC 306 to host 301 and then to NVME devices 305, or from NVME devices 305 to host 301 and then to NIC 306. Control software running in PSSA 300 can move data directly from NIC 306 to NVME devices 305, or vice versa, if needed, thus eliminating unnecessary trips of data to host 301. For the sake of completeness, as shown in the figure, RoCE stands for Remote Direct Memory Access (RDMA) over Converged Ethernet, and iWARP stands for Internet Wide Area RDMA Protocol. Both are computer networking protocols that implement RDMA for efficient data transfer.
In Artificial Intelligence applications, data is moved from NVME device 405 to host 401 and then from host 401 to GPUs 406/407, or data is moved from GPU 406 to host 401 and then to another GPU 407 or to NVME device 405. Control software running in PSSA 400 can allow the regular mode of operation and can also allow data to be transferred directly from GPU 406 to GPU 407, from NVME device 405 to GPU 406/407, or from GPU 406/407 to NVME device 405. This is a simplified example; many other possibilities and configurations are possible and within the scope of the present subject disclosure, as appreciated by one having ordinary skill in the art after considering the present disclosure.
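Both this example and the NVMEoF example above reduce to the same routing decision: whether a transfer between two endpoints below the switch can proceed peer-to-peer or must be staged through the host. A minimal sketch of such a policy check follows, with an illustrative endpoint model that is not part of the disclosure.

```c
#include <stdbool.h>

/* Illustrative endpoint classes below the switch. */
typedef enum { EP_HOST, EP_NIC, EP_NVME, EP_GPU } endpoint_t;

/* Decide whether control software may route a transfer peer-to-peer
 * (e.g. NIC<->NVME or GPU<->GPU) instead of staging it through the host.
 * The policy shown is deliberately simple: any transfer that does not
 * name the host as an endpoint may be routed directly. */
static bool can_route_direct(endpoint_t src, endpoint_t dst)
{
    return src != EP_HOST && dst != EP_HOST;
}
```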
The illustrations and examples provided herein are for explanatory purposes and are not intended to limit the scope of the appended claims. It will be recognized by those skilled in the art that changes or modifications may be made to the above described embodiment without departing from the broad inventive concepts of the subject disclosure. It is understood therefore that the subject disclosure is not limited to the particular embodiment which is described, but is intended to cover all modifications and changes within the scope and spirit of the subject disclosure.
Enz, Michael, Kamath, Ashwin, Shakamuri, Harish Kumar