Systems and methods of affinity modeling in data center networks that allow bandwidth to be efficiently allocated within the data center networks, while reducing the physical interconnectivity requirements of the data center networks. Such systems and methods of affinity modeling in data center networks further allow computing resources within the data center networks to be controlled and provisioned based at least in part on the network topology and an application component topology, thereby enhancing overall application program performance. Using an affinity topology describing requirements for communications between applications and a network topology, network nodes are configured to satisfy multicast dispersion and latency requirements associated with communications between applications.
10. A method for configuring a computer network having a plurality of network elements, the network elements including a plurality of nodes interconnected by network links, the method comprising:
obtaining a network topology specifying a topology of the network elements and interconnections of the network elements by the network links;
obtaining an affinity topology including a multicast dispersion descriptor requirement specifying a maximum acceptable difference in hop counts for a multicast message transmitted by a first affinity element to at least two destination affinity elements;
computing an affinity-network topology that represents a logical combination of the network topology and the affinity topology that satisfies the multicast dispersion descriptor requirement; and
configuring the plurality of nodes in accordance with the computed affinity-network topology so that the multicast message transmitted by the first affinity element is received by the at least two destination affinity elements in satisfaction of the multicast dispersion descriptor requirement.
1. A method for configuring a computer network having a plurality of network elements, the network elements including a plurality of nodes interconnected by network links, the method comprising:
obtaining a network topology specifying a topology of the network elements and interconnections of the network elements by the network links;
obtaining an affinity topology including a multicast dispersion descriptor requirement specifying a maximum acceptable difference in arrival times of a multicast message transmitted by a first affinity element at at least two destination affinity elements;
computing an affinity-network topology that represents a logical combination of the network topology and the affinity topology that satisfies the multicast dispersion descriptor requirement; and
configuring the plurality of nodes in accordance with the computed affinity-network topology so that the multicast message transmitted by the first affinity element is received by the at least two destination affinity elements in satisfaction of the multicast dispersion descriptor requirement.
2. The method of
3. The method of
5. The method of
wherein the step of configuring comprises the step of configuring the plurality of nodes in accordance with the computed affinity-network topology so that the multicast message transmitted by the first affinity element is received by the at least two destination affinity elements in satisfaction of the multicast dispersion descriptor requirement and the latency descriptor requirement.
6. The method of
9. The method of
11. The method of
12. The method of
14. The method of
wherein the configuring step comprises the step of configuring the plurality of nodes in accordance with the computed affinity-network topology so that a multicast message transmitted by the first affinity element is received by the at least two destination affinity elements in satisfaction of the multicast dispersion descriptor requirement and the maximum hop count descriptor requirement.
15. The method of
18. The method of
This application is a continuation of U.S. patent application Ser. No. 13/651,212 filed Oct. 12, 2012 entitled AFFINITY MODELING IN A DATA CENTER NETWORK, which is a continuation-in-part of U.S. patent application Ser. No. 13/528,501 filed Jun. 20, 2012, issued on Jun. 23, 2015 as U.S. Pat. No. 9,065,582, entitled OPTICAL ARCHITECTURE AND CHANNEL PLAN EMPLOYING MULTI-FIBER CONFIGURATIONS FOR DATA CENTER NETWORK SWITCHING, and a continuation-in-part of U.S. patent application Ser. No. 13/528,211 filed Jun. 20, 2012, issued on Sep. 23, 2014 as U.S. Pat. No. 8,842,988, entitled OPTICAL JUNCTION NODES FOR USE IN DATA CENTER NETWORKS, and which claims priority to U.S. Provisional Patent Application No. 61/554,107 filed Nov. 1, 2011 entitled DATA CENTER NETWORK SWITCHING. This application is also related to U.S. patent application Ser. No. 13/651,213 filed Oct. 12, 2012 entitled DATA CENTER NETWORK ARCHITECTURE, U.S. patent application Ser. No. 13/651,224 filed Oct. 12, 2012 entitled CONTROL AND PROVISIONING IN A DATA CENTER NETWORK WITH AT LEAST ONE CENTRAL CONTROLLER, and U.S. patent application Ser. No. 13/651,255 filed Oct. 12, 2012, issued on Dec. 1, 2015 as U.S. Pat. No. 9,204,207, entitled HIERARCHY OF CONTROL IN A DATA CENTER NETWORK.
Not applicable
The present disclosure relates generally to data center network architectures and switching technologies, and more specifically to data center networks that can employ optical network topologies and optical nodes to efficiently allocate bandwidth within the data center networks, while reducing the physical interconnectivity requirements of the data center networks. The present disclosure further relates to systems and methods of affinity modeling in data center networks that allow computing resources within the data center networks to be controlled and provisioned based at least in part on the network topology and an application component topology, thereby enhancing overall application program performance.
In recent years, university, government, business, and financial service entities, among others, have increasingly relied upon data center networks that incorporate racks of server computers (“servers”) to implement application programs (“applications”) for supporting their specific operational requirements, including, but not limited to, data base management applications, document and file sharing applications, searching applications, gaming applications, and financial trading applications. Such data center networks are generally expanding in terms of the number of servers incorporated therein, as well as the networking equipment needed to interconnect the servers for accommodating the data transfer requirements of the respective applications.
Conventional data center networks typically have hierarchical architectures, in which each server co-located in a particular rack is connected via one or more Ethernet connections to a top-of-rack Ethernet switch (the “top-of-rack switch”). A plurality of such top-of-rack switches form what is referred to herein as the “access layer”, which is generally the lowest level of the hierarchical network architecture. The next higher level of the hierarchy is referred to herein as the “aggregation layer”, which can include a plurality of Ethernet switches (the “aggregation switch(es)”) and/or Internet protocol (IP) routers. Each top-of-rack switch in the access layer can be connected to one or more aggregation switches and/or IP routers in the aggregation layer. The highest level of the hierarchy is referred to herein as the “core layer”, which generally includes a plurality of IP routers (the “core switches”) that can be configured to provide ingress/egress points for the data center network. Each aggregation switch and/or IP router in the aggregation layer can be connected to one or more core switches in the core layer, which, in turn, can be interconnected to one another. In such conventional data center networks, the interconnections between the racks of servers, the top-of-rack switches in the access layer, the aggregation switches/IP routers in the aggregation layer, and the core switches in the core layer, are typically implemented using point-to-point Ethernet links.
Although conventional data center networks like those described above have been employed to satisfy the operational requirements of many university, government, business, and financial service entities, such conventional data center networks have drawbacks. For example, data communications between servers that are not co-located within the same rack may experience excessive delay (also referred to herein as “latency”) within the data center networks, due to the multitude of switches and/or routers that the data may be required to traverse as it propagates “up”, “down”, and/or “across” the hierarchical architecture of the networks. Data communications between such servers may also experience latency within the respective switches and/or routers of the data center networks due to excessive node and/or link utilization. Further, because multiple paths may be employed to deliver broadcast and/or multicast data to different destinations within the data center networks, such broadcast and/or multicast data may experience excessive latency skew. Such latency and/or latency skew may be exacerbated as the sizes of the data center networks and/or their loads increase.
In addition, conventional data center networks typically include network management systems that employ configuration data for proper allocation of computing resources within the data center networks. However, such configuration data frequently lack contextual information, such as how the topology of a data center network should be configured in view of the available computing resources to achieve a desired level of application performance. For example, such network management systems may employ the Open Virtualization Format (also referred to herein as the “OVF standard”) to facilitate the control and provisioning of such computing resources. However, the OVF standard generally lacks contextual information pertaining to the network topology, and may therefore be incapable of assuring that the available computing resources are being properly provisioned for the desired application performance level. As a result, problems with latency, data bottlenecks, etc., may be further exacerbated, thereby slowing down or otherwise inhibiting data movement within the data center networks.
It would therefore be desirable to have data center networks that avoid at least some of the drawbacks of the conventional data center networks described above.
In accordance with the present disclosure, systems and methods of affinity modeling in data center networks are disclosed that allow bandwidth to be efficiently allocated within the data center networks, while reducing the physical interconnectivity requirements of the data center networks. Such systems and methods of affinity modeling in data center networks further allow computing resources within the data center networks to be controlled and provisioned based at least in part on the network topology and an application component topology, thereby enhancing overall application program performance.
In one aspect, the disclosed systems and methods of affinity modeling are employed in conjunction with a data center network architecture that includes one or more physical or logical optical ring networks. Each of the optical ring networks includes a plurality of optical nodes, in which at least two optical nodes each have an associated local co-resident controller. The data center network architecture further includes one or more central controllers, zero, one, or more governing central controllers, a functional component referred to herein as the “affinity modeling component”, and an affinity-network topology database. Each of the co-resident controllers associated with the optical ring networks is communicably coupled to a respective one of the central controllers. Each co-resident controller is operative to send one or more messages to the respective central controller communicably coupled thereto. Moreover, each of the central controllers is operative to receive and process the messages sent to it by the co-resident controllers, and to control the respective co-resident controllers.
Each governing central controller can be communicably coupled to one or more of the central controllers. In an exemplary aspect, the governing central controller, the central controllers, and the local co-resident controllers can be configured to provide a hierarchy of network control. For example, the governing central controller may control the respective central controllers to perform load balancing with regard to network traffic carried on the optical ring networks. In addition, the governing central controller and the central controllers are each operative to receive information pertaining to the affinity-network topology from the affinity-network topology database. Having received the affinity-network topology information, the central controllers, in conjunction with the governing central controller, can control some or all of the co-resident controllers to modify and/or implement the affinity-network topology across the respective optical ring networks.
In another aspect, the affinity modeling component includes a plurality of functional components operative to generate the affinity-network topology information. In an exemplary aspect, the plurality of functional components can include at least an affinity element harvester, a network topology harvester, an affinity topology calculator, and an affinity-network topology calculator. The affinity element harvester is operative to harvest information pertaining to one or more affinity elements, along with their mappings to one or more physical elements within the optical ring networks. Each such affinity element is defined herein as an application component that may be virtualized (e.g., virtual machines, virtualized storage blocks, etc.) or non-virtualized (e.g., physical storage servers, units of non-virtualized software running on hardware platforms, hardware firewalls, hardware load balancers, etc.). Further, each affinity element can be a member of an affinity group, which is defined herein as a collection of servers or virtual machines (VMs), a cluster (e.g., a set of servers that are load balanced and provide high availability), and/or data center resident (or between multiple data centers) applications that require persistent interconnectivity bandwidth, low latency, multicast or broadcast services, and/or isolation from other services. The network topology harvester is operative to harvest information pertaining to the topology of the data center network architecture. The affinity topology calculator is operative to employ at least (1) the information pertaining to the affinity elements and their mappings to the physical elements within the network, (2) the information pertaining to the network topology, and/or (3) information pertaining to specific application requirements, to compute, calculate, derive, or otherwise obtain a logical topology (also referred to herein as the “affinity topology”) describing a functional and/or performance-driven relationship between the affinity groups and/or the affinity elements. For example, the affinity topology can specify policies and attributes that describe communications between a plurality of application components in the network.
Using at least the information pertaining to the network topology and the affinity topology, the affinity-network topology calculator is operative to form or otherwise obtain a combined affinity-network topology that takes into account both the network topology and the affinity topology. Such a combined affinity-network topology is defined herein as an overall topology that can be obtained by logically combining, e.g., by logically stitching together or overlaying, the network topology and the affinity topology. For example, the affinity-network topology calculator may stitch together the network topology and the affinity topology by binding affinity elements to their counterparts in the network topology, yielding one or more logical links between the affinity groups/elements and the physical and/or virtualized elements within the data center network architecture. The central controllers are operative to receive information pertaining to the affinity-network topology, and, based at least on the received information, to control one or more optical nodes, and zero, one, or more optical junction nodes, to modify the network topology, as appropriate, for implementing the affinity-network topology within the data center network, thereby providing enhanced levels of application program performance and network utilization.
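As a concrete illustration of the stitching described above, the following minimal Python sketch (the data structures, element names, and link attributes are hypothetical, not the disclosed implementation) overlays affinity links onto a network graph and adds one logical binding link per affinity element:

    # Hypothetical sketch: logically combine a network topology and an
    # affinity topology into a single affinity-network topology.
    network_links = {                      # physical/virtualized network graph
        ("switch1", "host1"): {"bw_gbps": 10},
        ("switch1", "host2"): {"bw_gbps": 10},
    }
    affinity_links = {                     # affinity topology (logical requirements)
        ("Appl1.W1", "Appl1.A1"): {"hops": "sensitive", "bandwidth": "sensitive"},
    }
    bindings = {                           # affinity element -> hosting network element
        "Appl1.W1": "host1",
        "Appl1.A1": "host2",
    }

    def combine(network_links, affinity_links, bindings):
        """Overlay affinity links onto the network graph, adding one logical
        link per binding between an affinity element and its host."""
        combined = {("net", u, v): dict(attrs) for (u, v), attrs in network_links.items()}
        for (a, b), reqs in affinity_links.items():
            combined[("affinity", a, b)] = dict(reqs)
        for element, host in bindings.items():
            combined[("binding", element, host)] = {}   # logical stitching link
        return combined

    affinity_network_topology = combine(network_links, affinity_links, bindings)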
In a further aspect, a system for providing enhanced application program performance and network utilization in a network includes a modeling component. The network has an associated topology and a current network state, which, as employed herein, pertains to the operational status of all of the network segments in the network and the sources and destinations to which that operational status relates, as well as the endpoint addresses (such as MAC or IP addresses) of all of the host computers communicably coupled to the respective nodes on the network. A “network segment” is defined herein as a unidirectional link in the network from a source to a destination considered at the applicable OSI model layer, e.g., layer-1, layer-2, or layer-3. The topology associated with the network includes a network topology and an affinity topology. The modeling component has at least one processor operative to execute at least one program out of at least one memory to model an affinity-network topology that represents a logical combination of the affinity topology and the network topology. The system further includes a central controller operative to receive information pertaining to at least the affinity-network topology, and to compute one or more forwarding topologies based at least in part on the affinity-network topology. Each forwarding topology identifies one or more network segments for forwarding traffic through the network. For example, the forwarding topologies can represent a network abstraction layer, and the affinity-network topology can represent a workload abstraction layer. The central controller is further operative to provide the forwarding topologies for use in deterministically arriving at a consistent end-to-end forwarding configuration for the network as a function of the current network state.
In another aspect, a method of providing enhanced application program performance and network utilization in a network includes modeling an affinity-network topology by a computerized modeling component, and receiving, at a central controller, information pertaining to at least the affinity-network topology. The method further includes computing, by the central controller, one or more forwarding topologies based at least in part on the affinity-network topology, and providing, by the central controller, the forwarding topologies for use in deterministically arriving at the consistent end-to-end forwarding configuration for the network as a function of the current network state.
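The paragraph above can be pictured with a rough sketch; the candidate-path table, element names, and first-viable-path policy below are assumptions chosen only to show how a forwarding configuration can be derived deterministically from the current network state:

    # Rough sketch: deterministically derive a forwarding configuration from
    # candidate forwarding topologies and the current network state.
    candidate_paths = {
        ("host1", "host2"): [
            ["host1", "switchA", "host2"],
            ["host1", "switchA", "switchB", "host2"],
        ],
    }
    network_state = {("host1", "switchA"): "up", ("switchA", "host2"): "up",
                     ("switchA", "switchB"): "down", ("switchB", "host2"): "up"}

    def segments(path):
        return list(zip(path, path[1:]))

    def select_forwarding(candidate_paths, network_state):
        """Pick, for each source/destination pair, the first candidate path whose
        network segments are all operational; identical inputs always yield the
        same output, giving a consistent end-to-end configuration."""
        forwarding = {}
        for pair, paths in sorted(candidate_paths.items()):
            for path in paths:                      # candidates in priority order
                if all(network_state.get(seg, "down") == "up" for seg in segments(path)):
                    forwarding[pair] = path
                    break
        return forwarding

    print(select_forwarding(candidate_paths, network_state))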
Other features, functions, and aspects of the invention will be evident from the Drawings and/or the Detailed Description of the Invention that follow.
The invention will be more fully understood with reference to the following Detailed Description of the Invention in conjunction with the drawings of which:
The disclosures of U.S. patent application Ser. No. 13/651,213 filed Oct. 12, 2012 entitled DATA CENTER NETWORK ARCHITECTURE, U.S. patent application Ser. No. 13/651,224 filed Oct. 12, 2012 entitled CONTROL AND PROVISIONING IN A DATA CENTER NETWORK WITH AT LEAST ONE CENTRAL CONTROLLER, U.S. patent application Ser. No. 13/651,255 filed Oct. 12, 2012 entitled HIERARCHY OF CONTROL IN A DATA CENTER NETWORK, U.S. Provisional Patent Application No. 61/554,107 filed Nov. 1, 2011 entitled DATA CENTER NETWORK SWITCHING, U.S. patent application Ser. No. 13/528,501 filed Jun. 20, 2012 entitled OPTICAL ARCHITECTURE AND CHANNEL PLAN EMPLOYING MULTI-FIBER CONFIGURATIONS FOR DATA CENTER NETWORK SWITCHING, and U.S. patent application Ser. No. 13/528,211 filed Jun. 20, 2012 entitled OPTICAL JUNCTION NODES FOR USE IN DATA CENTER NETWORKS, are incorporated herein by reference in their entirety.
Systems and methods of affinity modeling in data center networks are disclosed that allow bandwidth to be efficiently allocated within the data center networks, while reducing the physical interconnectivity requirements of the data center networks. Such systems and methods of affinity modeling in data center networks further allow computing resources within the data center networks to be controlled and provisioned based at least in part on the network topology and an application component topology. Such control and provisioning of computing resources includes determining a combined affinity-network topology for a data center network, and controlling one or more optical nodes and zero, one, or more optical junction nodes to implement the affinity-network topology within the data center network, thereby providing an enhanced level of application program performance.
Each of the co-resident controllers (C2) associated with the optical ring networks A, B, C, and D is communicably coupled to a respective one of the central controllers (C3) 108, 110, 112. For example, the co-resident controllers (C2) associated with the optical ring network A are each communicably coupled to the central controller (C3) 108, and the co-resident controllers (C2) associated with the optical ring network B are each communicably coupled to the central controller (C3) 110. Further, the co-resident controllers (C2) associated with the optical ring network C are each communicably coupled to the central controller (C3) 112, and, likewise, the co-resident controllers (C2) associated with the optical ring network D are each communicably coupled to the central controller (C3) 112. Each co-resident controller (C2) is operative to transmit one or more messages to the respective central controller (C3) communicably coupled thereto. Moreover, each of the central controllers (C3) 108, 110, 112 is operative to receive and process the messages sent to it by the co-resident controllers (C2), and to control the respective co-resident controllers (C2). As shown in
As further shown in
The affinity modeling component 102 (see
With reference to
Using at least the information pertaining to the network topology and the affinity topology, the affinity-network topology calculator 208 is operative to form or otherwise obtain a combined affinity-network topology that takes into account both the network topology and the affinity topology. Such a combined affinity-network topology is defined herein as an overall topology that can be obtained by effectively stitching together or overlaying the network topology and the affinity topology. For example, the affinity-network topology calculator 208 may stitch together the network topology and the affinity topology by binding affinity elements to their counterparts in the network topology, yielding one or more logical links between the affinity groups/elements and the physical and/or virtualized elements within the data center network 100 (see
Using at least the affinity-network topology, the affinity-network topology score calculator 210 is operative to determine an affinity group score for each affinity group, an affinity link score for each affinity link, and a network-wide affinity score for the overall network. The affinity-network topology viewer 212 is operative to graphically display the affinity-network topology, and to display a set of affinity group scores for each affinity group, a set of affinity link scores for each affinity link, and the network-wide affinity score. Based at least on the affinity group scores, the affinity link scores, and/or the network-wide affinity score, the user can specify an expected level of performance of the application running within the network, as well as determine the actual level of application performance. Moreover, by perturbing and/or modifying the network topology and/or the affinity topology, and obtaining new affinity group scores for each affinity group, new affinity link scores for each affinity link, and a new network-wide affinity score for the network, the user can use the disclosed systems and methods to determine, based at least on the affinity group scores, the affinity link scores, and/or the network-wide affinity score, the network topology and/or the affinity topology that can provide a desired level of application performance.
The operation of the affinity modeling component 102 of
As shown in
The first VM 318 is associated with a virtual switch (vSwitch_host1) 330, a first web server 334 for the first application (Appl1.W1), and a firewall 332 for the second application (Appl2.FW1). The second VM 320 is associated with a virtual switch (vSwitch_host2) 336, a first application server 338 for the first application (Appl1.A1), and a second web server 340 for the first application (Appl1.W2). The third VM 328 is associated with a virtual switch (vSwitch_host1) 342, a web server 344 for the second application (Appl2.W1), and a second application server 346 for the first application (Appl1.A2). A virtual switch (“vSwitch”) is defined herein as a software entity that provides networking functionality to a group of VMs and/or other vSwitches, and is substantially equivalent to the functionality provided by a physical switch to a group of physical machines and/or other physical switches. It is noted that different implementations of vSwitches can have varying levels of functionality, e.g., some implementations of a vSwitch may not implement a spanning tree protocol, and/or may not allow vSwitch-to-vSwitch connections. Similarly, while some implementations of a vSwitch are on top of a single hypervisor and are bound to VMs on the same hypervisor, other implementations include a distributed vSwitch that can be spread across multiple hypervisors, and/or can be bound to VMs that exist on different hypervisors.
With reference to the first VM 318, the firewall 332 and the first web server 334 are each typically communicably coupled to the virtual switch 330 through a virtual network interface card (“vNIC”). Similarly, with reference to the second VM 320, the first application server 338 and the second web server 340 are each typically communicably coupled to the virtual switch 336 through a vNIC; and, with reference to the third VM 328, the web server 344 and the second application server 346 are each typically communicably coupled to the virtual switch 342 through a vNIC. A vNIC is defined herein as a software entity that provides functionality to a virtual machine (VM), and is substantially equivalent to the functionality provided by a physical network interface card (NIC) to a physical machine.
As further shown in
As shown in
With further reference to the first application, App1 402, it is noted that, within an exemplary affinity descriptor modeling framework (as further described below), one or more affinity requirements can be established for (1) an affinity link 412 coupling the firewall/load balancer 302 to the affinity group 406, (2) an affinity link 414 coupling the affinity group 406 to the affinity group 408, and (3) an affinity link 416 coupling the affinity group 408 to the affinity group 410. With further reference to the second application, App2 404, one or more affinity requirements can also be established for (1) an affinity link 418 coupling the firewall 332 to the web server 344, and (2) an affinity link 420 coupling the web server 344 to the data store 326. In addition, one or more affinity requirements can be established for an affinity link 422 coupling the affinity group 403 to the affinity group 405. Such affinity requirements can include (1) communication-related affinity requirements relating to bandwidth, switch hops, layer-1 hops, latency, multicast dispersion, oversubscription, underlying network state, etc., (2) reliability-related affinity requirements relating to layer-2 switch failures, layer-3 router failures, link failures, single points of failure, etc., (3) security-related affinity requirements relating to shared physical machines, shared switches, isolation, communication path interconnection, etc., or any other suitable affinity requirements. As employed herein, the terms “layer-1”, “layer-2”, and “layer-3” correspond to the physical layer, the data link layer, and the network layer, respectively, of the Open System Interconnection (OSI) model.
For example, with regard to the first application, App1 402, the affinity link 412 coupling the firewall/load balancer 302 to the affinity group 406 can have a reliability-related affinity requirement relating to single points of failure (“#SPoF=sensitive”), the affinity link 414 coupling the affinity group 406 to the affinity group 408 can have a communication-related affinity requirement relating to hops (“Hops=sensitive”) and a reliability-related affinity requirement relating to single points of failure (“#SPoF=sensitive”), and the affinity link 416 coupling the affinity group 408 to the affinity group 410 can have two communication-related affinity requirements relating to bandwidth (“BW=sensitive”) and hops (“Hops=bounded(1)”), and a reliability-related affinity requirement relating to single points of failure (“#SPoF=sensitive”). With regard to the second application, App2 404, the affinity link 418 coupling the firewall 332 to the web server 344 can have a communication-related affinity requirement relating to hops, (“Hops=sensitive”) and a reliability-related affinity requirement relating to single points of failure (“#SPoF=sensitive”), and the affinity link 420 coupling the web server 344 to the data store 326 can have two communication-related affinity requirements relating to bandwidth (“BW=sensitive”) and hops (“Hops=sensitive”), and a reliability-related affinity requirement relating to single points of failure (“#SPoF=sensitive”). In addition, the affinity link 422 coupling the affinity group 403 to the affinity group 405 can have a security-related affinity requirement relating to the number of shared links between the respective affinity groups 403, 405.
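The affinity requirements quoted above lend themselves to a simple attribute encoding. The sketch below is one assumed representation (not the disclosed format) of “sensitive” and “bounded(n)” attributes attached to affinity links:

    # Illustrative encoding of the affinity requirements quoted above.
    affinity_links = {
        ("FW/LB", "W*"):  {"#SPoF": ("sensitive", None)},
        ("W*", "A*"):     {"Hops": ("sensitive", None), "#SPoF": ("sensitive", None)},
        ("A*", "D*"):     {"BW": ("sensitive", None),
                           "Hops": ("bounded", 1),
                           "#SPoF": ("sensitive", None)},
        ("Appl2.FW1", "Appl2.W1"): {"Hops": ("sensitive", None),
                                    "#SPoF": ("sensitive", None)},
    }

    def is_satisfied(requirement, measured):
        """A bounded requirement is met when the measured value does not exceed
        the bound; a sensitive requirement is scored rather than pass/fail."""
        kind, bound = requirement
        if kind == "bounded":
            return measured <= bound
        return True   # "sensitive" contributes to a score instead

    print(is_satisfied(affinity_links[("A*", "D*")]["Hops"], measured=2))  # False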
With reference to
With further reference to
With regard to the affinity link 420 (see
As described above, the representative central controller 214 (see
As further shown in
In addition, each of the co-resident controllers (C2) associated with the respective optical nodes 710.1-710.n (see
It is noted that a logical break may be established on a supervisor channel, and/or a flooding break may be established on one or more outer rings of an optical ring network, to prevent the creation of a so-called “bridge loop” in the layer-2 broadcast domain. For example, an optical node can place such a logical break on the supervisor channel 734 (see
For example, an optical node may place a logical break on the supervisor channel, and/or a flooding break on one or more of the outer rings of an optical ring network, by filtering network traffic in both directions on the eastbound uplink ports of the optical node. Specifically, when the optical node places the logical break on the supervisor channel, the optical node can filter the network traffic on its eastbound uplink ports to prohibit the propagation of all unicast, broadcast, and multicast data packets or frames except for a specified multicast data packet/frame (referred to herein as the “beacon frame”), which can be permitted to traverse the logical break to enable the network to determine whether or not the supervisor channel is faulty. Moreover, when the optical node places the flooding break on the outer rings, the optical node can filter the network traffic on its eastbound uplink ports to prohibit the flooding of all multi-destination data packets or frames, while permitting unicast data packets/frames having known destinations to traverse the flooding break. Such multi-destination data packets or frames are defined herein as broadcast data packets/frames, multicast data packets/frames, and unicast data packets/frames having unknown destinations. As a result, following the placement of such a flooding break, an optical node can still transmit unicast data packets/frames having known destinations in either direction around an optical ring network, and have the unicast data packets/frames successfully reach their respective destinations.
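The filtering rules just described can be restated compactly; the frame field names in this sketch are assumptions for illustration:

    # Sketch of the eastbound-uplink filtering rules described above.
    def permit_on_supervisor_break(frame):
        """At a logical break on the supervisor channel, only the beacon
        multicast frame may pass; all other unicast, broadcast, and multicast
        frames are dropped."""
        return frame.get("type") == "beacon"

    def permit_on_flooding_break(frame, known_destinations):
        """At a flooding break on the outer rings, unicast frames with known
        destinations pass; multi-destination frames (broadcast, multicast,
        and unknown-destination unicast) are dropped."""
        return (frame.get("cast") == "unicast"
                and frame.get("dst") in known_destinations)

    known = {"00:aa:bb:cc:dd:01", "00:aa:bb:cc:dd:02"}
    print(permit_on_flooding_break({"cast": "unicast", "dst": "00:aa:bb:cc:dd:01"}, known))  # True
    print(permit_on_supervisor_break({"type": "beacon"}))                                    # True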
The data center network 700 (see
It is noted that a data center network architecture, such as one that includes the data center network 700, may include zero, one, or more optical junction nodes, such as the optical junction node 712. The optical nodes 710.1-710.n deployed on the optical ring network 702 can be connected to neighboring nodes through uplink ports, while the remaining ports on the optical nodes 710.1-710.n can be used as access links for interconnection to ports on other optical nodes, servers, and/or any other suitable network equipment.
The network topology 300 (see
As described above, within an exemplary affinity descriptor modeling framework, one or more affinity requirements can be established for each of the affinity links 412, 414, 416, 418, 420, 422 within the affinity topology 400 (see
As employed herein, the affinity descriptor modeling framework can focus on various types of affinity requirements, including, but not limited to, communication-related affinity requirements, reliability-related affinity requirements, and security-related affinity requirements. Such affinity requirements generally fall into two categories, which relate to keeping certain network/affinity elements (e.g., VMs) together, and keeping certain network/affinity elements (e.g., VMs) apart. The affinity descriptor modeling framework described herein seeks to capture aspects relating to the network topology (e.g., the network topology 300; see
As further described above, an affinity group is defined herein as a collection of servers or virtual machines (VMs), a cluster (e.g., a set of servers that are load balanced and provide high availability), and/or data center resident applications that require high interconnectivity bandwidth, low latency, and/or multicast or broadcast services. Such an affinity group can correspond to an arbitrary set of other affinity groups. The base case is a singleton affinity group, or an affinity element (e.g., a VM, a virtual appliance, etc.) that cannot be broken down further into other affinity elements/groups. Each of the affinity requirements employed herein, such as the communication-related affinity requirements, the reliability-related affinity requirements, and the security-related affinity requirements, can be expressed with respect to pairs of affinity elements/groups within the affinity topology (e.g., the affinity topology 400; see
It is further noted that groupings of affinity elements/groups within the affinity topology (e.g., the affinity topology 400; see
Table I below is an affinity group relational table that illustrates exemplary “inter” and “intra” attributes of affinity links that can couple pairs of affinity elements/groups, in which “AG1”, “AG2”, “AG3”, and “AG4” each correspond to an exemplary affinity group (AG) within an exemplary affinity topology.
TABLE I
AG1
AG2
AG3
AG4
AG1
X
O
AG2
O
X
AG3
X
AG4
O
X
With reference to Table I above, cells at table locations (AGk, AGm), in which “k” is not equal to “m”, illustrate communication, reliability, and security-related affinity requirements between an ordered pair of affinity groups AGk, AGm. Further, cells at table locations (AGk, AGk) along the main diagonal of Table I illustrate communication, reliability, and security-related affinity requirements within sub-component affinity elements/groups that make up the AGk.
With regard to the inter-AG cells (AGk, AGm) in Table I above that contain the symbol entry, “O”, the affinity requirements generally apply between all sub-components of AGk, and all sub-components of AGm. An entry in a cell at a table location (AGk, AGm) implies the presence of an affinity link between the affinity groups AGk and AGm. Table I above therefore provides a representation of the exemplary affinity topology in table form. Further, an empty cell in Table I above implies no such affinity requirements between the affinity groups AGk, AGm. With regard to the intra-AG cells (AGk, AGk) in Table I above that contain the symbol entry, “X”, the affinity requirements can generally refer to another such relational table for all sub-components (of lower level) of AGk, as further described below.
For example, there can be three generic intra-AG table types, namely, (1) clustered, implying all-to-all communication of the same form and magnitude, (2) disassociated, implying no communication-related affinity requirements within any two sub-components of the AG, and (3) custom, implying that the user specifies the intra-AG relational table. It is noted that the above intra-AG table types provide templates for intra-AG affinity group relational tables that can apply to any communication, security, or reliability-related affinity requirements. In general, a clustered relational table denotes all-to-all affinity requirements of the same form, a disassociated relational table denotes no intra-AG requirements, and a custom relational table denotes a user specified table.
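The three intra-AG template types can be illustrated with a small generator; the “magnitude” argument below is an assumed placeholder for whatever affinity requirement the table carries:

    # Sketch: generate an intra-affinity-group relational table from a template type.
    def intra_ag_table(elements, template, magnitude="M", custom=None):
        """clustered: all-to-all entries of the same magnitude;
        disassociated: an empty table; custom: a user-supplied table."""
        if template == "clustered":
            return {(a, b): magnitude for a in elements for b in elements if a != b}
        if template == "disassociated":
            return {}
        if template == "custom":
            return dict(custom or {})
        raise ValueError("unknown template type")

    print(intra_ag_table(["W1", "W2", "W3"], "disassociated"))        # {} (cf. Table III)
    print(intra_ag_table(["A1", "A2"], "clustered", magnitude="H"))   # cf. Table IV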
More specifically, the communication-related affinity requirements can relate to the following aspects: (1) the bandwidth from AGk to AGm {e.g., characterized as low, medium, high}; (2) the hops from AGk to AGm {e.g., characterized as insensitive, sensitive, bounded}; (3) the latency from AGk to AGm {e.g., characterized as insensitive, sensitive, bounded}; (4) the multicast dispersion from AGk to AGm {e.g., characterized as insensitive, sensitive, bounded}; (5) the oversubscription along AGk to AGm communication paths {e.g., characterized as insensitive, sensitive, bounded}; and (6) the underlying network state for AGk {e.g., characterized as insensitive, sensitive, bounded}.
Further, the reliability-related affinity requirements can relate to the following aspects: (1) the number of physical machine failures before AGk to AGm communication path disconnection {e.g., characterized as insensitive, sensitive, bounded}; (2) the number of layer-2 switch failures before AGk to AGm communication path disconnection {e.g., characterized as insensitive, sensitive, bounded}; (3) the number of layer-3 router failures before AGk to AGm communication path disconnection {e.g., characterized as insensitive, sensitive, bounded}; (4) the number of link failures before AGk to AGm communication path disconnection {e.g., characterized as insensitive, sensitive, bounded}; and (5) the total number of single points of failure (SPoF) before AGk to AGm communication path disconnection {e.g., characterized as insensitive, sensitive, bounded}.
Moreover, the security-related affinity requirements can relate to the following aspects: (1) the number of shared physical machines between AGk and AGm {e.g., characterized as insensitive, sensitive, bounded}; (2) the number of shared switches between AGk and AGm {e.g., characterized as insensitive, sensitive, bounded}; (3) the number of shared links between AGk and AGm {e.g., characterized as insensitive, sensitive, bounded}; (4) the isolation of AGk {e.g., characterized as insensitive, sensitive}; and (5) the communication path intersection of AGk with AGm {e.g., characterized as insensitive, sensitive, bounded}.
The use of affinity group relational tables, such as Table I above, is further described below with reference to the following illustrative example, as well as
Table II below is an exemplary affinity group relational table that illustrates the “inter” and “intra” attributes of affinity links that couple pairs of the affinity groups W*, A*, D*, LB*, FW* within the tiered application 900 (see
TABLE II

       W*     A*     D*     LB*    FW*
W*     X      M             M
A*     M      X      L
D*            H
LB*    M
FW*                         M
With reference to Table II above, the inter-AG cells at table locations (W*, A*), (W*, LB*), (A*, W*), (A*, D*), (LB*, W*), and (FW*, LB*) illustrate communication, reliability, and/or security-related affinity requirements (e.g., “L”=low, “M”=medium, “H”=high) between the corresponding pairs of affinity groups within the tiered application 900 (see
Table III below is an exemplary relational table for the intra-AG cell at the table location (W*, W*) in Table II above.
TABLE III

       W1     W2     W3
W1
W2
W3
With reference to Table III above, the relational table for the intra-AG cell at the table location (W*, W*) in Table II above is of the disassociated type, implying no communication-related affinity requirements for the affinity elements W1, W2, W3 within the affinity group W*. The cells at all table locations in Table III above are therefore empty.
Table IV below is an exemplary relational table for the intra-AG cell at the table location (A*, A*) in Table II above.
TABLE IV

       A1     A2
A1            H
A2     H
With reference to Table IV above, the type of the relational table for the intra-AG cell at the table location (A*, A*) in Table II is clustered, implying all-to-all communication of the same magnitude (e.g., H=high) for the affinity elements A1, A2 within the affinity group A*.
It is noted that multiple affinity groups can have affinity requirements with respect to the same affinity group, AGk (referred to herein as affinity group reuse/application affinity descriptor reuse). In this case, the user typically does not have to re-specify the affinity group AGk. The affinity group AGk can therefore be made to exist in multiple relational tables, simultaneously. Further, multiple affinity groups can have affinity requirements with respect to different parts of the same affinity group AGk. In this case, different versions of the affinity group AGk can be specified by the user for each relational table (this generally applies if the affinity group AGk can be implemented as a generic “black box”).
In addition, an affinity group may be formed as an amalgam of a set of non-mutually exclusive affinity groups. For example, affinity groups can have pre-defined intra-group affinity descriptions that may conflict with one another, particularly, if one or more of the affinity groups AGk are implemented as generic black boxes. Accordingly, it may be necessary to define a set of rules for building a disambiguated affinity element-to-affinity element relational table. The process of defining a set of rules for building such a disambiguated relational table is referred to herein as “affinity group disambiguation”.
The use of such a disambiguated affinity element-to-affinity element relational table, such as the exemplary disambiguated affinity element-to-affinity element relational table, Table V, is described below with reference to the following illustrative example, as well as
TABLE V

       A1     A2     A3     A4
A1            L      H      H
A2     L             L      L
A3     H      L             H
A4     H      L      H
In this example, an affinity group A** contains two affinity groups A*1, A*2 (see
Table VI below is an exemplary affinity group relational table that illustrates the “inter” and “intra” attributes of affinity links that couple pairs of the affinity groups A*1, A*2 within the affinity group A** (see
TABLE VI

       A*1    A*2
A*1    X      L
A*2    L      X
With reference to Table VI above, the inter-AG cells at table locations (A*2, A*1), (A*1, A*2) illustrate communication, reliability, and/or security-related affinity requirements (e.g., “L”=low) between the corresponding pairs of the affinity groups A*1, A*2. Further, the intra-AG cells at table locations (A*1, A*1) and (A*2, A*2), containing the symbol entry, “X”, along the main diagonal of Table VI, illustrate communication, reliability, and/or security-related affinity requirements within the affinity elements that make up the corresponding affinity groups A*1, A*2.
Table VII below is an exemplary relational table for the intra-AG cell at the table location (A*1, A*1) in Table VI above.
TABLE VII

       A1     A2     A3
A1                   M
A2
A3
With reference to Table VII above, the communication, reliability, and/or security-related affinity requirements between the pair of the affinity elements A1, A3 within the affinity group A*1 is designated as “M=medium”.
Table VIII below is an exemplary relational table for the intra-AG cell at the table location (A*2, A*2) in Table VI above.
TABLE VIII

       A1     A3     A4
A1            H
A3
A4
With reference to Table VIII above, the communication, reliability, and/or security-related affinity requirements between the pair of affinity elements A1, A3 within the affinity group A*2 is designated as “H=high”. Accordingly, as indicated in Tables VII and VIII above, there is ambiguity between the two affinity groups A*1, A*2 within the affinity group A** (see
To avoid such ambiguity between the affinity groups A*1, A*2 (see
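Table V above is consistent with resolving such conflicts by keeping the most stringent of the overlapping requirements. The sketch below encodes that rule as one plausible disambiguation policy; the rule itself is an assumption for illustration rather than the mandated behavior:

    # Sketch: merge overlapping intra-group tables, keeping the most stringent
    # requirement where two groups disagree (consistent with Table V, where the
    # A1-A3 pair resolves to "H").
    RANK = {"L": 1, "M": 2, "H": 3}

    def disambiguate(*tables):
        merged = {}
        for table in tables:
            for pair, level in table.items():
                current = merged.get(pair)
                if current is None or RANK[level] > RANK[current]:
                    merged[pair] = level
        return merged

    intra_a1 = {("A1", "A3"): "M", ("A3", "A1"): "M"}   # from Table VII
    intra_a2 = {("A1", "A3"): "H", ("A3", "A1"): "H"}   # from Table VIII
    print(disambiguate(intra_a1, intra_a2))             # {('A1', 'A3'): 'H', ('A3', 'A1'): 'H'}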
As employed herein, the term “affinity ports” is used to describe the interfaces of an affinity group that are external to other affinity groups. For example, such affinity ports can be a subset of the ports of affinity elements/groups within an overall affinity group. Moreover, affinity groups can include affinity ports that represent interfaces to other affinity groups, and at least some of the affinity requirements can represent communications requirements between the affinity ports of the respective affinity groups.
As described above, using at least the affinity-network topology, the affinity-network topology score calculator 210 (see
For example, if the information pertaining to an affinity link coupling an exemplary pair of affinity groups includes the following list of user-defined affinity requirements: (a) Bandwidth=medium; (b) Hops=sensitive; (c) Total single points of failure (SPoF)=bounded≦0; and (d) Shared links=sensitive, and further includes the following list of actual, computed affinity attributes: (a) Bandwidth=1×10 Gbps; (b) Hops=6; (c) Total SPoF=1; and (d) Shared links=2, then such affinity scoring can be expressed as follows: (a) the bandwidth score is 1; (b) the hops score is 0.5 (e.g., the affinity ports may be physically separated by just 2 hops); (c) the total single points of failure (SPoF) score is 0; and (d) the shared links score is 0.5. As a result, the total affinity score for this example can be calculated as follows:
Total Affinity Link Score=1+0.5+0+0.5=2, out of a possible total score of 4.
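The same arithmetic can be restated as a short sketch; the per-attribute scores are taken directly from the example above, and the simple summation is an assumed stand-in for the actual scoring rules:

    # Sketch of the affinity link scoring example above: each attribute yields a
    # score in [0, 1] and the attribute scores are summed into the link score.
    attribute_scores = {
        "bandwidth": 1.0,     # requirement "medium" fully met by 1 x 10 Gbps
        "hops": 0.5,          # sensitive; 6 hops versus a possible 2-hop placement
        "total_spof": 0.0,    # bounded <= 0, but 1 single point of failure observed
        "shared_links": 0.5,  # sensitive; 2 shared links
    }

    total = sum(attribute_scores.values())
    print(f"Total affinity link score: {total} out of {len(attribute_scores)}")
    # Total affinity link score: 2.0 out of 4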
As described above, in an exemplary affinity descriptor modeling framework, application affinity descriptors can be defined with respect to user-defined descriptions of desired application intra-working requirements (generally qualitative), an analyzed view of the actual application intra-working requirements (generally quantitative), and an analyzed view of application affinities, given removal of layer-2/layer-3 constraints and/or slight perturbations of the physical and/or affinity topology constraints (generally both quantitative and qualitative).
With regard to such quantitative analysis, it can be assumed that an affinity link between affinity groups AGk and AGm generally implies an all-to-all affinity ports-to-affinity ports relationship requirement from affinity elements in P(AGk) to affinity elements in P(AGm) (e.g., “P(AGk)” represents the affinity ports of AGk, and “P(AGm)” represents the affinity ports of AGm). By default, the affinity ports of an affinity group are equal to the affinity ports of its sub-component affinity elements/affinity groups. It is further assumed that such affinity groups are disambiguated before the application affinity descriptors are determined.
Moreover, for each affinity link, the affinity-network topology viewer 212 (see
It is noted that certain assumptions can also be made with regard to affinity links, such as an affinity link coupling the affinity groups AGk, AGm. Such affinity links can be assumed to be ordered, e.g., the affinity link coupling the affinity groups AGk, AGm can be different from an affinity link coupling affinity groups AGm, AGk. It is also assumed that the affinity groups AGk, AGm can both contain one or more affinity elements (e.g., one or more VMs), and that the affinity groups AGk, AGm can have one or more common elements, implying an aggregate of all-to-all relationships between affinity ports (i,j), ∀i∈P(AGk), j∈P(AGm).
With reference to the quantitative analysis assumptions described above, the hop count, the latency, the multicast dispersion, the bandwidth, and the oversubscription between the exemplary affinity groups AGk, AGm will be discussed below, as well as the underlying network state for the affinity group AGk, the total number of failures before disconnection between the affinity groups AGk, AGm, the total number of single points of failure (SPoF) between the affinity groups AGk, AGm, the total number of shared links and/or switches between the affinity groups AGk, AGm, the isolation of the affinity group AGk, and the communication path intersection of the affinity groups AGk, AGm.
If the virtual machines VMi and VMj (see
It is noted that the headline value, which is the worst case bound on a relationship between two affinity port members in P(AGk) and P(AGm), can correspond to the maximum or worst case hop count between any two affinity ports in the affinity groups AGk and AGm, respectively, and/or any other suitable headline value. In addition, the set of descriptive values can include (1) a histogram of the hop counts between any two affinity ports in the affinity groups AGk and AGm, respectively, (2) the average hop count over (a) all pairs of affinity ports in the affinity groups AGk and AGm, and/or (b) the maximum or worst case hop count between any two affinity ports in the affinity groups AGk and AGm, respectively, and/or any other suitable descriptive value.
Moreover, the hop count descriptor (e.g., the hop count requirement between affinity groups/elements) can be specified in a temporal or conditional manner. For example, the hop count requirement between the affinity groups AGk, AGm may only exist for a specific amount of time during the day. Further, the hop count requirement between the affinity groups AGk, AGm may only exist if other conditions are met (e.g., the presence of another affinity group, AGp, or the presence of another affinity descriptor between the affinity groups AGk, AGm).
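A minimal sketch of the headline and descriptive hop-count values discussed above; the breadth-first search over a small assumed adjacency map stands in for the actual network model:

    from collections import deque

    # Sketch: worst-case (headline) and average (descriptive) hop counts between
    # the affinity ports of two affinity groups over a simple adjacency map.
    def hop_count(adj, src, dst):
        """Breadth-first search hop count; returns None if unreachable."""
        seen, frontier = {src}, deque([(src, 0)])
        while frontier:
            node, hops = frontier.popleft()
            if node == dst:
                return hops
            for nxt in adj.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, hops + 1))
        return None

    def hop_count_descriptor(adj, ports_k, ports_m):
        counts = [hop_count(adj, i, j) for i in ports_k for j in ports_m]
        counts = [c for c in counts if c is not None]
        return {"headline_worst_case": max(counts),
                "average": sum(counts) / len(counts)}

    adj = {"W1": ["sw1"], "sw1": ["W1", "sw2"], "sw2": ["sw1", "A1", "A2"],
           "A1": ["sw2"], "A2": ["sw2"]}
    print(hop_count_descriptor(adj, ["W1"], ["A1", "A2"]))  # worst case 3, average 3.0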
The latency between a pair of affinity elements/groups can be determined with reference to the hop count between that pair of affinity elements/affinity groups, multiplied by the forwarding latency. Further, if statistics such as link utilization, queue delay, etc., are available for network elements traversed by communication paths between the affinity groups AGk and AGm, then a more accurate determination of latency can be made as a function of such statistics. The latency descriptor (e.g., the latency requirement between affinity groups/elements) can be specified in a temporal or conditional manner.
It is noted that, with regard to latency, the headline value can correspond to the maximum or worst case latency between any two affinity ports in the affinity groups AGk and AGm, respectively, and/or any other suitable headline value. In addition, the set of descriptive values can include (1) a histogram of the latency between any two affinity ports in the affinity groups AGk and AGm, respectively, (2) the average latency over (a) all pairs of affinity ports in the affinity groups AGk and AGm, and/or (b) the maximum or worst case latency between any two affinity ports in the affinity groups AGk and AGm, respectively, and/or any other suitable descriptive value.
In general, the latency between a pair of affinity groups/elements is equal to the latency accrued across all traversed networking devices (e.g., switches, routers), subject to layer-2/layer-3 constraints. Moreover, what is referred to herein as an “unconstrained latency” between the affinity groups AGk and AGm can be determined, taking into account the physical connectivity between the affinity groups AGk, AGm, but ignoring the layer-2/layer-3 constraints.
The multicast dispersion between a pair of affinity elements/groups can be determined with reference to the worst case hop count or latency, minus the best case hop count or latency between an affinity port in a first affinity group/element (i.e., multicast transmitter(s)) and all affinity ports in a second affinity group/element (i.e., multicast receiver(s)). The multicast dispersion descriptor (e.g., the multicast dispersion requirement between affinity groups/elements) can be specified in a temporal or conditional manner.
With reference to
In general, the multicast dispersion (the “dispersion”) from the affinity element, i, to each of the destination affinity elements, j, can be expressed as follows,

dispersion(i)=max_j HC(i,j)-min_j HC(i,j),  (1)

in which “HC(i,j)” corresponds to the hop count between the affinity element, i, and each affinity element, j. Similarly, if latency is used, then equation (1) above can be modified as follows,

dispersion(i)=max_j L(i,j)-min_j L(i,j),  (2)

in which “L(i,j)” corresponds to the latency between the affinity element, i, and each affinity element, j.
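Restated as code under the same definitions, with the per-receiver hop counts HC(i,j) and latencies L(i,j) supplied directly as assumed example values:

    # Sketch: multicast dispersion from a transmitter i to receivers j, per
    # equations (1) and (2) above: worst case minus best case hop count/latency.
    def dispersion(metric_by_receiver):
        values = list(metric_by_receiver.values())
        return max(values) - min(values)

    hop_counts = {"j1": 2, "j2": 5, "j3": 3}           # HC(i, j) for each receiver j
    latencies  = {"j1": 0.5, "j2": 1.25, "j3": 0.75}   # L(i, j), e.g. in milliseconds

    print(dispersion(hop_counts))   # 3 hops of dispersion
    print(dispersion(latencies))    # 0.75 (worst-case minus best-case latency)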
It is noted that a “constrained” variant of multicast dispersion can be determined, taking into account the layer-2/layer-3 constraints (e.g., VLANs, spanning tree, etc.). Further, an “unconstrained” variant of multicast dispersion can be determined, taking into account the physical connectivity, but ignoring the layer-2/layer-3 constraints.
Moreover, with regard to multicast dispersion, the headline value can correspond to the maximum multicast dispersion over all of the affinity elements included in the affinity group AGk (see
With reference to
Such a bandwidth descriptor can be expressed as, for example, “Minimum/maximum flow from affinity element 1=1 unit”, and “Maximum aggregate flow=5 units”, in which a bandwidth “unit” refers to a basic unit of bandwidth, e.g., 1 Gbps.
It is noted that a “constrained” variant of bandwidth can be determined, taking into account the layer-2/layer-3 constraints (e.g., VLANs, spanning tree, etc.). Further, an “unconstrained” variant of bandwidth can be determined, taking into account the physical connectivity, but ignoring the layer-2/layer-3 constraints.
Moreover, with regard to bandwidth, the headline value can correspond to (1) the minimum/maximum flow from any affinity element i within the affinity group AGk to any affinity element i within the affinity group AGm, (2) the maximum aggregate flow from the affinity group AGk to the affinity group AGm (e.g., computed using maximum-flow/minimum-cut techniques), (3) the minimum/maximum flow and/or the maximum aggregate flow subject to hop count bounds, and/or any other suitable headline value. In addition, the set of descriptive values can include (1) the bandwidth versus hops descriptor, (2) a histogram of bandwidth from any affinity element within the affinity group AGk to any affinity element within the affinity group AGm, (3) the average of communication path lengths from a particular affinity element within the affinity group AGk to any affinity element within the affinity group AGm, and/or any other suitable descriptive value.
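The maximum aggregate flow mentioned above can be computed with a standard maximum-flow/minimum-cut routine. This sketch uses the networkx library over a small assumed example graph, with a super source and super sink joining the affinity ports of AGk and AGm:

    import networkx as nx

    # Sketch: maximum aggregate flow from affinity ports P(AGk) to P(AGm),
    # computed as a max-flow/min-cut with a super source and super sink.
    G = nx.DiGraph()
    for u, v, cap in [("k1", "sw1", 10), ("k2", "sw1", 10), ("sw1", "sw2", 10),
                      ("sw2", "m1", 10), ("sw2", "m2", 10)]:
        G.add_edge(u, v, capacity=cap)       # capacities in bandwidth units (e.g., Gbps)

    ports_k, ports_m = ["k1", "k2"], ["m1", "m2"]
    for p in ports_k:
        G.add_edge("S", p, capacity=float("inf"))   # super source
    for p in ports_m:
        G.add_edge(p, "T", capacity=float("inf"))   # super sink

    max_aggregate_flow, _ = nx.maximum_flow(G, "S", "T")
    print(max_aggregate_flow)   # 10: bottlenecked by the sw1 -> sw2 link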
As employed herein, the term “oversubscription” between the exemplary affinity groups AGk, AGm refers to the worst case bottleneck affinity/network element (e.g., a link, a switch, a router) that lies along communication paths between all ports i that are a member of P(AGk), and all ports j that are a member of P(AGm), taking into account all application affinity descriptors that include communication paths with affinity/network elements that intersect the AGk to AGm affinity descriptor. For example, with reference to
It is noted that a “constrained” variant of oversubscription can be determined, taking into account the layer-2/layer-3 constraints (e.g., VLANs, spanning tree, etc.). Further, an “unconstrained” variant of oversubscription can be determined, taking into account the physical connectivity, but ignoring the layer-2/layer-3 constraints. Moreover, with regard to oversubscription, the headline value can correspond to the total size of all of the flows/communication paths going through the most oversubscribed link between P(AGk) and P(AGm), and/or any other suitable headline value. In addition, the set of descriptive values can include (1) a histogram of the size of the flows on each link between the affinity groups AGk, AGm, (2) a visual display of the most oversubscribed links, and/or any other suitable descriptive value. The oversubscription descriptor can be specified in a temporal or conditional manner.
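To make the bottleneck notion concrete, the sketch below tallies assumed flow demands onto the links of their communication paths and reports the most oversubscribed link; in practice the flow-to-path mapping would come from the forwarding state rather than a hand-written table:

    from collections import defaultdict

    # Sketch: find the most oversubscribed link given the communication paths of
    # all affinity descriptors whose paths intersect the AGk-to-AGm paths.
    flows = {                     # flow name -> (demand in bandwidth units, path)
        "AGk->AGm": (4, ["k1", "sw1", "sw2", "m1"]),
        "AGp->AGq": (8, ["p1", "sw1", "sw2", "q1"]),
    }
    link_capacity = {("sw1", "sw2"): 10, ("k1", "sw1"): 10, ("sw2", "m1"): 10,
                     ("p1", "sw1"): 10, ("sw2", "q1"): 10}

    load = defaultdict(float)
    for demand, path in flows.values():
        for link in zip(path, path[1:]):
            load[link] += demand

    worst_link = max(load, key=lambda l: load[l] / link_capacity[l])
    print(worst_link, load[worst_link], "units on a", link_capacity[worst_link], "unit link")
    # ('sw1', 'sw2') 12.0 units on a 10 unit link -> oversubscribed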
As employed herein, the underlying network state for the exemplary affinity group AGk constitutes a temporal measurement of the network state (e.g., link utilization, queue depths, etc.) of the network elements (e.g., switches, links) that intra-AGk links and outgoing inter-AGk links traverse. For example, one way of handling the underlying network state for the affinity group AGk is to focus on abrupt changes in the network state due to major events, such as vMotions (“vMotion” is a feature of virtualization software sold by VMware, Inc., Palo Alto, Calif., USA; the vMotion feature can be used to relocate a virtual machine (VM) from one server to another server), backups, error tickets, etc.
It is noted that a “constrained” variant of the underlying network state for an affinity group can be determined, taking into account the layer-2/layer-3 constraints (e.g., VLANs, spanning tree, etc.). Further, an “unconstrained” variant of the underlying network state for an affinity group can be determined, taking into account the physical connectivity, but ignoring the layer-2/layer-3 constraints. Moreover, with regard to the underlying network state for the affinity group AGk, the headline value can be, e.g., stable or perturbed (e.g., over some predetermined time window). In addition, the set of descriptive values can include (1) a list of major change events that occurred within the predetermined time window during which the perturbation occurred, (2) a correlation with specific events that involve affinity elements within the affinity group AGk, network links that the intra/outgoing links of the affinity group AGk traversed, etc., (3) details of which network and affinity attribute metrics changed, and/or any other suitable descriptive value. The underlying network state descriptor can be specified in a temporal or conditional manner.
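The following minimal sketch, which is not taken from the disclosure, illustrates one way the "stable"/"perturbed" headline value could be derived: it flags abrupt changes in sampled link utilization within a time window and lists event-log entries (e.g., a vMotion) that coincide with a perturbation. The jump threshold and the data layout are assumptions chosen for illustration.

```python
# Minimal sketch: classify the underlying network state for an affinity group as
# "stable" or "perturbed" over a time window, and correlate perturbations with
# major change events recorded in an event log. Threshold and inputs are assumed.
def classify_state(utilization_samples, events, jump_threshold=0.30):
    """utilization_samples: {link: [(t, utilization in [0,1]), ...]} for the links the
    group's intra/outgoing traffic traverses; events: [(t, description), ...]."""
    perturbations = []
    for link, samples in utilization_samples.items():
        for (t0, u0), (t1, u1) in zip(samples, samples[1:]):
            if abs(u1 - u0) >= jump_threshold:          # abrupt change on this link
                perturbations.append((link, t0, t1))
    headline = "perturbed" if perturbations else "stable"
    correlated = [desc for (t, desc) in events
                  if any(t0 <= t <= t1 for (_, t0, t1) in perturbations)]
    return headline, perturbations, correlated

samples = {("leaf1", "spine1"): [(0, 0.20), (1, 0.25), (2, 0.70)]}
events = [(2, "vMotion: web-vm3 relocated from server A to server B")]
print(classify_state(samples, events))
# -> ('perturbed', [(('leaf1', 'spine1'), 1, 2)], ['vMotion: web-vm3 relocated from server A to server B'])
```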
As employed herein, the term “total number of failures before affinity group AGk, AGm disconnection” refers to the total number of link and/or switch failures before the affinity ports P(AGk) become disconnected from the affinity ports P(AGm).
It is noted that a “constrained” variant of the total number of failures before affinity group disconnection can be determined, taking into account the layer-2/layer-3 constraints (e.g., VLANs, spanning tree, etc.). Further, an “unconstrained” variant of the total number of failures before affinity group disconnection can be determined, taking into account the physical connectivity, but ignoring the layer-2/layer-3 constraints. Moreover, with regard to the total number of failures before affinity group AGk, AGm disconnection, the headline value can be the number of link failures or switch failures before disconnection, and/or any other suitable headline value. In addition, the set of descriptive values can include (1) a list of bottleneck links, (2) a list of bottleneck switches, and/or any other suitable descriptive value. The total number of failures descriptor can be specified in a temporal or conditional manner.
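By way of illustration, the headline value for link failures before disconnection can be viewed as the minimum edge cut separating P(AGk) from P(AGm). The sketch below computes it with a minimum-cut call from the networkx library after collapsing each port set behind an uncuttable super node; the helper name, the super nodes, and the example topology are assumptions introduced here.

```python
# Illustrative sketch: the number of link failures before P(AGk) and P(AGm)
# disconnect equals the minimum edge cut separating the two port sets,
# computed here as a min cut over unit-capacity links.
import networkx as nx

def link_failures_before_disconnection(net, ports_k, ports_m):
    g = nx.DiGraph()
    for a, b in net.edges():
        g.add_edge(a, b, capacity=1)   # each physical link can fail exactly once
        g.add_edge(b, a, capacity=1)
    for p in ports_k:
        g.add_edge("AGk*", p)          # super edges carry no 'capacity' attribute,
    for p in ports_m:                  # so networkx treats them as uncuttable
        g.add_edge(p, "AGm*")
    cut_value, _ = nx.minimum_cut(g, "AGk*", "AGm*")
    return cut_value                   # switch failures could be counted similarly via node splitting

net = nx.Graph()
for leaf in ("leaf1", "leaf2"):
    for spine in ("spine1", "spine2"):
        net.add_edge(leaf, spine)
net.add_edge("vm1", "leaf1")
net.add_edge("vm2", "leaf2")
print(link_failures_before_disconnection(net, ["vm1"], ["vm2"]))  # -> 1 (each VM is single-homed)
```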
As employed herein, the term “total number of single points of failure (SPoF) between affinity groups AGk, AGm” refers to the total number of network links and/or switches whose failure can result in the disconnection of the affinity ports P(AGk) from the affinity ports P(AGm). For example, with reference to
It is noted that a “constrained” variant of the total number of single points of failure (SPoF) between affinity groups can be determined, taking into account the layer-2/layer-3 constraints (e.g., VLANs, spanning tree, etc.). Further, an “unconstrained” variant of the total number of single points of failure (SPoF) between affinity groups can be determined, taking into account the physical connectivity, but ignoring the layer-2/layer-3 constraints. Moreover, with regard to the total number of single points of failure (SPoF) between the exemplary affinity groups AGk, AGm, the headline value can correspond to the total number of single points of failure, and/or any other suitable headline value. In addition, the set of descriptive values can include a list of all affinity/network elements that make up the single points of failure, and/or any other suitable descriptive value. The total number of single points of failure descriptor can be specified in a temporal or conditional manner.
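The sketch below, offered only as an illustration, enumerates candidate single points of failure by removing one link or one switch at a time and re-testing reachability between the port sets; the helper names and the small example topology are assumptions rather than part of the disclosure.

```python
# Illustrative sketch: list the single points of failure (SPoF) between two
# affinity groups by failing one link or one switch at a time and testing
# whether every port in P(AGk) can still reach every port in P(AGm).
import itertools
import networkx as nx

def single_points_of_failure(net, ports_k, ports_m, switches):
    def connected(g):
        return all(g.has_node(a) and g.has_node(b) and nx.has_path(g, a, b)
                   for a, b in itertools.product(ports_k, ports_m))

    spofs = []
    for u, v in list(net.edges()):                      # candidate link failures
        g = net.copy()
        g.remove_edge(u, v)
        if not connected(g):
            spofs.append(("link", (u, v)))
    for s in switches:                                  # candidate switch failures
        g = net.copy()
        g.remove_node(s)
        if not connected(g):
            spofs.append(("switch", s))
    return spofs

net = nx.Graph()
net.add_edges_from([("vm1", "leaf1"), ("leaf1", "spine1"), ("leaf1", "spine2"),
                    ("spine1", "leaf2"), ("spine2", "leaf2"), ("leaf2", "vm2")])
print(single_points_of_failure(net, ["vm1"], ["vm2"], ["leaf1", "leaf2", "spine1", "spine2"]))
# -> the vm1-leaf1 and leaf2-vm2 links, plus switches leaf1 and leaf2
```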
As employed herein, the term “total number of shared links and/or switches between affinity groups AGk, AGm” refers to the affinity/network elements (e.g., links, switches, routers) that are shared among communication paths used by the exemplary affinity group AGk (e.g., the communication paths between P(AGki) and P(AGkj), where AGki and AGkj are sub-components of AGk), and the affinity/network elements (e.g., links, switches, routers) that are shared among communication paths used by the exemplary affinity group AGm (e.g., the communication paths between P(AGmi) and P(AGmj), where AGmi and AGmj are sub-components of AGm).
It is noted that a “constrained” variant of the total number of shared links and/or switches between affinity groups can be determined, taking into account the layer-2/layer-3 constraints (e.g., VLANs, spanning tree, etc.). Further, an “unconstrained” variant of the total number of shared links and/or switches between affinity groups can be determined, taking into account the physical connectivity, but ignoring the layer-2/layer-3 constraints. Moreover, with regard to the total number of shared links and/or switches between the affinity groups AGk, AGm, the headline value can correspond to (1) the total number of shared links between the affinity groups AGk, AGm, (2) the total number of shared switches between the affinity groups AGk, AGm, and/or any other suitable headline value. In addition, the set of descriptive values can include (1) a list of shared links, (2) a list of shared switches, and/or any other suitable descriptive value. The total number of shared links and/or switches descriptor can be specified in a temporal or conditional manner.
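As an illustrative sketch that assumes shortest-path routing stands in for the actual communication paths, the following collects the links and intermediate switches traversed by each affinity group's intra-group paths and intersects the two sets to obtain the shared-links and shared-switches values.

```python
# Illustrative sketch: count the links and switches that the communication paths
# of affinity group AGk share with those of affinity group AGm. Shortest paths
# are a stand-in for the paths of the computed affinity-network topology.
import itertools
import networkx as nx

def elements_used(net, ports):
    links, nodes = set(), set()
    for a, b in itertools.combinations(ports, 2):        # intra-group communication paths
        path = nx.shortest_path(net, a, b)
        nodes.update(path[1:-1])                          # intermediate switches
        links.update(tuple(sorted(edge)) for edge in zip(path, path[1:]))
    return links, nodes

def shared_elements(net, ports_k, ports_m):
    links_k, nodes_k = elements_used(net, ports_k)
    links_m, nodes_m = elements_used(net, ports_m)
    return links_k & links_m, nodes_k & nodes_m           # shared links, shared switches

net = nx.Graph()
net.add_edges_from([("vm1", "leaf1"), ("vm2", "leaf1"), ("leaf1", "spine1"),
                    ("spine1", "leaf2"), ("vm3", "leaf2"), ("vm4", "leaf2")])
print(shared_elements(net, ["vm1", "vm3"], ["vm2", "vm4"]))
# -> both groups share the leaf1-spine1 and spine1-leaf2 links and all three switches
```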
As employed herein, the term “isolation of affinity group AGk” refers to the network links and/or switches that are shared between the exemplary affinity group AGk and the rest of the network.
It is noted that a “constrained” variant of the isolation of an affinity group can be determined, taking into account the layer-2/layer-3 constraints (e.g., VLANs, spanning tree, etc.). Further, an “unconstrained” variant of the isolation of an affinity group can be determined, taking into account the physical connectivity, but ignoring the layer-2/layer-3 constraints. Moreover, with regard to the isolation of the affinity group AGk, the headline value can correspond to an indication of whether or not the affinity group AGk is isolated, and/or any other suitable headline value. In addition, the set of descriptive values can include a list of shared links/switches that the affinity group AGk shares with the rest of the network, and/or any other suitable descriptive value. The isolation descriptor can be specified in a temporal or conditional manner.
As employed herein, the term “path intersection of affinity groups AGk, AGm” refers to the intersection between the communication paths used within the exemplary affinity group AGk and the affinity elements within the exemplary affinity group AGm. For example, the affinity group AGm may describe a set of firewalls that communication paths between communicating affinity elements within the affinity group AGk must pass through.
It is noted that a “constrained” variant of the path intersection of affinity groups can be determined, taking into account the layer-2/layer-3 constraints (e.g., VLANs, spanning tree, etc.). Further, an “unconstrained” variant of the path intersection of affinity groups can be determined, taking into account the physical connectivity, but ignoring the layer-2/layer-3 constraints. Moreover, with regard to the path intersection of the affinity groups AGk, AGm, the headline value can correspond to an indication of whether or not each communication path within the affinity group AGk intersects at least one affinity element of the affinity group AGm, and/or any other suitable headline value. In addition, the set of descriptive values can include a list of communicating affinity elements within the affinity group AGk that do not pass through any affinity element within the affinity group AGm, and/or any other suitable descriptive value. The path intersection descriptor can be specified in a temporal or conditional manner.
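The following minimal sketch, again assuming shortest-path routing in place of the actual computed paths, evaluates the path-intersection headline value for a firewall-style affinity group AGm and returns the communicating pairs in AGk whose paths bypass AGm.

```python
# Illustrative sketch: check whether every communication path within affinity
# group AGk passes through at least one affinity element of AGm (e.g., a set of
# firewalls), and list the communicating pairs that bypass AGm.
import itertools
import networkx as nx

def path_intersection(net, ports_k, elements_m):
    waypoints = set(elements_m)
    offenders = []
    for a, b in itertools.combinations(ports_k, 2):
        path = nx.shortest_path(net, a, b)
        if not waypoints.intersection(path):
            offenders.append((a, b))
    # Headline value: True only if every intra-AGk path crosses an element of AGm.
    return len(offenders) == 0, offenders

net = nx.Graph()
net.add_edges_from([("web1", "leaf1"), ("db1", "leaf1"),
                    ("leaf1", "fw1"), ("fw1", "leaf2"), ("db2", "leaf2")])
print(path_intersection(net, ["web1", "db1", "db2"], ["fw1"]))
# -> (False, [('web1', 'db1')]); web1 and db1 communicate through leaf1 only, bypassing fw1
```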
Using at least the affinity-network topology, the affinity-network topology score calculator 210 (see
Affinity link scoring is described below with reference to the following illustrative example. In this example, “R1, . . . , RK” denotes the values of the sensitive/bounded affinity requirements, “A1, . . . , AK” denotes the computed values of the corresponding attributes of the affinity requirements, and “fi(Ri,Ai)” denotes the affinity score for attribute i. The total affinity score for an affinity link can be denoted as G[f1(R1,A1), f2(R2,A2), . . . , fK(RK,AK)], which can distinguish between “independent” attributes (e.g., the hop count and the number of shared switches before disconnection) and “dependent” attributes (e.g., the hop count, the bounded hop count, the latency, and the bandwidth).
Assuming that the total affinity score for an affinity link is linear and normalized with respect to fi∈[0,1], the affinity link scores can be added up, with appropriate weighting applied to the independent affinity requirements. Further, assuming all sensitive/bounded attributes are pair-wise independent, the total normalized affinity score for an affinity link can be expressed as follows,
G[f1(R1,A1),f2(R2,A2), . . . ,fK(RK,AK)]=α1f1(R1,A1)+α2f2(R2,A2)+ . . . +αKfK(RK,AK),  (2)
in which “αi” denotes a weighting value associated with the attribute i. It is noted that this example can be extended to non-pair-wise independent sensitive/bounded attributes by grouping dependent attributes into joint attributes for such affinity link scoring purposes.
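By way of illustration of equation (2), the sketch below computes a weighted, normalized affinity link score; the per-attribute scoring functions and the weights are placeholders chosen for this example, not values taken from the disclosure.

```python
# Sketch of the per-link scoring step of equation (2). The scorers f1..fK and
# the weights are illustrative placeholders.
def affinity_link_score(requirements, attributes, weights, scorers):
    """requirements R1..RK, attributes A1..AK, weights a1..aK (summing to 1),
    scorers f1..fK, each mapping (Ri, Ai) to a normalized score in [0, 1]."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(a * f(r, x) for a, f, r, x in zip(weights, scorers, requirements, attributes))

# Example attributes: hop count (bounded requirement) and bandwidth (sensitive requirement).
hop_score = lambda r, a: 1.0 if a <= r else 0.0     # bounded: either met or not
bw_score = lambda r, a: min(a / r, 1.0)             # sensitive: degrades gracefully
print(affinity_link_score(requirements=[3, 10.0],   # at most 3 hops, 10 units of bandwidth
                          attributes=[2, 7.5],      # achieved: 2 hops, 7.5 units
                          weights=[0.5, 0.5],
                          scorers=[hop_score, bw_score]))  # -> 0.875
```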
Using at least the affinity-network topology, the affinity-network topology score calculator 210 (see
p=Hp[G1( . . . ),G2( . . . ), . . . , GD( . . . )], (3)
in which “D” denotes the total number of outgoing affinity links from the affinity group. Alternatively, the affinity group score can be a function of all of the individual attributes on all of the outgoing links. For example, in this case, the total affinity group score, “p”, can be expressed as follows,
p=G*[{f1( . . . )},{f2( . . . )}, . . . ,{fD( . . . )}], (4)
in which “{fL( . . . )}” denotes the set of sensitive/bounded attribute scores on the affinity link L, and “G*[ . . . ]” is similar to “G[ . . . ]” (see equation (2) above) but takes into account duplicate affinity requirements across multiple affinity links.
Using at least the affinity-network topology, the affinity-network topology score calculator 210 (see
Q[H1( . . . ), . . . , HM( . . . )], (5)
in which “H” denotes a respective affinity group score, and “M” denotes the total number of affinity groups.
It is noted that the network-wide affinity score, Q[ . . . ] (see equation (5) above), can distinguish between independent and dependent affinity groups. For example, two affinity groups can be regarded as being independent if they have no common affinity elements.
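As a final illustrative sketch, the following composes the three scoring levels: link scores into a group score as in equation (3), and group scores into a network-wide score as in equation (5). The particular combiners (a minimum for the group score H and a weighted average for Q) are assumptions chosen here for illustration, since the disclosure leaves G, H, and Q general.

```python
# Illustrative composition of equations (3) and (5); the combiners are assumed.
def group_score(link_scores):
    # A conservative H: a group is only as healthy as its worst outgoing affinity link.
    return min(link_scores) if link_scores else 1.0

def network_score(group_scores, group_weights):
    # A Q that weights independent affinity groups, e.g., by business priority.
    total = sum(group_weights)
    return sum(w * s for w, s in zip(group_weights, group_scores)) / total

link_scores_per_group = {"web-tier": [0.875, 0.9], "db-tier": [0.6]}
groups = list(link_scores_per_group)
scores = [group_score(link_scores_per_group[g]) for g in groups]
print(dict(zip(groups, scores)))                        # {'web-tier': 0.875, 'db-tier': 0.6}
print(network_score(scores, group_weights=[1.0, 2.0]))  # -> (0.875 + 2*0.6) / 3, about 0.692
```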
Illustrative methods of operating the system 200 (see
An illustrative method of operating the network topology harvester 204 (see
An illustrative method of operating the affinity element harvester 202 (see
An illustrative method of operating the affinity topology calculator 206 (see
An illustrative method of operating the affinity-network topology calculator 208 (see
An illustrative method of operating the central controller 214 (see
An illustrative method of operating the affinity-network topology score calculator 210 (see
It is noted that the operations depicted and/or described herein are purely exemplary. Further, the operations can be used in any sequence, as appropriate, and/or can be partially used. With the above illustrative embodiments in mind, it should be understood that such illustrative embodiments can employ various computer-implemented operations involving data transferred or stored in computer systems. Such operations are those requiring physical manipulation of physical quantities. Typically, though not necessarily, such quantities can take the form of electrical, magnetic, and/or optical signals capable of being stored, transferred, combined, compared, and/or otherwise manipulated.
Further, any of the operations depicted and/or described herein that form part of the illustrative embodiments are useful machine operations. The illustrative embodiments can also relate to a device or an apparatus for performing such operations. The apparatus can be specially constructed for the required purpose, or can be a general-purpose computer selectively activated or configured by a computer program stored in the computer to perform the function of a particular machine. In particular, various general-purpose machines employing one or more processors coupled to one or more computer readable media can be used with computer programs written in accordance with the teachings disclosed herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
Instructions for implementing the network architectures disclosed herein can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data, which can thereafter be read by a computer system. Examples of such computer readable media include magnetic and solid state hard drives, read-only memory (ROM), random-access memory (RAM), Blu-ray™ disks, DVDs, CD-ROMs, CD-Rs, CD-RWs, magnetic tapes, and/or any other suitable optical or non-optical data storage device. The computer readable code can be stored in a single location, or stored in a distributed manner in a networked environment.
The foregoing description has been directed to particular illustrative embodiments of this disclosure. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their associated advantages. Moreover, the procedures, processes, components, and/or modules described herein may be implemented in hardware, software, embodied as a computer-readable medium having program instructions, firmware, or a combination thereof. For example, the functions described herein may be performed by at least one processor executing program instructions out of at least one memory or other storage device.
It will be appreciated by those skilled in the art that modifications to and variations of the above-described systems and methods may be made without departing from the inventive concepts disclosed herein. Accordingly, the disclosure should not be viewed as limited except as by the scope and spirit of the appended claims.