Techniques and systems for establishing and maintaining networks. The technique includes assigning a network device to an interregional redirector system and load balancer systems. The network device can be assigned based upon the regions or subregions of the network device. The technique includes the load balancer systems assigning the network device to network device management engines. The status of the network device management engines can be monitored to determine if one of the network device management engines has failed. In the event that a network device management engine has failed, the network device can be assigned to a different network device management engine.

Patent: 10326707
Priority: Mar 15, 2013
Filed: Jan 17, 2014
Issued: Jun 18, 2019
Expiry: Sep 20, 2034
Extension: 246 days
1. A method for building and maintaining a network, the method comprising:
operationally connecting an access point in a region that is connectable to one or more client devices in the region to an interregional redirector engine associated with a plurality of regions including the region;
receiving at the interregional redirector engine network device information of the access point, the network device information including geography information of the region and enterprise network information of the access point;
determining, by the interregional redirector engine, based on the network device information, a load balancer system uniquely associated with the region selectively from a plurality of load balancer systems that are uniquely associated with different regions and coupled to the interregional redirector engine associated with the plurality of regions;
assigning, by the interregional redirector engine, the access point to the load balancer system;
assigning, by the load balancer system, the access point to a regional network device management engine associated with the region based on the network device information, the regional network device management engine being determined selectively from a plurality of regional network device management engines that are associated with the region and coupled to different sets of one or more access points;
managing, by the load balancer system, a failure of the regional network device management engine in communication with the access point based on network device management engine failure information provided from the access point to the load balancer system without passing through the failed regional network device management engine;
managing, by the regional network device management engine, the access point in providing access to an enterprise network.
2. The method of claim 1, further comprising validating the access point.
3. The method of claim 1, further comprising:
receiving, by the load balancer system, network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines;
reassigning, by the load balancer system, the access point to the second network device management engine based on the network device management engine status information from the first and second network device management engines.
4. The method of claim 1, further comprising:
receiving, by the load balancer system, network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines;
determining the first network device management engine has failed based on the network device management engine status information;
reassigning, by the load balancer system, the access point to a second network device management engine of the plurality of network device management engines.
5. The method of claim 4, further comprising assigning, by the load balancer system, a second access point to the second network device management engine.
6. The method of claim 4, further comprising sending, from the load balancer system, a network device management engine status notification to an administration engine, the network device management engine status notification indicating a reason why the first network device management engine was determined as failed.
7. A system for building and maintaining a network, the system comprising:
a plurality of access points provided in a region and configured to provide access to an enterprise network to one or more client devices in the region;
a plurality of load balancer systems uniquely associated with different regions;
a plurality of regional network device management engines associated with the region and coupled to different sets of one or more of the access points;
an interregional redirector engine associated with a plurality of regions including the region, coupled to the plurality of load balancer systems and the access points, and configured to:
receive network device information from one of the access points, the network device information including geography information of the region and enterprise network information of said one of the access points;
determine, based on the network device information, a load balancer system uniquely associated with the region selectively from the plurality of load balancer systems;
assign said one of the access points to the load balancer system;
the load balancer system configured to assign said one of the access points to a regional network device management engine determined selectively from the plurality of regional network device management engines based on the network device information and manage a failure of the regional network device management engine in communication with said one of the access points based on network device management engine failure information provided from said one of the access points to the load balancer system without passing through the failed regional network device management engine, the regional network device management engine configured to manage the access points in providing access to the enterprise network.
8. The system of claim 7, wherein the interregional redirector engine is further configured to validate said one of the access points.
9. The system of claim 7, wherein the load balancer system is configured to:
receive network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines;
reassign said one of the access points to the second network device management engine based on the network device management engine status information from the first and second network device management engines.
10. The system of claim 7, wherein the load balancer system is configured to:
receive network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines;
determine that the first network device management engine has failed;
reassign said one of the access points to a second network device management engine of the plurality of network device management engines.
11. The system of claim 10, wherein the load balancer system is further configured to assign a second access point of the plurality of access points to the second network device management engine.
12. The system of claim 10, wherein the load balancer system is further configured to send a network device management engine status notification to an administration engine, the network device management engine status notification indicating a reason why the first network device management engine was determined as failed.
13. The method of claim 1, further comprising:
receiving, by the load balancer system, network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines;
reassigning, by the load balancer system, a first portion of a plurality of access points assigned to the first network device management engine, including the access point, to the second network device management engine without reassigning a second portion of the plurality of access points assigned to the first network device management engine, based on the network device management engine status information received from the first and second network device management engines.
14. The method of claim 1, further comprising:
receiving, by a network device management engine message queue uniquely associated with the region, network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines;
retrieving, by the load balancer system, the network device management engine status information received from the first and second network device management engines from the network device management engine message queue;
reassigning, by the load balancer system, a first portion of a plurality of access points assigned to the first network device management engine, including the access point, to the second network device management engine without reassigning a second portion of the plurality of access points assigned to the first network device management engine, based on the network device management engine status information retrieved from the network device management engine message queue.
15. The method of claim 1, further comprising:
receiving, by the load balancer system, the network device management engine failure information from the access point coupled to the failed network device management engine, which is a first network device management engine of the plurality of network device management engines;
reassigning, by the load balancer system, the access point to a second network device management engine of the plurality of network device management engines, based on the network device management engine failure information from the access point.
16. The system of claim 7, wherein the load balancer system is further configured to:
receive network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines;
reassign a first portion of the plurality of access points assigned to the first network device management engine, including said one of the access points, to the second network device management engine without reassigning a second portion of the plurality of access points assigned to the first network device management engine, based on the network device management engine status information received from the first and second network device management engines.
17. The system of claim 7, further comprising:
a network device management engine message queue uniquely associated with the region and configured to receive network device management engine status information from the network device management engine, which is a first network device management engine of the plurality of network device management engines, and a second network device management engine of the plurality of network device management engines,
wherein the load balancer system is further configured to:
retrieve the network device management engine status information received from the first and second network device management engines from the network device management engine message queue;
reassign a first portion of a plurality of access points assigned to the first network device management engine, including said one of the access points, to the second network device management engine without reassigning a second portion of the plurality of access points assigned to the first network device management engine, based on the network device management engine status information retrieved from the network device management engine message queue.
18. The system of claim 7, wherein the load balancer system is further configured to:
receive the network device management engine failure information from said one of the access points coupled to the failed network device management engine, which is a first network device management engine of the plurality of network device management engines;
reassign said one of the access points to a second network device management engine of the plurality of network device management engines, based on the network device management engine failure information from said one of the access points.

This application claims priority to U.S. Provisional Patent Application Ser. No. 61/788,621, filed Mar. 15, 2013, which is incorporated herein by reference.

An area of ongoing research and development is improving the ease with which a person or an enterprise can set up a network. Of particular importance is improving the ease with which a person or an enterprise can add devices to an already existing network to further expand and improve the network. Specifically, in establishing a network or adding devices to an already existing network, an administrator must configure each device. There therefore exists a need for systems in which a person or an enterprise can easily set up a network or add devices to an already existing network without having to configure the device.

Another area of ongoing research and development is improving the ease with which a network can be monitored and managed so that it continues to function if a device fails. Typical systems connect a plurality of network devices to a single server that manages the network devices. Therefore, if the server fails, all of the network devices managed by the failed server become inoperable. There therefore exists a need for a system that monitors the servers or engines that manage network devices to determine whether or not they have failed. There also exists a need for a system capable of reassigning the network devices to different servers or engines in the event that a server or engine that manages network devices has failed.

The foregoing examples of the related art are intended to be illustrative and not exclusive. Other limitations of the relevant art will become apparent to those of skill in the art upon a reading of the specification and a study of the drawings.

The following implementations and aspects thereof are described and illustrated in conjunction with systems, tools, and methods that are meant to be exemplary and illustrative, not necessarily limiting in scope. In various implementations, one or more of the above-described problems have been addressed, while other implementations are directed to other improvements.

Techniques and systems for building and maintaining a network. The technique involves assigning network devices to network device management engines that manage the flow of data packets into and out of the network devices. The technique can include connecting a network device to an interregional redirector system. The network device can be a newly purchased device that is being powered on for the first time by the purchaser. The technique can include the interregional redirector system receiving network device information about the network device. The technique can also include the interregional redirector system validating the network device. The interregional redirector system can then assign the network device to a load balancer system. The load balancer system can be associated with or part of one or multiple regional network device management systems. The regional network device management systems can be regionally unique in that they contain engines in specific regions or subregions. The load balancer systems can be regionally unique in that they are associated with or part of one or multiple regional network device management systems that are regionally unique. The interregional redirector system can assign the network device to a load balancer system based upon the regions or subregions of the engines of the regional network device management systems, or of the engines themselves, with which the load balancer systems are associated.

The technique can also involve a load balancer system assigning a network device to a network device management engine. The load balancer system can receive both network device information and network device management engine information. The load balancer system can assign the network device to a network device management engine based upon the region or subregion of the network device and the regions or subregions of the other network devices that the network device management engine already manages.

The technique can also include the load balancer system monitoring the status of the network device management engines associated with it and reassigning network devices to different network device management engines in the event that one of the network device management engines fails. The load balancer system can monitor the status of the network device management engines associated with it by retrieving network device management engine status messages from a network device management engine message queue. The status messages can be sent to the network device management engine message queue by the network device management engines. The load balancer system can use the status messages to determine whether or not a network device management engine has failed. If the load balancer system determines that a network device management engine has failed, then the load balancer system can reassign the network device to another network device management engine that is not failing. The load balancer system can also send a notification to an administrator system that the network device management engine has failed.
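By way of illustration only, the following minimal sketch shows one way such a queue-driven monitoring loop could work. It is a sketch under assumptions, not the described implementation: the heartbeat interval, the missed-message limit, and all names (drain_queue, failed_engines, reassign_from) are hypothetical.

```python
import time

HEARTBEAT_INTERVAL = 30.0  # assumed seconds between engine status messages
MISSED_LIMIT = 3           # assumed missed messages before an engine is "failed"

last_seen = {}             # engine_id -> timestamp of last status message
assignments = {}           # network_device_id -> engine_id

def drain_queue(status_messages):
    """Record the arrival time of each status message read from the queue."""
    for msg in status_messages:
        last_seen[msg["engine_id"]] = time.time()

def failed_engines(now):
    """Engines whose status messages have stopped arriving are treated as failed."""
    return {engine for engine, seen in last_seen.items()
            if now - seen > HEARTBEAT_INTERVAL * MISSED_LIMIT}

def reassign_from(failed, healthy):
    """Move every network device on a failed engine to a healthy engine."""
    for device, engine in list(assignments.items()):
        if engine in failed and healthy:
            assignments[device] = healthy[0]
```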

These and other advantages will become apparent to those skilled in the relevant art upon a reading of the following descriptions and a study of the several examples of the drawings.

FIG. 1 depicts a diagram of an example of a system configured to couple a network device to a regional network device management system.

FIG. 2 depicts a diagram of an example of a system configured to couple a network device to a network device management engine and monitor the network device management engine.

FIG. 3 depicts a diagram of an example of a load balancer system.

FIG. 4 depicts a flowchart of an example of a method for assigning a network device to a regional network device management system.

FIG. 5 depicts a flowchart of an example of a method of a load balancer system for assigning a network device to a network device management engine.

FIG. 6 depicts a flowchart of an example of a method for determining that a network device management engine has failed by a network device managed by the network device management engine.

FIG. 1 depicts a diagram 100 of an example of a system configured to couple a network device to a regional network device management system. The system includes an interregional redirector system 102, a load balancer system 104, regional network device management systems 106-1 . . . 106-n, a computer readable medium 108, and network devices 110-1 . . . 110-n. As used in this paper, a system can be implemented as an engine or a plurality of engines.

While the system is shown to include multiple network devices 110-1 . . . 110-n, in a specific implementation, the system can include only one network device (e.g. 110-1). The network devices 110-1 . . . 110-n are coupled to client devices 112-1 . . . 112-n. Each client device can be coupled to a single network device 110-1 . . . 110-n (e.g. client device 112-1) or can be coupled to more than one network device (e.g. client device 112-2). The client devices 112-1 . . . 112-n can include a client wireless device, such as a laptop computer or a smart phone. The client devices 112-1 . . . 112-n can also include a repeater or a plurality of linked repeaters. Therefore, the client devices 112-1 . . . 112-n can comprise a plurality of repeaters and a client wireless device coupled together as a chain.

A network device, as is used in this paper, can be an applicable device used in connecting a client device to a network. For example, a network device can be a virtual private network (hereinafter referred to as “VPN”) gateway, a router, an access point (hereinafter referred to as “AP”), or a device switch. The network devices 110-1 . . . 110-n can be integrated as part of router devices or as stand-alone devices coupled to upstream router devices. The network devices 110-1 . . . 110-n can be coupled to the client devices 112-1 . . . 112-n through either a wireless or a wired medium. The wireless connection may or may not be IEEE 802.11-compatible. In this paper, 802.11 standards terminology is used by way of relatively well-understood example to discuss implementations that include wireless techniques that connect stations through a wireless medium. A station, as used in this paper, may be referred to as a device with a media access control (MAC) address and a physical layer (PHY) interface to a wireless medium that complies with the IEEE 802.11 standard. Thus, for example, client devices 112-1 . . . 112-n and network devices 110-1 . . . 110-n with which the client devices 112-1 . . . 112-n associate can be referred to as stations, if applicable. IEEE 802.11a-1999, IEEE 802.11b-1999, IEEE 802.11g-2003, IEEE 802.11-2007, and IEEE 802.11n TGn Draft 8.0 (2009) are incorporated by reference.

As used in this paper, a system that is 802.11 standards-compatible or 802.11 standards-compliant complies with at least some of one or more of the incorporated documents' requirements and/or recommendations, or requirements and/or recommendations from earlier drafts of the documents, and includes Wi-Fi systems. Wi-Fi is a non-technical description generally correlated with the IEEE 802.11 standards, as well as Wi-Fi Protected Access (WPA) and WPA2 security standards, and the Extensible Authentication Protocol (EAP) standard. In alternative implementations, a station may comply with a different standard than Wi-Fi or IEEE 802.11 and may be referred to as something other than a “station,” and may have different interfaces to a wireless or other medium.

IEEE 802.3 is a working group and a collection of IEEE standards produced by the working group defining the physical layer and data link layer's MAC of wired Ethernet. This is generally a local area network technology with some wide area network applications. Physical connections are typically made between nodes and/or infrastructure devices (hubs, switches, routers) by various types of copper or fiber cable. IEEE 802.3 is a technology that supports the IEEE 802.1 network architecture. As is well-known in the relevant art, IEEE 802.11 is a working group and collection of standards for implementing wireless local area network (WLAN) computer communication in the 2.4, 3.6 and 5 GHz frequency bands. The base version of the standard IEEE 802.11-2007 has had subsequent amendments. These standards provide the basis for wireless network products using the Wi-Fi brand. IEEE 802.1 and 802.3 are incorporated by reference.

The network devices 110-1 . . . 110-n are coupled to the interregional redirector system 102, the load balancer system 104 and regional network device management systems 106-1 . . . 106-n through a computer-readable medium 108. The computer-readable medium 108 is intended to represent a variety of potentially applicable technologies. For example, the computer-readable medium 108 can be used to form a network or part of a network. Where two components are co-located on a device, the computer-readable medium 108 can include a bus or other data conduit or plane. Where a first component is co-located on one device and a second component is located on a different device, the computer-readable medium 108 can include a wireless or wired back-end network or LAN. The computer-readable medium 108 can also encompass a relevant portion of a WAN or other network, if applicable.

The computer-readable medium 108, the interregional redirector system 102, the load balancer system 104, the regional network device management systems 106-1 . . . 106-n, and other applicable systems described in this paper can be implemented as parts of a computer system or a plurality of computer systems. A computer system, as used in this paper, is intended to be construed broadly. In general, a computer system will include a processor, memory, non-volatile storage, and an interface. A typical computer system will usually include at least a processor, memory, and a device (e.g., a bus) coupling the memory to the processor. The processor can be, for example, a general-purpose central processing unit (CPU), such as a microprocessor, or a special-purpose processor, such as a microcontroller.

The memory can include, by way of example but not limitation, random access memory (RAM), such as dynamic RAM (DRAM) and static RAM (SRAM). The memory can be local, remote, or distributed. As used in this paper, the term “computer-readable storage medium” is intended to include only physical media, such as memory. As used in this paper, a computer-readable medium is intended to include all mediums that are statutory (e.g., in the United States, under 35 U.S.C. 101), and to specifically exclude all mediums that are non-statutory in nature to the extent that the exclusion is necessary for a claim that includes the computer-readable medium to be valid. Known statutory computer-readable mediums include hardware (e.g., registers, random access memory (RAM), non-volatile (NV) storage, to name a few), but may or may not be limited to hardware.

The bus can also couple the processor to the non-volatile storage. The non-volatile storage is often a magnetic floppy or hard disk, a magnetic-optical disk, an optical disk, a read-only memory (ROM), such as a CD-ROM, EPROM, or EEPROM, a magnetic or optical card, or another form of storage for large amounts of data. Some of this data is often written, by a direct memory access process, into memory during execution of software on the computer system. The non-volatile storage can be local, remote, or distributed. The non-volatile storage is optional because systems can be created with all applicable data available in memory.

Software is typically stored in the non-volatile storage. Indeed, for large programs, it may not even be possible to store the entire program in the memory. Nevertheless, it should be understood that for software to run, if necessary, it is moved to a computer-readable location appropriate for processing, and for illustrative purposes, that location is referred to as the memory in this paper. Even when software is moved to the memory for execution, the processor will typically make use of hardware registers to store values associated with the software, and local cache that, ideally, serves to speed up execution. As used herein, a software program is assumed to be stored at an applicable known or convenient location (from non-volatile storage to hardware registers) when the software program is referred to as “implemented in a computer-readable storage medium.” A processor is considered to be “configured to execute a program” when at least one value associated with the program is stored in a register readable by the processor.

In one example of operation, a computer system can be controlled by operating system software, which is a software program that includes a file management system, such as a disk operating system. One example of operating system software with associated file management system software is the family of operating systems known as Windows® from Microsoft Corporation of Redmond, Wash., and their associated file management systems. Another example of operating system software with its associated file management system software is the Linux operating system and its associated file management system. The file management system is typically stored in the non-volatile storage and causes the processor to execute the various acts required by the operating system to input and output data and to store data in the memory, including storing files on the non-volatile storage.

The bus can also couple the processor to the interface. The interface can include one or more input and/or output (I/O) devices. The I/O devices can include, by way of example but not limitation, a keyboard, a mouse or other pointing device, disk drives, printers, a scanner, and other I/O devices, including a display device. The display device can include, by way of example but not limitation, a cathode ray tube (CRT), liquid crystal display (LCD), or some other applicable known or convenient display device. The interface can include one or more of a modem or network interface. It will be appreciated that a modem or network interface can be considered to be part of the computer system. The interface can include an analog modem, ISDN modem, cable modem, token ring interface, satellite transmission interface (e.g. "direct PC"), or other interfaces for coupling a computer system to other computer systems. Interfaces enable computer systems and other devices to be coupled together in a network.

The computer systems described throughout this paper can be compatible with or implemented through one or a plurality of cloud-based computing systems. As used in this paper, a cloud-based computing system is a system that provides virtualized computing resources, software and/or information to client devices. The computing resources, software and/or information can be virtualized by maintaining centralized services and resources that the client devices can access over a communication interface, such as a network. “Cloud” may be a marketing term and for the purposes of this paper can include any of the networks described herein. The cloud-based computing system can involve a subscription for services or use a utility pricing model. Users can access the protocols of the cloud-based computing system through a web browser or other container application located on their client device.

The computer systems described throughout this paper can be implemented as or can include engines to perform the functions of each system. An engine, as used in this paper, includes a dedicated or shared processor and, typically, firmware or software modules executed by the processor. Depending upon implementation-specific or other considerations, an engine can be centralized or its functionality distributed. An engine can include special purpose hardware, firmware, or software embodied in a computer-readable medium for execution by the processor.

The engines described throughout this paper can be cloud-based engines. A cloud-based engine is an engine that can run applications and/or functionalities using a cloud-based computing system. All or portions of the applications and/or functionalities can be distributed across multiple computing devices, and need not be restricted to only one computing device. In some embodiments, the cloud-based engines can execute functionalities and/or modules that end users access through a web browser or container application without having the functionalities and/or modules installed locally on the end-users' computing devices.

The computer systems described throughout this paper can include datastores. A datastore, as described in this paper, can be cloud-based datastores compatible with a cloud-based computing system.

The regional network device management systems 106-1 . . . 106-n can function to manage the network devices 110-1 . . . 110-n. Each regional network device management system 106-1 . . . 106-n can include a plurality of engines that manage the network devices 110-1 . . . 110-n. The engines can be grouped into regional network device management systems 106-1 . . . 106-n based upon regions and subregions of the network devices 110-1 . . . 110-n that the engines manage. Therefore, the regional network device management systems 106-1 . . . 106-n can be characterized by the regions and subregions of the network devices 110-1 . . . 110-n that the engines within a specific network device management system 106-1 . . . 106-n manage. For example, the engines that manage network devices 110-1 . . . 110-n in the same region or subregion can be grouped into the same regional network device management system (e.g. 106-1). As a result, the regional network device management systems 106-1 . . . 106-n can be regionally unique in that they contain engines that manage network devices 110-1 . . . 110-n within specific regions or subregions.

As the regional network device management systems 106-1 . . . 106-n can be implemented as cloud-based systems, and as the regional network device management systems 106-1 . . . 106-n can be characterized by the regions and subregions of the network devices 110-1 . . . 110-n, the regional network device management systems 106-1 . . . 106-n can be organized or located at regions within the cloud based upon the regions and subregions of the network devices 110-1 . . . 110-n that the regional network device management systems 106-1 . . . 106-n manage. In a specific implementation, the subregions of the network devices 110-1 . . . 110-n together form a region of the network devices 110-1 . . . 110-n.

The regions or subregions of the network devices 110-1 . . . 110-n can be defined based upon geography, an enterprise network or a combination of both geography and an enterprise network. In a specific implementation, the region can be defined based upon geography to include the network devices 110-1 . . . 110-n associated with or located within a geographical area or location, such as a city or a building within a city. Similarly, a subregion can be defined to include the network devices 110-1 . . . 110-n located in or associated with a geographical area or location within the geographical area or location used to define the region. For example, the region can be defined to include the network devices 110-1 . . . 110-n located in or associated with a state, while the subregion can be defined to include the network devices 110-1 . . . 110-n located in or associated with a city in the state that defines the region. In another implementation, the region can be defined based upon an enterprise to include the network devices 110-1 . . . 110-n associated with or used in an enterprise network. In yet another implementation, the region can be defined based upon a combination of both geography and an enterprise to include the network devices 110-1 . . . 110-n associated with or located within a geographical location or area within an enterprise network. For example, the region can include the network devices 110-1 . . . 110-n associated with or located within a specific office site of the enterprise.

The regions of the network devices 110-1 . . . 110-n can not only be defined according to the previously described classifications but can also be defined based upon the number of network devices 110-1 . . . 110-n in or associated with the region. In a specific implementation, the region can be defined to include only a single basic service set (BSS). A BSS includes one network device and all of the stations or other devices (i.e. repeaters) coupled to the network device. The BSS can be identified by a unique basic service set identification (BSSID). The BSSID can be the MAC address of the network device in the BSS. In another implementation, the region can be defined to include an extended service set (ESS) that comprises a plurality of BSSs. The plurality of BSSs can be interconnected so that stations or devices are connected to multiple network devices within the ESS. The ESS can be identified by a unique extended service set identification (ESSID). The ESSID can be the MAC addresses of the network devices in the ESS.
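As a purely illustrative sketch of these definitions (the class and field names are assumptions, not the patent's terminology), a BSS can be keyed by its BSSID and an ESS by the set of member device MAC addresses:

```python
from dataclasses import dataclass, field

@dataclass
class BSS:
    bssid: str                                      # MAC address of the one network device
    station_macs: set = field(default_factory=set)  # stations/repeaters coupled to it

@dataclass
class ESS:
    essid: frozenset                                # member network-device MAC addresses
    members: list = field(default_factory=list)     # the interconnected BSSs

bss1 = BSS(bssid="00:11:22:33:44:55")
bss2 = BSS(bssid="00:11:22:33:44:66")
ess = ESS(essid=frozenset({bss1.bssid, bss2.bssid}), members=[bss1, bss2])
```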

The system shown in FIG. 1 includes a load balancer system 104 coupled to the regional network device management systems 106-1 . . . 106-n and the network devices 110-1 . . . 110-n through the computer readable medium 108. The load balancer system 104 is also coupled to the interregional redirector system 102 through the computer-readable medium 108. The system can include multiple load balancer systems 104 that can be coupled to and associated with different regional network device management systems 106-1 . . . 106-n. In a specific implementation, the specific regional network device management systems 106-1 . . . 106-n that the load balancer system 104 is coupled to and associated with can be based upon the regions and subregions of the network devices 110-1 . . . 110-n that the engines within the specific regional network device management systems 106-1 . . . 106-n manage. For example, a load balancer system 104 can be coupled to and associated with the regional network device management systems 106-1 . . . 106-n that manage the network devices 110-1 . . . 110-n within an entire state. As the load balancer systems 104 can be coupled to and associated with specific regional network device management systems 106-1 . . . 106-n based on the regions or the subregions of the network devices 110-1 . . . 110-n that the engines within the specific regional network device management systems 106-1 . . . 106-n manage, the load balancer systems 104 can be regionally unique. For example, the load balancer systems 104 can be regionally unique in that they are associated with regional network device management systems 106-1 . . . 106-n that manage network devices in the same enterprise network.

Additionally, a specific regional network device management system 106-1 . . . 106-n can be coupled to or associated with multiple load balancer systems 104 based upon the regions and subregions of the network devices 110-1 . . . 110-n managed by the engines in the specific regional network device management system 106-1 . . . 106-n. For example, a specific regional network device management system 106-1 . . . 106-n can be coupled to or associated with a first load balancer system 104 because the specific regional network device management system 106-1 . . . 106-n contains engines that manage network devices 110-1 . . . 110-n in a specific region, such as a state. Additionally, the specific regional network device management system 106-1 . . . 106-n can also be coupled to or associated with a second load balancer system 104 because the specific regional network device management system 106-1 . . . 106-n contains engines that manage network devices 110-1 . . . 110-n in a subregion of the specific region, such as a city within the state.

The load balancer system 104, as will be discussed in greater detail later with respect to FIG. 2, can function to monitor the usage of specific engines grouped into regional network device management systems 106-1 . . . 106-n as the engines within the regional network device management systems 106-1 . . . 106-n manage various network devices 110-1 . . . 110-n. The load balancer system 104 can also function to assign a network device 110-1 . . . 110-n to one or a plurality of engines within one or more of the regional network device management systems 106-1 . . . 106-n so that the assigned engine or engines can manage the assigned network devices 110-1 . . . 110-n. The load balancer system 104 can also function to assign a network device 110-1 . . . 110-n to another load balancer system 104 that can then assign the network devices 110-1 . . . 110-n to another load balancer system 104 or one or a plurality of engines within one or more of the regional network device management systems 106-1 . . . 106-n. In a specific implementation, the load balancer systems 104 can assign a newly purchased network device 110-1 . . . 110-n to either or both another load balancer system 104 and engines in a regional network device management system 106-1 . . . 106-n.

The load balancer systems 104 can assign the network devices 110-1 . . . 110-n to specific engines within the regional network device management systems 106-1 . . . 106-n based upon the regions or subregions of the other network devices 110-1 . . . 110-n that the specific engines manage. The load balancer systems 104 can also assign the network devices 110-1 . . . 110-n to other load balancer systems. The other load balancer systems can be coupled to or associated with specific engines within the regional network device management systems 106-1 . . . 106-n. Specifically, the other load balancer systems can be associated with specific engines within the regional network device management systems 106-1 . . . 106-n based upon the regions or subregions of the network devices 110-1 . . . 110-n that the other load balancer systems assign.

The system shown in the example of FIG. 1 includes an interregional redirector system 102. The interregional redirector system 102 is coupled to the network devices 110-1 . . . 110-n and the load balancer systems 104 through the computer readable medium 108. In a specific implementation, the interregional redirector system 102 is not associated with any specific region. Specifically, the interregional redirector system 102 can be coupled to all of the load balancer systems 104, and through the load balancer systems to all of the regional network device management systems 106-1 . . . 106-n. As the regional network device management systems 106-1 . . . 106-n and the load balancer systems 104 can be regionally unique, and as the interregional redirector system 102 can be coupled to all of the regional network device management systems 106-1 . . . 106-n, the interregional redirector system 102 is associated with every region or subregion. Therefore, the interregional redirector system 102 is not unique to a single region, but is rather globally applicable to at least a subplurality of the regions.

In being coupled to the network devices 110-1 . . . 110-n, the interregional redirector system 102 can function to receive identification information from the network devices 110-1 . . . 110-n and validate the network devices. In being coupled to the load balancer systems 104, the interregional redirector system 102 can further function to assign specific network devices 110-1 . . . 110-n to one or a plurality of load balancer systems 104. As the load balancer systems can be regionally unique, the interregional redirector system 102 can assign the network devices 110-1 . . . 110-n to one or a plurality of specific load balancer systems 104 based upon the regions or subregions of the network devices 110-1 . . . 110-n that are being assigned.

In a specific implementation, a newly purchased network device 110-1 . . . 110-n is configured to be directed to the interregional redirector system 102 when the purchaser first powers on the network device 110-1 . . . 110-n. In being directed to the interregional redirector system 102, the network device 110-1 . . . 110-n can send the identification information of the network device to the interregional redirector system 102. The interregional redirector system 102 can both validate the network device 110-1 . . . 110-n and assign the newly purchased network device 110-1 . . . 110-n to one or a plurality of load balancer systems 104 based on the region or subregion of the network device. The one or a plurality of load balancer systems 104 can then assign the newly purchased network device 110-1 . . . 110-n to one or a plurality of regional network device management systems 106-1 . . . 106-n. In another example, additional server resources, such as new regional network device management systems, can be added, and the load balancer system 104 can assign the newly purchased network device 110-1 . . . 110-n to an added new regional network device management system.
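A hedged sketch of this first-power-on flow follows; the validation set, the region-to-balancer table, and the function name are hypothetical stand-ins for whatever the interregional redirector system actually consults.

```python
VALID_DEVICE_MACS = {"00:11:22:33:44:55"}        # assumed validation registry
REGION_TO_BALANCER = {"california": "lb-west",   # assumed regionally unique
                      "new-york": "lb-east"}     # load balancer systems

def redirect_new_device(device_mac, region):
    """Validate a newly powered-on device, then assign it by region."""
    if device_mac not in VALID_DEVICE_MACS:
        raise ValueError("device failed validation")
    return REGION_TO_BALANCER[region]            # the assigned load balancer

print(redirect_new_device("00:11:22:33:44:55", "california"))  # -> lb-west
```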

In a specific implementation, the regions or subregions of the network devices 110-1 . . . 110-n can be part of the identification information received by the interregional redirector system 102 from the network devices 110-1 . . . 110-n. In another implementation, the interregional redirector system 102 can trace through the computer readable medium 108 to determine the region or subregion of the newly purchased network device 110-1 . . . 110-n. Alternatively, the interregional redirector system 102 can trace through the computer readable medium 108 the regions or subregions of already activated network devices 110-1 . . . 110-n that neighbor the newly purchased network device 110-1 . . . 110-n, either physically or on a network structure level, to determine the region or subregion of the newly purchased network device 110-1 . . . 110-n. The interregional redirector system 102 can determine the regions or subregions of neighboring network devices 110-1 . . . 110-n based upon the MAC addresses of the neighboring network devices 110-1 . . . 110-n. In an alternate implementation, the interregional redirector system 102 can determine the region or subregion of the newly purchased network device 110-1 . . . 110-n through the identity of the purchaser of the network device. Specifically, the interregional redirector system 102 can use the MAC address of the newly purchased network device 110-1 . . . 110-n, received from the newly purchased network device 110-1 . . . 110-n, to determine the identity of the purchaser of the network device 110-1 . . . 110-n and the region or subregion of the network device 110-1 . . . 110-n. For example, the interregional redirector system 102 can determine that company A purchased the network device 110-1 . . . 110-n and, because company A occupies a specific location within a region, such as a city, determine that the city is the region of the newly purchased network device 110-1 . . . 110-n.
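The three region-inference strategies above might be combined as in the following sketch; both lookup tables and all names are invented for illustration.

```python
NEIGHBOR_REGION = {"aa:bb:cc:00:00:01": "seattle"}   # assumed: MAC -> region of
                                                     # an already activated neighbor
PURCHASER_REGION = {"00:11:22:33:44:55": "seattle"}  # assumed: MAC -> purchaser's site

def infer_region(device_mac, reported_region=None, neighbor_macs=()):
    if reported_region:                      # 1. region in identification info
        return reported_region
    for mac in neighbor_macs:                # 2. regions of neighboring devices
        if mac in NEIGHBOR_REGION:
            return NEIGHBOR_REGION[mac]
    return PURCHASER_REGION.get(device_mac)  # 3. identity of the purchaser
```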

FIG. 2 depicts a diagram 200 of an example of a system configured to couple a network device to a network device management engine and monitor the network device management engine. The system includes a regional network device management system 202 coupled to network devices 204-1, 204-2 and 204-3. While only three network devices 204-1, 204-2 and 204-3 are shown, the regional network device management system 202 can be coupled to more or fewer than three network devices. The system can also include an administrator system 214 coupled to the regional network device management system 202.

The regional network device management system 202 includes a load balancer system 206, network device management engines 208-1, 208-2 and 208-3 and a network device management engine message queue 210. While only three network device management engines 208-1, 208-2 and 208-3 are shown, the regional network device management system 202 can include more or fewer than three network device management engines.

The network device management engines 208-1, 208-2 and 208-3, within the regional network device management system 202, are coupled to the network devices 204-1, 204-2 and 204-3. A network device (e.g. 204-3) can be coupled to more than one network device management engine (e.g. 208-2 and 208-3). The network device management engines (e.g. 208-1) can manage the flow of data into and out of the network devices (e.g. 204-1) coupled to the specific network device management engines (e.g. 208-1). The network device management engines (e.g. 208-1) can be regionally unique in that they manage the flow of data into and out of the network devices in specific regions or subregions. Furthermore, the network device management engines can be grouped into a network device management system 202 based upon the regions or subregions of the network devices that the network device management engines manage. For example, network device management engines 208-1, 208-2 and 208-3 that manage network devices 204-1, 204-2 and 204-3 within the same region or subregion can be grouped into the same network device management system 202.

In a specific implementation, the network device management engines 208-1, 208-2 and 208-3 can manage the flow of data into and out of the network devices 204-1, 204-2 and 204-3 by controlling routers connected to the network devices. In another implementation, the network device management engines 208-1, 208-2 and 208-3 can control the flow of data into and out of the network devices 204-1, 204-2 and 204-3 by functioning as routers themselves, switching between different data paths coupled to the network device management engines.

Each of the network device management engines 208-1, 208-2 and 208-3 can be a server that performs the previously described functions. In a specific implementation, the network device management engines 208-1, 208-2 and 208-3 can be configured in accordance with the control and provisioning of wireless access points (CAPWAP) protocol. Specifically, the network device management engines 208-1, 208-2 and 208-3 can be CAPWAP servers, i.e. servers configured in accordance with the CAPWAP protocol. The CAPWAP protocol is similar to the Lightweight Access Point Protocol (LWAPP), but differs in that it integrates a full datagram transport layer security (DTLS) tunnel. Data is transmitted through the CAPWAP protocol over an unencrypted data channel, while control messages are transmitted in the DTLS tunnel. The CAPWAP protocol is described in RFC 5415 (2009), which is hereby incorporated by reference, and IEEE 802.11, which was previously incorporated by reference.

In a specific implementation, the network devices 204-1, 204-2 and 204-3 can function to determine whether a network device management engine 208-1, 208-2 and 208-3 to which the network devices are coupled has failed. For example, if a network device 204-1, 204-2 and 204-3 does not receive traffic from a network device management engine 208-1, 208-2 and 208-3 that is coupled to the network device, then the network device can determine that the network device management engine has failed. Further in the specific implementation, the network devices 204-1, 204-2 and 204-3 can alert the load balancer system 206 to a failure of a network device management engine 208-1, 208-2 and 208-3. For example, upon detecting a failure in a network device management engine 208-1, 208-2 and 208-3, a network device can generate and send a network device management engine failure message to the load balancer system 206. In one example, the network device management engine failure message identifies the specific network device management engine 208-1, 208-2 and 208-3 that has failed.
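A device-side sketch of this traffic-timeout test is given below; the timeout value, class name, and message shape are assumptions for exposition.

```python
import time

ENGINE_TIMEOUT = 60.0   # assumed seconds of silence before declaring failure

class EngineWatchdog:
    """Runs on a network device to watch its managing engine."""
    def __init__(self, engine_id):
        self.engine_id = engine_id
        self.last_traffic = time.time()

    def on_traffic(self):
        self.last_traffic = time.time()      # any traffic resets the timer

    def check(self, send_to_load_balancer):
        if time.time() - self.last_traffic > ENGINE_TIMEOUT:
            # The failure message identifies the specific failed engine and
            # goes to the load balancer without passing through that engine.
            send_to_load_balancer({"failed_engine": self.engine_id})
```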

The network device management engines 208-1, 208-2 and 208-3 are coupled to the network device management engine message queue 210. The network device management engine message queue 210 is coupled to the load balancer system 206. The network device management engine message queue 210 can function to receive status messages sent from the network device management engines 208-1, 208-2 and 208-3. The status messages can be sent from the network device management engines 208-1, 208-2 and 208-3 periodically, after a predetermined interval of time. In another implementation, the status messages can be sent from the network device management engines 208-1, 208-2 and 208-3 when the load balancer system 206 sends a status request to the network device management engines 208-1, 208-2 and 208-3.

The status messages sent from the network device management engines 208-1, 208-2 and 208-3 can include the amounts of used bandwidth and available bandwidth that exist on the network devices coupled to the specific network device management engines 208-1, 208-2 and 208-3. The status messages can also include information about the number of network devices 204-1, 204-2 and 204-3 that the network device management engines 208-1, 208-2 and 208-3 are managing. The status messages can further include the amount of bandwidth on the network device management engines 208-1, 208-2 and 208-3 that each network device 204-1, 204-2 and 204-3 is using. In a specific implementation, the regions and subregions of the network devices 204-1, 204-2 and 204-3 that the network device management engines 208-1, 208-2 and 208-3 are managing are included in the status messages. Further, the status messages can include the amount of memory available to and being used by the network device management engines 208-1, 208-2 and 208-3, and how much memory of the network device management engines is being used by each network device 204-1, 204-2 and 204-3 managed by the network device management engines 208-1, 208-2 and 208-3.
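One possible shape for such a status message is sketched below; the schema and field names are assumed for exposition, not a wire format from the description.

```python
from dataclasses import dataclass

@dataclass
class EngineStatus:
    engine_id: str
    used_bandwidth: float        # bandwidth in use on the engine
    available_bandwidth: float   # remaining bandwidth headroom
    managed_devices: int         # number of network devices being managed
    per_device_bandwidth: dict   # device_id -> bandwidth that device uses
    regions: set                 # regions/subregions of the managed devices
    memory_available: int        # memory free on the engine
    per_device_memory: dict      # device_id -> memory that device uses
```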

The load balancer system 206 is also coupled to the network devices 204-1, 204-2 and 204-3. The load balancer system 206 can become coupled to a network device 204-1, 204-2 and 204-3 when the network device is assigned to the specific load balancer system 206 of a specific regional network device management system 202. The network devices 204-1, 204-2 and 204-3 can be assigned to a specific load balancer system 206 within a specific regional network device management system 202 by either another load balancer system 104 or the interregional redirector system 102, shown in FIG. 1. As discussed previously with respect to FIG. 1, a network device 204-1, 204-2 and 204-3 can be assigned to a specific regional network device management system 202 based on the region or subregion of the network device 204-1, 204-2 and 204-3.

The load balancer system 206 can function to assign a network device 204-1, 204-2 and 204-3 to a network device management engine 208-1, 208-2 and 208-3 when the network device 204-1, 204-2 and 204-3 is assigned to the load balancer system 206. In a specific implementation, the load balancer system 206 can assign a newly purchased network device 204-1, 204-2 and 204-3 to a network device management engine 208-1, 208-2 and 208-3. The load balancer system 206 can assign a network device 204-1, 204-2 and 204-3 to a network device management engine 208-1, 208-2 and 208-3 based upon the region or subregion of the network devices already assigned to a network device management engine. For example, the load balancer system 206 can assign a network device 204-1, 204-2 and 204-3 to a network device management engine 208-1, 208-2 and 208-3 that already manages network devices in the same or a related region or subregion as the network device being assigned.

Additionally, the load balancer system 206 can assign the network device 204-1, 204-2 and 204-3 to one or a plurality of network device management engines 208-1, 208-2 and 208-3 based in part upon the status message that the load balancer system 206 reads for each network device management engine 208-1, 208-2 and 208-3 from the network device management engine message queue 210. For example, if the status messages retrieved by the load balancer system 206 indicate that network device management engine 208-1 has a greater amount of available bandwidth than network device management engine 208-2, the load balancer system 206 can assign network device 204-1 to network device management engine 208-1. As a result, network device 204-1 is managed by network device management engine 208-1. In another implementation, the load balancer system 206 can also assign the network device 204-1, 204-2 and 204-3 to a network device management engine 208-1, 208-2 and 208-3 based not only on the available bandwidth of the network device management engines, but also on the expected amount of resources, such as bandwidth, that the specific network device 204-1, 204-2 and 204-3 will use from the network device management engines.
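The selection rule described above, region match first and then available bandwidth, with an optional reservation for the device's expected usage, could look like the following sketch (reusing the assumed EngineStatus shape from the earlier sketch; the function name is also an assumption):

```python
def pick_engine(device_region, expected_bandwidth, statuses):
    """Pick an engine for a device from EngineStatus-like records."""
    statuses = list(statuses)
    # Prefer engines already managing devices in the same region or subregion.
    candidates = [s for s in statuses if device_region in s.regions] or statuses
    # Keep engines that can absorb the device's expected resource usage.
    feasible = [s for s in candidates
                if s.available_bandwidth >= expected_bandwidth]
    # Of those, assign to the engine with the most available bandwidth.
    best = max(feasible or candidates, key=lambda s: s.available_bandwidth)
    return best.engine_id
```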

The load balancer system 206 can also function to monitor the status of the network device management engines 208-1, 208-2 and 208-3 and reassign the network devices 204-1, 204-2 and 204-3 to other network device management engines in the event of a failure of the network device management engine or engines 208-1, 208-2 and 208-3 to which specific network devices are assigned. For example, the load balancer system 206 can detect a failure in network device management engine 208-1 connected along dashed line 212 to network device 204-2. In response to the failure of network device management engine 208-1, the load balancer system 206 can assign the network device 204-2 to network device management engine 208-2 that is not failing.

In a specific implementation, the load balancer system 206 detects a failure of a network device management engine 208-1, 208-2 and 208-3 when the network device management engine does not send a status message to the network device management engine message queue 210. In another implementation, the load balancer system 206 detects a failure of a network device management engine 208-1, 208-2 and 208-3 when the engine does not send a specific number of status messages to the network device management engine message queue 210. The number of status messages that a network device management engine 208-1, 208-2 and 208-3 fails to send to the network device management engine message queue 210 before the load balancer system 206 determines that a failure has occurred can be predefined. In another implementation, the load balancer system 206 detects a failure of a network device management engine 208-1, 208-2 and 208-3 when the status message sent by a network device management engine indicates that the resources of the engine have reached a certain level. For example, the load balancer system 206 can detect a failure of a specific one of the network device management engines 208-1, 208-2 and 208-3 when the amount of available bandwidth of that network device management engine falls below a certain predefined available bandwidth level.
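A minimal sketch of the failure tests described above, assuming a fixed status-message cadence and a predefined bandwidth floor; the constants and field names are illustrative assumptions, not values given in this paper.

import time
from typing import Optional

MISSED_LIMIT = 3          # assumed predefined number of tolerated missed status messages
INTERVAL_SECONDS = 30.0   # assumed status-message cadence
MIN_BANDWIDTH = 10.0      # assumed predefined available-bandwidth floor

def engine_has_failed(last_message_time: float,
                      available_bandwidth: float,
                      now: Optional[float] = None) -> bool:
    # Failed if the engine has missed too many consecutive status messages...
    now = time.time() if now is None else now
    missed = (now - last_message_time) / INTERVAL_SECONDS
    # ...or if its reported resources have fallen below the predefined level.
    return missed >= MISSED_LIMIT or available_bandwidth < MIN_BANDWIDTH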

In a specific implementation, the load balancer system 206 functions to detect a failure of a network device management engine based on network device management engine failure messages generated by the network devices 204-1, 204-2 and 204-3. For example, if the load balancer system 206 receives a network device management engine failure message from the network devices 204-1, 204-2 and 204-3 identifying the specific network device management engine 208-1, 208-2 and 208-3 that has failed, then the load balancer system 206 can determine/detect that the specific network device management engine 208-1, 208-2 and 208-3 has failed.
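The device-originated failure report might be modeled as follows; the message fields and the handler name are assumptions made for illustration only.

from dataclasses import dataclass
from typing import Set

@dataclass
class EngineFailureMessage:
    device_id: str   # the reporting network device
    engine_id: str   # the engine the device believes has failed

def handle_failure_message(msg: EngineFailureMessage,
                           failed_engines: Set[str]) -> None:
    # The message arrives at the load balancer directly from the device,
    # without passing through the failed engine, so it is available even
    # when the engine itself has gone silent.
    failed_engines.add(msg.engine_id)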

In a specific implementation, if the load balancer system 206 detects a failure in one of the network device management engines 208-1, 208-2 and 208-3, the load balancer system 206 can reassign the network device or devices 204-1, 204-2 and 204-3 connected to the failed network device management engine to other network device management engines in either the same regional network device management system 202 or different regional network device management systems. Alternatively, the load balancer system 206 can reassign all of the network devices 204-1, 204-2 and 204-3 connected to a failed network device management engine 208-1, 208-2 and 208-3 to one or a plurality of other network device management engines. In yet another alternative, the load balancer system can reassign a portion of the network devices 204-1, 204-2 and 204-3 connected to a failed network device management engine 208-1, 208-2 and 208-3 so that the failed network device management engine is cured and is no longer failing. For example, if a network device management engine is failing due to a lack of available bandwidth, the load balancer system 206 can reassign a portion of the network devices assigned to the failed network device management engine so that the available bandwidth of the failing network device management engine is increased to a level where the network device management engine is no longer failing.
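One plausible reading of the partial-offload alternative, sketched under the assumption that the load balancer tracks each assigned device's bandwidth usage; offload_until_cured is a hypothetical helper, not a term from this paper.

from typing import Dict, List

def offload_until_cured(assigned: Dict[str, float],
                        available_bandwidth: float,
                        floor: float) -> List[str]:
    # Move devices off the failing engine, largest consumers first, until
    # the engine's projected available bandwidth is back above the floor.
    moved: List[str] = []
    for device_id, usage in sorted(assigned.items(),
                                   key=lambda kv: kv[1], reverse=True):
        if available_bandwidth >= floor:
            break
        available_bandwidth += usage  # bandwidth freed once the device is reassigned
        moved.append(device_id)
    return moved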

The load balancer system 206 can also be coupled to the administrator system 214, thereby coupling the regional network device management system 202 to the administrator system 214. The load balancer system 206 can send a notification to the administrator system 214 in the event that the load balancer system detects a failure of one of the network device management engines 208-1, 208-2 and 208-3 from the status messages sent to the network device management engine message queue 210. In a specific implementation, when the load balancer system 206 detects a failure of one of the network device management engines 208-1, 208-2 and 208-3 because a specific network device management engine does not send a status message to the network device management engine message queue 210, the load balancer system can send a notification to the administrator system 214. The notification sent to the administrator system can include the reason why the load balancer system 206 detected a fault in a specific network device management engine, such as a failure caused by not sending a status message to the network device management engine message queue 210, or a failure caused by the resources of a specific network device management engine having reached a specific level. The administrator system 214 can include a computer-implemented process for fixing the failed network device management engine based upon the reason why the load balancer system 206 detected the failure in the specific network device management engine.
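A notification carrying the detected reason could be as simple as the following; the payload shape is an assumption, since this paper specifies only that the reason is included.

import json

def build_failure_notification(engine_id: str, reason: str) -> str:
    # Reasons mirror the detection paths above: a missed status message,
    # or engine resources reaching a predefined level.
    return json.dumps({"engine_id": engine_id, "reason": reason})

# For example:
# build_failure_notification("208-1", "missed_status_message")
# build_failure_notification("208-1", "available_bandwidth_below_floor")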

FIG. 3 is a diagram 300 of an example of a load balancer system 302. The load balancer system 302 can be configured to assign network devices to network device management engines, monitor the status of the network device management engines, reassign a network device to a new network device management engine if its network device management engine fails, and notify the administrator system of a failure of a network device management engine.

In the example of FIG. 3, the load balancer system 302 is coupled through computer-readable medium 304 to an administrator system 306, network devices 308 and the network device management engine message queue 310. The load balancer system 302 includes a message queue access engine 314 coupled through the computer-readable medium 304 to the network device management engine message queue 310. The message queue access engine 314 can be configured to retrieve status information of network device management engines from the status messages in the network device management engine message queue 310. The status messages can include information as to the status of the network device management engines, such as the amount of available bandwidth on the network device management engines. Each status message can also include information as to when it was sent to the network device management engine message queue 310, which can be used to determine whether a network device management engine has stopped sending status messages, and thus may have failed. The message queue access engine 314 can be configured to retrieve status information each time a status message is sent to the network device management engine message queue 310. The status information retrieved by the message queue access engine 314 can be stored on a network device management engine status profiles datastore 318.
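The read path from the message queue into the status profiles datastore might look like the following sketch, which uses an in-memory dictionary as a stand-in for datastore 318; the queue and record layouts are assumptions for illustration.

import time
from collections import deque
from typing import Deque, Dict

status_profiles: Dict[str, dict] = {}  # in-memory stand-in for datastore 318

def drain_queue(queue: Deque[dict]) -> None:
    # Pull each status message and record the engine's latest profile,
    # including arrival time, so a stalled engine can later be detected.
    while queue:
        msg = queue.popleft()  # assumed shape: {"engine_id": ..., "bandwidth": ...}
        status_profiles[msg["engine_id"]] = {
            "available_bandwidth": msg["bandwidth"],
            "received_at": time.time(),
        }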

The load balancer system 302 can also include a network device access engine 312. The network device access engine 312 can be coupled to network devices 308 coupled to the load balancer system 302 through computer-readable medium 304. The network device access engine 312 can be configured to retrieve or receive information from network devices 308 coupled to the load balancer system 302. In a specific implementation, the network device access engine 312 is configured to retrieve or receive information from newly purchased network devices 308 coupled to the load balancer system 302 for the first time. The newly purchased network devices can become coupled to the load balancer system 302 after being assigned to the load balancer system 302 by another load balancer system, an interregional redirector system, or both, as is shown in FIG. 1. The information retrieved or received by the network device access engine 312 can include the region or subregion of the network device 308. The information can also include the amount of bandwidth that the network device 308 expects to use. The network device access engine 312 can store the information retrieved or received from the network devices 308 on a network device profiles datastore 316.

The load balancer system 302 includes a network device assignment engine 320. The network device assignment engine 320 is coupled to the network device management engine status profiles datastore 318 and the network device profiles datastore 316. The network device assignment engine 320 is also coupled to the network devices 308 through the computer-readable medium 304. The network device assignment engine 320 can function to assign a network device 308 to one or a plurality of network device management engines. Specifically, as the network devices 308 are coupled to the network device assignment engine 320, the network device assignment engine 320 can, in assigning the network devices 308 to network device management engines, direct the network devices 308 to couple to those network device management engines, so that the engines can manage the flow of data packets into and out of the network devices 308. The network device assignment engine 320 can store the assignment information on the network device management engine assignment profiles datastore 322. The assignment information can include which network devices 308 are assigned to be managed by specific network device management engines.

The network device assignment engine 320 can also function to determine that a network device 308 is assigned to a failing network device management engine and reassign the network device 308 to another one or a plurality of network device management engines that are not failing. Specifically, the network device assignment engine 320 can determine that a network device management engine is failing from the information stored in the network device management engine status profiles datastore 318. The network device assignment engine can then determine, from the network device management engine assignment profiles datastore 322, which network devices 308 are being managed by the specific network device management engine that is failing. The network device assignment engine 320 can then reassign the network devices 308 that are being managed by failing network device management engines to different network device management engines that are not failing. In a specific implementation, in reassigning the network devices 308 to different network device management engines, the network device assignment engine 320 can use the information about the network devices 308 stored in the network device profiles datastore 316. For example, the network device assignment engine 320 can use the information about the region or subregion of the network device 308 to reassign the network device 308 to another network device management engine.
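Tying the three datastores together, a reassignment pass could be sketched as follows; the dictionary stand-ins for datastores 316, 318 and 322, and the function name, are illustrative assumptions.

from typing import Dict

def reassign_from_failed(failed_engine: str,
                         assignments: Dict[str, str],       # device -> engine (datastore 322)
                         device_regions: Dict[str, str],    # device -> region (datastore 316)
                         healthy_by_region: Dict[str, str]  # region -> healthy engine (from datastore 318)
                         ) -> Dict[str, str]:
    # Find every device managed by the failed engine and move it to a
    # healthy engine serving the same region or subregion.
    for device, engine in list(assignments.items()):
        if engine == failed_engine:
            region = device_regions[device]
            assignments[device] = healthy_by_region[region]
    return assignments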

The network device assignment engine 320 can also be coupled to the administrator system notification engine 324. The administrator system notification engine 324 is coupled to the administrator system 306 through the computer-readable medium 304. In a specific implementation, the network device assignment engine 320 can function to initiate the sending of a notification about the failure of a network device management engine to the administrator system 306. Specifically, the network device assignment engine 320 can send failure information about a network device management engine to the administrator system notification engine 324. The failure information can include why the network device assignment engine 320 has determined that a network device management engine has failed. The administrator system notification engine 324 can send a notification to the administrator system that a network device management engine has failed, as determined by the network device assignment engine 320. The notification sent by the administrator system notification engine 324 can include the information used by the network device assignment engine 320 to determine that the network device management engine has failed.

FIG. 4 depicts a flowchart 400 of an example of a method for assigning a network device to a regional network device management system. The flowchart starts at module 402 with powering on a network device. In a specific implementation, the network device can be a newly purchased device that is powered on for the first time by the purchaser of the network device.

In the example of FIG. 4, the flowchart continues to module 404 with connecting the network device to the interregional redirector system. The flowchart continues to module 406, where the interregional redirector system receives information about the network device connected to the interregional redirector system at module 404. The information about the network device can include information about the region or subregion of the network device. The information about the network device can also include the MAC address of the network device and information about the purchaser of the network device. The information about the network device can also include the amount of bandwidth that the network device expects to use.

The flowchart then continues to module 408, where the network device is validated. In one example, the interregional redirector system validates the network device by using the MAC address received from the network device. The flowchart continues to module 410, where the network device is assigned to a load balancer system. In one example, the interregional redirector system can assign the network device to a load balancer system based on the region or the subregion of the network device. In another example, the load balancer system can be associated with a single or multiple regional network device management systems. In still another example, the region or the subregion of the network device can be determined from the information received from the network device at module 406.

FIG. 5 depicts a flowchart 500 of an example of a method of a load balancer system assigning a network device to a network device management engine. In one example, the flowchart can further include the load balancer system determining whether or not a network device management engine has failed and reassigning network devices that are being managed by the failed network device management engine to other network device management engines.

The flowchart begins at module 502, where a load balancer system receives network device information. The network device information can be received from a network device assigned to the load balancer system or from another load balancer system or interregional redirector system that assigns the network device to the load balancer system. The network device information can include information about the region or the subregion of the network device assigned to the load balancer system. The flowchart continues to module 504, where the load balancer system receives network device management engine information. The network device management engine information can be status information of the network device management engines. The status information can be determined by the load balancer system from messages sent to a network device management engine message queue from network device management engines. The status information can include the amount of bandwidth available on a network device management engine. The status information can also include whether or not the network device management engine has failed. The status information can also include any other information related to network device management engines that has been discussed in this paper.

The flowchart continues to module 506, where the load balancer system assigns a network device to a network device management engine or a plurality of network device management engines for management of the network device. As discussed previously, the load balancer system can assign a network device to a network device management engine based on the region of the network device and the regions or subregions of the other network devices that the assigned network device management engine is managing. The load balancer system can also assign a network device to a network device management engine based on the amount of available bandwidth that the network device management engine has, or any other method described in this paper.

The flowchart continues to module 508, where the load balancer system retrieves network device management engine status messages from a network device management engine message queue. The status messages can include information as to the amount of available bandwidth that a network device management engine has. The status messages can also include time stamps to determine when the status message was sent to the network device management engine message queue by the network device management engines. The load balancer system can continuously retrieve status messages from the network device management engine message queue, or at set times when the network device management engines are scheduled to send a status message.

The flowchart continues to module 510, where the load balancer system monitors a network device management engine and determines the status of a network device management engine. The load balancer system can use the number of times that a network device management engine was supposed to send a status message and did not do so in order to determine the status of the network device management engine. Alternatively, the load balancer system can use the available bandwidth to determine the status of a network device management engine.

The flowchart continues to decision point 512, where the load balancer system determines whether the network device management engine has failed. The load balancer system can determine whether a network device management engine has failed based on the status of the network device management engine determined at module 510. For example, if the network device management engine was supposed to send a status message and did not do so, then the load balancer system can determine that the network device management engine has failed. Alternatively, if the network device management engine does not have enough available bandwidth, or the network devices coupled to the network device management engine do not have enough available bandwidth, then the load balancer system can determine that the network device management engine has failed. If it is determined at decision point 512 that the network device management engine has not failed, then the flowchart returns to module 508, where the load balancer system retrieves network device management engine status messages. At decision point 512, if the load balancer system determines that a network device management engine has failed, then the flowchart continues to module 514, where the load balancer system sends a notification to an administrator system that a specific network device management engine has failed. The flowchart then proceeds to module 506, where the load balancer system reassigns the network device to a new network device management engine. In an alternative implementation, if the load balancer system determines at decision point 512 that a network device management engine has failed, then the flowchart skips module 514 and proceeds to module 506, where the load balancer system reassigns the network device to a new network device management engine.
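Modules 508 through 514 and decision point 512 can be condensed into a single monitoring pass, sketched below with print statements standing in for the notification and reassignment steps; the cadence constants repeat the earlier illustrative assumptions.

import time
from typing import Dict

MISSED_LIMIT = 3         # assumed number of tolerated missed status messages
INTERVAL_SECONDS = 30.0  # assumed status-message cadence

def monitor_cycle(engines: Dict[str, dict]) -> None:
    now = time.time()
    for engine_id, status in engines.items():              # modules 508/510
        missed = (now - status["last_seen"]) / INTERVAL_SECONDS
        if missed >= MISSED_LIMIT:                         # decision point 512
            print(f"notify admin: engine {engine_id} failed")   # module 514
            print(f"reassign devices from engine {engine_id}")  # module 506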

FIG. 6 depicts a flowchart 600 of an example of a method in which a network device managed by a network device management engine determines that the network device management engine has failed. The flowchart begins at module 602, with a network device that is managed by a network device management engine determining that the network device management engine has failed. In one example, the network device determines that the network device management engine has failed when the network device stops receiving traffic from the network device management engine.

The flowchart continues to module 604, where a network device management engine failure message is sent from the network device to a load balancer system. In one example, the network device generates and sends the network device management engine failure message to the load balancer system after determining that the network device management engine has failed. In another example, the network device management engine failure message identifies the network device management engine that has failed.

The flowchart continues to module 606, where the load balancer system detects that the network device management engine has failed. In one example, the load balancer system detects that the network device management engine has failed after receiving the network device management engine failure message sent from the network device at module 604. In another example, the load balancer system determines the identity of the failed network device management engine from the network device management engine failure message sent by the network device.

The flowchart continues to module 608, where the load balancer system reassigns the network device to a new network device management engine. In one example, the new network device management engine is in the same region or subregion as the network device. In another example, the new network device management engine manages other network devices in the same region or subregion as the network device that is being reassigned.
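The device-side half of FIG. 6 reduces to a periodic check for traffic silence; the threshold and message shape below are assumptions for illustration, not details given in this paper.

import time
from typing import Optional

SILENCE_LIMIT = 60.0  # assumed seconds without traffic before the engine is presumed failed

def device_check(last_traffic_time: float, device_id: str,
                 engine_id: str) -> Optional[dict]:
    # Module 602: presume failure once the engine has been silent too long.
    if time.time() - last_traffic_time > SILENCE_LIMIT:
        # Module 604: the failure message sent to the load balancer
        # identifies the engine believed to have failed.
        return {"device_id": device_id, "failed_engine": engine_id}
    return None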

While preferred implementations of the present inventive apparatus and method have been described, it is to be understood that the implementations described are illustrative only and that the scope of the implementations of the present inventive apparatus and method is to be defined solely by the appended claims when accorded a full range of equivalents, many variations and modifications naturally occurring to those of skill in the art from a perusal hereof.

Liu, Changming, Bao, Dalun

Assignment records:
Dec 17 2013: Bao, Dalun assigned interest to Aerohive Networks, Inc. (assignment of assignors interest; reel/frame 032087/0951)
Dec 18 2013: Liu, Changming assigned interest to Aerohive Networks, Inc. (assignment of assignors interest; reel/frame 032087/0951)
Jan 17 2014: Aerohive Networks, Inc. (assignment on the face of the patent)
Date Maintenance Schedule
Jun 18 2022: 4-year fee payment window opens
Dec 18 2022: 6-month grace period starts (with surcharge)
Jun 18 2023: patent expiry (for year 4)
Jun 18 2025: 2 years to revive unintentionally abandoned end (for year 4)
Jun 18 2026: 8-year fee payment window opens
Dec 18 2026: 6-month grace period starts (with surcharge)
Jun 18 2027: patent expiry (for year 8)
Jun 18 2029: 2 years to revive unintentionally abandoned end (for year 8)
Jun 18 2030: 12-year fee payment window opens
Dec 18 2030: 6-month grace period starts (with surcharge)
Jun 18 2031: patent expiry (for year 12)
Jun 18 2033: 2 years to revive unintentionally abandoned end (for year 12)