A client node may execute an application that communicates with a first messaging service component of a first broker node in a server segment and a second messaging service component of a second broker node in the server segment. A load balancing component is coupled to the client node, and a first virtual provider entity for the first messaging service component is coupled to the load balancing component. The first virtual provider entity may represent a first HA message broker pair, including: (i) a first leader message broker entity, and (ii) a first follower message broker entity to take control when there is a problem with the first leader message broker entity. A shared database is accessible by the first broker node, the first HA message broker pair, and the second broker node, and includes an administration registry data store.
12. A computer-implemented method to provide load balancing for a messaging system in a multi-tenant High Availability (“HA”) computing environment, comprising:
executing, by a client node in an application segment, an application that communicates with a first messaging service component of a first broker node in a server segment and a second messaging service component of a second broker node in the server segment;
providing a load balancing component in the server segment and coupled to the client node;
representing, by a first virtual provider entity for the first messaging service component, a first HA message broker pair, the first pair including:
a first leader message broker entity, and
a first follower message broker entity to take control when there is a problem with the first leader message broker entity; and
accessing, by the first broker node, the first HA message broker pair, the second broker node, a second HA message broker pair, a third broker node, and a fourth broker node, a shared database, the shared database including:
an administration registry data store,
wherein the first broker node accesses the administration registry data store via an administration registry component that also communicates with the first virtual provider via an administration service, the first and second broker nodes communicate with one local broker management data store, and the third and fourth broker nodes communicate with another local broker management data store.
1. A system to provide load balancing for a messaging system in a multi-tenant High Availability (“HA”) computing environment, comprising:
a client node, in an application segment, executing an application that communicates with a first messaging service component of a first broker node in a server segment and a second messaging service component of a second broker node in the server segment;
a load balancing component in the server segment and coupled to the client node;
a first virtual provider entity for the first messaging service component, in the server segment and coupled to the load balancing component, the first virtual provider entity representing a first HA message broker pair, the first pair including:
a first leader message broker entity, and
a first follower message broker entity to take control when there is a problem with the first leader message broker entity; and
a shared database accessible by the first broker node, the first HA message broker pair, the second broker node, a second HA message broker pair, a third broker node, and a fourth broker node, including:
an administration registry data store,
wherein the first broker node accesses the administration registry data store via an administration registry component that also communicates with the first virtual provider via an administration service, the first and second broker nodes communicate with one local broker management data store, and the third and fourth broker nodes communicate with another local broker management data store.
14. A non-transitory, computer-readable medium storing instructions, that, when executed by a processor, cause the processor to perform a method to provide load balancing for a messaging system in a multi-tenant High Availability (“HA”) computing environment, the method comprising:
executing, by a client node in an application segment, an application that communicates with a first messaging service component of a first broker node in a server segment and a second messaging service component of a second broker node in the server segment;
providing a load balancing component in the server segment and coupled to the client node;
representing, by a first virtual provider entity for the first messaging service component, a first HA message broker pair, the first pair including:
a first leader message broker entity, and
a first follower message broker entity to take control when there is a problem with the first leader message broker entity; and
accessing, by the first broker node, the first HA message broker pair, the second broker node, a second HA message broker pair, a third broker node, and a fourth broker node, a shared database, the shared database including:
an administration registry data store,
wherein the first broker node accesses the administration registry data store via an administration registry component that also communicates with the first virtual provider via an administration service, the first and second broker nodes communicate with one local broker management data store, and the third and fourth broker nodes communicate with another local broker management data store.
2. The system of
3. The system of
4. The system of
5. The system of
6. The system of
7. The system of
9. The system of
10. The system of
11. The system of
a second virtual provider entity for the second messaging service component, in the server segment and coupled to the load balancing component, the second virtual provider entity representing a second HA message broker pair, the second pair including:
a second leader message broker entity, and
a second follower message broker entity to take control when there is a problem with the second leader message broker entity;
a third messaging service component of a third broker node in the server segment; and
a fourth messaging service component of a fourth broker node in the server segment.
13. The method of
15. The medium of
16. The medium of
An enterprise, such as a business, may use a messaging system (e.g., a stateful messaging system) at the center of an Information Technology (“IT”) infrastructure (such as a cloud-based or on-premises infrastructure). As used herein, the phrase “messaging system” may refer to, for example, an Enterprise Messaging System (“EMS”) or protocol that lets organizations send semantically precise messages between computer systems. A messaging system may promote loosely coupled architectures that are facilitated by the use of structured messages (such as Extensible Markup Language (“XML”) or JavaScript Object Notation (“JSON”)) and appropriate protocols (such as Data Distribution Service (“DDS”), Microsoft Message Queuing (“MSMQ”), Advanced Message Queuing Protocol (“AMQP”), or Simple Object Access Protocol (“SOAP”) with web services).
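To make the notion of a structured message concrete, here is a minimal sketch that sends a JSON payload as a JMS text message (assuming an ActiveMQ broker, which this description references later; the broker URL, queue name, and payload are illustrative):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JsonMessageSender {
    public static void main(String[] args) throws Exception {
        // Broker URL is illustrative; any JMS provider could be substituted.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("orders");

            // A structured (JSON) message body, as discussed above.
            TextMessage message = session.createTextMessage(
                    "{\"orderId\": 42, \"status\": \"CREATED\"}");
            session.createProducer(queue).send(message);
        } finally {
            connection.close();
        }
    }
}
```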
In many cases, a messaging system may be associated with mission-critical, asynchronous message buffers, stream and event processing engines, connectivity, etc., and use message brokers to provide important service qualities that are required for delivery guarantees (such as an “exactly once, in order” guarantee). As a result, the messaging system needs to always be available (e.g., a High Availability (“HA”) system that utilizes disaster recovery) even when handling heavy message loads, substantial message sizes, and increasing message rates. Besides “vertical” scaling (adding more resources to a messaging system's host machine), which usually cannot be maintained beyond a certain load, “horizontal” scaling across multiple cloud resources may be able to support increasing workloads.
There are several problems with current approaches to messaging systems, including scalability, availability, and migration. For horizontal scaling or load balancing, coordination between brokers is required. That is usually done through a so-called HA “broker network,” which is essentially point-to-point coordination between all n*(n−1)/2 pairs of brokers. However, the overhead of such a mechanism is costly in terms of performance. Even worse, in practice the coordination in such a broker network often negatively impacts the stability and availability of a messaging system (e.g., because the nodes are busy coordinating the requests between the different brokers of the messaging system). Moreover, existing (or “legacy”) single-broker messaging systems may need a migration path to a scalable solution that does not cause significant downtime and that can be reconfigured during runtime.
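As a quick worked illustration of that quadratic growth (the broker counts below are arbitrary):

```java
public class BrokerNetworkPairs {
    public static void main(String[] args) {
        // A full broker network needs n*(n-1)/2 point-to-point links,
        // so the coordination overhead grows quadratically with n.
        for (int n : new int[] {2, 4, 8, 16, 32}) {
            System.out.printf("%2d brokers -> %3d coordination links%n",
                    n, n * (n - 1) / 2);
        }
    }
}
```

With 32 brokers, for example, 496 pairwise links must be coordinated, which is why such overhead quickly dominates.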
It would therefore be desirable to provide collaborative message broker scaling and migration in a secure, automatic, and efficient manner.
According to some embodiments, methods and systems may provide load balancing for a messaging system in a multi-tenant High Availability (“HA”) computing environment. A client node may execute an application that communicates with a first messaging service component of a first broker node in a server segment and a second messaging service component of a second broker node in the server segment. A load balancing component is coupled to the client node, and a first virtual provider entity for the first messaging service component is coupled to the load balancing component. The first virtual provider entity may represent a first HA message broker pair, including: (i) a first leader message broker entity, and (ii) a first follower message broker entity to take control when there is a problem with the first leader message broker entity. A shared database is accessible by the first broker node, the first HA message broker pair, and the second broker node, and includes an administration registry data store.
Some embodiments comprise: means for executing, by a client node in an application segment, an application that communicates with a first messaging service component of a first broker node in a server segment and a second messaging service component of a second broker node in the server segment; means for providing a load balancing component in the server segment and coupled to the client node; means for representing, by a first virtual provider entity for the first messaging service component, a first HA message broker pair, the first pair including: (i) a first leader message broker entity, and (ii) a first follower message broker entity to take control when there is a problem with the first leader message broker entity; and means for accessing, by the first broker node, the first HA message broker pair, and the second broker node, a shared database, the shared database including an administration registry data store.
Some technical advantages of some embodiments disclosed herein are improved systems and methods to provide collaborative message broker scaling and migration in a secure, automatic, and efficient manner.
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of embodiments. However, it will be understood by those of ordinary skill in the art that the embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the embodiments.
One or more specific embodiments of the present invention will be described below. In an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
As used herein, devices, including those associated with the system 100 and any other device described herein, may exchange information via any communication network which may be one or more of a Local Area Network (“LAN”), a Metropolitan Area Network (“MAN”), a Wide Area Network (“WAN”), a proprietary network, a Public Switched Telephone Network (“PSTN”), a Wireless Application Protocol (“WAP”) network, a Bluetooth network, a wireless LAN network, and/or an Internet Protocol (“IP”) network such as the Internet, an intranet, or an extranet. Note that any devices described herein may communicate via one or more such communication networks.
The elements associated with the system 100, and any other device described herein, may store information into and/or retrieve information from various data stores (e.g., a data storage device such as the shared database 190), which may be locally stored or reside remote from other elements. Although a single messaging client 102 and load balancing component 104 are shown in
An operator or administrator may access the system 100 via a remote device (e.g., a Personal Computer (“PC”), tablet, or smartphone) to view information about and/or manage operational information in accordance with any of the embodiments described herein. In some cases, an interactive graphical user interface display may let an operator or administrator define and/or adjust certain parameters (e.g., to define broker rules or mapping) and/or provide or receive automatically generated recommendations, alerts, and/or results from the system 100.
At S210, a client node in an application segment may execute an application that communicates with a first messaging service component of a first broker node in a server segment and a second messaging service component of a second broker node in the server segment. The client node may be associated with a multi-tenant HA computing environment, such as a cloud computing environment, an on-premises computing environment, an integration system, a workflow application, a mobile end point, etc. Moreover, the multi-tenant HA computing environment may be associated with multiple clusters, multiple groups, multiple customers, etc.
At S220, the system may provide a load balancing component in a server segment and coupled to the client node. At S230, a first virtual provider entity for the first messaging service component may represent a first HA message broker pair, including: (i) a first leader message broker entity, and (ii) a first follower message broker entity to take control when there is a problem with the first leader message broker entity. At S240, the first broker node, the first HA message broker pair, and the second broker node may access a shared database, including an administration registry data store.
Thus, embodiments may provide scalability via a base system definition with a multi-tenant HA broker pair (virtual provider→physical broker decoupling). For example,
As used herein, the phrase “messaging administration service” may refer to a central component for monitoring and operations of a messaging system. It may take care of integration into an environment like cloud infrastructure (e.g., metering), monitoring, and alerting. Each data center and/or landscape may have an HA message broker pair with a leader and a follower. For example, a load balancing component 304 may be coupled to the client node 302, and a first virtual provider entity 350 for the first messaging service component is coupled to the load balancing component 304. As used herein, the phrase “virtual provider” may refer to a logical representation of a physically available message broker that supports multi-tenant style configurations. Each virtual provider may be highly isolated, each with its own persistence, security, and runtime constraints configuration. On the level of a virtual provider, embodiments may allow for secure protocols, for tuning how message delivery works, and/or for deleting the complete persistence state. Note that at (A) the first virtual provider entity 350 may represent a first HA message broker pair, including: (i) a first leader message broker entity, and (ii) a first follower message broker entity to take control when there is a problem with the first leader message broker entity. In this way, message consumers and producers may use a messaging client library to register with one of the brokers and fail over to the other broker instance (in case of failure of the current leader).
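For example, with ActiveMQ (the broker referenced elsewhere in this description), a client library can register against the HA pair using the failover transport so that it transparently reconnects to the follower if the leader fails. A minimal sketch, with illustrative host names:

```java
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class HaPairClient {
    public static void main(String[] args) throws Exception {
        // The failover transport lists both brokers of the HA pair; if the
        // current leader becomes unreachable, the client library transparently
        // reconnects to the other instance. Host names are illustrative.
        String url = "failover:(tcp://broker-leader:61616,tcp://broker-follower:61616)"
                + "?randomize=false";
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(url);
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers, and consumers as usual ...
        connection.close();
    }
}
```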
The messaging client 302 may have access to application-specific communication mechanisms, such as a node manager similar to a control instance (like container composition) of a number of Worker Nodes (“WNs”) that do the actual processing and run the application and that can be seen as separate compute resources like containers. According to some embodiments, a WN can only communicate with other WNs of the same cluster via a special multi-tenant node manager, called a Tenant Node Cluster (“TNC”).
During subscription, the client may request a handle to the virtual provider 350, or create one if none exists, at (B) and (C) via a messaging administration service. Note that the administration service may communicate with broker management via an administration registry. The virtual provider 350 contains the concrete broker endpoint (pair) information. Additionally, messaging broker resources may be created via a Representational State Transfer (“REST”) Application Programming Interface (“API”) (e.g., authorization objects). A shared database 390 is accessible by the first broker node 310, the first HA message broker pair, and the second broker node 320, and includes an administration registry data store 392 and a broker management data store 394. Administration information, such as virtual provider 350 to physical broker assignments, may be stored in the shared database 390 and can be accessed by both paired brokers (via limited means of communication and/or coordination). An authentication or identity provider (e.g., Java Authentication and Authorization Service (“JAAS”)) user group cache between ActiveMQ and broker management denotes an abstraction that deals with user management and/or privileges.
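The get-or-create interaction at (B) and (C) might resemble the following sketch using Java's built-in HTTP client; the endpoint path, host name, and JSON payload are assumptions for illustration and not the actual administration service API:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class VirtualProviderHandle {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical administration service endpoint; the real path and
        // payload shape are not specified in the text.
        URI uri = URI.create("https://admin.example.com/api/virtualProviders/tenant-42");

        // Request a handle to the virtual provider.
        HttpRequest get = HttpRequest.newBuilder(uri).GET().build();
        HttpResponse<String> response = client.send(get, HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() == 404) {
            // No virtual provider yet: create one. The response would contain
            // the concrete broker endpoint (pair) information.
            HttpRequest create = HttpRequest.newBuilder(uri)
                    .header("Content-Type", "application/json")
                    .PUT(HttpRequest.BodyPublishers.ofString("{\"group\": \"tenant-42\"}"))
                    .build();
            response = client.send(create, HttpResponse.BodyHandlers.ofString());
        }
        System.out.println("Virtual provider: " + response.body());
    }
}
```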
In this way, a base system 300 may provide a foundation for a Collaborative Scaling and Load Balancing (“CSLB”) solution. As shown in
CSLB offers several options on how to balance the load over additional HA broker pairs. Briefly, the load can be balanced in a distributed manner (i.e., by the clients) or centrally (i.e., by the server).
One variant of CSLB is Client-CSLB (“C-CSLB”), where the client node decides broker selection (e.g., by configuration or a decision based on the current load situation). The C-CSLB variant allows for fine-granular load balancing, letting the client decide which broker it wants to use. Therefore, additional information from the broker or the platform might be required. Rules would have to be enforced such that tenants within one cluster do not use different brokers. Furthermore, changes to the re-implemented control logic may result in client updates.
Another variant of CSLB is Server-CSLB (“S-CSLB”), where the server node decides assignments. The S-CSLB variant allows for central load balancing with a single point of configuration, which may require less development effort as compared to other options. However, a differentiation on the account level may be required (which can ultimately mean substantially higher operations costs for the additional provider account maintenance).
Still another variant of CSLB is Hybrid-CSLB (“H-CSLB”), where the client and server collaborate to find a suitable assignment. The H-CSLB variant allows for fine-granular load balancing using decision selection information, such as the messaging system's group information, that is injected via the system's cluster system settings on the client level. Existing and/or legacy systems may have to follow the operations rule that new clusters must get a new service group setting (“pinning existing”: none of the existing clusters shall get the new setting; however, the setting must be persistent even after cluster updates). This approach requires no further operational effort and no client update, because the mechanism is already in place. However, the group assignment must be set automatically as a system property on the cluster controller level (to avoid manual configuration effort and misconfiguration).
A characteristic of H-CSLB may comprise a provider selection decision based on client input in the form of a messaging system's “group” setting (hence “hybrid”). Thereby, the client gets a group setting from the system properties on a cluster level. Consequently, clients use a unique identifier (e.g., a Globally Unique Identifier (“GUID”) or account and cluster) as group information. The base mechanism may be extended by adding S410 as illustrated in
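The text does not prescribe how the administration service maps a group to an HA pair; one plausible strategy, shown as a sketch below, is a deterministic hash so that all tenants of one cluster resolve to the same pair:

```java
import java.util.List;

public class GroupAssignment {
    // Deterministically map a group identifier (e.g., a GUID or an
    // account+cluster string) to one of the available HA broker pairs.
    // Hashing is an illustrative choice; the text only requires that all
    // tenants of one cluster resolve to the same pair.
    static String selectPair(String group, List<String> haPairs) {
        int index = Math.floorMod(group.hashCode(), haPairs.size());
        return haPairs.get(index);
    }

    public static void main(String[] args) {
        List<String> pairs = List.of("ha-pair-1", "ha-pair-2", "ha-pair-3");
        System.out.println(selectPair("account-7/cluster-3", pairs));
    }
}
```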
At S710, an on-boarding procedure may involve an on-boarding script that adds the system property “messaging.admin.client.group”. The value may be set to a unique identifier on the cluster level, such as a provider account identifier and cluster identifier (or perhaps a GUID, if necessary). A cluster controller may have zero to many “tenant node controllers” (each specific to a tenant/customer→multi-tenancy), and a tenant node controller may have zero to many worker nodes that do the actual processing (e.g., application logic). The cluster controller may help manage the tenants, and the tenant node controller may help manage the workers of that tenant. In a tenant node controller layer, the system may propagate the property to all nodes of the cluster controller, e.g., through white-listing. At S720, in an update/upgrade procedure, the system may determine the group setting from the virtual provider within the service client library. This may have the disadvantage of requiring a message broker client update. The logic in the service client is therefore extended to check whether there is a non-null and non-empty group property and, if so, set it as a system property. At S730, an off-boarding procedure may delete the virtual provider of the off-boarded node using the message broker API, which should delete all dependent service objects. Note that the migration of tenants from one cluster to another is an undocumented operations service that is used frequently.
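The extended service-client check described for S720 might look like the following sketch (the property name is taken from the text; the surrounding class and method are illustrative):

```java
public class GroupPropertyCheck {
    static final String GROUP_PROPERTY = "messaging.admin.client.group";

    // Determine the group from the virtual provider and, if it is
    // non-null and non-empty, publish it as a system property so all
    // nodes of the cluster see a consistent group assignment.
    static void applyGroupSetting(String groupFromVirtualProvider) {
        if (groupFromVirtualProvider != null && !groupFromVirtualProvider.isEmpty()) {
            System.setProperty(GROUP_PROPERTY, groupFromVirtualProvider);
        }
    }

    public static void main(String[] args) {
        applyGroupSetting("account-7/cluster-3");
        System.out.println(System.getProperty(GROUP_PROPERTY));
    }
}
```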
Therefore, the cluster should set a group system property during on-boarding at S710, the group assignment has to be ensured during update/upgrade at S720, and the message broker resources have to be freed during off-boarding at S730. A migration of a tenant from one cluster to another will then result in the creation of a new virtual provider and a re-assignment to a new group. This leaves a consistent group assignment within one cluster. The manual effort of this procedure should be less time consuming than the continuous manual task required for the S-CSLB variant.
Note that H-CSLB scales HA broker pairs well but is still limited by coordination through a single shared database (e.g., a bottleneck). To avoid this, some embodiments may separate the administration and broker management databases. This removes the shared database as a single point of failure in a multi-HA-pair setup with client-side load balancing.
According to some embodiments, a migration process may be performed. For example, for the migration of existing and/or legacy HA cluster pairs to scalable HA cluster pairs, a “safe” migration is considered one that does not migrate currently productively used and/or running clusters. Productive but idle clusters are referred to as “safe” clusters.
Thus, embodiments may provide improved broker networks and avoid problems associated with a gossiping protocol approach (which still requires coordination between the message brokers, consumes network bandwidth, only supports eventual consistency, and has high latency; that is, broker selection takes a substantial amount of time because messages are exchanged randomly between neighbors).
Note that the embodiments described herein may be implemented using any number of different hardware configurations. For example,
The processor 1510 also communicates with a storage device 1530. The storage device 1530 may comprise any appropriate information storage device, including combinations of magnetic storage devices (e.g., a hard disk drive), optical storage devices, mobile telephones, and/or semiconductor memory devices. The storage device 1530 stores a program 1512 and/or a message broker engine 1514 for controlling the processor 1510. The processor 1510 performs instructions of the programs 1512, 1514, and thereby operates in accordance with any of the embodiments described herein. For example, the processor 1510 might provide load balancing for an HA messaging system in a multi-tenant HA computing environment.
The programs 1512, 1514 may be stored in a compressed, uncompiled, and/or encrypted format. The programs 1512, 1514 may furthermore include other program elements, such as an operating system, a clipboard application, a database management system, and/or device drivers used by the processor 1510 to interface with peripheral devices.
As used herein, information may be “received” by or “transmitted” to, for example: (i) the platform 1500 from another device; or (ii) a software application or module within the platform 1500 from another software application, module, or any other source.
In some embodiments (such as the one shown in
Referring to
The message broker identifier 1602 might be a unique alphanumeric label that is associated with a message broker of a collaborative scaling and migration system. The type 1604 might indicate that the broker is a follower or a leader. The virtual provider identifier 1606 might comprise a logical representation of a physically available message broker to support multi-tenant configuration. The client node identifiers 1608 might be associated with a messaging client in an application segment that communicates with a messaging service component (e.g., ActiveMQ or other message system). The status 1610 might indicate that the message broker is active, following, experiencing a problem, etc.
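For illustration, one row of this table could be modeled as a simple record; the field names follow the text, while the types are assumptions:

```java
import java.util.List;

public class BrokerTable {
    // Illustrative model of one row of the message broker data table;
    // the text names the fields but not their types.
    record MessageBrokerEntry(
            String messageBrokerId,   // unique alphanumeric label
            String type,              // "LEADER" or "FOLLOWER"
            String virtualProviderId, // logical broker representation
            List<String> clientNodeIds,
            String status) {}         // e.g., "ACTIVE", "FOLLOWING"

    public static void main(String[] args) {
        var entry = new MessageBrokerEntry(
                "MB_101", "LEADER", "VP_10001", List.of("CN_01", "CN_02"), "ACTIVE");
        System.out.println(entry);
    }
}
```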
Thus, embodiments may provide a messaging service for monitoring and coordination between the nodes of a broker pair, along with a novel collaborative scaling and/or load balancing mechanism between the messaging administration service and the clients (e.g., H-CSLB). A mechanism may apply H-CSLB to existing and/or legacy systems, new HA broker pairs, and off-boarding (providing elastic scaling). Embodiments may also support a novel shared and local data store architecture to reduce database bottlenecks. A migration procedure and program for “safe” automated migration and manual “unsafe” migration or troubleshooting may also be provided according to some embodiments, along with configurable broker monitoring and alerting.
The following illustrates various additional embodiments of the invention. These do not constitute a definition of all possible embodiments, and those skilled in the art will understand that the present invention is applicable to many other embodiments. Further, although the following embodiments are briefly described for clarity, those skilled in the art will understand how to make any changes, if necessary, to the above-described apparatus and methods to accommodate these and other embodiments and applications.
Although specific hardware and data configurations have been described herein, note that any number of other configurations may be provided in accordance with some embodiments of the present invention (e.g., some of the information associated with the databases described herein may be combined or stored in external systems). Moreover, although some embodiments are focused on particular types of application integrations and microservices, any of the embodiments described herein could be applied to other types of applications. Moreover, the displays shown herein are provided only as examples, and any other type of user interface could be implemented. For example,
The present invention has been described in terms of several embodiments solely for the purpose of illustration. Persons skilled in the art will recognize from this description that the invention is not limited to the embodiments described, but may be practiced with modifications and alterations limited only by the spirit and scope of the appended claims.