The technology disclosed relates to failure recovery in cloud-based services. In particular, the technology disclosed relates to a service instance BA that identifies a service instance BB as having a secondary role for packets carrying a stream affinity code which is specified in a service map distributed to service instances. Service instance BA state information is synchronized with the service instance BB after processing a first packet. After failure of the service instance BA, a service instance AA receives an updated service map and prepares to forward to the service instance BA a second packet. The second packet includes the same stream affinity code as the first packet forwarded before the failure. The updated service map is used to determine that the service instance BB is available and servicing the same stream affinity code as the service instance BA. The second packet is forwarded to the service instance BB.
1. A computer-implemented method of recovery from failure of a service instance, in a service chain of services that perform at least services A and B, using service instance AA and service instances BA and BB to perform the services A and B, respectively, and performing actions including:
the service instance BA identifying the service instance BB as having a secondary role for packets carrying a stream affinity code which is specified in a service map distributed to service instances, and synchronizing service instance BA state information with the service instance BB after processing a first packet;
after failure of the service instance BA, the service instance AA receiving an updated service map, and preparing to forward to the service instance BA a second packet, wherein the second packet includes a same stream affinity code as the first packet forwarded before the failure;
determining from the updated service map that the service instance BB is available and servicing the same stream affinity code as the service instance BA; and
forwarding the second packet to the service instance BB instead of the service instance BA.
10. A tangible non-transitory computer readable storage media, including program instructions loaded into memory that, when executed on processors, cause the processors to implement a computer-implemented method of recovery from failure of a service instance, in a service chain of services that perform at least services A and B, using service instance AA and service instances BA and BB to perform the services A and B, respectively, the computer-implemented method performing actions including:
the service instance BA identifying the service instance BB as having a secondary role for packets carrying a stream affinity code which is specified in a service map distributed to service instances, and synchronizing service instance BA state information with the service instance BB after processing a first packet;
after failure of the service instance BA, the service instance AA receiving an updated service map, and preparing to forward to the service instance BA a second packet, wherein the second packet includes a same stream affinity code as the first packet forwarded before the failure;
determining from the updated service map that the service instance BB is available and servicing the same stream affinity code as the service instance BA; and
forwarding the second packet to the service instance BB instead of the service instance BA.
2. The computer-implemented method of
3. The computer-implemented method of
4. The computer-implemented method of
5. The computer-implemented method of
6. The computer-implemented method of
7. The computer-implemented method of
monitoring the service instance BA for packet processing activity; and
causing updating of the service map for the service B to remove the service instance BA from availability, should it be inactive for a configurable predetermined amount of time.
8. The computer-implemented method of
processing the second packet and based on the processing:
identifying a next service, among at least two additional services to which a subscriber has subscribed, that should next handle the second packet; and
routing the processed second packet to the identified next service upon egress from the service instance BB.
9. The computer-implemented method of
11. The tangible non-transitory computer readable storage media of
12. The tangible non-transitory computer readable storage media of
13. The tangible non-transitory computer readable storage media of
14. The tangible non-transitory computer readable storage media of
15. The tangible non-transitory computer readable storage media of
processing the second packet and based on the processing:
identifying a next service, among at least two additional services to which the subscriber has subscribed, that should next handle the packet; and
routing the processed second packet to the identified next service upon egress from the service instance BB.
16. A system for improved recovery from failure of a service instance, in a service chain of services that perform at least services A and B, using service instance AA and service instances BA and BB to perform the services A and B, respectively, the system including a processor, memory coupled to the processor, and computer instructions from the non-transitory computer readable storage media of
17. The system of
18. The system of
19. The system of
monitoring the service instance BA, for packet processing activity; and
causing updating of the service map for the service B to remove the service instance BA from availability should it be inactive for a configurable predetermined amount of time.
20. The system of
This application is a continuation of U.S. patent application Ser. No. 16/807,132, entitled “Recovery From Failure in a Dynamic Scalable Services Mesh,” filed on 2 Mar. 2020, which claims the benefit of U.S. Provisional Patent Application No. 62/812,791, entitled “Recovery From Failure in a Dynamic Scalable Services Mesh,” filed on 1 Mar. 2019 and the benefit of U.S. Provisional Patent Application No. 62/812,760 entitled “Load Balancing in a Dynamic Scalable Services Mesh,” filed 1 Mar. 2019. The provisional and non-provisional applications are incorporated by reference for all purposes.
The following materials are incorporated by reference in this filing:
U.S. Non Provisional Patent Application 62/812,760 entitled “Load Balancing in a Dynamic Scalable Services Mesh,” by Ravi Ithal and Umesh Muniyappa, filed 2 Mar. 2020.
U.S. Non Provisional application Ser. No. 14/198,508, entitled “SECURITY FOR NETWORK DELIVERED SERVICES”, filed on Mar. 5, 2014 (now U.S. Pat. No. 9,270,765, issued Feb. 23, 2016),
U.S. Non Provisional application Ser. No. 14/198,499, entitled “SECURITY FOR NETWORK DELIVERED SERVICES”, filed Mar. 5, 2014 (now U.S. Pat. No. 9,398,102, issued on Jul. 19, 2016),
U.S. Non Provisional application Ser. No. 14/835,640, entitled “SYSTEMS AND METHODS OF MONITORING AND CONTROLLING ENTERPRISE INFORMATION STORED ON A CLOUD COMPUTING SERVICE (CCS)”, filed on Aug. 25, 2015 (now U.S. Pat. No. 9,928,377, issued on Mar. 27, 2018),
U.S. Non Provisional application Ser. No. 15/368,246, entitled “MIDDLE WARE SECURITY LAYER FOR CLOUD COMPUTING SERVICES”, filed on Dec. 2, 2016, which claims the benefit of U.S. Provisional Application No. 62/307,305, entitled “SYSTEMS AND METHODS OF ENFORCING MULTI-PART POLICIES ON DATA-DEFICIENT TRANSACTIONS OF CLOUD COMPUTING SERVICES”, filed on Mar. 11, 2016,
“Cloud Security for Dummies, Netskope Special Edition” by Cheng, Ithal, Narayanaswamy, and Malmskog, John Wiley & Sons, Inc. 2015,
“Netskope Introspection” by Netskope, Inc.,
“Data Loss Prevention and Monitoring in the Cloud” by Netskope, Inc.,
“Cloud Data Loss Prevention Reference Architecture” by Netskope, Inc.,
“The 5 Steps to Cloud Confidence” by Netskope, Inc.,
“The Netskope Active Platform” by Netskope, Inc.,
“The Netskope Advantage: Three “Must-Have” Requirements for Cloud Access Security Brokers” by Netskope, Inc.,
“The 15 Critical CASB Use Cases” by Netskope, Inc.,
“Netskope Active Cloud DLP” by Netskope, Inc.,
“Repave the Cloud-Data Breach Collision Course” by Netskope, Inc.; and
“Netskope Cloud Confidence Index™” by Netskope, Inc.
which are incorporated by reference for all purposes as if fully set forth herein.
The technology disclosed relates generally to security for network delivered services, and in particular relates to improved recovery from failure and load balancing in a dynamic service chain with a cloud access security broker (CASB), for reducing latency and increasing availability and scalability in flexibly configurable data paths through service chains while applying security services in the cloud.
The subject matter discussed in this section should not be assumed to be prior art merely as a result of its mention in this section. Similarly, a problem mentioned in this section or associated with the subject matter provided as background should not be assumed to have been previously recognized in the prior art. The subject matter in this section merely represents different approaches, which in and of themselves can also correspond to implementations of the claimed technology.
The use of cloud services for corporate functionality is common. According to International Data Corporation, almost half of all information technology (IT) spending will be cloud-based in 2018, “reaching 60% of all IT infrastructure and 60-70% of all software, services and technology spending by 2020.” For example, enterprise companies often utilize software as a service (SaaS) solutions instead of installing servers within the corporate network to deliver services.
Data is the lifeblood of many businesses and must be effectively managed and protected. With the increased adoption of cloud services, companies of all sizes are relying on the cloud to create, edit and store data. This presents new challenges as users access cloud services from multiple devices and share data, including with people outside of an organization. It is easy for data to get out of an organization's control.
Customers want to be able to securely send all of their data between customer branches and data centers. All data includes peer-to-peer file sharing (P2P) via protocols for portal traffic such as BitTorrent (BT), User Datagram Protocol (UDP) streaming and file transfer protocol (FTP); voice, video and messaging multimedia communication sessions such as instant message over Internet Protocol (IP) and mobile phone calling over LTE (VoLTE) via the Session Initiation Protocol (SIP) and Skype; Internet traffic, cloud application data, and generic routing encapsulation (GRE) data. As an example of the size of the P2P file sharing segment of data that needs to be handled securely, BitTorrent, one common protocol for transferring large files such as digital video files containing TV shows or video clips or digital audio files containing songs, had 15-27 million concurrent users at any time and was utilized by 150 million active users as of 2013. Based on these figures, the total number of monthly BitTorrent users was estimated at more than a quarter of a billion, with BitTorrent responsible for 3.35% of worldwide bandwidth, more than half of the 6% of total bandwidth dedicated to file sharing.
As the number of data sources increases, there are hundreds of ways data can be compromised. Employees might send a wrong file, not be careful when rushing to a deadline, or share data and collaborate with people outside of their organization. The native cloud storage sync clients also pose a significant risk to organizations. A continuous sync takes place between the end point and the cloud service without employees realizing they may be leaking confidential company information. In one use case, companies may want to allow employees and contractors to make voice calls and participate in video conferences, while not enabling them to transfer files over LTE via SIP and Skype. In another example, an enterprise may want to enable their users to view videos and not be able to upload or download video content files.
Accordingly, it is vital to facilitate the use of cloud services so people can continue to be productive and use the best tools for the job without compromising sensitive information such as intellectual property, non-public financials, strategic plans, customer lists, personally identifiable information belonging to customers or employees, and the like.
An opportunity arises to apply security services to all customer traffic while reducing latency and increasing availability and scalability in flexibly configurable data paths in a services mesh through service chains, expanding beyond cloud apps and web traffic firewalls to securely process P2P traffic over BT, FTP and UDP-based streaming protocols as well as Skype, voice, video and messaging multimedia communication sessions over SIP and web traffic over other protocols.
In the drawings, like reference characters generally refer to like parts throughout the different views. Also, the drawings are not necessarily to scale, with an emphasis instead generally being placed upon illustrating the principles of the technology disclosed. In the following description, various implementations of the technology disclosed are described with reference to the following drawings.
The following detailed description is made with reference to the figures. Sample implementations are described to illustrate the technology disclosed, not to limit its scope, which is defined by the claims. Those of ordinary skill in the art will recognize a variety of equivalent variations on the description that follows.
Existing approaches for applying security services to customer traffic include a security device point of presence (PoP) in the path of data flow between customer branches of organization networks and data centers accessed in the cloud via the Internet.
Organizations want to utilize a single security service that can apply security services to all customer traffic, expanding beyond cloud apps and web traffic firewalls to securely process P2P traffic over BT, FTP and UDP-based streaming protocols as well as Skype, voice, video and messaging multimedia communication sessions over SIP, and web traffic over other protocols. In one example, the security service needs to allow employees and contractors at an organization to make calls, but not transfer files, a policy that the service can enforce by encoding a SIP control channel and data channel. The enforcement of this policy necessitates more than a SIP proxy to enable the ability to anticipate where the data is getting transferred, and the ability to either avoid or block that channel, based on information in the channel. A streaming agent sending traffic looks at the port only, so needs to know all available ports before sending. If handling all protocols, the security service can catch web traffic over non-standard ports, but it is hard to gather the traffic. An existing workaround for securing files from being transferred is to block access to ports, but security services want to load everything, safely—not block ports. P2P data packets try standard ports first, and then often fall back, hopping from port to port, which also limits the usefulness of blocking a port, because the P2P data service can hop to a different port.
Security administrators can install security service devices in each of the customer branches of organization networks, in data centers and headquarters, to create a management network for applying security policies, so that all traffic goes through security devices. On-premise security administrators would then be responsible for managing deployment to ensure high availability of devices with failover management, managing software life cycles with patches, and administering upgrades to respond to hardware life cycles. Issues for this hands-on approach to security include scaling when company size changes and load balancing for ensuring adequate service availability as data loads vary.
The disclosed technology for load balancing in a dynamic service chain offers a security services platform that scales horizontally and uniformly to administer customized security services and policies for organizations and avoids single points of failure. Security services customers using the disclosed technology are able to specify which security services apply for different types of tenant data, and to customize security policies for the data being transmitted via the devices of their organizations. Tenant configurations can be documented using a service chain to specify the path for the flow of data packets that are to be sequentially routed to multiple security services for the specific tenant. The tenant configuration specifies the ordered sequence of services in a service chain for the customer. The subsequent dynamic steering of traffic flows of data packets through the ordered set of services needs to be fast to provide acceptable security services for tenants. Also, new third party services can be deployed using the security services platform, without affecting existing flows of packets. Additional disclosed technology for improved recovery from failure of a service instance in a service chain identifies primary and secondary roles for service instances and synchronizes state information when processing packets, to improve recovery from failure of service instances. An example system for load balancing of a dynamic service chain is described next.
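Before turning to the architecture, the sketch below illustrates one way such a per-tenant ordered service chain could be recorded and walked to find the next service for a packet. The tenant names, the simple dictionary layout, and the helper function are illustrative assumptions, not the disclosed configuration format; only the service identifiers (ipsec, appfwl, ips) echo the examples later in this description.

# Hypothetical sketch of per-tenant service-chain configuration.
# Tenant names and the data layout are illustrative only.
TENANT_SERVICE_CHAINS = {
    "tenant-a": ["ipsec", "appfwl", "ips"],
    "tenant-b": ["ipsec", "appfwl"],
}

def next_service(tenant, current_service):
    """Return the service that should handle a packet after current_service."""
    chain = TENANT_SERVICE_CHAINS[tenant]
    idx = chain.index(current_service)
    return chain[idx + 1] if idx + 1 < len(chain) else None

print(next_service("tenant-a", "appfwl"))  # -> "ips"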
Architecture
System 100 includes organization network 102, data center 152 with Netskope cloud access security broker (N-CASB) 155 and cloud-based services 108. System 100 includes multiple organization networks 104 for multiple subscribers, also referred to as multi-tenant networks, of a security services provider and multiple data centers 154, which are sometimes referred to as branches. Organization network 102 includes computers 112a-n, tablets 122a-n, cell phones 132a-n and smart watches 142a-n. In another organization network, organization users may utilize additional devices. Cloud services 108 includes cloud-based hosting services 118, web email services 128, video, messaging and voice call services 138, streaming services 148, file transfer services 158, and cloud-based storage service 168. Data center 152 connects to organization network 102 and cloud-based services 108 via public network 145.
Continuing with the description of
Continuing further with the description of
Embodiments can also interoperate with single sign-on (SSO) solutions and/or corporate identity directories, e.g. Microsoft's Active Directory. Such embodiments may allow policies to be defined in the directory, e.g. either at the group or user level, using custom attributes. Hosted services configured with the system are also configured to require traffic via the system. This can be done through setting IP range restrictions in the hosted service to the IP range of the system and/or integration between the system and SSO systems. For example, integration with a SSO solution can enforce client presence requirements before authorizing the sign-on. Other embodiments may use “proxy accounts” with the SaaS vendor—e.g. a dedicated account held by the system that holds the only credentials to sign in to the service. In other embodiments, the client may encrypt the sign on credentials before passing the login to the hosted service, meaning that the networking security system “owns” the password.
Storage 186 can store information from one or more tenants into tables of a common database image to form an on-demand database service (ODDS), which can be implemented in many ways, such as a multi-tenant database system (MTDS). A database image can include one or more database objects. In other implementations, the databases can be relational database management systems (RDBMSs), object oriented database management systems (OODBMSs), distributed file systems (DFS), no-schema databases, or any other data storing systems or computing devices. In some implementations, the gathered metadata is processed and/or normalized. In some instances, metadata includes structured data and functionality targets specific data constructs provided by cloud services 108. Non-structured data, such as free text, can also be provided by, and targeted back to, cloud services 108. Both structured and non-structured data are capable of being aggregated by introspective analyzer 175. For instance, the assembled metadata is stored in a semi-structured data format like a JSON (JavaScript Object Notation), BSON (Binary JSON), XML, Protobuf, Avro or Thrift object, which consists of string fields (or columns) and corresponding values of potentially different types like numbers, strings, arrays, objects, etc. JSON objects can be nested and the fields can be multi-valued, e.g., arrays, nested arrays, etc., in other implementations. These JSON objects are stored in a schema-less or NoSQL key-value metadata store 148 like Apache Cassandra™ 158, Google's BigTable™, HBase™, Voldemort™, CouchDB™, MongoDB™, Redis™, Riak™, Neo4j™, etc., which stores the parsed JSON objects using keyspaces that are equivalent to a database in SQL. Each keyspace is divided into column families that are similar to tables and comprise rows and sets of columns.
In one implementation, introspective analyzer 175 includes a metadata parser (omitted to improve clarity) that analyzes incoming metadata and identifies keywords, events, user IDs, locations, demographics, file type, timestamps, and so forth within the data received. Parsing is the process of breaking up and analyzing a stream of text into keywords, or other meaningful elements called “targetable parameters”. In one implementation, a list of targeting parameters becomes input for further processing such as parsing or text mining, for instance, by a matching engine (not shown). Parsing extracts meaning from available metadata. In one implementation, tokenization operates as a first step of parsing to identify granular elements (e.g., tokens) within a stream of metadata, but parsing then goes on to use the context that the token is found in to determine the meaning and/or the kind of information being referenced. Because metadata analyzed by introspective analyzer 175 are not homogenous (e.g., there are many different sources in many different formats), certain implementations employ at least one metadata parser per cloud service, and in some cases more than one. In other implementations, introspective analyzer 175 uses monitor 184 to inspect the cloud services and assemble content metadata. In one use case, the identification of sensitive documents is based on prior inspection of the document. Users can manually tag documents as sensitive, and this manual tagging updates the document metadata in the cloud services. It is then possible to retrieve the document metadata from the cloud service using exposed APIs and use them as an indicator of sensitivity.
Continuing further with the description of
In the interconnection of the elements of system 100, network 145 couples computers 112a-n, tablets 122a-n, cell phones 132a-n, smart watches 142a-n, cloud-based hosting service 118, web email services 128, video, messaging and voice call services 138, streaming services 148, file transfer services 158, cloud-based storage service 168 and N-CASB 155 in communication. The communication path can be point-to-point over public and/or private networks. Communication can occur over a variety of networks, e.g. private networks, VPN, MPLS circuit, or Internet, and can use appropriate application program interfaces (APIs) and data interchange formats, e.g. REST, JSON, XML, SOAP and/or JMS. All of the communications can be encrypted. This communication is generally over a network such as the LAN (local area network), WAN (wide area network), telephone network (Public Switched Telephone Network (PSTN), Session Initiation Protocol (SIP), wireless network, point-to-point network, star network, token ring network, hub network, Internet, inclusive of the mobile Internet, via protocols such as EDGE, 3G, 4G LTE, Wi-Fi, and WiMAX. Additionally, a variety of authorization and authentication techniques, such as username/password, OAuth, Kerberos, SecureID, digital certificates, and more, can be used to secure the communications.
Further continuing with the description of the system architecture in
N-CASB 155 provides a variety of functions via a management plane 174 and a data plane 180. Data plane 180 includes an extraction engine 171, a classification engine 172, and a security engine 173, according to one implementation. Other functionalities, such as a control plane, can also be provided. These functions collectively provide a secure interface between cloud services 108 and organization network 102. Although we use the term “network security system” to describe N-CASB 155, more generally the system provides application visibility and control functions as well as security. In one example, thirty-five thousand cloud applications are resident in libraries that intersect with servers in use by computers 112a-n, tablets 122a-n, cell phones 132a-n and smart watches 142a-n in organization network 102.
Computers 112a-n, tablets 122a-n, cell phones 132a-n and smart watches 142a-n in organization network 102 include management clients with a web browser with a secure web-delivered interface provided by N-CASB 155 to define and administer content policies 187, according to one implementation. N-CASB 155 is a multi-tenant system, so a user of a management client can only change content policies 187 associated with their organization, according to some implementations. In some implementations, APIs can be provided for programmatically defining and or updating policies. In such implementations, management clients can include one or more servers, e.g. a corporate identities directory such as a Microsoft Active Directory, pushing updates, and/or responding to pull requests for updates to the content policies 187. Both systems can coexist; for example, some companies may use a corporate identities directory to automate identification of users within the organization while using a web interface for tailoring policies to their needs. Management clients are assigned roles and access to the N-CASB 155 data is controlled based on roles, e.g. read-only vs. read-write.
In addition to periodically generating the user-by-user data and the file-by-file data and persisting it in metadata store 178, an active analyzer and introspective analyzer (not shown) also enforce security policies on the cloud traffic. For further information regarding the functionality of active analyzer and introspective analyzer, reference can be made to, for example, commonly owned U.S. Pat. No. 9,398,102 (NSKO 1000-2); U.S. Pat. No. 9,270,765 (NSKO 1000-3); U.S. Pat. No. 9,928,377 (NSKO 1001-2); and U.S. patent application Ser. No. 15/368,246 (NSKO 1003-3); Cheng, Ithal, Narayanaswamy and Malmskog, Cloud Security For Dummies, Netskope Special Edition, John Wiley & Sons, Inc. 2015; “Netskope Introspection” by Netskope, Inc.; “Data Loss Prevention and Monitoring in the Cloud” by Netskope, Inc.; “Cloud Data Loss Prevention Reference Architecture” by Netskope, Inc.; “The 5 Steps to Cloud Confidence” by Netskope, Inc.; “The Netskope Active Platform” by Netskope, Inc.; “The Netskope Advantage: Three “Must-Have” Requirements for Cloud Access Security Brokers” by Netskope, Inc.; “The 15 Critical CASB Use Cases” by Netskope, Inc.; “Netskope Active Cloud DLP” by Netskope, Inc.; “Repave the Cloud-Data Breach Collision Course” by Netskope, Inc.; and “Netskope Cloud Confidence Index™” by Netskope, Inc., which are incorporated by reference for all purposes as if fully set forth herein.
For system 100, a control plane may be used along with or instead of management plane 174 and data plane 180. The specific division of functionality between these groups is an implementation choice. Similarly, the functionality can be highly distributed across a number of points of presence (POPs) to improve locality, performance, and/or security. In one implementation, the data plane is on premises or on a virtual private network and the management plane of the network security system is located in cloud services or with corporate networks, as described herein. For another secure network implementation, the POPs can be distributed differently.
While system 100 is described herein with reference to particular blocks, it is to be understood that the blocks are defined for convenience of description and are not intended to require a particular physical arrangement of component parts. Further, the blocks need not correspond to physically distinct components. To the extent that physically distinct components are used, connections between components can be wired and/or wireless as desired. The different elements or components can be combined into single software modules and multiple software modules can run on the same hardware.
Moreover, this technology can be implemented using two or more separate and distinct computer-implemented systems that cooperate and communicate with one another. This technology can be implemented in numerous ways, including as a process, a method, an apparatus, a system, a device, a computer readable medium such as a computer readable storage medium that stores computer readable instructions or computer program code, or as a computer program product comprising a computer usable medium having a computer readable program code embodied therein. The technology disclosed can be implemented in the context of any computer-implemented system including a database system or a relational database implementation like an Oracle™ compatible database implementation, an IBM DB2 Enterprise Server™ compatible relational database implementation, a MySQL™ or PostgreSQL™ compatible relational database implementation or a Microsoft SQL Server™ compatible relational database implementation or a NoSQL non-relational database implementation such as a Vampire™ compatible non-relational database implementation, an Apache Cassandra™ compatible non-relational database implementation, a BigTable™ compatible non-relational database implementation or an HBase™ or DynamoDB™ compatible non-relational database implementation. In addition, the technology disclosed can be implemented using different programming models like MapReduce™, bulk synchronous programming, MPI primitives, etc. or different scalable batch and stream management systems like Amazon Web Services (AWS)™, including Amazon Elasticsearch Service™ and Amazon Kinesis™, Apache Storm™, Apache Spark™, Apache Kafka™, Apache Flink™, Truviso™, IBM Info-Sphere™, Borealis™ and Yahoo! S4™.
Continuing with the description of
Further continuing with the description of
Data center 152 includes a set of pods that process packets for each of five different services in a services mesh in the example described relative to
The CHT represents the ports that are deployed, specifying the nodes that are available to process packets. The CHT changes when a port is added or a port is taken out of service, and specifies, for a given 6-tuple, where to send the packet. Individual pods can store a local copy of the CHT in a flow table to record the service chain specified in the CHT for faster table access. Every container on every service pod utilizes the same consistent hashing table (CHT) content that holds the list of available pods. A service map lists the PODS and containers that are available and identifies each service instance as having a primary role or a secondary role, in some implementations. In other cases, roles can include four distinct roles: primary role, secondary role, tertiary role and quaternary role.
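One simplified reading of such a role-annotated service map is sketched below in Python. The pod names, the flat per-pod role labels, and the lookup helper are assumptions made only to illustrate the primary/secondary (and optional tertiary/quaternary) role vocabulary described above; they are not the disclosed data layout.

# Minimal sketch of a service map that tags each available instance
# with a role; pod names and structure are illustrative assumptions.
SERVICE_MAP = {
    "appfwl": {"p1": "primary", "p2": "secondary", "p3": "tertiary"},
    "ips": {"p4": "primary", "p5": "secondary"},
}

def instances_by_role(service, role):
    """List the pods holding a given role for a service."""
    return [pod for pod, r in SERVICE_MAP[service].items() if r == role]

print(instances_by_role("appfwl", "secondary"))  # -> ['p2']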
Continuing the description of
Packet Structure and Flow Routing Example
Next, we describe an example of processing a stream of packets. Construct the 6-tuple using the data from the exemplary packet described earlier. Access the service map for the packet in the local flow table. If the service action is allowed, transmit the packet and update the stats. If the action is blocked, drop the packet and update the stats in the flow table. If the action is inspect, assert action==inspect and AppID==inspecting and store the states in the flow table.
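A minimal sketch of that per-packet decision flow follows. The flow-table layout, the stats counters, and the transmit/drop callbacks are assumptions for illustration, not the disclosed data structures.

# Sketch of the per-packet decision flow; flow-table shape, stats
# counters, and callbacks are illustrative assumptions.
flow_table = {}  # 6-tuple -> {"action": ..., "app_id": ..., "stats": {...}}

def process_packet(six_tuple, packet, transmit, drop):
    entry = flow_table.setdefault(
        six_tuple,
        {"action": "inspect", "app_id": "inspecting",
         "stats": {"packets": 0, "bytes": 0}})
    entry["stats"]["packets"] += 1
    entry["stats"]["bytes"] += len(packet)
    if entry["action"] == "allowed":
        transmit(packet)
    elif entry["action"] == "blocked":
        drop(packet)
    else:
        # still inspecting: keep state in the flow table until a verdict
        transmit(packet)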
In a service map example described next, an app firewall security service instance uses the service map when making the decision of where to send a received packet next after processing by the first service, accessing a flow table using outer IP header data carried by the packet to select a second service node, from among a plurality of service nodes performing the second service in the service chain. The next step is routing the packet to the selected second service node upon egress from the first service node. In this example, the service map shows ipsec service is available on pod 8 and pod 9, appfwl is available on pod 1, pod 2 and pod 3, and IPS service is available on pod 4, pod 5 and pod 6. The example shows the IP addresses for each of pods 1, 2 and 3. Additional pod IP addresses are omitted for brevity.
Pods:
{
    ipsec: [p8, p9],
    appfwl: [p1, p2, p3],
    ips: [p4, p5, p6]
}
Ipaddrs:
{
    p1: 1.1.1.1,
    p2: 1.1.1.2,
    p3: 1.1.1.3
    . . .
}
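Transcribed into Python, the example service map above might be consulted roughly as follows; the dictionaries simply restate the example, and the selection step is a placeholder for the consistent-hash lookup described next.

# The example service map above as Python data, with a placeholder
# selection step (the Maglev-style CHT lookup is sketched below).
pods = {
    "ipsec": ["p8", "p9"],
    "appfwl": ["p1", "p2", "p3"],
    "ips": ["p4", "p5", "p6"],
}
ipaddrs = {"p1": "1.1.1.1", "p2": "1.1.1.2", "p3": "1.1.1.3"}

def pick_next_hop(service, stream_affinity_code):
    """Pick a pod for the next service; stand-in for the CHT lookup."""
    candidates = pods[service]
    pod = candidates[stream_affinity_code % len(candidates)]
    return pod, ipaddrs.get(pod)

print(pick_next_hop("appfwl", 7))  # -> ('p2', '1.1.1.2')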
For the example, we use the Maglev load balancing algorithm to calculate the CHT for each service (appfwl/ips) and we represent the hashed values as integers for simplicity. The hashed keys in the CHT are shown with associated values next.
At IPsec pod:
Appfwl:
Key    Value
0      p1
1      p2
2      p2
3      p1
4      p3
5      p3
6      p1
7      p2
8      p3
. . .
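The sketch below builds a small lookup table of the kind illustrated above. It is a simplified, generic rendering of the published Maglev hashing scheme, assuming SHA-256-based offset and skip hashes and a toy table size; it is not the exact CHT computation used by the platform.

import hashlib

def _h(name, salt):
    return int(hashlib.sha256((salt + name).encode()).hexdigest(), 16)

def maglev_table(backends, size=13):
    """Build a simplified Maglev-style lookup table; size should be prime."""
    offsets = [_h(b, "offset") % size for b in backends]
    skips = [_h(b, "skip") % (size - 1) + 1 for b in backends]
    nexts = [0] * len(backends)
    table = [None] * size
    filled = 0
    while filled < size:
        for i, b in enumerate(backends):
            # take this backend's next preferred slot that is still empty
            while True:
                slot = (offsets[i] + nexts[i] * skips[i]) % size
                nexts[i] += 1
                if table[slot] is None:
                    table[slot] = b
                    filled += 1
                    break
            if filled == size:
                break
    return table

table = maglev_table(["p1", "p2", "p3"])
key = 7  # stand-in for the hashed 6-tuple
print(table[key % len(table)])  # the appfwl pod chosen for this key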
At packet processing time, a packet is received, with the format described relative to
During real time ingress of packets to security services and egress of processed packets from security services, with flows and pods running, infrastructure manager 222 and HALB controller 212 monitor utilization of the available bandwidth for processing packets. If the traffic volume to the data center 152 exceeds a configurable high water mark (100 GB traffic in one example implementation) or if traffic volumes spike beyond a defined threshold, then HALB controller 212 signals workload orchestrator 216 to provision a new pod to become available for the impacted security service. In another implementation, the services processing packets can be performing functions different from security services. After the newly added pod is provisioned, workload orchestrator 216 schedules the packets and streams coming in for processing. HALB controller 212 communicates the updated CHT that includes the added pod or pods to all the containers in all the pods and can redistribute the load to lessen the traffic volume per worker service in the system. In another case, a pod stops responding, effectively dropping out of service, and workload orchestrator 216 updates the CHT to reflect the change in available pods and stores the updated CHT in publish subscribe configuration store 225. HALB controller 212 accesses the updated CHT in publish subscribe configuration store 225 and communicates the updated CHT to the remaining pods. Future packets get processed by available services on containers in available pods.
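A hedged sketch of that scale-out decision appears below. The threshold value, the callback names standing in for the workload orchestrator and HALB controller, and the return convention are assumptions chosen only to mirror the description above.

# Illustrative sketch of the scale-out decision; the threshold and the
# callback names (provision_pod, publish_cht) are assumptions.
HIGH_WATER_MARK = 100.0  # traffic volume units, as in the example above

def rebalance(traffic_volume, available_pods, provision_pod, publish_cht):
    """Provision a pod past the high-water mark, then publish the new CHT."""
    if traffic_volume > HIGH_WATER_MARK:
        new_pod = provision_pod()           # workload orchestrator step
        available_pods = available_pods + [new_pod]
        publish_cht(available_pods)         # HALB controller -> all pods
    return available_pods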
State Synchronization for Service Assurance
Continuing the description of
Next, we describe an example workflow for load balancing in a dynamic service chain, and a workflow for improved recovery from failure of a service instance in a service chain that performs at least two services, using multiple service instances to perform the services.
Workflows
Flowchart 500 can be implemented at least partially with a computer or other data processing system; that is, by one or more processors configured to receive or retrieve information, process the information, store results, and transmit the results. Other implementations may perform the actions in different orders and/or with different, fewer or additional actions than those illustrated in
The method described in this section and other sections of the technology disclosed can include one or more of the following features and/or features described in connection with additional methods disclosed. In the interest of conciseness, the combinations of features disclosed in this application are not individually enumerated and are not repeated with each base set of features.
Process 500 continues at action 525, based on the processing, determining a second service, among at least second and third services which the subscriber has selected, that should next handle the packet.
Action 535 includes accessing a flow table using outer IP header data carried by the packet to select a second service pod, from among a plurality of service pods performing the second service in the service chain.
Action 545 includes routing the packet to the selected second service pod upon egress from the first service pod.
If the flow table lacks an entry corresponding to the header data, take action 555 accessing an available pods list in a consistent hash (lookup) table (CHT) of service pods performing the second service.
At action 565, select one of the available pods, using a six-tuple hash as a key to the CHT, as the second service pod, and update the flow table to specify the second service pod as providing the second service for packets sharing the header data.
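A compact sketch of actions 535 through 565 follows, assuming a dictionary-backed flow table and a precomputed CHT like the Maglev example earlier; the hash choice and data shapes are illustrative, not the disclosed implementation.

import hashlib

flow_table = {}  # header / stream-affinity key -> chosen pod
cht = ["p1", "p2", "p2", "p1", "p3", "p3", "p1", "p2", "p3"]  # from the example

def six_tuple_hash(six_tuple):
    return int(hashlib.sha256(repr(six_tuple).encode()).hexdigest(), 16)

def select_second_service_pod(header_key, six_tuple):
    pod = flow_table.get(header_key)        # action 535: flow-table lookup
    if pod is None:                         # action 555: fall back to the CHT
        pod = cht[six_tuple_hash(six_tuple) % len(cht)]
        flow_table[header_key] = pod        # action 565: cache the choice
    return pod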
Other implementations may perform the actions in different orders and/or with different, fewer or additional actions than those illustrated in
Flowchart 700 can be implemented at least partially with a computer or other data processing system; that is, by one or more processors configured to receive or retrieve information, process the information, store results, and transmit the results. Other implementations may perform the actions in different orders and/or with different, fewer or additional actions than those illustrated in
The method described in this section and other sections of the technology disclosed can include one or more of the following features and/or features described in connection with additional methods disclosed. In the interest of conciseness, the combinations of features disclosed in this application are not individually enumerated and are not repeated with each base set of features.
Process 700 continues at action 725, with service instance BA, in a primary role specified in a service map distributed to service instances, processing the first packet by performing service B.
At action 735, service instance BA identifies service instance BB as having a secondary role specified in the service map distributed to service instances, and synchronizes state information with service instance BB after processing the first packet.
At action 745, after failure of service instance BA, service instance AA receives an updated service map and prepares to forward a second packet, which includes the same stream affinity code as the first packet, to service instance BA for performance of service B.
Action 755 includes determining from the updated service map that service instance BA is no longer available and determining from the updated service map that the service instance BB has the secondary role.
At action 765, service instance AA forwards the second packet to service instance BB instead of service instance BA.
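The failover path of process 700 might look roughly like the sketch below. The service-map shape, the synchronization hook, and the availability set are assumptions used only to illustrate the primary/secondary handoff; they are not the disclosed interfaces.

# Hedged sketch of the process 700 failover path; data shapes and the
# sync hook are illustrative assumptions.
service_map = {"B": {"primary": "BA", "secondary": "BB"}}
synced_state = {}  # stream affinity code -> state for service B

def process_at_primary(code, packet, sync_to_secondary):
    synced_state[code] = {"last_len": len(packet)}       # action 725
    sync_to_secondary("BB", code, synced_state[code])    # action 735

def forward_from_AA(code, available):
    roles = service_map["B"]
    target = roles["primary"]
    if target not in available:    # actions 745-765: BA has failed
        target = roles["secondary"]
    return target

print(forward_from_AA(42, available={"AA", "BB"}))  # -> 'BB'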
Other implementations may perform the actions in different orders and/or with different, fewer or additional actions than those illustrated in
Computer System
In one implementation, Netskope cloud access security broker (N-CASB) 155 of
User interface input devices 638 can include a keyboard; pointing devices such as a mouse, trackball, touchpad, or graphics tablet; a scanner; a touch screen incorporated into the display; audio input devices such as voice recognition systems and microphones; and other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computer system 600.
User interface output devices 676 can include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem can include an LED display, a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem can also provide a non-visual display such as audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computer system 600 to the user or to another machine or computer system.
Storage subsystem 610 stores programming and data constructs that provide the functionality of some or all of the modules and methods described herein. Subsystem 678 can be graphics processing units (GPUs) or field-programmable gate arrays (FPGAs).
Memory subsystem 622 used in the storage subsystem 610 can include a number of memories including a main random access memory (RAM) 632 for storage of instructions and data during program execution and a read only memory (ROM) 634 in which fixed instructions are stored. A file storage subsystem 636 can provide persistent storage for program and data files, and can include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations can be stored by file storage subsystem 636 in the storage subsystem 610, or in other machines accessible by the processor.
Bus subsystem 655 provides a mechanism for letting the various components and subsystems of computer system 600 communicate with each other as intended. Although bus subsystem 655 is shown schematically as a single bus, alternative implementations of the bus subsystem can use multiple busses.
Computer system 600 itself can be of varying types including a personal computer, a portable computer, a workstation, a computer terminal, a network computer, a television, a mainframe, a server farm, a widely-distributed set of loosely networked computers, or any other data processing system or user device. Due to the ever-changing nature of computers and networks, the description of computer system 600 depicted in
Particular Implementations
Some particular implementations and features for distributed routing and load balancing in a dynamic service chain, and for improved recovery from failure of a service instance in a service chain of services are described in the following discussion.
In one disclosed implementation, a method of distributed routing and load balancing in a dynamic service chain includes receiving a packet for a subscriber at a first service instance, wherein the packet includes an added header, which includes a stream affinity code that is consistent for packets in a stream. The method also includes processing the packet at the first service instance, wherein the first service instance performs a first service in a service chain. The disclosed method further includes, based on the processing, the first service instance determining a second service, among at least second and third services to which the subscriber has subscribed, that should next handle the packet. Also included is the first service instance accessing a flow table using the stream affinity code to select a second service instance, from among a plurality of service instances performing the second service, which performs the second service in the service chain, and the first service instance routing the packet to the selected second service instance upon egress from the first service instance.
The method described in this section and other sections of the technology disclosed can include one or more of the following features and/or features described in connection with additional methods disclosed. In the interest of conciseness, the combinations of features disclosed in this application are not individually enumerated and are not repeated with each base set of features. The reader will understand how features identified in this method can readily be combined with sets of base features identified as implementations.
In some implementations of the disclosed method, the stream affinity code is included in an added IP header as IP source and destination. One implementation includes the first service instance routing the packet to the selected second service instance by updating the added IP header with the IP destination of the selected second service instance. The disclosed method also includes hashing the stream affinity code to access the flow table. In one implementation, wherein the flow table lacks an entry for the second service corresponding to the stream affinity code in the added header, the method further includes accessing a consistent hash table (CHT) of service instances performing the second service, selecting an available instance using a six-tuple hash as a key to the CHT table to select the second service instance, and updating the flow table to specify the second service instance as providing the second service for packets sharing the header. In some implementations, the disclosed method further includes selecting an alternate second service instance from the CHT and updating the flow table to specify the alternate second service instance. Some implementations also include a network service header (NSH) in the added header, which includes a client ID, and wherein the six-tuple hash is generated using the client ID and using source IP, source port, destination IP, destination port and IP protocol number for the packet. In some implementations of the disclosed method, the flow table is maintained locally by the first service or instances of the first service.
In one implementation of the disclosed method, the service chain is a security service chain and at least the second and third services are security services. In many implementations of the method, instances of the first, second and third services run in containers and the containers are hosted in pods. In some implementations of the disclosed method, instances of the first, second and third services are implemented on virtual machines, bare metal servers or custom hardware. In some implementations, the packet further includes an added UDP header, which includes UDP source and/or destination ports, wherein the UDP source and/or destination ports are random or pseudo random values that are consistent for packets in a stream. This disclosed method further includes determining a core, from multiple cores running the second service instance, using the UDP source and/or destination values, and forwarding the received packet to the determined core and applying the second service to the packet. The forwarding can be accomplished by spraying the packet, in some implementations.
Some implementations of the disclosed method also include hashing values of the UDP source and/or destination ports to a hash key and using the hash key when determining the core. For some implementations of the disclosed method, instances of the first, second and third services include copies of a first code that implements multiple services. For some implementations, the disclosed method further includes the packet carrying a service chain in a packet header and the second and third services being among services specified in the service chain.
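A minimal sketch of that core-selection step is given below; the hash function and the core count are assumptions, and the example ports merely stand in for the random or pseudo-random values carried in the added UDP header.

import hashlib

NUM_CORES = 8  # assumed core count for illustration

def select_core(udp_src_port, udp_dst_port):
    """Map the added UDP header's ports to a core index."""
    key = f"{udp_src_port}:{udp_dst_port}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % NUM_CORES

print(select_core(49152, 4789))  # same port pair -> same core every time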
One implementation of a disclosed method of improved recovery from failure of a service instance, in a service chain of services that perform at least services A and B, using service instance AA and service instances BA and BB to perform the services A and B, respectively, includes the service instance BA receiving from the service instance AA a first packet in a stream for a subscriber, wherein the first packet includes an added header which includes a stream affinity code that is consistent for packets in the stream. The method also includes service instance BA, in a primary role specified in a service map distributed to service instances, processing the first packet by performing service B, and service instance BA identifying service instance BB as having a secondary role for packets carrying the stream affinity code, which is specified in the service map distributed to service instances, and synchronizing service instance BA state information with the service instance BB after processing the first packet. The disclosed method further includes, after failure of the service instance BA, service instance AA receiving an updated service map, and preparing to forward a second packet, which includes the same stream affinity code as the first packet, to service instance BA for performance of the service B, including determining from the updated service map that service instance BA is no longer available, and determining from the updated service map that the service instance BB has the secondary role. The method additionally includes forwarding the second packet to service instance BB instead of service instance BA.
For some implementations, the service chain is a security service chain for a subscriber and at least the service B is a security service. For the disclosed method, the stream affinity code is included in an added IP header as IP source and destination. Many implementations further include the packet carrying a service chain for a subscriber in an added packet header and service B being among services specified in the service chain.
For some implementations of the disclosed method, instances of service A and service B run in containers and the containers are hosted in pods. In many cases, instances of service A and service B are implemented on virtual machines, bare metal servers or custom hardware. For the disclosed method, the failure of service instance BA is detected by a monitoring agent, including monitoring service instance BA for packet processing activity, and causing updating of the service map for service B to remove the service instance BA from availability should it be inactive for a configurable predetermined amount of time. In one example, the configurable predetermined amount of time may be 15 seconds. In another case, 30 seconds of inactivity may cause the service instance to be considered “failed”.
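A sketch of such a monitoring agent appears below; the timeout constant, the clock source, and the map-update callback are assumptions chosen to mirror the 15- and 30-second examples above.

import time

INACTIVITY_TIMEOUT_S = 15.0  # configurable; 30 s is another example above
last_seen = {}               # service instance -> last packet activity time

def record_activity(instance):
    last_seen[instance] = time.monotonic()

def sweep(remove_from_service_map):
    """Remove instances that have been inactive past the timeout."""
    now = time.monotonic()
    for instance, seen in list(last_seen.items()):
        if now - seen > INACTIVITY_TIMEOUT_S:
            remove_from_service_map(instance)  # e.g. drop BA from the service B map
            del last_seen[instance]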
Some implementations of the disclosed method further include service instance BB processing the second packet and based on the processing, identifying a next service, among at least two additional services to which the subscriber has subscribed, that should next handle the packet, and routing the processed second packet to the identified next service upon egress from service instance BB.
Many implementations of the disclosed method further include processing a plurality of packets in a stream through the service chain of services and directing the packets for processing, as a document, to a cloud access security broker (CASB) that controls exfiltration of sensitive content in documents stored on cloud-based services in use by users of an organization, by monitoring manipulation of the documents.
Other implementations of the disclosed technology described in this section can include a tangible non-transitory computer readable storage media, including program instructions loaded into memory that, when executed on processors, cause the processors to perform any of the methods described above. Yet another implementation of the disclosed technology described in this section can include a system including memory and one or more processors operable to execute computer instructions, stored in the memory, to perform any of the methods described above.
The preceding description is presented to enable the making and use of the technology disclosed. Various modifications to the disclosed implementations will be apparent, and the general principles defined herein may be applied to other implementations and applications without departing from the spirit and scope of the technology disclosed. Thus, the technology disclosed is not intended to be limited to the implementations shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein. The scope of the technology disclosed is defined by the appended claims.
Inventors: Ravi Ithal; Umesh Bangalore Muniyappa
Patent | Priority | Assignee | Title |
10243946, | Nov 04 2016 | Netskope, Inc. | Non-intrusive security enforcement for federated single sign-on (SSO) |
10270711, | Mar 16 2017 | Red Hat, Inc. | Efficient cloud service capacity scaling |
10282277, | Dec 01 2015 | International Business Machines Corporation | Streams: intelligent operator subset for debug |
10333846, | Feb 19 2016 | Citrix Systems, Inc | Systems and methods for routing network packets between multi-core intermediaries |
10412008, | Dec 31 2013 | HUAWEI TECHNOLOGIES CO , LTD | Packet processing method, apparatus, and system |
10452422, | Dec 29 2014 | HUAWEI TECHNOLOGIES CO , LTD | Method and apparatus for deploying virtual machine instance, and device |
10469391, | Sep 23 2015 | GOOGLE LLC | Distributed software defined wireless packet core system |
10469525, | Aug 10 2016 | NETSCOPE, INC | Systems and methods of detecting and responding to malware on a file system |
10567180, | Nov 13 2015 | OBSCHESTVO S OGRANICHENNOI OTVETSTVENNOSTYU «PROGRAMMIRUEMYE SETI» | Method for multicast packet transmission in software defined networks |
10616321, | Dec 22 2017 | AT&T Intellectual Property I, L.P. | Distributed stateful load balancer |
10616339, | Nov 28 2017 | DELL PRODUCTS, L.P. | System and method to configure, manage, and monitor stacking of ethernet devices in a software defined network |
10644995, | Feb 14 2018 | Mellanox Technologies, LTD | Adaptive routing in a box |
10812376, | Jan 22 2016 | Red Hat, Inc.; Red Hat, Inc | Chaining network functions to build complex datapaths |
10819621, | Feb 23 2016 | Mellanox Technologies, LTD | Unicast forwarding of adaptive-routing notifications |
10834113, | Jul 25 2017 | NETSKOPE, INC | Compact logging of network traffic events |
10862823, | Mar 24 2014 | HUAWEI TECHNOLOGIES CO , LTD | Method for service implementation in network function virtualization (NFV) system and communications unit |
10868845, | Mar 01 2019 | NETSKOPE, INC | Recovery from failure in a dynamic scalable services mesh |
10965596, | Oct 04 2017 | Cisco Technology, Inc | Hybrid services insertion |
10965598, | Oct 04 2017 | Cisco Technology, Inc. | Load balancing in a service chain |
10986039, | Nov 11 2015 | Gigamon Inc.; GIGAMON INC | Traffic broker for routing data packets through sequences of in-line tools |
11005724, | Jan 06 2019 | MELLANOX TECHNOLOGIES, LTD. | Network topology having minimal number of long connections among groups of network elements |
11018754, | Aug 07 2018 | Appareo Systems, LLC | RF communications system and method |
11082312, | Oct 04 2017 | Cisco Technology, Inc. | Service chaining segmentation analytics |
11087179, | Dec 19 2018 | Netskope, Inc. | Multi-label classification of text documents |
11206405, | Jun 20 2018 | TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED | Video encoding method and apparatus, video decoding method and apparatus, computer device, and storage medium |
11212286, | Dec 08 2017 | NET-THUNDER, LLC | Automatically deployed information technology (IT) system and method |
6898636, | Feb 04 1999 | INTRALINKS, INC | Methods and systems for interchanging documents between a sender computer, a server and a receiver computer |
6981155, | Jul 14 1999 | GEN DIGITAL INC | System and method for computer security |
7231426, | May 24 2000 | Microsoft Technology Licensing, LLC | System and method for sending a web page via electronic mail |
7296058, | Jan 30 2002 | SWISS REINSURANCE COMPANY LTD | Systems and methods for managing email |
7475146, | Nov 28 2002 | International Business Machines Corporation | Method and system for accessing internet resources through a proxy using the form-based authentication |
7536439, | Dec 02 2003 | ZETA GLOBAL CORP | Methods and apparatus for categorizing failure messages that result from email messages |
7587499, | Sep 14 2000 | HAGHPASSAND, JOSHUA | Web-based security and filtering system with proxy chaining |
8280986, | Nov 23 2007 | LG Electronics Inc | Mobile terminal and associated storage devices having web servers, and method for controlling the same |
8281372, | Dec 18 2009 | GOOGLE LLC | Device, system, and method of accessing electronic mail |
8549300, | Feb 23 2010 | Pulse Secure, LLC | Virtual single sign-on for certificate-protected resources |
8914461, | Aug 23 2006 | CYBERSTATION, INC | Method and device for editing web contents by URL conversion |
9069436, | Apr 01 2005 | INTRALINKS, INC | System and method for information delivery based on at least one self-declared user attribute |
9270765, | Mar 06 2013 | NETSKOPE, INC | Security for network delivered services |
9363180, | Nov 04 2013 | TELEFONAKTIEBOLAGET L M ERICSSON PUBL | Service chaining in a cloud environment using Software Defined Networking |
9398102, | Mar 06 2013 | Netskope, Inc. | Security for network delivered services |
9553860, | Apr 27 2012 | INTRALINKS, INC | Email effectivity facility in a networked secure collaborative exchange environment |
9998496, | Mar 06 2013 | Netskope, Inc. | Logging and monitoring usage of cloud-based hosted storage services |
U.S. Patent Application Publications: 2001/0011238; 2001/0054157; 2002/0016773; 2002/0138593; 2004/0122977; 2004/0268451; 2005/0086197; 2005/0251856; 2007/0220251; 2007/0289006; 2008/0034418; 2008/0229428; 2008/0301231; 2009/0225762; 2010/0024008; 2010/0188975; 2011/0016197; 2011/0154506; 2011/0196914; 2011/0247045; 2012/0020307; 2012/0237908; 2013/0268677; 2014/0007222; 2014/0032691; 2014/0165148; 2014/0165213; 2014/0245381; 2015/0124815; 2017/0013028; 2018/0115586; 2019/0199789; 2019/0373305; 2020/0036717.