A subtree within a global tree of nodes is created by determining a predicate condition. The predicate condition is disseminated to the nodes in the global tree. For each node in the global tree, a determination of whether the node belongs to the subtree is performed, and an indication of whether the node belongs to the subtree is stored. After the subtree is created, a query corresponding to the subtree is put to the subtree for resolution.
1. A method of creating a subtree within a global tree including a plurality of nodes and disseminating a query, wherein the subtree is a subset of the global tree, the method comprising:
determining a predicate condition for the subtree;
disseminating the predicate condition to each of the plurality of nodes in the global tree, wherein each node in the global tree receives the disseminated predicate condition and evaluates itself to determine whether it satisfies the predicate condition, and the nodes in the global tree that satisfy the predicate condition belong to the subtree;
for each of the plurality of nodes in the global tree, storing a subtree local value, wherein the subtree local value of a node in the global tree is an indication of whether the node in the global tree belongs to the subtree based at least on the predicate condition;
for each parent node in the global tree, storing a subtree aggregate value, wherein the subtree aggregate value is determined from the subtree local value of the parent node and a subtree local value of each child node of the parent node;
receiving a query for the subtree; and
disseminating the query to only the nodes belonging to the subtree rather than to all of the nodes in the global tree.
10. A method of resolving a query submitted to a root node of a global tree including a plurality of nodes in an in-network data aggregation system, the method comprising:
receiving a query probe at the root node, the query probe comprising a query aggregation function and a query predicate condition;
determining a subtree in the global tree corresponding to the query predicate condition;
propagating the query probe from the root node only to nodes in the subtree, wherein the subtree includes a subset of a plurality of nodes in the global tree; and
receiving a response determined from a resolution of the query probe performed at each node in the subtree,
wherein determining the subtree in the global tree comprises,
determining a predicate condition for the subtree;
disseminating the predicate condition to each of the plurality of nodes in the global tree;
each of the plurality of nodes in the global tree receiving the disseminated predicate condition and evaluating itself to determine whether it satisfies the predicate condition, wherein the nodes in the global tree that satisfy the predicate condition belong to the subtree;
each of the plurality of nodes in the global tree storing a subtree local value, wherein the subtree local value is an indication of whether the node in the global tree belongs to the subtree based at least on the predicate condition; and
each parent node in the global tree storing a subtree aggregate value, wherein the subtree aggregate value is determined from the subtree local value of the parent node and a subtree local value of each child node of the parent node.
17. In a network of a plurality of nodes ordered into a global tree in an in-network aggregation system, an agent to be deployed at a first node in the global tree, the agent comprising instructions stored on a non-transitory computer readable storage device to be executed by a processor, the instructions comprising:
a first module of instructions to be executed by the processor, in cooperation with agents at other nodes in the network, to create a subtree within the global tree, wherein the subtree is a subset of the global tree, by
determining a predicate condition for the subtree, the predicate condition including an attribute descriptor and a corresponding attribute value and a comparison operator;
disseminating the predicate condition to each of the plurality of nodes in the global tree;
each of the plurality of nodes in the global tree receiving the disseminated predicate condition and evaluating itself to determine whether it satisfies the predicate condition, wherein the nodes in the global tree that satisfy the predicate condition belong to the subtree;
for each of the plurality of nodes in the global tree, storing a subtree local value, wherein the subtree local value is an indication of whether the node belongs to the subtree based on the predicate condition;
for each parent node in the global tree, storing a subtree aggregate value, wherein the subtree aggregate value is determined from the subtree local value of the parent node and a subtree local value of each child node of the parent node; and
a second module of instructions to be executed by the processor, in cooperation with agents at other nodes in the network, to resolve a query put to the subtree in order to obtain a response to the query.
2. The method of
3. The method of
sequentially determining the subtree aggregate value at each node of the global tree from nodes at a lowest tier in the global tree to a root node at a root tier in the global tree.
4. The method of
determining whether the node satisfies the predicate condition; and
storing a subtree local value indicating whether the node satisfies the predicate condition.
5. The method of
for each leaf node, determining that the subtree aggregate value is its subtree local value; and
for each parent node, determining the subtree aggregate value by using an OR function to combine the subtree aggregate values from any child nodes of the parent node and the subtree local value at the parent node.
6. The method of
7. The method of
8. The method of
9. The method of
11. The method of
sequentially aggregating the resolutions from all nodes in the subtree from a lowest tier of the subtree to the root node at a root tier of the subtree to form the response.
12. The method of
at each parent node in the subtree, receiving the resolutions from any child nodes of the parent node and aggregating the received resolutions and the resolution of the parent node using the aggregate function in the query probe; and
sending the aggregated resolutions to a higher tier in the subtree if a highest tier has not been reached.
13. The method of
using the query descriptor to determine the subtree in the global tree corresponding to the query predicate condition.
14. The method of
disseminating the query probe using the query predicate condition to identify nodes in the subtree that are children to a parent node in the subtree.
15. The method of
16. The method of
18. The agent of
determining whether the nodes satisfy the predicate condition; and
for parent nodes, aggregating values associated with the predicate condition from the parent node and any child nodes of the parent node using an OR function.
19. The agent of
receiving the query, the query comprising a query aggregation function and a query predicate condition;
propagating the query from a root node only to nodes in the subtree; and
receiving a response determined from a resolution of the query performed at each node in the subtree.
20. The agent of
receiving resolutions from any child nodes and aggregating any received resolutions and the resolution of a current node using the aggregate function in the query; and
sending the aggregated resolutions to a higher tier in the subtree if a highest tier has not been reached.
Information Technology personnel (IT personnel) responsible for managing data centers constantly have to perform a number of management tasks such as capacity planning, resource allocation, license management, and patch management. Most of these tasks require careful examination of the current status of all the machines or a subset of the machines in the data center. Considering these tasks, especially for large data centers, it is important to have a scalable monitoring solution that provides insights into system-wide performance information instantly, for example, by capturing data center level metrics. Data center level metrics include metrics computed for multiple machines in a data center. For example, a data center level metric may be an aggregation or average of individual-machine metrics (node level metrics). Examples of data center level metrics include the number of licensed copies of software running in the data center, the number of servers of each type deployed in the data center, locations of computer resources, cooling and temperature information, power information, etc. Data center level metrics may also include traditional performance metrics for hardware and software, such as CPU utilization, memory and hard disk utilization, and web server response times, for multiple machines. These data center level metrics may be used by a system administrator for managing the resources in a data center.
Current approaches to collecting data center level metrics are primarily focused on using centralized databases where such information is collected and aggregated. For example,
The centralized database solution shown in
Secondly, complexity is an issue considering the variety of tools that gather different types of data. For example, HP OVR collects performance data, the Domain Controller collects data related to Microsoft™ Windows™, and HP Asset collects asset data. Thus, a user needs to interface with multiple tools to collect different types of data, which makes gathering data center level metrics an even more difficult task. Many automated, centralized systems do not have the capability to automatically interface with multiple types of tools to capture and store the different types of data captured by each tool.
A subtree within a global tree of nodes is created by determining a predicate condition. The predicate condition is disseminated to the nodes in the global tree. For each node in the global tree, a determination of whether the node belongs to the subtree is performed, and an indication of whether the node belongs to the subtree is stored. After the subtree is created, a query corresponding to the subtree is forwarded only along the subtree for resolution.
The invention will be described in detail in the following description of preferred embodiments with reference to the following figures.
For simplicity and illustrative purposes, the principles of the embodiments are described by referring mainly to examples thereof. Also, in the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent however, to one of ordinary skill in the art, that the embodiments may be practiced without limitation to these specific details. In some instances, well known methods and structures have not been described in detail so as not to unnecessarily obscure the description of the embodiments.
According to an embodiment, an in-network aggregation system is operable to collect and store metrics for nodes in the network. The system is a peer-to-peer (P2P) system comprised of P2P nodes storing the metrics. The nodes are organized as global trees and subtrees within the global trees for storing metrics. The nodes organized as global trees and subtrees are also operable to respond to queries about the metrics quickly and efficiently. The system is in-network because nodes capturing metrics may also store the metrics and are included in the aggregation system that is operable to respond to queries about the metrics.
The P2P, in-network aggregation system is scalable because it performs in-network aggregation of metrics with a large number of nodes and metrics. This design leverages the computing and network resources of the data center itself to compute the data center level metrics and avoids maintaining a separate infrastructure with expensive software and hardware for collecting, analyzing, and reporting data center level metrics. Second, the system is able to collect different types of data from different data collection tools. The system incorporates distributed agents at the nodes capable of interacting with different collection tools and aggregating the collected data through the trees. For example, each of the nodes in the system runs an agent for collecting and storing different types of information, including data center level metrics such as hardware configuration information, software license information, performance metrics, etc. The agent running on each node not only acts as a data collector and data store for metrics on the node, it also acts as a node in a large distributed aggregation tree.
In one embodiment, the nodes 110a-n are DHT overlay nodes forming a DHT overlay network. A DHT overlay network is a logical representation of an underlying physical network that provides, among other types of functionality, data placement, information retrieval, and overlay routing. DHT overlay networks have several desirable properties, such as scalability, fault tolerance, and low management cost. Some examples of DHT overlay networks that may be used in the embodiments of the invention include content-addressable network (CAN), PASTRY, CHORD, etc.
It will be apparent to one of ordinary skill in the art that the system 100 may include many more nodes than shown. Also, the nodes 110a-n may be housed in a single data center or may be housed in multiple data centers. However, nodes need not be in a data center to be included in the system 100. Furthermore, the global tree construction described herein is described with respect to DHT nodes in a DHT overlay network. It will be apparent to one of ordinary skill in the art that other mechanisms may be used for global tree construction.
According to an embodiment, the system 100 uses the Scalable Distributed Information Management System (SDIMS) described in “A Scalable Distributed Information Management System” by P. Yalagandula et al., and published in ACM SIGCOMM 2004 Conference, in August 2004. SDIMS is distributed P2P middleware that forms several aggregation trees, also referred to herein as global trees, and aggregates different metrics on the trees, so the system scales with both the number of nodes in the system and the number of metrics. SDIMS uses a DHT overlay network to construct and maintain the global trees, which handles reconfigurations such as node failures, node additions, and network failures.
In SDIMS, a global tree may aggregate a single attribute for multiple nodes to determine data center level metrics. An attribute may be a node level metric, such as CPU utilization, or a characteristic of a node, such as the type of server or the operating system (OS) hosted on a server. Corresponding attribute values are, for example, a percentage of CPU utilization, a C-class blade server, or a Linux OS. Attribute values for a node may be stored at the node and aggregated by the global tree.
In SDIMS, a global tree is created by hashing an attribute to determine a key. Once the key is determined, a put or a get operation may be performed in the DHT, as is known in the art for DHT networks, to retrieve data from and send data to the DHT node associated with the key. In SDIMS, a global tree includes the paths from nodes in the system to the root node associated with the key. This is illustrated in
After the global tree is formed, queries for the corresponding attribute are sent to the root node of the global tree. For example, the corresponding attribute for the global tree is CPU utilization. A query for CPU utilization is routed in the DHT overlay network to the root node 110n using the key 11111. The root node 110n aggregates CPU utilizations from all the nodes 110a-g in the global tree based on an aggregate function and sends a response to the query to the node initially sending the query.
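As a rough illustration of how an attribute maps to a root node, the following sketch hashes an attribute name into a DHT key; the function name, hash choice, and key width are assumptions for the example, not the SDIMS implementation. Whichever overlay node owns the resulting key acts as the root of that attribute's global tree, and the overlay routes from all nodes toward that key form the tree.

```python
# Minimal sketch (assumptions, not the SDIMS code): hash an attribute name to a
# DHT key so that every node routes puts/gets for that attribute toward the
# same root node.
import hashlib

def attribute_key(attribute_name: str, key_bits: int = 160) -> int:
    """Derive a fixed-width DHT key from an attribute name."""
    digest = hashlib.sha1(attribute_name.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") >> (8 * len(digest) - key_bits)

# All queries and updates for "CPU utilization" are routed toward the node
# responsible for this key, which becomes the root of the global tree.
print(hex(attribute_key("CPU utilization")))
```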
SDIMS allows applications to choose an appropriate aggregation scheme by providing three functions: install, update, and probe. Install() installs an aggregation function for an attribute type and specifies the update strategy that the function will use; update() inserts or modifies a node's local value for an attribute; and probe() obtains an aggregate value for an attribute for a specified subtree level. Also, nodes in a global tree store data that is aggregated along the tree. For example, each node has local data stored as a set of {attributeType, attributeName, value} tuples, such as {configuration, numCPUs, 16}, {mcast membership, session foo, yes}, or {file stored, foo, myIPaddress}. Also, the SDIMS functions may be performed by the aggregation agent 112 at each node and shown in
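To make the three-operation interface concrete, here is a hedged, single-node sketch of install, update, and probe over {attributeType, attributeName, value} tuples. The class and method signatures are illustrative assumptions, not the actual SDIMS API, and a real deployment would propagate probes across the tree rather than pass child results in directly.

```python
# Illustrative single-node sketch of the install/update/probe pattern described
# above; names and signatures are assumptions, not the SDIMS API.
from collections import defaultdict

class AggregationAgent:
    def __init__(self):
        self.functions = {}                     # attributeType -> aggregation function
        self.local = defaultdict(dict)          # attributeType -> {attributeName: value}

    def install(self, attribute_type, aggregation_function):
        """Install an aggregation function for an attribute type."""
        self.functions[attribute_type] = aggregation_function

    def update(self, attribute_type, attribute_name, value):
        """Insert or modify this node's local value for an attribute."""
        self.local[attribute_type][attribute_name] = value

    def probe(self, attribute_type, attribute_name, child_results=()):
        """Combine this node's local value with results reported by children."""
        values = list(child_results)
        if attribute_name in self.local[attribute_type]:
            values.append(self.local[attribute_type][attribute_name])
        return self.functions[attribute_type](values)

agent = AggregationAgent()
agent.install("configuration", max)             # e.g., report the maximum across nodes
agent.update("configuration", "numCPUs", 16)
print(agent.probe("configuration", "numCPUs", child_results=[8, 32]))   # 32
```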
The aggregation abstraction in SDIMS is defined across a tree spanning all nodes in the system. Thus, aggregating data in a global tree in response to a query involves all the nodes in the global tree, which in turn may be all the overlay nodes in the system. However, in some instances, a query may not be relevant to all the nodes in the global tree. In these instances, sending the query to all the nodes in the global tree increases query response time and wastes bandwidth. According to an embodiment described below and not described in SDIMS, subtrees of global trees are created in the system along particular attributes, which may be attributes common to many queries.
According to an embodiment, subtrees are created in the system 100 which span all nodes with certain attributes. A subtree is a subset of a global tree. Subtrees may be created for attributes that are commonly queried. Thus, when a query is received at a root node having an attribute corresponding to a subtree of the root node, the query may only be disseminated to the subtree rather than to the entire global tree. This is described in further detail below.
As described above, subtrees are created in the system 100 which span all nodes with certain attributes. This may include nodes that satisfy a predicate condition. A predicate condition is a condition that each node can evaluate to determine whether it satisfies the condition. A predicate condition may include an attribute descriptor, a corresponding attribute value, and a comparison operator. For example, a predicate condition is (OS=Linux). The attribute descriptor in this example is OS, the attribute value is Linux, and the comparison operator is equality "=". Thus, nodes hosting a Linux OS are included in the subtree. If the comparison operator is "!=", then nodes not hosting a Linux OS are included in the subtree. Another example of a predicate condition is (OS=Linux&server=Apache). In this example, a subtree is created along a predicate identifier denoting this predicate. The attribute descriptor contains two items: OS type and server type. The attribute values for the corresponding attributes are Linux and Apache. A predicate condition may include multiple attribute descriptors and values corresponding to multiple attributes. In yet another example, the attribute descriptor may be associated with the location of a node, such as the floor where a node is located or a cluster that the node is a member of. Then, an administrator can quickly get information about nodes on a particular floor or nodes in a particular cluster.
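The following is a small sketch of how a node might evaluate such a predicate condition locally; the data layout (a list of descriptor/operator/value triples checked against a dictionary of node attributes) is an assumption made for illustration.

```python
# Hedged sketch of local predicate evaluation; representing a predicate as
# (descriptor, operator, value) triples is an assumption for the example.
import operator

OPS = {"=": operator.eq, "!=": operator.ne, ">": operator.gt, "<": operator.lt}

def satisfies(node_attributes: dict, predicate) -> bool:
    """True if the node satisfies every (descriptor, operator, value) term."""
    return all(OPS[op](node_attributes.get(descriptor), value)
               for descriptor, op, value in predicate)

# (OS = Linux & server = Apache)
predicate = [("OS", "=", "Linux"), ("server", "=", "Apache")]
print(satisfies({"OS": "Linux", "server": "Apache"}, predicate))   # True
print(satisfies({"OS": "Windows", "server": "IIS"}, predicate))    # False
```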
A subtree is created by disseminating the predicate condition to a global tree. Each node stores an indication of whether it satisfies the predicate condition, and an OR function is used to determine which nodes in the global tree are members of the subtree.
Each of the nodes 110a-g and 110n evaluates the predicate condition P to determine whether it satisfies the predicate condition. For example, if the predicate condition is (OS=Linux), then each of the nodes determines whether it hosts a Linux OS. A subtree local value is set at each node to indicate whether it satisfies the predicate condition. In one example, the subtree local value is a Boolean variable of 0 or 1. If the predicate condition is satisfied, a 1 is stored; otherwise a 0 is stored. The subtree local value is stored along with a subtree descriptor at each node, so each node can quickly determine whether it belongs to a subtree described by the subtree descriptor. The subtree descriptor may include the predicate condition or another value identifying the subtree corresponding to the predicate condition.
The OR function is used to combine the subtree local values for the predicate condition to form the subtree. For example, the subtree local values are sequentially aggregated starting from a lowest tier 301 in the global tree 300 to a highest tier 303. Each tier below the root node may include child nodes (children) of a parent node (parent) in a higher tier. Note that nodes in a higher tier might not have children in a lower tier. In this example, tier 301 includes nodes 110d, 110e, 110f, and 110g; tier 302 includes nodes 110a-c; and tier 303 includes the root node 110n.
An OR operation is performed at each node to compute a subtree aggregate value. For leaf nodes, the subtree aggregate value is simply their subtree local value. For nodes with children, the subtree aggregate value is computed by applying an OR function across the children's subtree aggregate values and the node's own subtree local value. For example, the node 110c has a subtree local value of 0 because it does not satisfy the predicate condition, i.e., it does not host a Linux OS. However, at the node 110c, the subtree local value of the node 110c and the subtree aggregate values of the children 110f and 110g are input to an OR function to determine the subtree aggregate value of the node 110c. Because the children 110f and 110g have subtree aggregate values of 1, the subtree aggregate value of 110c is 1. A similar operation is performed at any node with children. Thus, children satisfying the predicate condition are not cut off from the subtree when a parent does not satisfy the predicate condition, and a query including an attribute described in a predicate condition used to create a subtree is disseminated to all relevant nodes. Otherwise, for example, a query for Max or Avg CPU utilization for all Linux nodes could return an incorrect value if it did not reach the nodes 110f and 110g. The final subtree is shown as 320, including nodes 110n, 110b, 110c, 110f and 110g. The subtree 320 is shown with thicker connecting lines to distinguish it from nodes not in the subtree 320. Also note that for nodes 110c and 110n the subtree values are shown as (subtree local value, subtree aggregate value), and that for the nodes 110f, 110g and 110b the subtree local value and the subtree aggregate value are the same, which is 1 in this case.
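A compact sketch of this bottom-up OR aggregation follows; the local values mirror the example above, while the parent/child map (beyond the edges actually described) and the recursive traversal are assumptions standing in for the messages real nodes would exchange tier by tier.

```python
# Hedged sketch of computing subtree aggregate values with an OR function.
# Node names and local values follow the (OS = Linux) example; in the real
# system each node computes its aggregate and reports it to its parent.
def subtree_aggregate(node, children_of, local_value):
    """OR of the node's subtree local value and its children's aggregate values."""
    aggregate = local_value[node]
    for child in children_of.get(node, []):
        aggregate = aggregate or subtree_aggregate(child, children_of, local_value)
    return int(aggregate)

children_of = {"110n": ["110a", "110b", "110c"],
               "110a": ["110d", "110e"],
               "110c": ["110f", "110g"]}
local_value = {"110n": 0, "110a": 0, "110b": 1, "110c": 0,
               "110d": 0, "110e": 0, "110f": 1, "110g": 1}

print(subtree_aggregate("110c", children_of, local_value))   # 1: kept by its children
print(subtree_aggregate("110a", children_of, local_value))   # 0: not in the subtree
print(subtree_aggregate("110n", children_of, local_value))   # 1: root of subtree 320
```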
Thus, the subtree aggregate value identifies which nodes belong to the subtree and which nodes do not. For example, a subtree aggregate value set to a logical 0 may indicate that neither the node nor any of its descendants satisfies the predicate condition, so the node is not part of the subtree, and a subtree aggregate value of logical 1 may indicate that the node, or at least one of its descendants, satisfies the predicate condition and therefore belongs to the subtree. A logical 0 is shown for the subtree local and aggregate values for the nodes 110a, 110d and 110e.
The subtree aggregate values for each of the nodes in the subtree may be reported to and stored at the root node along with a subtree descriptor. In another embodiment, the root node may store only an indication of which of its children are members of a particular subtree, so a query corresponding to the subtree is distributed only to those children. In this embodiment, each parent node stores the aggregate value of the subtree for its child nodes so the parent node can determine to which child or children to send a query. Also, an indication that a particular subtree has been created, which may include a subtree descriptor and an indication of the root node for the subtree, may be disseminated to all the nodes in the system 100.
Subtree maintenance is also performed by the nodes. For example, whenever a node no longer satisfies the predicate condition for a subtree, it sets its subtree local value to 0, determines its subtree aggregate value again, and forwards the new aggregate value to its parent node. Likewise, a node may become part of a subtree if its attributes change such that the predicate condition is satisfied.
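A brief sketch of that maintenance step follows; `satisfies` is the hypothetical helper from the predicate sketch above, and the reporting callback stands in for the message a node would send to its parent.

```python
# Illustrative sketch of subtree maintenance on an attribute change: re-evaluate
# the predicate, recompute the OR aggregate, and report the new aggregate to the
# parent. The callback-based reporting is an assumption for the example.
def on_attribute_change(node_attributes, predicate, child_aggregates, report_to_parent):
    local = int(satisfies(node_attributes, predicate))       # re-evaluate locally
    aggregate = int(local or any(child_aggregates))          # OR with children's aggregates
    report_to_parent(aggregate)                              # propagate up the tree
    return local, aggregate

# A node re-imaged from Linux to Windows leaves the (OS = Linux) subtree unless
# one of its children still keeps the branch alive.
print(on_attribute_change({"OS": "Windows"}, [("OS", "=", "Linux")], [0, 0], print))
```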
Once a subtree is established, a query resolution mechanism is used to ensure that queries for nodes that satisfy a predicate are disseminated to the subtree created for that predicate. This mechanism allows the query to traverse only the subtree, avoiding the extra overhead of sending the query to the whole system.
The query probe may include a query predicate condition identifying a corresponding subtree. For example, the query predicate condition includes a query descriptor (e.g., OS), a query comparison operator (e.g., equality), and a corresponding query predicate value (e.g., Linux). The query predicate condition may correspond to a predicate condition used to create a subtree, so the relevant subtree can be easily identified. If subtree identifiers other than a predicate condition are used, the query predicate may include a subtree identifier so the root node can quickly determine which subtree, if any, is relevant to the query. The root node 110n may compare information in the query predicate condition, such as the predicate condition or subtree identifier, to stored subtree information to identify any subtrees relevant to the query. As shown in
Assume the query Q is for Max CPU utilization for nodes having a Linux OS. Each of the nodes in the subtree 320 that satisfies the query predicate condition sequentially responds to the query Q starting from the lowest tier. Also, the aggregation function is performed at each node in the subtree 320 having children. This is illustrated in
The nodes 110f and 110g send their Max CPU utilizations up the subtree to their parent, the node 110c. The node 110c also performs the aggregation function on the received Max CPU utilizations. Because node 110c does not satisfy the predicate, it does not include its own local value in the aggregation and aggregates only its children's values. For example, the node 110c determines the Max of 80 and 12 and sends the Max to its parent, the node 110n. The node 110n resolves the query Q by determining the Max CPU utilization of the subtree 320, which is 80. The Max CPU utilization for all nodes having a Linux OS is then transmitted to the node making the query, which is the system administrator 310 in this example.
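The bottom-up resolution just described can be sketched as follows; the node names and the CPU values 80 and 12 come from the example, while the value for node 110b, the exact tree edges, and the recursive walk are assumptions standing in for the per-tier message exchange.

```python
# Hedged sketch of resolving "Max CPU utilization for (OS = Linux) nodes" over
# subtree 320. Nodes that do not satisfy the predicate (110c, 110n) forward only
# their children's aggregated resolutions; nodes outside the subtree are skipped.
def resolve(node, children_of, in_subtree, satisfies_pred, cpu, agg=max):
    """Aggregate child resolutions within the subtree, plus the local value if
    the node itself satisfies the predicate."""
    values = [resolve(c, children_of, in_subtree, satisfies_pred, cpu, agg)
              for c in children_of.get(node, []) if in_subtree[c]]
    values = [v for v in values if v is not None]
    if satisfies_pred[node]:
        values.append(cpu[node])
    return agg(values) if values else None

children_of    = {"110n": ["110a", "110b", "110c"],
                  "110a": ["110d", "110e"], "110c": ["110f", "110g"]}
in_subtree     = {"110n": 1, "110a": 0, "110b": 1, "110c": 1,
                  "110d": 0, "110e": 0, "110f": 1, "110g": 1}
satisfies_pred = {"110n": 0, "110a": 0, "110b": 1, "110c": 0,
                  "110d": 0, "110e": 0, "110f": 1, "110g": 1}
cpu            = {"110b": 35, "110f": 80, "110g": 12}   # 35 is an assumed value

print(resolve("110n", children_of, in_subtree, satisfies_pred, cpu))   # 80
```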
It should be noted that information is exchanged between nodes as needed to identify a subtree and to perform an aggregate function. For example, a node may receive a query from a parent or a query resolution from a child including information identifying the subtree. The node uses this information to determine whether it is in the same subtree identified in the received information and responds accordingly. Also, information is exchanged to perform the aggregate function, such as attribute information from children needed to perform a Max or Min function. For Concat, a list of all the attributes is forwarded. For AVG, the number of values as well as the sum of values from each node are exchanged so parent nodes can calculate the average.
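For AVG specifically, a hedged sketch of the (count, sum) exchange mentioned above is shown below; forwarding the pair rather than a per-node average keeps the final result correct regardless of how many values each branch contributes.

```python
# Illustrative sketch of combining AVG partial aggregates: each child reports a
# (count, sum) pair, the parent adds its own contribution, and only the querier
# divides at the end.
def combine_avg_partials(partials):
    """Merge (count, sum) pairs from children and the local node."""
    count = sum(c for c, _ in partials)
    total = sum(s for _, s in partials)
    return count, total

# Two children report partials; the parent contributes a single local value of 40.
count, total = combine_avg_partials([(3, 150), (2, 90), (1, 40)])
print(total / count)   # ~46.67, the average over all six contributing nodes
```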
At step 701, a predicate condition is determined. This may include receiving a predicate condition or determining the predicate condition based on attributes commonly found in many queries. The predicate condition P is received at the root node 110n shown in
At step 702, the predicate condition is disseminated in a global tree, such as shown in
At step 703, for each node in the global tree, an indication is stored of whether the node belongs to the subtree based at least on the predicate condition. For example, as shown in
At step 801, a query probe is received at the root node of a global tree. The query probe includes a query aggregation function and a query predicate condition.
At step 802, a subtree in the global tree corresponding to the query predicate condition is determined. For example, the root node stores a subtree identifier. In one example, the subtree identifier includes a predicate condition used to create the subtree. If that predicate condition matches the query predicate condition, then the subtree in the global tree corresponding to the query is identified.
At step 803, the query probe is propagated from the root node only to nodes in the subtree, such as shown in
At step 804, a response is determined from a resolution of the query probe performed at each node in the subtree, and the response is received at the root node, such as shown in
The computer system 900 includes one or more processors, such as processor 902, providing an execution platform for executing software. Commands and data from the processor 902 are communicated over a communication bus 905. The computer system 900 also includes a main memory 904, such as a Random Access Memory (RAM), where software may be resident during runtime, and a secondary storage 906. The secondary storage 906 includes, for example, a hard disk drive and/or a removable storage drive, representing a floppy diskette drive, a magnetic tape drive, a compact disk drive, etc., or a nonvolatile memory where a copy of the software may be stored. The secondary storage 906 may also include ROM (read only memory), EPROM (erasable, programmable ROM), and EEPROM (electrically erasable, programmable ROM). In addition to storing software, the data storage 904 or 906 may be used to store information for creating subtrees, resolving queries in the subtrees, data center level metrics, and any other information that may be relevant to creating trees and resolving queries in an in-network aggregation system.
A user interfaces with the computer system 900 with one or more I/O devices 908, such as a keyboard, a mouse, a stylus, display, and the like. A network interface 910 is provided for communicating with other computer systems via a network, which may include other nodes in the system 100.
One or more of the steps of the methods 700 and 800 and other steps described herein may be implemented as software embedded on a computer readable medium, such as the memory 904 and/or data storage 906, and executed on the computer system 900, for example, by the processor 902. The steps may be embodied by a computer program, which may exist in a variety of forms both active and inactive. For example, they may exist as software program(s) comprised of program instructions in source code, object code, executable code or other formats for performing some of the steps. Any of the above may be embodied on a computer readable medium, which include storage devices and signals, in compressed or uncompressed form. Examples of suitable computer readable storage devices include conventional computer system RAM (random access memory), ROM (read only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), and magnetic or optical disks or tapes. Examples of computer readable signals, whether modulated using a carrier or not, are signals that a computer system hosting or running the computer program may be configured to access, including signals downloaded through the Internet or other networks. Concrete examples of the foregoing include distribution of the programs on a CD ROM or via Internet download. In a sense, the Internet itself, as an abstract entity, is a computer readable medium. The same is true of computer networks in general. It is therefore to be understood that those functions enumerated below may be performed by any electronic device capable of executing the above-described functions.
The aggregation agent 112 may include modules comprised of software instructions operable to control the processor 902, at least in cooperation with agents at other nodes in the network, to create a subtree within a global tree. Another module is operable to control the processor 902, in cooperation with agents at other nodes in the network, to resolve a query put to the subtree in order to obtain a query resolution.
The second module 1180 further includes instruction sets 1188, 1190 and 1192. The set of instructions 1188 is operable to control the processor to receive an aggregated query value from the agent at a child node. The set of instructions 1190 is operable to control the processor to combine, according to the query aggregation function, the aggregated value at the node with the aggregated query value received from the agent at each child node. The combined result is the aggregated query value of the first node. The set of instructions 1192 is operable to control the processor to report the aggregated query value for the node to a higher tier agent.
While the embodiments have been described with reference to examples, those skilled in the art will be able to make various modifications to the described embodiments without departing from the scope of the claimed embodiments.
Talwar, Vanish, Milojicic, Dejan S., Ko, Steve, Iyer, Subramoniam N., Yalagandula, Praveen