A method for systematically configuring and deploying an empirical model used for fault detection and equipment health monitoring. The method is driven by a set of data preprocessing and model performance metric subsystems that, when applied to a raw data set, produce an optimal empirical model.
1. A method for implementing a monitoring system for monitoring equipment health, comprising the steps of:
generating a first empirical model from reference data representing a piece of equipment;
generating a second empirical model from the reference data;
generating at least one performance metric for said first empirical model including intentionally adding a disturbance to at least one variable;
generating at least one performance metric for said second empirical model including intentionally adding a disturbance to at least one variable;
upon comparing the at least one performance metric from said first and said second empirical models, selecting one of them to use for monitoring the piece of equipment.
17. An apparatus for determining the performance of a model based monitoring system for monitoring equipment health, comprising:
a processor for executing computer code;
a memory for storing reference data and test data representative of a piece of equipment;
a first computer code module disposed to cause said processor to generate a model of said piece of equipment from said reference data;
a second computer code module disposed to cause said processor to use said model to generate normal estimates of said test data; and
a third computer code module disposed to cause said processor to intentionally add a disturbance value to one variable per sample, for at least some of the samples comprising said test data, generate an estimate of each such sample, compare the estimate to the corresponding one of said normal estimates for said test data, and generate a model performance metric therefrom.
31. An apparatus for determining the performance of a model based monitoring system for monitoring equipment health, comprising:
a processor for executing computer code;
a memory for storing reference data and test data representative of a piece of equipment;
a first computer code module disposed to cause said processor to generate a model of said piece of equipment from said reference data;
a second computer code module disposed to cause said processor to use said model to generate normal estimates of said test data; and
a third computer code module to generate a robustness metric by being disposed to cause said processor to:
add a disturbance value to one variable per sample, for at least some of the samples comprising said test data,
generate an estimate of each such sample,
compare the estimate to the corresponding one of said normal estimates for said test data,
sum the absolute values of all differences between estimates for a variable with the disturbance value and estimates for the same variable without the disturbance value, and
divide the sum by the quantity of the count of all samples wherein that variable was disturbed multiplied by the disturbance value.
33. An apparatus for determining the performance of a model based monitoring system for monitoring equipment health, comprising:
a processor for executing computer code;
a memory for storing reference data and test data representative of a piece of equipment;
a first computer code module disposed to cause said processor to generate a model of said piece of equipment from said reference data;
a second computer code module disposed to cause said processor to use said model to generate normal estimates of said test data; and
a third computer code module disposed to cause said processor to:
add a disturbance value to one variable per sample, for at least some of the samples comprising said test data,
generate an estimate of each such sample,
compare the estimate to the corresponding one of said normal estimates for said test data,
determine a bias in estimates for a target variable,
determine a measure of variance for the target variable, and
determine a minimum detectable shift for the target variable equivalent to a measure of robustness for the target variable plus the quantity of the estimate bias for the target variable multiplied by the measure of variance in the target variable.
32. An apparatus for determining the performance of a model based monitoring system for monitoring equipment health, comprising:
a processor for executing computer code;
a memory for storing reference data and test data representative of a piece of equipment;
a first computer code module disposed to cause said processor to generate a model of said piece of equipment from said reference data;
a second computer code module disposed to cause said processor to use said model to generate normal estimates of said test data; and
a third computer code module to generate a spillover metric by being disposed to cause said processor to:
add a disturbance value to one variable per sample, for at least some of the samples comprising said test data,
generate an estimate of each such sample,
compare the estimate to the corresponding one of said normal estimates for said test data,
determine a measure of spillover from a first variable to a second variable by subtracting the estimates of said second variable for samples in which said first variable has the disturbance value added to it, from the estimates of said second variable in the same samples when no disturbance value has been added to any variable of such samples,
determining a normalized Root Mean Square (RMS) for the resulting differences, and
dividing by a measure of variance of said first variable.
26. A method for implementing a monitoring system for monitoring equipment health, comprising the steps of:
generating a first empirical model from reference data representing a piece of equipment;
generating a second empirical model from the reference data;
generating at least one measure of spillover for said first empirical model, comprising:
providing a set of multivariate, normal test data samples representative of normal operation of said piece of equipment,
adding a disturbance to at least one variable of said normal test data samples for forming disturbed normal test data,
generating estimates with each said empirical model of normal test data,
generating estimates with each said empirical model of disturbed normal test data,
for each said empirical model, differencing the estimates of at least one other variable for said disturbed normal test data with the estimates of the other variable for said normal test data,
determining normalized Root Mean Square (RMS) for said differences, and
dividing by a measure of variance in the disturbed variable absent the disturbance, to determine a measure of impact on the other variable for the disturbed variable;
generating at least one measure of spillover for said second empirical model; and
upon comparing the at least one measure of spillover from said first and said second empirical models, selecting one of them to use for monitoring the piece of equipment.
28. A method for implementing a monitoring system for monitoring equipment health, comprising the steps of:
generating a first empirical model from reference data representing a piece of equipment;
generating a second empirical model from the reference data;
generating at least one measure of minimum detectable shift for said first empirical model, comprising:
providing a set of multivariate, normal test data samples representative of normal operation of said piece of equipment,
adding a disturbance to at least one target variable of said set of normal test data, over at least some of said normal test data samples, to form disturbed normal test data,
generating estimates with each said empirical model of said normal test data,
generating estimates with each said empirical model of said disturbed normal test data,
for each said empirical model, differencing the estimates of said disturbed normal test data with the estimates of said normal test data to determine a measure of robustness for the target variable,
determining a bias in estimates for the target variable,
determining a measure of variance for the target variable, and
determining a minimum detectable shift for the target variable equivalent to the measure of robustness for the target variable plus the quantity of the estimate bias for the target variable multiplied by the measure of variance in the target variable;
generating at least one measure of minimum detectable shift for said second empirical model; and
upon comparing the at least one performance metric from said first and said second empirical models, selecting one of them to use for monitoring the piece of equipment.
2. A method according to
3. A method according to
4. A method according to
providing a set of multivariate, normal test data samples representative of normal operation of said piece of equipment;
adding the disturbance to at least one variable of at least some of said normal test data samples for forming disturbed normal test data;
generating estimates with each said empirical model of normal test data;
generating estimates with each said empirical model of said disturbed normal test data; and
for each said empirical model, differencing the estimates of said disturbed normal test data with the estimates of said normal test data to determine a measure of robustness for each disturbed variable.
5. A method according to
6. A method according to
7. A method according to
10. A method according to
11. A method according to
providing a set of multivariate, normal test data samples representative of normal operation of said piece of equipment;
adding the disturbance to at least one variable of at least some of said normal test data samples for forming disturbed normal test data;
generating estimates with each said empirical model of normal test data;
generating estimates with each said empirical model of said disturbed normal test data; and
for each said empirical model, differencing the estimates of at least one other variable for said disturbed normal test data with the estimates of the other variable for said normal test data, determining normalized Root Mean Square (RMS) for said differences, and dividing by a measure of variance in the disturbed variable absent the disturbance, to determine a measure of impact on the other variable for the disturbed variable.
12. A method according to
13. A method according to
14. A method according to
providing a set of multivariate, normal test data samples representative of normal operation of said piece of equipment;
adding the disturbance to at least one target variable of at least some of said normal test data samples for forming disturbed normal test data;
generating estimates with each said empirical model of normal test data;
generating estimates with each said empirical model of said disturbed normal test data;
for each said empirical model, differencing the estimates of said disturbed normal test data with the estimates of said normal test data to determine a measure of robustness for the target variable;
determining a bias in estimates for the target variable;
determining a measure of variance for the target variable; and
determining a minimum detectable shift for the target variable equivalent to the measure of robustness for the target variable plus the quantity of the estimate bias for the target variable multiplied by the measure of variance in the target variable.
18. An apparatus according to
19. An apparatus according to
20. An apparatus according to
21. An apparatus according to
22. An apparatus according to
23. An apparatus according to
27. A method according to
This application claims the benefit of priority under 35 U.S.C. § 119(e) to U.S. Provisional application No. 60/674,581 filed Apr. 25, 2005.
The present invention relates to equipment health monitoring, and more particularly to the setup and deployment of model-based equipment health monitoring systems.
Recently, new techniques have been commercialized to provide equipment health monitoring and early warning of equipment failure. Unlike prior techniques that depend on a precise physical understanding of the mechanics of the machine's design, these new techniques rely on empirical modeling methods to “learn” normal equipment behavior so as to detect nascent signs of abnormal behavior when monitoring an ailing machine or process. More specifically, such new techniques learn operational dynamics of equipment from sensor data on that equipment, and build this learning into a predictive model. The predictive model is a software representation of the equipment's performance, and is used to generate estimates of equipment behavior in real time. Comparison of the prediction to the actual ongoing measured sensor signals provides for detection of anomalies.
According to one of the new techniques described in U.S. Pat. No. 5,764,509 to Wegerich et al., sensor data from equipment to be monitored is accumulated and used to train an empirical model of the equipment. The training includes determining a matrix of learned observations of sets of sensor values inclusive of sensor minimums and maximums. The model is then used online to monitor equipment health, by generating estimates of sensor signals in response to measurement of actual sensor signals from the equipment. The actual measured values and the estimated values are differenced to produce residuals. The residuals can be tested using a statistical hypothesis test to determine with great sensitivity when the residuals become anomalous, indicative of incipient equipment failure.
While the empirical model techniques have proven to be more sensitive and more robust than traditional physics-based models, allowing even for personalized models specific to individual machines, the development and deployment of the equipment models represent significant effort. Empirical models are not amenable to a complete and thorough elucidation of their function, and so creating properly functioning models is prone to some trial and error. Furthermore, since they are largely data-driven, they can only provide as much efficacy for equipment health monitoring as the data allows. It is often difficult to know ahead of time how well a data-derived model will be able to detect incipient equipment health problems, but it is also unreasonable to await a real equipment failure to see the efficacy of the model. Tuning of an empirical model is also more a matter of art than science. Again, because the model is derived from data, the tuning needs of the model are heavily dependent on the quality of the data vis-à-vis the equipment's dynamic range and the manner in which the equipment can fail. Currently, model-based monitoring systems require significant manual investment in model development for the reasons stated above.
There is a need for means to better automate the empirical modeling process for equipment health monitoring solutions, and to improve the rate of successful model development. What is needed is a means of determining the capabilities of a given data-derived model, and of comparing alternative models. What is further needed is a way of automating deployment of individual data-derived models for fleets of similar equipment without significant human intervention. Furthermore, a means is needed for tuning a model in-line, without human intervention, whenever it is adapted.
A method and system is provided for automated measurement of model performance in a data-driven equipment health monitoring system, for use in early detection of equipment problems and process problems in any kind of mechanical or industrial engine or process. The invention enables the automatic development and deployment of empirical model-based monitoring systems, and the models of the monitored equipment they are based on. Accordingly, the invention comprises a number of modules for automatically determining in software the accuracy and robustness of an algorithmic data-driven model, as well as other performance metrics, and deploying a model selected based on such performance metrics, or redesigning said model as necessary.
The invention enables quick deployment of large numbers of empirical models for monitoring large fleets of assets (jet engines, automobiles or power plants, for example), which eases the burden on systems engineers who would normally set up models individually using manually intensive techniques. This results in a highly scalable system for condition-based monitoring of equipment. In addition, each model and each variable of every model will have associated with it measures of performance that assess model accuracy, robustness, spillover, bias and minimum detectable shift. The measures can easily be re-calculated at any time if need be to address changes in the model due to adaptation, system changes and anything else that could affect model performance.
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as the preferred mode of use, further objectives and advantages thereof, is best understood by reference to the following detailed description of the embodiments in conjunction with the accompanying drawing, wherein:
An equipment health monitoring system according to the invention is shown in
Separately, a workbench desktop application 135 is used by an engineer to develop the model(s) used by the estimation engine 105. Data representative of the normal operation of equipment to be monitored, such as data from sensors on a jet engine representative of its performance throughout a flight envelope, is used in the workbench 135 to build the model. Model training module 140 converts the data into selected learned reference observations, which comprise the learned reference library 110. A model performance module 145 provides the engineer with measures of model efficacy in the form of accuracy, robustness, spillover, bias and minimum detectable shift, which aids in determining which empirical model to deploy in the learned reference library 110. Model performance module 145 can also be configured to run in real-time to assess model efficacy after an adaptation of the model, which is carried out in real-time by adaptation module 150, responsive to rules that operate on the input data from input module 115. The adaptation module 150 has the ability to update the learned reference library 110, for example, if an input parameter such as an ambient temperature exceeds a previously experienced range learned by the model, and the model needs to accommodate the new extra-range data into its learning.
According to the present invention, the modeling technique can be chosen from a variety of known empirical modeling techniques, or even data-driven techniques that will yet be developed. By way of example, models based on kernel regression, radial basis functions, similarity-based modeling, principal component analysis, linear regression, partial least squares, neural networks, and support vector regression are usable in the context of the present invention. In particular, modeling methods that are kernel-based are useful in the present invention. These methods can be described by the equation:
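(a reconstruction based on the description that follows; the exact expression may differ)

$$x_{est} = \sum_{i} c_i \, K(x_{new}, x_i)$$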
where a vector x_est of sensor signal estimates is generated as a weighted sum of results of a kernel function K, which compares the input vector x_new of sensor signal measurements to multiple learned snapshots of sensor signal combinations, x_i. The kernel function results are combined according to weights c_i, which can be determined in a number of ways. The above form is an “autoassociative” form, in which all estimated output signals are also represented by input signals. This contrasts with the “inferential” form in which certain output signal estimates are provided that are not represented as inputs, but are instead inferred from the inputs:
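(the corresponding inferential form, again reconstructed from the surrounding description)

$$\hat{y} = \sum_{i} c_i \, K(x_{new}, x_i)$$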
where in this case, y-hat is an inferred sensor estimate. In a similar fashion, more than one sensor can be simultaneously inferred.
In a preferred embodiment of the invention, the modeling technique used in the estimation engine 105 is similarity-based modeling, or SBM. According to this method, multivariate snapshots of sensor data are used to create a model comprising a matrix D of learned reference observations. Upon presentation of a new input observation X_in comprising sensor signal measurements of equipment behavior, autoassociative estimates x_est are calculated according to:
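(a commonly cited statement of the SBM estimate, reconstructed here; the weight vector w is often additionally normalized by the sum of its elements)

$$x_{est} = D \cdot w, \qquad w = \left( D^{T} \otimes D \right)^{-1} \cdot \left( D^{T} \otimes x_{in} \right)$$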
where the similarity operator is signified by the symbol ⊗, and can be chosen from a number of alternative forms. Generally, the similarity operator compares two vectors at a time and returns a measure of similarity for each such comparison. The similarity operator can operate on the vectors as a whole (vector-to-vector comparison) or elementally, in which case the vector similarity is provided by averaging the elemental results. The similarity operator is such that it ranges between two boundary values (e.g., zero to one), takes on the value of one of the boundaries when the vectors being compared are identical, and approaches the other boundary value as the vectors being compared become increasingly dissimilar.
An example of one similarity operator that may be used in a preferred embodiment of the invention is given by:
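(a Gaussian-style kernel consistent with this description, shown as a plausible reconstruction)

$$K(x_{in}, x_i) = e^{-\left\| x_{in} - x_i \right\|^{2} / h^{2}}$$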
where h is a width parameter that controls the sensitivity of the similarity to the distance between the input vector x_in and the example vector x_i. Another example of a similarity operator is given by:
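(an assumed elemental form consistent with the definitions that follow; the exact expression may differ)

$$K(A_x, B_x) = \frac{1}{N} \sum_{i=1}^{N} \left[ 1 + \frac{1}{C} \left( \frac{\left| A_{x,i} - B_{x,i} \right|}{R_i} \right)^{\lambda} \right]^{-1}$$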
where N is the number of sensor variables in a given observation, C and λ are selectable tuning parameters, R_i is the expected range for sensor variable i, and the elements of vectors A_x and B_x corresponding to sensor i are treated individually.
Further according to a preferred embodiment of the present invention, an SBM-based model can be created in real-time with each new input observation by localizing within the learned reference library 110 to those learned observations with particular relevance to the input observation, and constituting the D matrix from just those observations. With the next input observation, the D matrix would be reconstituted from a different subset of the learned reference matrix, and so on. A number of means of localizing may be used, including nearest neighbors to the input vector, and highest similarity scores.
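As an illustration of this localization step, the following sketch (hypothetical function and variable names, using nearest-neighbor selection, which is one of the means mentioned above) builds the D matrix from the k learned observations closest to the current input:

```python
import numpy as np

def localize_reference(library, x_in, k=25):
    """Illustrative sketch, not the patented implementation: localize the
    learned reference library to the k observations most relevant to the
    current input, here by nearest-neighbor (Euclidean) distance.

    library : (n_obs, n_sensors) array of learned reference observations
    x_in    : (n_sensors,) current input observation
    returns : (n_sensors, k) D matrix built from the selected observations
    """
    dists = np.linalg.norm(library - x_in, axis=1)  # distance of x_in to every learned observation
    nearest = np.argsort(dists)[:k]                 # indices of the k closest observations
    return library[nearest].T                       # constitute D from just those observations
```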
Data-driven modeling introduces the problem that some models perform better than other models derived from the same data, or from similar data. Optimally, the best model is deployed to monitor equipment, and to this end, the present invention provides the model performance module 145 for generating metrics by which the best model can be automatically deployed.
To measure the performance of a modeling technique, several performance metrics are used. The main objective of the model in the context of fault detection is to reliably detect shifts in modeled parameters. Therefore, the accuracy of a model is not always the best measure of the performance of the model. A more comprehensive set of performance metrics is needed to assess a model's ability to detect deviations from normality in addition to the accuracy of the model. To accomplish this, a set of performance metrics is defined according to the invention. These metrics measure the accuracy, robustness, spillover, bias and minimum detectable shift for a given model.
Individual Variable Modeling Accuracy—This is a measure of the accuracy of the autoassociative and/or inferential model for each variable in each group of variables for each test data set. Accuracy is calculated for each variable using a normalized residual Root Mean Square (RMS) calculation (acc_p). This is calculated by dividing the Root Mean Square (RMS) of the residual for each variable (rms_p) by the standard deviation of the variable itself (σ_p). A smaller acc_p corresponds to a higher accuracy. This metric tends to favor over-fitting, and therefore must be assessed with a corresponding robustness measurement. The accuracy measurement for each variable is calculated according to:
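(reconstructed from the definitions above, for N test samples of variable p)

$$acc_p = \frac{rms_p}{\sigma_p}, \qquad rms_p = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( x_p(i) - \hat{x}_p(i) \right)^{2}}$$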
Overall Model Accuracy—The overall accuracy for each model containing M modeled output variables is generated by:
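(reconstructed here as the mean of the individual accuracy measurements over the M variables)

$$acc = \frac{1}{M} \sum_{i=1}^{M} acc_i$$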
and the spread in accuracy is given by the standard deviation of acc_i:
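(reconstructed as the sample standard deviation of the individual accuracy values)

$$\sigma_{acc} = \sqrt{\frac{1}{M-1} \sum_{i=1}^{M} \left( acc_i - acc \right)^{2}}$$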
Individual Variable Modeling Robustness—This is a measurement of the ability of the model to detect disturbances in each one of the modeled variables. When a fault occurs in a monitored system, it usually (but not always) manifests itself in more than one of the modeled variables. In order to realistically measure robustness, one must accurately simulate fault scenarios and then assess robustness for each variable expected to show deviations from normality. Unfortunately, this is a very impractical approach. To overcome the impracticality, a disturbance is added to each individual variable. If the amount of reference data permits, disturbances are introduced in non-overlapped windows throughout the length of the available reference dataset. The robustness for each variable is then calculated in the corresponding disturbance window. In this way, robustness for all variables may be calculated in one pass. This approach assumes that the length of the reference data set is L ≥ W × M, where W is the window size and M is the number of variables to be tested in the model. If this is not the case, the disturbance is added M separate times and the analysis is done separately for each variable.
Measuring robustness—Add or subtract a constant amount from each sample of the windowed region of data, depending on whether the sample is below or above the mean of the signal, respectively. The amount to add (or subtract) is typically ½ the range of the variable, so that the disturbance is usually very close to being within the normal data range. Turning to
Both the original set of reference data as well as the data with perturbations is input to the candidate model, and estimates are generated. The objective is to see how badly the model estimates are influenced by the perturbed data, especially in view of how well the model makes estimates when the data is pristine. To calculate the robustness metric for each variable the following equation is used:
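(a reconstruction consistent with this description and with the claims)

$$rob_p = \frac{\sum_{i=1}^{N_A} A(i) + \sum_{j=1}^{N_S} S(j)}{\left( N_A + N_S \right) \Delta}$$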
Here, the A(i)s are the estimates of the input with the disturbance minus the estimates without the disturbance for the samples with the added disturbances; and the S(j)s are the estimates without the disturbance minus the estimates with the disturbance for the samples with the subtracted disturbances. N_A and N_S are the number of samples with added and subtracted disturbances, respectively, and Δ is the size of the disturbance. Ideally, rob should be equal to 0, meaning that the estimate with the disturbance is equal to the estimate without the disturbance, and the model is extremely robust in the face of anomalous input. If the value of rob is 1 or greater, the estimate is either completely following the disturbance or overshooting it.
Overall Model Robustness—The overall robustness is just the average of the individual modeled variable robustness measurements over variables p and the spread in robustness is given by the standard deviation of the individual robustness measurements.
Spillover—This measures the relative amount that variables in a model deviate from normality when another variable is perturbed. In contrast to robustness, spillover measures the robustness on all other variables when one variable is perturbed. Importantly, this metric is not calculated in the case of an inferential model, where perturbation of an input (an independent parameter) would not be expected to meaningfully impact an output in terms of robustness, since the outputs are entirely dependent on the inputs. The spillover measurement for each variable is calculated using a normalized RMS calculation (spr_p|q), which is given by:
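(a reconstruction consistent with this description and with the claims, where q is the disturbed variable, p is the affected variable, and N_q is the number of samples in which q was disturbed)

$$spr_{p|q} = \frac{1}{\sigma_q} \sqrt{\frac{1}{N_q} \sum_{i=1}^{N_q} \left( \hat{x}_{p|dist_q}(i) - \hat{x}_{p|norm_q}(i) \right)^{2}}$$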
where x̂_p|norm
Overall spillover—The overall spillover metric for a model is given by:
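(reconstructed here as the average of the individual spillover measurements over all disturbed/affected variable pairs)

$$spr = \frac{1}{M(M-1)} \sum_{q=1}^{M} \sum_{p \neq q} spr_{p|q}$$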
Model Bias—This metric gives a measure of the constant difference between the model estimate and actual data above and below the mean of the data. It is calculated for each variable using the following formula:
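(one plausible form, an assumed reconstruction using the variables defined below; the original additionally distinguishes samples above and below the mean of the data)

$$Bias_p = \frac{\left| \operatorname{mean}\left( \hat{X}_p - X_p \right) \right|}{\sigma_p}$$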
Here, X_p is a vector of samples for the input variable p, X̂_p is a vector of corresponding estimates for that input variable, and σ_p is the standard deviation of X_p. The model bias metric is calculated on unperturbed, normal data.
Minimum Detectable Shift—The minimum detectable shift that can be expected for each variable is given by:
MDS_p = rob_p + σ_p × Bias_p.
Turning to
A rule set may be used to operate on the model metrics to determine which candidate model to deploy as the optimal model. The rule set can be implemented as software in the model performance module 145. According to one preferred embodiment, the monitoring system of the present invention is provided with identification of which model variables are considered “performance” variables, that is, the sensors that are watched most closely for expected signs of failure known to a domain expert for the subject equipment. Further, the inventive system is supplied with the desired target levels of minimum detectable shift on those variables. These performance variables will often be a subset of the available sensors, and the challenge is to identify a group of sensors inclusive of the performance variables which optimally models the performance variables and provides best fault detection based on them. At the model filtering stage 430, these requirements are used to determine whether each model meets or fails the requirements. If a model cannot detect the minimum desired detectable shift for a performance variable, it is eliminated as a candidate. Once the models have been thus filtered, they are ranked in step 435 according to the rule set operating on the model metrics as desired by the user. For example, once a model has met the minimum desired detectable shift requirements for certain performance variables, the models may be ranked on accuracy alone, and the most accurate model selected for deployment. As an alternative example, the rule set may specify that a ranking be made for all models according to each of their model metrics. Their ranks across all metrics are averaged, and the highest average ranking selects the model to be deployed. As yet another example, further criteria on rank may be applied, such that the highest average ranking model is chosen, so long as that model is never ranked in the bottom quartile for any given metric. In yet another embodiment, some or all of the model metrics may be combined in a weighting function that assigns importance to each metric, to provide an overall model score, the highest scoring model being the model selected for deployment.
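As an illustration of such a rule set, the following sketch (hypothetical structure and field names; the invention does not prescribe this code) filters candidate models on the minimum detectable shift requirements and then ranks the survivors on accuracy alone:

```python
def select_model(candidates, mds_targets):
    """Illustrative filter-then-rank rule set.

    candidates  : list of dicts with 'name', 'accuracy' (overall acc, lower is
                  better) and 'mds' (per-variable minimum detectable shift);
                  hypothetical fields for this example.
    mds_targets : dict mapping each performance variable to the largest
                  acceptable minimum detectable shift.
    """
    # Filtering stage: eliminate any model that cannot detect the desired
    # shift on every designated performance variable.
    viable = [m for m in candidates
              if all(m['mds'].get(var, float('inf')) <= target
                     for var, target in mds_targets.items())]
    if not viable:
        return None  # no candidate meets the requirements; models must be redesigned
    # Ranking stage: rank the remaining models on overall accuracy alone and
    # deploy the most accurate one.
    return min(viable, key=lambda m: m['accuracy'])
```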