The present approach relates to automated verification of the sufficiency and/or quality of a service operation performed on an asset by a field engineer. In one implementation, the approach employs autonomous tests and/or compares the performance of the serviced asset with that of comparable peers operating in similar or co-local environments.
1. A method for validating a service operation performed on a device, comprising:
receiving, via a processor integral with the device, a service completion indication upon completion of a service operation to the device;
in response to the service completion indication, initiating, via the processor, validation of the service operation;
calculating, via the processor, expected performance metrics based on one or more performance models for the device;
generating, via the processor, actual performance metrics for the device by operating the device after receipt of the service completion indication;
in response to the actual performance metrics being within a specified threshold of the expected performance metrics, generating, via the processor, a completion notification that the service operation is complete; and
in response to the actual performance metrics being outside the specified threshold:
identifying, via the processor, a subset of comparison devices within a population of comparison devices that are comparable to the device by querying a look-up agent communicatively coupled to the processor;
based on the subset of comparison devices, requesting, via the processor, comparison performance metrics from each comparison device of the subset of comparison devices;
comparing, via the processor, the actual performance metrics to the comparison performance metrics; and
based on the comparison of the actual performance metrics to the comparison performance metrics, generating, via the processor, a notification that the service operation is either satisfactory or not satisfactory.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
9. The method of
10. The method of
11. The method of
12. The method of
13. The method of
14. A device configured to be serviced on-site, the device comprising:
a processor integral with the device; and
a data storage or memory storing processor-executable routines which, when invoked and executed by the processor, implement one or both of an interface agent and validation agent, which alone or in combination:
receive a service completion indication upon completion of a service operation performed on the device;
in response to the service completion indication, calculate expected performance metrics based on one or more performance models for the device;
generate actual performance metrics for the device by operating the device after receipt of the service completion indication;
in response to the actual performance metrics being within a specified threshold of the expected performance metrics, generate a completion notification that the service is complete; and
in response to the actual performance metrics being outside the specified threshold:
request a subset of comparison devices within a population of comparison devices that are comparable to the device via a look-up agent;
based on the subset of comparison devices, request comparison performance metrics from each comparison device of the subset of comparison devices;
compare the actual performance metrics to the comparison performance metrics; and
based on the comparison of the actual performance metrics to the comparison performance metrics, generate a notification that the service operation is either satisfactory or not satisfactory.
15. The device of
16. The device of
17. The device of
18. A method for validating a service operation performed on an industrial asset, comprising:
in response to an indication that a service operation is completed on the industrial asset, identifying, via a processor integral with the industrial asset, a subset of comparison devices within a population of comparison devices that are comparable to the industrial asset based on one or more criteria;
acquiring, via the processor, one or more comparison performance metrics from the subset of comparison devices;
comparing, via the processor, the one or more comparison performance metrics to actual performance metrics generated after receipt of the indication that the service operation is completed; and
based on the comparison of the actual performance metrics to the comparison performance metrics, generating, via the processor, a notification that the service operation is either satisfactory or not satisfactory.
19. The method of
20. The method of
The subject matter disclosed herein relates to the servicing and repair of field assets including large-scale assets used in power generation and/or transmission, other utility applications, transportation, production of materials or goods, and so forth.
For a variety of large-scale assets used in utilities, manufacturing, mining and resource production, transportation, and so forth, a given asset may be in a fixed or constrained location, and thus must be serviced or maintained at that location. Examples of such assets may include, but are not limited to, turbines used in power generation (e.g., wind turbines, steam turbines, gas turbines, hydroelectric turbines, and so forth), electricity transportation or storage infrastructure, water transportation infrastructure, rail systems, automated or semi-automated manufacturing equipment (e.g., mobile or fixed placement robots), oil and gas pumping equipment or separators, and so forth. Due to the localized nature of such assets, routine or on-call servicing of a given asset may be performed by a field engineer who travels on-site to perform a repair or service operation.
In conventional approaches, when a field engineer performs service on an asset, the service completion and quality are determined at the discretion of the field engineer. Post service, the field engineer may perform tests to verify that the service is successful and the asset is functioning normally. These tests are typically not exhaustive. Furthermore, the interpretation of certain test results may be at least partly subjective, and thus biased by a given technician's expertise, technical background, personal judgment, and so on. As a result, it is not uncommon for the asset to require service for the same symptoms repeatedly and/or for the asset's productivity to be sub-optimal.
In one embodiment, a method is provided for validating a service operation performed on a device. In accordance with this method, a service completion indication is received at a device upon completion of a service operation to the device. In response to the service completion indication, validation of the service operation is requested by a validation agent executing on or in communication with the device. An expected performance metric is calculated based on one or more performance models for the device. One or more actual performance metrics for the device are generated after receipt of the service completion indication. A notification that the service is complete is generated if the actual performance metrics are within a specified threshold of the expected performance metrics. If the actual performance metrics are not within the specified threshold, a list of comparison devices is requested from a look-up agent. Based on the list of comparison devices, comparison performance metrics are requested from each listed comparison device. The actual performance metrics are compared to the comparison performance metrics. Based on the comparison of the actual performance metrics to the comparison performance metrics, a notification is generated that the service operation is either satisfactory or not satisfactory.
In a further embodiment, a device configured to be serviced on-site is provided. The device includes a processor component and a data storage or memory storing processor executable routines. The routines, when invoked and executed by the processor, implement one or both of an interface agent and validation agent, which alone or in combination: receive a service completion indication upon completion of a service operation to the device; in response to the service completion indication, calculate an expected performance metric based on one or more performance models for the device; generate one or more actual performance metrics for the device after receipt of the service completion indication; generate a notification that the service is complete if the actual performance metrics are within a specified threshold of the expected performance metrics; if the actual performance metrics are not within the specified threshold, request a list of comparison devices from a look-up agent; based on the list of comparison devices, request comparison performance metrics from each listed comparison device; compare the actual performance metrics to the comparison performance metrics; and based on the comparison of the actual performance metrics to the comparison performance metrics, generate a notification that the service operation is either satisfactory or not satisfactory.
In an additional embodiment, a method is provided for validating a service operation performed on a device. In accordance with this method, in response to an indication that a service operation is completed on the device, one or more comparison devices are automatically identified based on one or more criteria. One or more comparison performance metrics are automatically acquired from the one or more comparison devices. The one or more comparison performance metrics are compared to actual performance metrics generated after receipt of the indication that the service operation is completed. Based on the comparison of the actual performance metrics to the comparison performance metrics, a notification is generated that the service operation is either satisfactory or not satisfactory.
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
When introducing elements of various embodiments of the present invention, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Furthermore, any numerical examples in the following discussion are intended to be non-limiting, and thus additional numerical values, ranges, and percentages are within the scope of the disclosed embodiments.
The present discussion relates to an automated approach for verifying the sufficiency and/or quality of a service operation performed on an asset by a field engineer. In one implementation, the approach employs autonomous tests and/or compares the performance of the serviced asset with that of its peers operating in similar or co-local environments. The present approach therefore addresses issues in evaluating the sufficiency of a service operation that arise from subjective human error and variability.
By way of example, and turning to
In the depicted example, the device 100 includes one or more microprocessor components 110, a memory 112 or other data storage structure configured to store data used or generated by the processor 110 and/or models or routines executed or implemented by the processor 110, and one or more sensors 114 configured to generate sensed measurements that may be used in evaluating the performance or operation of the device 100 when running. Though the processor 110, memory 112, and sensors 114 are all depicted as being internal to or integral with the device 100 in the example of
In the depicted example, various agents may be implemented on or in communication with the device 100 (such as via stored routines executed on a processor 110, application specific integrated circuitry, and so forth) to facilitate interaction with the field engineer 102 and validation of a service operation. For example, in one implementation an agent-oriented architecture is employed as a basis for the organization and communication of the agent aspects. Processor or circuitry-implemented agents may, when running or invoked (either by a user or automatically), act with some degree of autonomy with respect to performing programmed or configured tasks without direct control or interaction by a user and may invoke other tasks (such as communication with other devices or data stores, acquisition of sensor data, operation of the device 100, and so forth) needed to complete the role attributed to a given agent, such as service validation.
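For illustration only, the hypothetical Python sketch below shows one way such loosely coupled, message-driven agents might be organized. It is a minimal sketch under assumed names: the `Agent`, `Message`, and `UIAgent` classes and their methods are introduced here for explanation and are not drawn from the disclosure.

```python
# A minimal sketch only, assuming a simple message-queue design; the class,
# method, and message names are hypothetical, not taken from the disclosure.
from dataclasses import dataclass, field
from queue import Queue


@dataclass
class Message:
    performative: str               # e.g., "request" or "inform"
    content: dict = field(default_factory=dict)


class Agent:
    """Base agent: once invoked, processes queued messages autonomously."""

    def __init__(self, name: str):
        self.name = name
        self.inbox: Queue = Queue()

    def post(self, msg: Message) -> None:
        self.inbox.put(msg)

    def step(self) -> None:
        """Handle one pending message without further user interaction."""
        if not self.inbox.empty():
            self.handle(self.inbox.get())

    def handle(self, msg: Message) -> None:
        raise NotImplementedError       # concrete agents override this


class UIAgent(Agent):
    """Collects the field engineer's 'service complete' input and delegates."""

    def __init__(self, name: str, validator: Agent):
        super().__init__(name)
        self.validator = validator

    def handle(self, msg: Message) -> None:
        if msg.performative == "request" and msg.content.get("event") == "service_complete":
            # Hand off validation of the reported service operation.
            self.validator.post(Message("request", {"action": "validate_service"}))
```

In this sketch, each agent performs its configured task when stepped, without direct user control, which mirrors the degree of autonomy described above.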
By way of example, the depicted implementation shown in
With respect to the repair validation agent, in the depicted example this agent may receive a request to validate a repair (or other service operation) from the UI agent 120 when a service operation is reported as being completed by the field engineer 102. In response to this request, and using logic and processes discussed in greater detail with respect to
As discussed herein, as part of determining the sufficiency of a service operation, the repair validation agent 122 may access various resources, including data from one or more sensors 114 configured to monitor operational parameters of the device 100, one or more performance evaluation metrics or models 130 (e.g., suitable for modeling performance of the device 100 and/or generating performance metrics for the device 100), and/or performance data or metrics from one or more other devices 150 (e.g., comparison or neighboring devices). With respect to possible sensing modalities, such sensors may be device specific and may be characterized as either condition monitoring (e.g., environmental sensors, such as sensors for sensing or measuring temperature, humidity, wind or wind speed, and so forth) or performance monitoring (e.g., operational sensors measuring some operational or performance characteristic of the asset).
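As a brief, hypothetical illustration of the condition-monitoring versus performance-monitoring distinction drawn above, the sketch below tags sensor readings by kind; the field names and example values are assumptions and are not part of the disclosure.

```python
# Illustrative sketch only: tagging readings as condition monitoring
# (environmental) or performance monitoring (operational). Field names and
# values are hypothetical.
from dataclasses import dataclass
from enum import Enum


class SensorKind(Enum):
    CONDITION = "condition"        # e.g., temperature, humidity, wind speed
    PERFORMANCE = "performance"    # e.g., power output, rotor speed


@dataclass
class SensorReading:
    sensor_id: str
    kind: SensorKind
    value: float
    unit: str


readings = [
    SensorReading("temp-01", SensorKind.CONDITION, 11.5, "degC"),
    SensorReading("wind-01", SensorKind.CONDITION, 8.2, "m/s"),
    SensorReading("power-01", SensorKind.PERFORMANCE, 1.45, "MW"),
]

# Only the performance-monitoring readings feed the actual performance metrics;
# condition-monitoring readings can parameterize the expected-performance models.
actual_metric_inputs = [r for r in readings if r.kind is SensorKind.PERFORMANCE]
```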
In the depicted example, the performance metrics or models 130 are stored on the device 100, such as in memory 112, and are executable by the processor 110. In practice, however, such models 130 may be accessed from a remote source as part of the validation process or from an external storage, including a storage device provided by the engineer 102 and connected to the device as part of the service operation. Examples of performance evaluation metrics or models 130 that may be employed include, but are not limited to: degradation models over time, performance models over time, and so forth. As will be appreciated, such performance evaluation models and metrics 130 may be dependent on the model of the device 100.
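Purely as a hypothetical example of a "performance model over time" of the kind listed above, the sketch below derates a rated output by an assumed annual degradation rate and a local environmental factor; the function name and all numeric values are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical performance/degradation model over time, in the spirit of the
# models 130 described above. The rated output, degradation rate, and
# environmental derating below are illustrative assumptions only.
def expected_power_mw(rated_mw: float,
                      age_years: float,
                      annual_degradation: float = 0.005,
                      environment_factor: float = 1.0) -> float:
    """Expected output after age-related degradation and environmental derating."""
    degradation = (1.0 - annual_degradation) ** age_years
    return rated_mw * degradation * environment_factor


# Example: a 1.5 MW turbine, 8 years old, at 90% of its rated wind resource.
print(round(expected_power_mw(1.5, 8, environment_factor=0.9), 2))  # ~1.3 MW
```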
With respect to the other devices 150, in one implementation the devices 150 from which comparative performance data may be sought may be neighboring devices of the same or comparable type. Such devices 150 may be subject to comparable or identical local or environmental conditions, thus helping to account for such environmental factors with respect to device performance. For example, in the context of a wind farm where the device 100 is a wind turbine, the other devices 150 may be other wind turbines of identical or comparable design present at the wind farm or within a certain proximity (e.g., the closest 5, 10, or 15 turbines, all turbines within 1, 5, or 10 miles, and so forth). Thus, the performance of such other devices 150 in this neighboring or proximity context can effectively limit or eliminate performance differences attributable to variations in the local environment. Further, to the extent that only the same or comparable equipment model types are queried for performance metrics, performance differences related to device differences can also be limited or eliminated.
In addition, in the depicted example various types of agents are depicted to facilitate identifying and/or communicating with the other devices 150. In this example, the repair validation agent 122 queries one or more look-up agents for an asset or device community 160 (here depicted as part of a community home) to which the population of devices of interest belongs. Thus, such look-up agents may allow referencing and searching of devices in a local context (e.g., a wind farm, power plant, and so forth), a regional context (e.g., a state or multi-state area), a national context, an international context, and/or a global (e.g., including regions outside country bounds) context.
By way of example, a population of devices may be registered to one or more agents that allow different devices to be identified (e.g., “looked up”) in various ways. For example, the device community 160 may include or be searchable via a “yellow pages agent” 180 provided as a software module or component that provides a database or datastore that organizes information based on category and/or capability (such as device capabilities, equipment type, location, and so forth). Thus, the yellow pages agent 180 may allow devices to be searched and identified based on their functionality, capabilities, and so forth.
The device community 160 may also include or be searchable via a white pages agent 182 provided as a software module or component that provides a database or datastore that organizes information (in this case, devices at least of the types embodied by the device 100 and the other devices 150) based on title or other identification. Thus, in the present example, the white pages agent 182 can be searched by name or title for a particular device or devices, such as to return a communication address. The yellow pages agent 180 can instead be searched based on one or more defined functions, capabilities, and/or locations to return a list of one or more devices having the specified characteristics. Based on the returned list, the identified devices can then be located or communicated with via data retrieved from the white pages agent 182.
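The two-step lookup just described can be pictured with the hypothetical sketch below: a capability/location query against a yellow-pages style registry, followed by name-to-address resolution against a white-pages style registry. The registry contents, addresses, and function names are assumptions introduced for illustration.

```python
# Illustrative sketch: yellow-pages style capability lookup followed by
# white-pages style address resolution. Registry contents are hypothetical.
YELLOW_PAGES = {
    "wt-0042": {"type": "wind_turbine", "model": "WT-2.5", "site": "farm-A"},
    "wt-0043": {"type": "wind_turbine", "model": "WT-2.5", "site": "farm-A"},
    "wt-0101": {"type": "wind_turbine", "model": "WT-3.0", "site": "farm-B"},
}

WHITE_PAGES = {
    "wt-0042": "tcp://10.0.0.42:5555",
    "wt-0043": "tcp://10.0.0.43:5555",
    "wt-0101": "tcp://10.0.1.1:5555",
}


def find_comparable(model: str, site: str) -> list:
    """Yellow-pages query: match devices on capability/type attributes."""
    return [name for name, attrs in YELLOW_PAGES.items()
            if attrs["model"] == model and attrs["site"] == site]


def resolve(names: list) -> dict:
    """White-pages query: map device names to communication addresses."""
    return {name: WHITE_PAGES[name] for name in names if name in WHITE_PAGES}


peers = find_comparable("WT-2.5", "farm-A")   # ['wt-0042', 'wt-0043']
addresses = resolve(peers)
```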
By way of example, in one implementation the white pages agent 182 and the yellow pages agent 180 are hosted on a community host server, which can be on the premises of the device 100 or within a virtualized or network context (e.g., within the “Cloud”). At boot up, all devices 100, 150 within the device community are registered with the white pages agent 182 and the yellow pages agent 180. For example, devices 100, 150 may register their existence with the white pages agent 182 so as to be discoverable by other devices within the community. Devices 100, 150 register the capabilities and services they provide, or are capable of providing, with the yellow pages agent 180. In one such arrangement, devices communicate with the yellow pages agent 180, the white pages agent 182, and with each other using a predicate-logic-based communication language, such as an Agent Communication Language (ACL).
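A hedged sketch of this boot-time registration step follows. The dictionary-based messages merely mimic ACL-style performatives rather than implementing a real agent communication language, and all identifiers, addresses, and capability names are assumptions.

```python
# Illustrative sketch of boot-time registration with the white pages and
# yellow pages agents. The dict-based messages only mimic ACL-style
# performatives; they are not a real ACL implementation.
def registration_messages(device_id: str, address: str, capabilities: dict) -> list:
    return [
        {   # discoverability: register name/address with the white pages agent
            "performative": "request",
            "receiver": "white-pages-agent",
            "content": {"action": "register", "name": device_id, "address": address},
        },
        {   # services/capabilities: register with the yellow pages agent
            "performative": "request",
            "receiver": "yellow-pages-agent",
            "content": {"action": "register", "name": device_id,
                        "capabilities": capabilities},
        },
    ]


msgs = registration_messages(
    "wt-0042", "tcp://10.0.0.42:5555",
    {"type": "wind_turbine", "model": "WT-2.5", "site": "farm-A",
     "services": ["report_performance_metrics"]},
)
```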
While separate and distinct white pages agents 182 and yellow pages agents 180 are shown and described with respect to
With the preceding in mind, and turning to
In the depicted example, upon receipt of the validation request 204, the repair validation agent 122 may query or otherwise access (block 210) one or more performance models stored or otherwise provided on the device 100 or accessible to the device 100. The repair validation agent 122 may then employ the performance models 130, which may take into account or be parameterized by various device-specific factors, location-specific factors, environment-specific factors, or other factors, to calculate (block 214) one or more metrics 218 corresponding to an expected performance of the device 100 if the service operation was successful or sufficient. In one implementation, the repair validation agent 122 may also take into account performance data or metrics from neighboring or comparison devices 150 (discussed in greater detail with respect to
Turning to
Conversely, if the actual performance metrics do not meet or exceed the expected performance metrics, the repair validation agent 122 requests (block 264) a list of comparison devices 150 whose performance metrics may be useful in determining whether the actual performance metrics 252 of the serviced device 100 are consistent with a successful service operation. In the depicted example, the repair validation agent 122 may specify one or more factors or criteria (e.g., location, proximity, model or equipment type, age, and so forth) that may be searchable in a yellow pages agent 180 to generate a list of devices 150 that will be useful for evaluating the performance of the serviced device 100. As will be appreciated from the discussion above, depending on the implementation a white pages agent 182 may also be queried, as needed, to acquire contact information for the comparison devices 150 so identified. In other implementations, however, such information may be provided as part of the list of comparison devices 150.
By way of example, in one implementation the comparison devices 150 may be comparable devices (e.g., wind turbines, mobile robots, and so forth) at the same site (e.g., wind farm, warehouse, manufacturing site, and so forth), within a threshold distance (e.g., 1 mile, 5 miles, 10 miles, 50 miles, and so forth), situated in a comparable terrain (e.g., hilly terrain, plains, a valley, and so forth), or experiencing comparable weather or environmental conditions at that time. Likewise, comparison devices 150 may be identified or limited to be the same or a comparable model as the serviced device 100 (e.g., the same model, or a subsequent or prior model released within a threshold time frame, such as 1, 2, or 5 years). Similarly, comparison devices 150 may be identified as or limited to be devices of the same or comparable age as the serviced device 100 or having a similar service history. In this manner, comparison devices 150 may be selected to be a group of devices that allow performance variation due to environment, age, service history, location, and so forth to be controlled for or eliminated, thus allowing the sufficiency of the current service operation to be better assessed.
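As a hypothetical sketch of the selection criteria just listed (same site or within a distance threshold, same or comparable model, similar age), a filter such as the following might be applied to candidate device records; the thresholds and record fields are illustrative assumptions only.

```python
# Illustrative filter over candidate comparison devices using the kinds of
# criteria described above. Thresholds and record fields are hypothetical.
from dataclasses import dataclass


@dataclass
class DeviceRecord:
    device_id: str
    model: str
    site: str
    distance_km: float    # distance from the serviced device
    age_years: float


def select_comparison_devices(candidates, serviced: DeviceRecord,
                              max_distance_km: float = 8.0,
                              max_age_gap_years: float = 2.0):
    return [
        d for d in candidates
        if d.device_id != serviced.device_id
        and d.model == serviced.model                              # same model family
        and (d.site == serviced.site or d.distance_km <= max_distance_km)
        and abs(d.age_years - serviced.age_years) <= max_age_gap_years
    ]
```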
As shown in the provided example, a determination (block 268) may be made as to whether any of the searchable devices meet the specified criteria, i.e., whether there are any comparison devices 150. If not, no further comparison is possible and the service is indicated as incomplete or unsatisfactory (block 280). The repair validation agent 122 indicates to the UI agent 120 that the service operation is not acceptable and the UI agent 120 provides a notification to the field engineer 102. The field engineer 102 may then re-perform the service operation or may perform an alternative operation to address the deficiency (block 282).
Conversely, if searchable devices meet the specified criteria, the identities of such comparison devices 150 are returned to the repair validation agent 122. Based on the returned list of devices 150, the repair validation agent 122 requests (block 284) comparison performance metrics 290 from the devices 150. In one implementation, the devices 100, 150 communicate with each other using a predicate-logic-based agent communication language.
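By way of a hypothetical sketch, a request/reply exchange for the comparison performance metrics might look like the following; again, the dictionary-based messages only approximate ACL-style performatives, and the metric names and time window are assumptions.

```python
# Illustrative request/reply exchange for comparison performance metrics,
# using simplified ACL-style messages. Metric names are hypothetical.
def metrics_request(sender: str, receiver: str, window_minutes: int = 30) -> dict:
    return {
        "performative": "request",
        "sender": sender,
        "receiver": receiver,
        "content": {"action": "report_performance_metrics",
                    "window_minutes": window_minutes},
    }


def metrics_reply(sender: str, receiver: str, metrics: dict) -> dict:
    return {
        "performative": "inform",
        "sender": sender,
        "receiver": receiver,
        "content": {"performance_metrics": metrics},
    }


req = metrics_request("wt-0042", "wt-0043")
rep = metrics_reply("wt-0043", "wt-0042",
                    {"mean_power_mw": 1.38, "availability": 0.97})
```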
If no comparison performance metrics 290 are returned (block 292), no further comparison is possible and the service is indicated as incomplete or unsatisfactory (block 280). The repair validation agent 122 indicates to the UI agent 120 that the service operation is not acceptable and the UI agent 120 provides a notification to the field engineer 102. The field engineer 102 may then re-perform the service operation or may perform an alternative operation to address the deficiency (block 282).
Conversely, if comparison performance metrics are returned to the repair validation agent 122, a comparison (block 298) is performed between the comparison performance metrics 290 and the actual performance metrics 252 to determine the sufficiency or acceptability of the service operation (block 310). By way of example, the results 300 of the comparison step 298 may be evaluated at block 310 to determine whether the actual performance metrics 252 meet or exceed the comparison metrics 290 or are within some specified tolerance threshold (e.g., within 1%, 2%, 5%; within 0.25 standard deviation (SD), 0.5 SD, 0.75 SD, and so forth).
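The comparison at blocks 298 and 310 can be illustrated with the hedged sketch below, which accepts the serviced device if its metric falls within a percentage band of the peer mean or within a fraction of the peer standard deviation; the specific tolerance values are examples only, not values from the disclosure.

```python
# Illustrative tolerance check of an actual metric against peer metrics,
# using a percentage band or a fraction of the peer standard deviation.
# The tolerance values are examples only.
from statistics import mean, stdev


def service_satisfactory(actual: float, peer_values: list,
                         pct_tolerance: float = 0.05,
                         sd_tolerance: float = 0.5) -> bool:
    """True if the serviced device performs within tolerance of its peers."""
    peer_mean = mean(peer_values)
    within_pct = actual >= peer_mean * (1.0 - pct_tolerance)
    if len(peer_values) < 2:
        return within_pct                      # no spread estimate available
    within_sd = actual >= peer_mean - sd_tolerance * stdev(peer_values)
    return within_pct or within_sd


print(service_satisfactory(1.32, [1.38, 1.41, 1.35]))  # True (within ~5% of peer mean)
```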
If the comparison results 300 are deemed satisfactory, the repair validation agent 122 indicates that the service operation has been completed (block 260), the UI agent 120 is notified and, correspondingly, provides a notification to the field engineer 102 that the service operation was successful. Conversely, if the comparison results are not determined to be satisfactory, the repair validation agent 122 indicates to the UI agent 120 that the service operation is not acceptable and the UI agent 120 provides a notification to the field engineer 102. The field engineer 102 may then re-perform the service operation or may perform an alternative operation to address the deficiency (block 282).
Technical effects of the invention include implementation of an automated approach for verifying the sufficiency and/or quality of a service operation performed on an asset by a field engineer. In one implementation, the approach employs autonomous tests and/or compares the performance of the serviced asset with that of comparable peers operating in similar or co-local environments. The present approach therefore addresses issues in evaluating the sufficiency of a service operation that arise from subjective human error and variability.
This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.