A process and neural network architecture for on-line adjustment of the weights of the neural network in a manner that corrects errors made by a nonlinear controller designed based on a model for the dynamics of a process under control. A computer system is provided for controlling the dynamic output response signal of a nonlinear physical process, where the physical process is represented by a fixed model of the process. The computer system includes a controlled device for responding to the output response signal of the system. The computer system also includes a linear controller for providing a pseudo control signal that is based on the fixed model for the process and provides a second controller, connected to the linear controller, for receiving the pseudo control signal and for providing a modified pseudo control signal to correct for the errors made in modeling the nonlinearities in the process. A response network is also included as part of the computer system. The response network receives the modified pseudo control signal and provides the output response signal to the controlled device. The second controller preferably is a neural network. The computer system may include a plurality of neural networks with each neural network designated to control a selected variable or degree of freedom within the system.
10. A computer system for controlling the dynamic output response of a nonlinear physical process, said physical process represented by a fixed model of said process, comprising:
a linear controller for providing a pseudo control signal, said pseudo control signal being based on said fixed model for said process;
a second controller, connected to said linear controller, for receiving said pseudo control signal and for providing a modified pseudo control signal to correct for errors inherent in modeling of said process, said second controller comprising a neural network for modifying said pseudo control signal based on on-line data training of said neural network, wherein the value of said modified pseudo control signal equals the sum of the value of the output of said linear controller and the value of an estimated derivative of a received command signal for a degree of freedom in said system subtracted by the value of the output of said neural network for that same degree of freedom; and
a response network, connected to said second controller, for receiving said modified pseudo control signal and for providing an output response signal to a controlled device.
1. A computer system for controlling a nonlinear physical process with a dynamic output response signal, said physical process represented by a fixed model of said process, comprising:
a linear controller for providing a pseudo control signal, said pseudo control signal being based on said fixed model for said process;
a second adaptive controller, connected to said linear controller, for receiving said pseudo control signal and for providing a modified pseudo control signal to said linear controller to correct for errors inherent in modeling of said process, said second adaptive controller comprising a neural network for modifying said pseudo control signal based on on-line analysis of said process, wherein the value of said modified pseudo control signal equals the sum of said pseudo control signal, outputted from said linear controller, and an estimated derivative of a received command signal for a degree of freedom in said system, the sum being subtracted by an output signal from said neural network for that same degree of freedom; and
a response network, connected to said second controller, for receiving said modified pseudo control signal and for providing said output response signal to a controlled device.
5. A method for providing a control signal to a controlled device in a modeled dynamic non-linear process in which data is received on-line for a non-linear system, comprising the steps of:
receiving feedback state and command signals at a controller;
calculating a pseudo control signal for the received feedback state and command signals;
receiving the pseudo control signal and state signal at an on-line neural network that is adapted to correct for errors inherent in the modeling of the non-linear physical process;
calculating a fixed point solution at the neural network to ensure stability of the process;
adjusting connection weights of the neural network based on the state and command signals received;
modifying, at the neural network, the pseudo control signal with an output signal of the on-line neural network to correct for inverse modeling errors;
producing a modified pseudo control signal as an output from said neural network;
receiving the modified pseudo control signal and calculating an inverse response control signal at an inverse response function unit that is based on a model for the process; and
transmitting the inverse response control signal to an output response device for producing adjustments to the controlled device.
14. A method for providing a control signal to an actuator device for controlling an aircraft in a modeled dynamic non-linear process in which data is received on-line for a non-linear system, comprising the steps of:
receiving feedback state and command signals at a controller;
calculating a pseudo control signal for the received feedback state and command signals;
receiving the pseudo control signal and state signal at an on-line neural network that is adapted to correct for errors inherent in the modeling of the non-linear physical process;
calculating a fixed point solution at the neural network to ensure stability of the process;
adjusting connection weights of the neural network based on the state and command signals received;
modifying, at the neural network, the pseudo control signal with the output of the on-line neural network to correct for inverse modeling errors;
producing a modified pseudo control signal as an output from said neural network by adding the value of the output of said linear controller and the value of a received control signal for a degree of freedom in said system to form a sum, and subtracting said sum by the value of the output of said neural network;
receiving the modified pseudo control signal and calculating an inverse response control signal at an inverse response function unit that is based on a model for the process; and
transmitting the inverse response control signal to said actuator device.
2. The computer system of
3. The computer system of
4. The computer system of
6. The method of
7. The method of
8. The method of
9. The method of
11. The computer system of
12. The computer system of
13. The computer system of
This continuation-in-part application claims priority benefits under 35 U.S.C. §120 and 37 C.F.R. §1.53(b) to U.S. patent application Ser. No. 08/510,055, filed Aug. 1, 1995, now U.S. Pat. No. 6,092,919, naming as inventors Anthony J. Calise and Byoung-Soo Kim.
The U.S. Government has a paid-up license in the invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of a contract awarded by the Department of the Army, Army Research Office.
The present invention generally relates to control systems for dynamic processes and, particularly, relates to adaptive control systems for minimizing errors in output control signals of uncertain nonlinear processes.
Many control or pattern recognition computer systems are created by designing a function to model a selected set of data or statistics. From the modeled set of data, the computer system may control, estimate, correct or identify output data based on the modeled function. Many methods exist for creating functions that model data. Recently, neural networks have been employed to identify or create functional models for various types of systems.
A neural network consists of simple interconnected processing elements. The basic operation of each processing element is the transformation of its input signals to a useful output signal. Each interconnection transmits signals from one element to another element, with a relative effect on the output signal that depends on the weight for the particular interconnection. A neural network may be trained by providing known input values and output values to the network, which causes the interconnection weights to be changed until the desired system is modeled to a specified degree of accuracy or as precisely as reasonably possible.
With statistics software or neural network software, input-output relationships during a training phase are identified or learned, and the learned input-output relationships are applied during a performance phase. For example, during the training phase a neural network adjusts connection weights until known target output values are produced from known input values. During the performance phase, the neural network uses connection weights identified during the training phase to impute unknown output values from known input values. The neural network accuracy depends on data predictability and network structure that is chosen by the system designer, for example, the number of layers and the number of processing elements in each layer.
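By way of illustration only, the following sketch shows a single processing element of the kind described above: during a training phase its connection weights are adjusted from known input and output values, and during a performance phase the trained weights are reused to impute an output for an input not seen during training. The layer size, squashing function, learning rate, and sample data are arbitrary choices for this example and are not taken from the present invention.

```python
import numpy as np

def train(inputs, targets, rate=0.05, epochs=500):
    """Training phase: adjust connection weights until the known target outputs
    are reproduced from the known inputs to a reasonable degree of accuracy."""
    rng = np.random.default_rng(0)
    weights = rng.normal(scale=0.1, size=inputs.shape[1])
    for _ in range(epochs):
        for x, target in zip(inputs, targets):
            y = np.tanh(weights @ x)                            # transform inputs to an output signal
            weights += rate * (target - y) * (1.0 - y**2) * x   # adjust weights from the output error
    return weights

def perform(weights, x):
    """Performance phase: reuse the trained connection weights to impute an
    output value for a previously unseen input."""
    return np.tanh(weights @ x)

# known input/output pairs for training, then an input not in the training set
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
T = np.array([0.5, -0.5, 0.0])
w = train(X, T)
print(perform(w, np.array([0.5, 0.5])))
```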
Because the reliability of a function modeled by a neural network depends on the reliability of data that is used to model a system, large quantities of data are often required to produce a model that satisfies the desired specifications. However, it can be difficult to collect data that represents all possible outcomes of a given system and, thus, a model of the system is created based on a training subset of data from the system from which predictions or corrections can be made. Because system functions based on the model may originate from data that was not contained in the initial training set, the potential for error within the system is increased with respect to the non-modeled input data.
It is desirable to have a system that can adapt and learn to make predictions or corrections based on non-modeled input data after the model has been put into action. This adaptive learning may be termed on-line learning. Due to the time that it takes to train a neural network, the use of neural networks has been limited to providing models for predictive systems when the inputs and outputs are known, such as a neural network used in predicting or recognizing a pattern based on selected inputs for which the system was trained. This type of system is not sufficient for producing accurate results in a control system environment where the model has not been trained for all possible outcomes or where nonlinearities or sudden changes may be introduced to the system under control. This is particularly true for the control of processes described by nonlinear differential equations of motion. It is well known by practitioners in the art that certain systems are difficult to control, such as systems in which the defining equations of motion for the process to be controlled are poorly known with respect to their functional forms, or in which the functional forms themselves may undergo sudden and unexpected variation, particularly when the effect of the control action enters nonlinearly. Thus, in nonlinear systems, neural networks trained off-line will not produce results that minimize the error of the control system based on data received on-line.
Because of the limitations in existing methods for training neural networks, and the general lack of a proof of stability in control applications in which the control enters nonlinearly, a technique which provides rapid on-line learning, and that insures stability for real time applications is highly desirable. Such a technique would have applicability in the field of flight control of either manned or unmanned aerial vehicles. For such applications, the dynamics are highly nonlinear and can undergo variations due to transitions in flight conditions, initiation of highly dynamic maneuvers involving large state excursions from trim flight conditions, or due to failures in actuators or due to damage to the airframe.
A number of airframes for modern aircraft, particularly high speed military fighter aircraft, are inherently unstable, and require sophisticated control electronics to translate pilot control inputs into appropriate signals to actuate control devices. Problems in the design of such control systems arise from the fact that very complex nonlinear relationships describe the physical behavior of the aircraft. The relations vary in a complex way with aircraft speed, altitude, and angle of attack. The control system is, in many respects, only as good as the model of the nonlinear physical system upon which the controller is based. Therefore, any system that can adaptively learn to correct for defects in the modeling process can provide improved stability in aircraft control systems.
Such a system is usable not only in the control of high speed military aircraft, but also for the control of other aircraft, such as helicopters. In particular, the system of the present invention is contemplated as useful in both control of high speed unstable aircraft and useful in the control of remotely controlled unmanned helicopter vehicles.
Thus, there is a need in the art for an adaptive control system that insures both the capability for real-time, on-line learning and stability of the controlled process and that has an architecture that enables adaptation to processes in which the effect of the control action on the dynamics of the system is nonlinear.
Generally described, the present invention provides a process and neural network architecture for on-line adjustment of the weights of the neural network in a manner that corrects errors made by a nonlinear controller designed based on a model for the dynamics of a process under control.
More particularly described, the present invention provides a computer system for controlling the dynamic output response signal of a nonlinear physical process, where the physical process is represented by a fixed model of the process. The computer system includes a controlled device for responding to the output response signal of the system. The computer system also includes a linear controller for providing a pseudo control signal that is based on the fixed model for the process and provides a second controller, connected to the linear controller, for receiving the pseudo control signal and for providing a modified pseudo control signal to correct for the errors made in modeling the nonlinearities in the process. A response network is also included as part of the computer system. The response network receives the modified pseudo control signal and provides the output response signal to the controlled device.
The second controller preferably is a neural network. The computer system may include a plurality of neural networks with each neural network designated to control a selected variable or degree of freedom within the system.
The present invention may be implemented as an improvement to flight control systems. In a flight control computer system designed for a set of flight dynamics of a particular aircraft, the flight control system implementing a command augmentation system, the command augmentation system comprising: an attitude control for providing a pseudo control signal used in determining a selected control signal for controlling an action of the aircraft; and an output control unit, connected to receive input from the attitude control, for generating the selected control signal based on the pseudo control signal, the present invention provides the following improvement: a neural network connected between the attitude control and the output control for modifying the pseudo control signal based on data received during the flight of the aircraft to correct for errors made in modeling the nonlinearities in the flight dynamics of the aircraft.
Thus, it is an object of the present invention to use neural networks for learning and controlling on-line control processes.
It is another object of the present invention to provide an on-line adaptive control to correct errors made by a controller.
It is another object of the present invention to provide an adaptive controller that quickly adapts and corrects system processes even when only a crude model for the dynamics of a system is initially given.
It is another object of the present invention to provide an improved adaptive controller that is particularly useful in a control system for an inherently unstable airframe.
It is still a further object of the present invention to provide an adaptive controller particularly useful in a flight controller for remotely controlled aircraft, including unmanned helicopter vehicles.
These and other objects, features, and advantages of the present invention will become apparent from reading the following description in conjunction with the accompanying drawings.
Referring to
In this detailed description, numerous details are provided such as sample data, and specific equations, etc., in order to provide an understanding of the invention. However, those skilled in the art will understand that the present invention may be practiced without the specific details. Well-known circuits, programming methodologies, and structures are utilized in the present invention but are not described in detail in order not to obscure the present invention.
For purposes of this discussion, a process or method is generally a sequence of computer-executed steps leading to a desired result. These steps require physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic, or optical signals that are capable of being stored, transferred, combined, compared, or otherwise manipulated. It is conventional for those skilled in the art to refer to these signals as bits, data, values, elements, symbols, characters, terms, numbers, or the like. Certain portions of the description which follow are presented in terms of equations. It should be appreciated that the operands of the equations are steps of a process that serve to manipulate the terms or characters of the equation to produce electrical signals for generation of input/output signals to control the various instruments and devices used in conjunction with the present invention. Standard notation is used in these equations as readily understood by those skilled in the art. It should be kept in mind, however, that these and similar terms should be associated with appropriate physical quantities inside the computer and that these are merely convenient labels applied to these physical quantities that exist within the computer.
It should also be understood that manipulations within the computer system are often referred to in terms such as adding, subtracting, multiplying, dividing, comparing, moving, etc., which are sometimes associated with mental operations performed by a human operator. It should be understood that involvement of a human operator is not necessary in the present invention because the operations described herein are machine operations performed in conjunction with a human operator or user that interacts with the computer system. The machines used for performing the operation of the present invention include digital computers, circuit boards, digital chips or other similar computing devices known to those skilled in the art.
Although the aircraft 11 is shown as a fixed wing airplane, those skilled in the art will appreciate that the flight control computer system 10 embodying the present invention may be implemented with a helicopter or with unmanned aircraft. The flight control computer system 10 is used to aid a pilot in controlling the aircraft during flight. The flight control computer system 10 is particularly helpful in stabilizing the air frame of the aircraft. Without a stabilization system or a highly accurate stabilization system, the pilot's task of controlling the plane would be more difficult.
Controlling the stabilization of an aircraft requires analyzing multiple variables or degrees of freedom to determine the appropriate control value needed to produce the desired stabilization. The multiple degrees of freedom or variables used in controlling or stabilizing an aircraft affect each other in a nonlinear manner and thus must be operated on accordingly. Generally, modeling errors enter a modeled system when there is a sudden alteration of the process or a failure occurs in a system device, such as failure of an actuator. Creating a nonlinear model to represent the various effects that multiple variables have on each other within a specified system can be difficult when accuracy and response speed are important, as with high performance aircraft. Generally, it is difficult to collect data that represents all possible outcomes of a given system. Thus, a model of the system is created based on a training subset of data from the system from which predictions or corrections can be made. Because system functions based on the model may originate from data that was not contained in the initial training subset, the potential for error within the system is increased with respect to the non-modeled input data. Thus, it is desirable to have a system that can adapt and learn to make predictions or corrections based on non-modeled input data. This adaptive learning may be termed on-line learning or control.
Traditionally, neural networks are used to create an initial model based on a training subset of data. A model based on an initial training set may be termed an off-line model. Due to the time that it takes to train a neural network, the use of neural networks has been limited to providing models for predictive systems when the inputs and outputs are known, such as a neural network used in predicting or recognizing a pattern based on selected inputs for which the system was trained. This type of system is not sufficient for producing accurate results in a control system environment where the model has not been trained for all possible outcomes. Thus, in nonlinear systems such as aircraft control systems, typical neural networks trained off-line will not produce results that minimize the error of the control system based on data received on-line. The present invention provides such an adaptive control system that permits on-line adjustments of control parameters to accommodate or correct for data not utilized in an initial training set.
Continuing the discussion of an aircraft flight control system embodying the present invention, aircraft flight control computer systems must account for multiple nonlinear variables or degrees of freedom. The degrees of freedom relevant to aircraft stabilization consist of three position variables and three attitude variables. Because the three position variables change slowly with respect to the attitude variables, a control system for an aircraft stabilization system can be treated as having only three degrees of freedom (i.e., the attitude variables). Attitude generally refers to the position or orientation of an aircraft, either in motion or at rest, as determined by the relationship between the aircraft's axis and some reference line or plane or some fixed system of reference axes. The attitude variables are given in terms of angles. The attitude angles are generally represented by roll-φ, pitch-Θ, and yaw-ψ in the determination of appropriate control signals in flight control computer systems. The angular rates of φ, Θ, and ψ are Euler angular rates. However, two types of angular rates are encountered in flight control computer systems. The other type consists of body angular rates, in which roll is represented by pc, pitch is represented by qc, and yaw is represented by rc. The body angular rates are measured rates and are generally transformed to Euler angular rates in a flight control system.
Referring to
The CAS output 20 provides a roll rate pc, which is the pilot's roll rate command as passed through the command augmentation unit 19, a pitch rate qc, which is calculated based on the normal acceleration command anc and the side slip S, and a yaw rate rc, which is also based upon the normal acceleration anc and the side slip S. As noted above, these angular rates roll pc, pitch qc, and yaw rc are body angular rates. In the system shown, the roll pc is input by a pilot and the pitch qc and yaw rc commands are computed quantities. These body angular rates are then passed on to the attitude orientation system 18 to be received at a command transformation system 22. The command transformation system 22 transforms the body angular rates to Euler coordinates which are given as the first derivatives of φc, θc, and Ψc (i.e., the first derivatives of roll, pitch, and yaw, respectively in Euler coordinates). The command transformation output 24 is then passed along to an integrator 26 which outputs the appropriate roll φ, pitch θ, and yaw Ψ coordinates. These coordinates are passed on to an attitude controller 30. In this nonlinear control system, the attitude controller represents the linear portion of an adaptive controller 25.
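The particular transformation carried out by the command transformation system 22 is not reproduced in this text. For reference, a minimal sketch of the standard kinematic relation between body angular rates and Euler angular rates is given below; the function name, the roll-pitch-yaw Euler sequence, and the use of the measured roll and pitch attitudes in the transformation are assumptions made for this illustration rather than details taken from the present description.

```python
import math

def body_rates_to_euler_rates(p_c, q_c, r_c, phi, theta):
    """Transform commanded body angular rates (p_c, q_c, r_c) into Euler angular
    rate commands (roll, pitch, and yaw rates) using the standard kinematic
    relation for a roll-pitch-yaw Euler angle sequence (assumed convention)."""
    phi_dot = p_c + (q_c * math.sin(phi) + r_c * math.cos(phi)) * math.tan(theta)
    theta_dot = q_c * math.cos(phi) - r_c * math.sin(phi)
    psi_dot = (q_c * math.sin(phi) + r_c * math.cos(phi)) / math.cos(theta)
    return phi_dot, theta_dot, psi_dot

# example: a pure commanded roll rate at a 10 degree pitch attitude
print(body_rates_to_euler_rates(0.2, 0.0, 0.0, 0.0, math.radians(10.0)))
```

The resulting Euler rate commands correspond to the command transformation output 24, which is integrated by the integrator 26 to produce the commanded attitude angles.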
The adaptive controller 25 includes the attitude controller 30 and a neural network controller 34. The attitude controller 30, in connection with the neural network controller 34, produces the pseudo control vectors uφ, uΘ, and uψ, as discussed in more detail below. The adaptive controller 25 serves to adjust the pseudo control vector on-line. The adaptive controller 25 receives feedback data as indicated by path 29.
A controller or inverse function 32, used in the CAS, consists of a neural network, as discussed in more detail below, that models the initial function for determining the appropriate control signal to provide to the plane's stabilizing system based on off-line training. As noted herein, off-line generally means that the weights defining the neural network used to define the inverse function 32 were determined according to a set of known inputs and known outputs prior to on-line implementation of the flight control system shown in FIG. 2. It should be appreciated that the function for the inverse function 32 can be derived using systems other than neural networks as known by those skilled in the art.
In prior art methods (shown in FIG. 7), the u vector calculated or produced from an attitude controller would be passed on to the neural network controller 32 in order to produce a command based on off-line training data. However, the neural network 34 and its position within the architecture of the system shown in
Referring to
The adaptive control 25 consists of the attitude controllers 30a, 30b, and 30c and the neural networks 34a, 34b, and 34c for providing signals to the inverse function 32, respectively. The neural networks may be coupled by the connection line 35 as described in more detail in connection with FIG. 4B. The adaptive control 25 may employ each of the neural networks 34a, 34b, and 34c to define the architecture for the pseudo control signal u that is input to the inverse function 32. The output of each of the neural networks 34a, 34b, and 34c modifies the linear output of the attitude controllers 30a, 30b, and 30c, which correspond to the neural networks 34a, 34b, and 34c, respectively. Entering the attitude controller 30a are a measured roll value φ and a command roll value φc as well as the respective first derivatives of these values φ̇ and φ̇c. The derivative φ̇c and other command derivatives indicated herein are obtained from the command transformation unit 22 of
Referring to
Referring to
As noted above, a nonlinear control system may be designed, using techniques known by those skilled in the art, where the process dynamics of the actual system can be characterized by the equation:
Actual System: ẍ = f(x, ẋ, δ)  (Eq. 1)
where the vectors x and ẋ make up the elements of the state vector, and δ is the control vector, all being of equal dimension (n). The vector x is generally a measured value such as Φ, Θ, and Ψ discussed above. The dimension (n) is referred to as the number of degrees of freedom, and a subscript i=1 . . . n will be used to denote the ith degree of freedom, corresponding to the ith element in the vector x. In the example discussed above, n=3 to represent the three degrees of freedom Φ, Θ, and Ψ. A model of the actual system may be represented by the following equation:
Model: ẍ = f̂(x, ẋ, δ)  (Eq. 2)
A well known method of controlling nonlinear systems of this form when the nonlinear function f(x, ẋ, δ) is known is based on state feedback inversion. State feedback inversion entails designing a so-called pseudo state feedback control, u(x, ẋ), as a linear controller, such as controllers 31 in
where f̂⁻¹(x, ẋ, δ) is the inverse of the model f̂(x, ẋ, δ) of the actual nonlinear function f(x, ẋ, δ). When the function f(x, ẋ, δ) is poorly modeled, or when the system being controlled undergoes a rapid or unexpected change, then the above procedure for using the inverse of a model is inadequate for practical applications. However, the present invention utilizes an adaptive controller such as an adaptive controller 25 (
where ūi(xi, ẋi) is the output of a linear controller 31 designed for the linear system ẍi = ūi; xci(t) is the command signal for the ith degree of freedom; and uin(x, ẋ, u) is the output of a neural network 34. The pseudo control ui(x, ẋ) is calculated at node 60 in FIG. 4. The architecture and processes of uin(x, ẋ, u) are defined by equation 5:
where wi,j is the connection weight for the ith degree of freedom and j represents the number of elements for each degree of freedom. It should be appreciated that the number of elements j may vary for each degree of freedom according to the designer's specifications.
The ui- input of inputs 64 to the neural network 34 represents all u vectors except the ui vector for which the control signal is being determined.
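Because equations 4 and 5 are not reproduced in this text, the following sketch records one reading that is consistent with the surrounding description and with the claims: the ith pseudo control is the linear controller output plus an estimated derivative of the command signal, minus the neural network output, and the network output is a weighted sum of basis functions. The use of the second derivative of the command, ẍci, as the feedforward term, the variable names, and the example basis set are assumptions made for illustration.

```python
import numpy as np

def network_output(w_i, basis_i, x, x_dot, u):
    """Neural network output for the ith degree of freedom: a weighted sum of
    basis functions beta_ij(x, x_dot, u) (assumed form of Eq. 5)."""
    return sum(w_ij * beta(x, x_dot, u) for w_ij, beta in zip(w_i, basis_i))

def pseudo_control_i(u_bar_i, xddot_c_i, w_i, basis_i, x, x_dot, u):
    """ith pseudo control: linear controller output plus the estimated command
    derivative, minus the network correction (assumed form of Eq. 4)."""
    return u_bar_i + xddot_c_i - network_output(w_i, basis_i, x, x_dot, u)

# illustrative basis set and weights for one degree of freedom of an n = 3 system
basis = [lambda x, xd, u: 1.0,
         lambda x, xd, u: x[0],
         lambda x, xd, u: u[0] * xd[0]]
weights = [0.1, -0.2, 0.05]
x, x_dot, u = np.array([0.3, 0.0, 0.0]), np.zeros(3), np.array([0.4, 0.0, 0.0])
print(pseudo_control_i(0.25, 0.0, weights, basis, x, x_dot, u))
```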
Many approaches known in the art exist for defining the linear part of the controller 31. For the purposes of the discussion herein, it will be assumed that the linear controller 31 has the so-called "proportional+derivative" form:
as processed at node 62. It should be appreciated that other processes or forms can be equally treated for a linear controller within the context of the present invention. For this choice of linear control, the connection weights wi,j(t) are adjusted on-line and in real time according to the following weight adjustment process:
where ei(t)=xi(t)-xci(t) (i.e., the error value between the measured value xi(t) and command value xci(t)) and γi is a constant learning rate. The weight adjustment processes are well known to those skilled in the art. The learning weight γi, as known by those skilled in the art, is chosen by the designer of the neural network. The learning weight determines how much a given input should affect the various connection weights of the system (i.e., the learning rate of the system). Because the weight adjustment process or equation is given in terms of the first derivative of the connection weight values wi,j(t), only a bank of analog or digital integrators, for these weight values wi,j(t), is needed for the on-line adjustment of the neural network weights as given by Eq. 7.
The gain constants kpi (proportional constant) and kdi (derivative constant) may be chosen by techniques known to those skilled in the art familiar with proportional/derivative control. There is considerable latitude in designing the particular manner in which the gains kpi and kdi enter the weight adjustment process. In all cases, the basic form of the weight adjustment process remains similar to that shown in Eq. 7. Having determined the pseudo control vector u, the real control δ(t) is then computed using the approximate inverse function δ = f̂⁻¹(x, ẋ, u).
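Equations 6 and 7 are likewise not reproduced here, so the sketch below shows one conventional realization consistent with the description: a proportional+derivative pseudo control acting on the tracking error, a weight-adjustment law driven by the tracking error and its derivative and realized with a simple digital integrator (an Euler step), and the real control obtained through the approximate model inverse. The particular error combination (ėi + kdi·ei) used to drive the adaptation is an assumption; as noted above, there is considerable latitude in how the gains enter the weight adjustment process.

```python
def pd_linear_control(x_i, xdot_i, xc_i, xdotc_i, kp_i, kd_i):
    """One conventional 'proportional + derivative' form for the linear part of
    the controller (assumed realization of Eq. 6)."""
    e = x_i - xc_i
    e_dot = xdot_i - xdotc_i
    return -kp_i * e - kd_i * e_dot

def update_weights(w_i, basis_vals, e_i, edot_i, kd_i, gamma_i, dt):
    """Digital-integrator realization of an assumed weight-adjustment law,
    w_dot = -gamma_i * (edot_i + kd_i * e_i) * beta, integrated with an Euler step."""
    r = edot_i + kd_i * e_i                      # tracking-error signal driving adaptation
    return [w + dt * (-gamma_i * r * b) for w, b in zip(w_i, basis_vals)]

# after the pseudo control vector u is assembled, the real control follows from
# the approximate model inverse:  delta = f_hat_inverse(x, x_dot, u)
```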
One skilled in the art is provided a wide latitude in selecting the basis functions βi,j(x, ẋ, u). In general, βi,j(x, ẋ, u) should be chosen so that the neural network 34 corrects for the differences between the model inverse f̂⁻¹(x, ẋ, u) and the actual, but unknown, inverse function f⁻¹(x, ẋ, u). In the absence of any additional information concerning these differences, polynomial functions of x, ẋ and u may be employed. Such polynomial functions, as known to those skilled in the art, are described in Kim, B. S. and Calise, A. J., "Nonlinear Flight Control Using Neural Networks," AIAA Guidance, Navigation, and Control Conference, Paper #94-3646, AZ, Aug. 1-3, 1994, which is herein incorporated by reference. In any case, some provision should be made for the fact that whatever set of basis functions is selected, perfect correction can rarely be achieved, and under these circumstances a dead zone, as known by those skilled in the art, is employed to deactivate the weight adjustment process, as given in Eq. 7, when a positive scalar function of the tracking errors ei(t) and ėi(t) is below a specified level. The dead zone feature is used in a stability proof to insure that the tracking errors remain bounded and ultimately reach and remain on the boundary of the dead zone. For a suitably comprehensive set of basis functions, the size of the dead zone can be reduced to near zero. The use of a dead zone in adaptive control is a well-known approach that is often employed as a part of a stability proof. The details regarding the stability proof and the design of a dead zone for this application may be found in Kim, B. S. and Calise, A. J., "Nonlinear Flight Control Using Neural Networks," AIAA Guidance, Navigation, and Control Conference, Paper #94-3646, AZ, Aug. 1-3, 1994.
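The following sketch illustrates the two practical provisions just described: a simple polynomial basis in x, ẋ, and u, and a dead zone that deactivates the weight adjustment whenever a positive scalar function of the tracking errors falls below a specified level. The particular polynomial terms, the quadratic error measure, and the threshold value are illustrative assumptions rather than choices taken from the cited paper.

```python
def polynomial_basis(x_i, xdot_i, u_i):
    """Example polynomial basis functions of x, x_dot, and u for one degree of
    freedom (an arbitrary illustrative choice of terms)."""
    return [1.0, x_i, xdot_i, u_i, x_i * u_i, xdot_i * u_i, u_i ** 2]

def in_dead_zone(e_i, edot_i, threshold=1e-3):
    """True when a positive scalar function of the tracking errors is below the
    specified level, so that the weight adjustment of Eq. 7 is deactivated."""
    return (e_i ** 2 + edot_i ** 2) < threshold

# adaptation is simply skipped while the errors remain inside the dead zone:
# if not in_dead_zone(e_i, edot_i):
#     w_i = update_weights(w_i, basis_vals, e_i, edot_i, kd_i, gamma_i, dt)
```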
Two advantageous features are presented by the architecture and processes implemented by the neural network 34. First, the neural network basis functions βi,j(x, ẋ, u) that shape the network outputs uin(t) are functions of the elements of u(x, ẋ). Therefore, the system is enabled to operate with systems that are nonlinear in the control. Since u(t) depends on un(t), it follows that the ith network input depends on the present outputs of all the networks. Therefore, a so-called fixed point solution is used to define the network 34 outputs. In order to determine the fixed point, a network is iterated until a fixed point solution is determined. A variety of well known approaches for calculating a fixed point solution, or for approximating its solution to within practical limits, are known by those skilled in the art. The architecture for the neural networks 34 defined by this invention is illustrated in FIG. 5.
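Because each network input includes the pseudo control vector u, which in turn depends on the network outputs, the outputs are only defined implicitly. The sketch below approximates the fixed point by simple repeated substitution, which is one of the well-known approaches alluded to above; the convergence tolerance, the iteration limit, and the signature of the compute_pseudo_control callable are assumptions made for illustration.

```python
import numpy as np

def fixed_point_pseudo_control(compute_pseudo_control, u_init, tol=1e-6, max_iter=50):
    """Iterate u -> compute_pseudo_control(u) until the pseudo control vector
    stops changing (to within practical limits), approximating the fixed point
    solution that defines the network outputs."""
    u = np.asarray(u_init, dtype=float)
    for _ in range(max_iter):
        u_next = np.asarray(compute_pseudo_control(u), dtype=float)
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    return u  # best available approximation if the tolerance was not reached

# toy example: the map u -> 0.5*u + [1, 0, 0] has the fixed point [2, 0, 0]
print(fixed_point_pseudo_control(lambda u: 0.5 * u + np.array([1.0, 0.0, 0.0]),
                                 np.zeros(3)))
```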
Referring to
Having discussed the steps of the present invention in connection with the architecture of the system, the steps of the present invention will be discussed generally in connection with the flow diagram of FIG. 6. The specific steps implemented in the preferred embodiment of the present invention are discussed in connection with
The following text discusses computer simulations implementing the present invention. High fidelity six degree of freedom computer simulations have been conducted to demonstrate the usefulness of the network processes and control architectures depicted in
The structure of
Referring to
The foregoing relates to the preferred embodiment of the present invention, and many changes may be made therein without departing from the scope of the invention as defined by the following claims.
Calise, Anthony J., Kim, Byoung-Soo, Corban, J. Eric
Patent | Priority | Assignee | Title |
5268834, | Jun 24 1991 | MASSACHUSETTS INSTITUTE OF TECHNOLOGY A CORP OF MA | Stable adaptive neural network controller |
5311421, | Dec 08 1989 | Hitachi, Ltd. | Process control method and system for performing control of a controlled system by use of a neural network |
5394322, | Jul 16 1990 | INVENSYS SYSTEMS INC FORMERLY KNOWN AS THE FOXBORO COMPANY | Self-tuning controller that extracts process model characteristics |
5396415, | Jan 31 1992 | HONEWELL INC , A CORP OF DE | Neruo-pid controller |
5486996, | Jan 22 1993 | Honeywell Inc. | Parameterized neurocontrollers |
5570282, | Nov 01 1994 | SCHNEIDER ELECTRIC SYSTEMS USA, INC | Multivariable nonlinear process controller |
5625552, | Dec 18 1991 | Closed loop neural network automatic tuner | |
5706193, | Jun 29 1993 | Siemens Aktiengesellschaft | Control system, especially for a non-linear process varying in time |
5719480, | Oct 27 1992 | MINISTER OF NATIONAL DEFENCE OF HER MAJESTY S CANADIAN GOVERNMENT | Parametric control device |
6078843, | Jan 24 1997 | Honeywell, Inc | Neural network including input normalization for use in a closed loop control system |
6092919, | Aug 01 1995 | GUIDED SYSTEMS TECHNOLOGIES, INC | System and method for adaptive control of uncertain nonlinear processes |