A method of internal connection for neural networks, linking successive neuron layers and useful in image analysis and in image and signal processing, is provided. The output states of the neurons of a layer are represented by functions obtained by the method. A weighted summation of the functions representing the output states of the neurons of the preceding layer is carried out. A saturation function is applied to the result of the weighted summation, and a distribution function is applied to the saturated result. A function representative of the output state of the neuron layer is then output.
10. A neural network comprising,
a plurality of neurons arranged in layers, each neuron layer having inputs and outputs, wherein each neuron layer receives output states as functions from a preceding neuron layer, and wherein each neuron layer comprises a weighted summer for obtaining a weighted summation of functions which represent said output states from the preceding neuron layer, a saturation function stage for obtaining a saturated summation by applying a saturating function to said weighted summation, and a distribution function stage for applying a distribution function to said saturated summation, wherein said distribution function stage has an output which outputs a function representative of output states of said neuron layer.
1. An internal connection method for neural networks, comprising
linking successive neuron layers, each layer having at least one input and at least one output, wherein said neuron layers are connected so that the output of one neuron layer is connected to the input of a successive neuron layer, each neuron layer receiving at said inputs output functions from a preceding neuron layer, using a weighted summer to carry out a weighted summation of said output functions which represent an output state from the preceding neuron layer, using a saturation function stage to apply a saturation function to said weighted summation and obtain a saturated summation, and using a distribution function stage to apply a distribution function to said saturated summation and to output a function representative of output states of said neuron layer.
2. The method according to
3. The method according to
4. The method according to
5. The method of
8. The method of
9. The method according to
This is a continuation of application Ser. No. 07/944,482, filed Sep. 14, 1992, now abandoned.
The present invention relates to an internal connection method for neural networks, having particular application to prediction tasks or to observation vector dimension reduction in, for example, the image processing and analysis fields. More generally, it can be applied to techniques employing multi-layer neural network modelling structures.
Over the last few years, a large number of neural network models have been proposed. Among these, the model that seems to be of most interest for industrial applications is the multi-layer Perceptron which is particularly discussed in an article by D. E. Rumelhart, G. E. Hinton and R. J. Williams "Learning internal representations by error back propagation" in the book "Parallel Distributed Processing" Cambridge: MIT Press, 1986. This structure is an assembly of individual units called "neurons" or "automata". The neurons are organized into layers, each layer receiving the outputs from the preceding layer at its inputs. A learning algorithm known as a gradient back-propagation algorithm has been proposed for this type of network. This algorithm is discussed in the abovementioned article.
Even though the multi-layer Perceptron's performance is satisfactory as regards classification tasks, the same does not apply to tasks involving prediction or, notably, space dimension reduction. The multi-layer Perceptron performs badly in these fields because of its poor ability to handle non-linear problems.
The aim of the invention is to overcome the above-cited disadvantages. The invention hence provides an internal connection method for neural networks linking successive neuron layers, each layer receiving the outputs from the layer of immediately lower rank at its inputs, the method consisting in representing the output states of the neurons by a function, said function being obtained by the steps comprising: carrying out a weighted summation of functions, each of said functions representing the output state of a neuron from the preceding layer, then, in a second step, applying a saturating function to the result of said summation, and, in a third step, applying a distribution function to the result after application of the saturating function.
The principal advantage of the invention is that it improves the ability of the multi-layer Perceptron to handle non-linear problems with a notable improvement in the Perceptron's prediction and dimension-reducing performance.
Further features and advantages of the invention will become more clear from the description that follows in conjunction with the attached drawings in which:
FIG. 1a shows the organisation of neurons in layers in accordance with the multi-layer Perceptron structure.
FIG. 1b illustrates a neuron connection model.
FIG. 2a shows a neuron connection model in accordance with the invention.
FIG. 2b shows examples of distributed scalar functions.
FIG. 2c shows a neural model with sampling in accordance with the invention.
FIG. 3 shows the segmentation of a distributed scalar function.
FIG. 4a shows results obtained by application of the method according to the invention.
FIG. 4b illustrates the structure of the neural network used to obtain the results shown in FIG. 4a.
FIG. 5 shows a flow chart of the method according to the invention.
FIG. 1a shows how neurons are organized in layers in accordance with the structure of the multi-layer Perceptron. An input vector 1 supplies information to input units symbolized by arrows. There are no internal connections within the layers.
Each neuron receives the outputs from the layer of immediately lower rank at its inputs. Generally, a so-called threshold neuron, having no inputs, whose output is always 1, and which is symbolized by a triangle, is provided on each layer, except at output vector 2 level, in order to improve the network's capacity by thus adding a degree of freedom.
FIG. 1b shows one way of connecting neurons in association with the multi-layer Perceptron's structure, as proposed by Rumelhart et al. It consists of a summing element 3 and a non-linear transition function 4, of the hyperbolic tangent type, for example. Oi stands for the output state of a neuron i from the preceding layer, Oj is the output state of a neuron j forming part of the next layer, the three arrows at the input to summing element 3 symbolizing the connections to the neurons of the preceding layer. Wij stands for the weighting coefficient associated with the connection between neuron i and neuron j. Xj, the result of the weighted sum 3, is called the potential of neuron j. If we use F to stand for the transition function 4 of the neuron, and pred(j) stands for the set of neuron j's predecessors, then, using this notation, we can write:

$$X_j = \sum_{i \in \mathrm{pred}(j)} W_{ij}\, O_i, \qquad O_j = F(X_j)$$

the symbol $\in$ indicating that i belongs to the set pred(j).
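As a point of reference, this conventional connection can be sketched in a few lines of code. This is a minimal illustration only; the hyperbolic tangent transition function and the use of NumPy arrays are assumptions, not requirements of the text.

```python
import numpy as np

def classical_neuron(O_prev, w_j):
    """Conventional connection of FIG. 1b.

    O_prev : outputs O_i of the neurons i in pred(j)
    w_j    : weighting coefficients W_ij of the connections arriving at j
    """
    X_j = np.dot(w_j, O_prev)   # potential X_j = sum_i W_ij O_i
    return np.tanh(X_j)         # O_j = F(X_j), F of the hyperbolic tangent type

# Example: three predecessor neurons (one of them a threshold neuron fixed at 1).
print(classical_neuron(np.array([1.0, 0.2, -0.5]), np.array([0.1, 0.7, 0.3])))
```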
FIG. 2a shows a connection model in accordance with the invention. The neuron output states are no longer indicated by scalar quantities as before, but by functions. The connection method according to the invention consists, in a first step 21, of forming the weighted sum of the functions fxi representing the output states of the neurons i of the preceding layer. If the function Wij stands for a weight continuum characterizing the connection between the neurons i and j, then, at the end of this first step 21, the potential Xj associated with neuron j is given by the relation:

$$X_j = \sum_{i\in\mathrm{pred}(j)} \int W_{ij}(y)\, f_{x_i}(y)\, dy \quad (2)$$

pred(j) having been defined previously.
In a second step 22, the result Xj of this weighted sum is passed through a saturating function Fs, whose result is denoted xj, xj = Fs(Xj), Fs being, for example, of the hyperbolic tangent type. In a third step 23, a distribution function dh is applied to the value xj in order to obtain the output function that characterizes the output state of neuron j, fxj standing for this function. The distribution function dh employed in the third step is a function that maps the set of real numbers R into the set of mappings from R to R, and associates with every real number x the result of the convolution product of the Dirac delta function centered on x, represented by δx, with an even and aperiodic function represented by h. Using the above notation, we have, for each real number x:
$$f_x = d_h(x) = \delta_x * h$$

where δx * h stands for the convolution product of the function δx by the function h. In this case, for all real numbers x and y,

$$f_x(y) = h(y - x) = h(x - y) \quad (1)$$
The distribution function dh is hence determined entirely by knowledge of the function h. Below, the function fx will be called the distributed scalar representation of x. Function h can in practice be a smoothing filter, for example.
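A minimal sketch of the distributed scalar representation follows. It assumes h is a Gaussian smoothing filter of roughly unit standard deviation; the text only requires h to be even and aperiodic, so the Gaussian form and the value of sigma are assumptions.

```python
import numpy as np

def h(y, sigma=1.0):
    # Even, aperiodic smoothing filter; a Gaussian of about unit
    # standard deviation is assumed here.
    return np.exp(-y**2 / (2.0 * sigma**2))

def distributed_scalar(x, y):
    # f_x(y) = h(y - x) = h(x - y): the distributed scalar
    # representation of the real number x, evaluated at y.
    return h(y - x)

# Curves for x = 0, 0.3 and 0.7: each f_x peaks at y = x.
y = np.linspace(-3.0, 3.0, 121)
for x in (0.0, 0.3, 0.7):
    fx = distributed_scalar(x, y)
    print(x, round(float(y[fx.argmax()]), 2), round(float(fx.max()), 3))
```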
FIG. 2b shows several distributed scalar representations in the case where, for example, h is a function that generally follows a Gaussian curve. The y-axis is the set of real numbers, the axis marked fx (y) is the set of images of the real numbers resulting from application of the function fx. Curve 201 shows the shape of the function fx for x=0, curve 202 shows the shape of the function fx for x=0.3 and curve 203 shows the shape of function fx for x=0.7.
How the network according to the invention works will be better understood from the following examination of a connection between a neuron i and a neuron j. Starting from the expression for potential Xj, it is possible to obtain the relation:

$$X_j = \sum_{i\in\mathrm{pred}(j)} s_{ij}(x_i) \qquad \text{with} \qquad s_{ij}(x_i) = \int W_{ij}(y)\, f_{x_i}(y)\, dy = \int W_{ij}(y)\, h(x_i - y)\, dy$$

in accordance with relation (1).
This latter relation in fact represents the convolution product of the function h by the function Wij applied to the real number xi, in other words:
$$s_{ij}(x_i) = (h * W_{ij})(x_i) \quad (3)$$
Thus, by grouping together the distribution function for neuron i with the weight continuum Wij characterizing the connection between neuron i and neuron j, this connection appears as the result of filtering the weight continuum by the function generating the distributed scalar representation. If we write Cij = h*Wij, it is possible to describe the connections that terminate at neuron j by the following sequence of operations, using the preceding notations:

$$X_j = \sum_{i\in\mathrm{pred}(j)} C_{ij}(x_i), \qquad x_j = F_s(X_j), \qquad f_{x_j} = d_h(x_j) \quad (4)$$
One way of expressing the effect of the distributed scalar representation is to consider that the connection is an adaptive function and no longer a simple adaptive gain, as in the case of a conventional neural model, thus making it possible to deal more effectively with problems of non-linearity.
There remains, however, the practical problem of representing a weight continuum. According to the invention, the weights are thus represented by a sampled function. This sampled function between neuron i and neuron j is defined and expressed in the following manner:

$$W^{*}_{ij}(y) = \sum_{n\in\mathbb{Z}} W_{ij,n}\,\delta(y - n)$$

where n is a relative whole number, Wij,n is the value of the weight continuum Wij at point n, and δ is the Dirac delta function.
From relation (3), using the previous notations, the function sij that links the potential Xj of a neuron j to the value xi of a neuron i of the preceding layer is defined in the following manner: sij(xi) = (h*W*ij)(xi), giving

$$s_{ij}(x_i) = \int h(x_i - y)\sum_{n} W_{ij,n}\,\delta(y - n)\, dy$$

As each Wij,n is a constant, we obtain:

$$s_{ij}(x_i) = \sum_{n} W_{ij,n}\, h(x_i - n)$$

By setting:

$$f_{x_i,n} = f_{x_i}(n) = h(x_i - n)$$

we obtain:

$$s_{ij}(x_i) = \sum_{n} W_{ij,n}\, f_{x_i,n} \quad (5)$$
The neural model with sampling is shown in FIG. 2c. The summing function of step 21, the saturating function of step 22 and the distribution function of step 23 are the same as before, n and m being relative whole numbers,

$$O_{j,n} = f_{x_j,n} \qquad \text{and} \qquad O_{i,m} = f_{x_i,m}$$

Wij,m being the value of the weight continuum Wij at sampling point m. At the output from step 23, instead of a single arrow as in the previous cases, there are now two arrows separated by a dotted line that symbolizes the sampled values obtained, of which Oj,n is one particular value. The improvement in the neural network's internal connection can be clearly seen here: instead of the neuron's output being represented by a single value, it is now represented by a sequence of sampled values, with an accompanying improvement in accuracy and greater suitability for the various types of processing involved.
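The sampled model of FIG. 2c then amounts to the following layer-level computation. This is a sketch under assumptions: a Gaussian h, a hyperbolic tangent saturating function, integer sampling points stored in `grid`, and array shapes chosen for the illustration.

```python
import numpy as np

def h(y, sigma=1.0):
    return np.exp(-y**2 / (2.0 * sigma**2))     # assumed Gaussian smoothing filter

def layer_forward(O_prev, W, grid):
    """One layer of the sampled model of FIG. 2c.

    O_prev : (I, M) array, O_{i,m} = f_{x_i}(m) from the preceding layer
    W      : (I, J, M) array of sampled weights W_{ij,m}
    grid   : (N,) array of sampling points n used for this layer's outputs
    """
    # Step 21: weighted summation  X_j = sum_i sum_m W_{ij,m} O_{i,m}
    X = np.einsum('im,ijm->j', O_prev, W)
    # Step 22: saturating function F_s (hyperbolic tangent)
    x = np.tanh(X)
    # Step 23: distribution function followed by sampling: O_{j,n} = h(x_j - n)
    O = h(x[:, None] - grid[None, :])
    return X, x, O

# Example: 2 input neurons, 3 neurons in this layer, sampling points -3..3.
grid = np.arange(-3, 4)
O_prev = h(np.array([0.2, -0.5])[:, None] - grid[None, :])   # distributed inputs
W = np.random.default_rng(0).normal(scale=0.1, size=(2, 3, len(grid)))
X, x, O = layer_forward(O_prev, W, grid)
print(X.shape, x.shape, O.shape)   # (3,) (3,) (3, 7)
```

Because the output O has the same (neurons, samples) layout as the input, successive layers can be chained directly.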
As FIG. 2b shows, the filtering function h is generally a Gaussian function with only a small standard deviation.
Referring to FIG. 5, the method according to the invention is shown in flow-chart form. Inputs in the form of functions are received by the present neuron layer 66 from the preceding neuron layer. A weighted summation 62 of the inputs is determined, and a saturation function is then applied to the weighted summation. A distribution function 64 is applied to the result of the saturation function. The distribution function can involve transforming the saturated summation by a convolution product of a smoothing filter and a Dirac delta function. The result of this transformation can then be sampled 65, resulting in output functions representative of the output state of the neuron layer.
From FIG. 3, it can be seen that outside the interval [n1, n2], the value of fx(y) = h(x-y) = h(y-x) is always small, meaning that weights Wij,n for which n lies outside the range [n1, n2] have little influence on the product Wij,n fxi,n of relation (5). For values of n outside [n1, n2], the values of the weights Wij,n can hence be set to zero. In practice, the limits of the range [n1, n2] would be chosen so as to enclose the asymptotic values of the saturating function Fs. In FIG. 3, FMIN and FMAX are the lower and upper saturation, or asymptotic, values of the saturating function Fs. For the hyperbolic tangent function, for example, FMIN = -1 and FMAX = +1. In line with the notation established above, we can now write:

$$X_j = \sum_{i\in\mathrm{pred}(j)} \sum_{n=n_1}^{n_2} W_{ij,n}\, O_{i,n} \quad (6)$$
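A small numerical check of this truncation follows; the margin around [-1, +1] and the Gaussian h are assumptions, not values taken from the text. With Fs = tanh, the saturated value xj always lies in [FMIN, FMAX] = [-1, +1], so an integer range such as [n1, n2] = [-3, 3] covers every sampling point at which fx is not negligible.

```python
import numpy as np

n1, n2 = -3, 3                      # assumed range enclosing FMIN = -1, FMAX = +1
grid = np.arange(n1, n2 + 1)        # sampling points n with trainable weights W_ij,n

sigma = 1.0
x = 0.7                             # a saturated value x_j in [-1, +1]
fx = np.exp(-(x - grid) ** 2 / (2 * sigma**2))
# Contributions h(x - n) at points just outside the range are small and shrink
# rapidly with distance, so zeroing the corresponding W_ij,n loses almost nothing.
outside = np.exp(-(x - np.array([n1 - 1, n2 + 1])) ** 2 / (2 * sigma**2))
print(fx.round(3), outside.round(5))
```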
Having now established the connection model, a learning algorithm needs to be defined. It is necessary to provide this learning phase in order that, regardless of the input to the net, the states of the output neurons are adapted to this input. The learning algorithm defined according to the invention is a gradient algorithm inspired by the back-propagation algorithm discussed in the article by Rumelhart et al cited above.
In the learning algorithm defined according to the invention, a mean value eQM of the squares of the errors between the target outputs (tj) and the outputs actually obtained (xj) is, for example, defined:
$$e_{QM} = E\{e_Q\}$$
The operator E{ } computes an average value of the quantities eQ defined by:

$$e_Q = \frac{1}{2}\sum_{j\in s} (x_j - t_j)^2 \quad (7)$$

s identifying all the output neurons.
During the learning phase, all the examples are fed in several times in a random order. Each time an example is presented, the weights Wij,n are varied in the direction opposite to the error gradient. Denoting this variation by ΔWij,n:

$$\Delta W_{ij,n} = -\alpha\,\frac{\partial e_{QM}}{\partial W_{ij,n}}$$

α being a positive constant; a reduction in the error is thus ensured. The local gradient of the squared error will be written gij,n below:

$$g_{ij,n} = \frac{\partial e_Q}{\partial W_{ij,n}}$$
As eQM = E{eQ}, it is possible to write:

$$\frac{\partial e_{QM}}{\partial W_{ij,n}} = E\!\left\{\frac{\partial e_Q}{\partial W_{ij,n}}\right\}$$

since differentiation is a linear operator. We consequently obtain: ΔWij,n = -α E{gij,n}.
In order to carry out the learning phase, it is hence necessary to know the gradients gij,n of the error with respect to each weight:

$$g_{ij,n} = \frac{\partial e_Q}{\partial W_{ij,n}} = \frac{\partial e_Q}{\partial X_j}\,\frac{\partial X_j}{\partial W_{ij,n}} = \frac{\partial e_Q}{\partial X_j}\, O_{i,n}$$

from relations (2) and (6). Thus, by setting

$$\delta_j = -\frac{\partial e_Q}{\partial X_j}$$

we obtain gij,n = -δj Oi,n.
All that is needed now to obtain gij,n is to define δj. By applying elementary rules of differential calculus, it is seen that δj can be expressed as a function of the terms

$$\delta_k = -\frac{\partial e_Q}{\partial X_k}$$

relative to the neurons k of the layer following neuron j. In effect,

$$\frac{\partial e_Q}{\partial X_j}$$

can be expressed as

$$\frac{\partial e_Q}{\partial X_j} = \sum_{n=n_1}^{n_2} \frac{\partial e_Q}{\partial O_{j,n}}\,\frac{\partial O_{j,n}}{\partial X_j}$$

[n1, n2] being the previously-defined interval. The term

$$\frac{\partial e_Q}{\partial O_{j,n}}$$

can be written as

$$\frac{\partial e_Q}{\partial O_{j,n}} = \sum_{k\in\mathrm{succ}(j)} \frac{\partial e_Q}{\partial X_k}\,\frac{\partial X_k}{\partial O_{j,n}}$$

succ(j) identifying the neurons of the layer immediately following the layer where neuron j is situated, with

$$\frac{\partial X_k}{\partial O_{j,n}} = W_{jk,n}$$

from relations (2) and (6). Additionally,

$$\frac{\partial O_{j,n}}{\partial X_j}$$

remains to be evaluated. In effect, Oj,n = fxj,n, and from relations (1) and (4)

$$\frac{\partial O_{j,n}}{\partial X_j} = h'(x_j - n)\, F_s'(X_j)$$

which gives the following expression for δj in terms of the δk's for the neurons k of the following layer:

$$\delta_j = F_s'(X_j)\sum_{n=n_1}^{n_2} h'(x_j - n) \sum_{k\in\mathrm{succ}(j)} W_{jk,n}\,\delta_k$$
For the particular case of the last layer, δj is readily expressed as a function of the type of error that is chosen since, in effect,

$$\delta_j = -\frac{\partial e_Q}{\partial X_j} = -\frac{\partial e_Q}{\partial x_j}\, F_s'(X_j)$$

Now, from relation (7), for the neurons j belonging to the last layer:

$$\frac{\partial e_Q}{\partial x_j} = x_j - t_j$$
We consequently have δj = -(xj - tj) Fs'(Xj).
Thus, using iteration and making use of the successive quadratic errors between the target outputs and the actual outputs obtained, it is possible to determine the coefficients Wij,n, this constituting the learning phase.
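For the last layer, where δj is given explicitly above, one gradient step can be sketched as follows. Assumptions: Fs = tanh, hence Fs'(X) = 1 - tanh²(X), a learning rate α chosen arbitrarily, and the same array layout as in the earlier sketches.

```python
import numpy as np

def output_layer_step(O_prev, W, t, alpha=0.1):
    """One learning update on the weights feeding the output layer.

    Implements Delta W_{ij,n} = -alpha g_{ij,n} = alpha delta_j O_{i,n},
    with delta_j = -(x_j - t_j) F_s'(X_j) for the output neurons.
    """
    X = np.einsum('in,ijn->j', O_prev, W)     # potentials X_j (relation (6))
    x = np.tanh(X)                             # saturated outputs x_j
    Fs_prime = 1.0 - x**2                      # derivative of the hyperbolic tangent
    delta = -(x - t) * Fs_prime                # delta_j for the last layer
    W = W + alpha * np.einsum('j,in->ijn', delta, O_prev)   # weight variation
    return W, x

# Example: 2 predecessor neurons sampled at 7 points, 2 output neurons.
rng = np.random.default_rng(1)
O_prev = rng.uniform(size=(2, 7))
W = rng.normal(scale=0.1, size=(2, 2, 7))
W, x = output_layer_step(O_prev, W, t=np.array([0.3, -0.2]))
print(x)
```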
The method according to the invention can advantageously be applied to space dimension reduction, notably in signal processing for example. In numerous signal processing applications, an observation is effectively represented by a vector belonging to a space of dimension N. The components of this vector are very frequently not independent and are in reality limited to a hyper-surface of dimension M, M being less than N. Some methods of analysis do in fact allow this hyper-surface to be approximated by a hyper-plane. When the hyper-surface is highly curved, however, this approximation is not satisfactory, and no satisfactory method currently exists. The method according to the invention enables satisfactory results to be obtained. The example that follows considers a two-dimensional case, but is representative of any higher-dimension problem. In FIG. 4a, the hyper-surface S is represented by a quarter of a circle of unit radius 50.
The data to be processed are synthetic data represented by vectors with coordinates (a, b) chosen at random over the quarter circle S in accordance with a law that is uniform in, for example, the angle. FIG. 4b shows the neural net employed in accordance with the invention, a and b being the input data. The d functions, 41 and 42, are distribution functions yielding the distributed scalar representation functions fa and fb defined as in relation (1), neurons 43, 44 and 45 being connected in accordance with the method of the invention. Function fx1 is the distributed scalar representation function representing the state of neuron 1, indicated by reference numeral 43. A and B are the values obtained at the output from the net. During the learning phase, the net is trained by a self-associative method, meaning that the target values required at the output of the network are equal to the data supplied at the network's input; in this case, if a and b are the input data and A and B are the output data obtained, the squared error eQ is given by the following relation:

$$e_Q = \frac{1}{2}\left[(A - a)^2 + (B - b)^2\right]$$
As, in this network, all the information must of necessity pass through neuron 1, indicated by 43 in FIG. 4b, the problem to be computed is reduced to a one-dimensional problem. This gives a satisfactory result, as the plot of the output coordinates (A, B) obtained, represented by curve S1 designated as 51 in FIG. 4a, is a satisfactory approximation of the target output coordinates represented by curve S. Another solution, which consisted of processing the same data with a conventional neural net that was not connected as per the invention, yielded, even though it enabled the dimension to be reduced to 1, only the approximation represented by straight line P.
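To reproduce the setting of FIG. 4a and FIG. 4b, the synthetic data and the self-associative error can be generated as follows. This is a sketch only: the number of samples and the random seed are arbitrary, and the network training loop itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observations (a, b) drawn on the quarter circle S of unit radius,
# uniformly in angle, as described for FIG. 4a.
theta = rng.uniform(0.0, np.pi / 2.0, size=1000)
data = np.stack([np.cos(theta), np.sin(theta)], axis=1)      # columns a, b

def self_associative_error(AB, ab):
    # e_Q = 1/2 [(A - a)^2 + (B - b)^2]: in the self-associative set-up
    # the target outputs are the inputs themselves.
    return 0.5 * np.sum((AB - ab) ** 2, axis=-1)

# A perfect network would reproduce its input exactly, giving zero error.
print(self_associative_error(data, data).max())
```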
The method according to the invention can also be employed in the field of prediction, where it considerably improves performance. The good results obtained by this method in the processing of non-linear problems are principally due to the fact that the connections behave like adaptive functions and no longer like simple adaptive gains. For example, the smoothing function h of relation (1) employed to define the distribution function can be defined by the Gaussian function:

$$h(y) = \exp\!\left(-\frac{y^2}{2\sigma^2}\right)$$

and adapted as a function of the particular problem that is to be processed by making changes to the standard deviation σ, which in general is of the order of unity.