A color sensor for generating color information defining colors of an image includes an input section, a color processing section, a color comparison section, a color boundary processing section and a memory processing section. The input section includes an array of transducer pairs, each pair defining one of a plurality of pixels. Each transducer of a pair generates an output having a peak at a selected color, the selected color differing between the two transducers. A plurality of pixel processors in the color processing section each receives the outputs from one of the transducer pairs. The color processing section generates a color feature vector representative of the brightness of the light incident on a pixel and a color value corresponding to the ratio of outputs from the transducers comprising the transducer pair associated with the pixel. The color boundary processing section generates a plurality of color boundary feature vectors, each representing the difference between the color value for a pixel and those of its neighboring pixels. The color comparison section measures and compares the reflective color of two objects, and the memory processing section provides a process to recognize a color, a boundary of color and/or a comparison of colors.

Patent: RE42255
Priority: May 10 2001
Filed: Jul 27 2006
Issued: Mar 29 2011
Expiry: May 10 2021
Entity: Large
19. A method comprising:
generating color feature vectors representative of brightness of light associated with pixels of an image based, at least in part, on bi-chromic information associated with said pixels;
generating color values indicative of ratios of said bi-chromic information associated with said pixels; and
generating a plurality of color boundary feature vectors associated with said pixels, said color boundary feature vectors representing local color gradients based, at least in part, on color values associated with neighboring ones of said pixels.
26. An apparatus comprising:
means for generating color feature vectors representative of brightness of light associated with pixels of an image based, at least in part, on bi-chromic information associated with said pixels;
means for generating color values indicative of ratios of said bi-chromic information associated with said pixels; and
means for generating a plurality of color boundary feature vectors associated with said pixels, said color boundary feature vectors representing local color gradients based, at least in part, on color values associated with neighboring ones of said pixels.
13. An image processor for generating color information defining colors of an input image, the image processor comprising:
a color processing section to provide color feature vectors representative of brightness of light associated with pixels of an image based, at least in part, on bi-chromic information associated with said pixels, and to provide color values indicative of ratios of components of said bi-chromic information associated with said pixels; and
a color boundary processing section to provide a plurality of color boundary feature vectors associated with said pixels, said color boundary feature vectors representing local color gradients based, at least in part, on color values associated with neighboring ones of said pixels.
1. A color sensor for generating color information defining colors of an input image, the color sensor comprising:
an input section including an array of transducer pairs, each transducer pair defining one of a plurality of pixels of said image, each transducer pair comprising at least two transducers each generating an output having a peak at a selected color, the selected color differing as between the two transducers, and each transducer having an output profile comprising a selected function of color;
a color processing section including a plurality of color pixel processors each receiving the outputs from the two transducers comprising the transducer pair associated with a pixel, and for generating in response a color feature vector representative of the brightness of the light incident on the pixel and a color value corresponding to the ratio of outputs from the transducers comprising the transducer pair associated with the pixel; and
a color boundary processing section for generating a plurality of color boundary feature vectors, each associated with a pixel, each representing the difference between the color value generated by the pixel color processor for the respective pixel and color values generated by the pixel color processor for pixels neighboring the respective pixel.
11. A color comparator for comparing color information between a first input image and a second input image, the color comparator comprising:
an input section for each image, each input section including an array of transducer pairs, each transducer pair defining one of a plurality of pixels of said image, each transducer pair comprising at least two transducers each generating an output having a peak at a selected color, the selected color differing as between the two transducers, and each transducer having an output profile comprising a selected function of color;
a color processing section for each image, each color processing section including a plurality of color pixel processors each receiving the outputs from the two transducers comprising the transducer pair associated with a pixel, and for generating in response a color feature vector representative of the brightness of the light incident on the pixel and a color value corresponding to the ratio of outputs from the transducers comprising the transducer pair associated with the pixel; and
a comparator section receiving the color feature vector and the color value from the color processing section for each image and generating a comparison feature fusion vector representative of color information differences in the first and second images.
2. A color sensor as defined in claim 1 in which said input section includes:
a retina comprising said transducer pair array;
a lens for focusing an image of an object onto said retina; and
an adjustable iris situated between said lens and said retina for adjusting the intensity of light comprising said image on said retina.
3. A color sensor as defined in claim 2 in which said iris is adjustable in response to an adjustment signal representative of the intensity of light incident over the entire retina.
4. A color sensor as defined in claim 3 in which said color processor generates said adjustment signal in response to the sum of the amplitudes of all of the outputs generated by all of said transducers comprising the retina.
5. A color sensor as defined in claim 1, wherein the color processing section further comprises:
a plurality of pairs of controlled gain amplifier circuits, each pair associated with one of the color pixel processors, each one of the pair for receiving an output from one of the transducers comprising the transducer pair associated with the one color pixel processor, each controlled gain amplifier circuit generating a controlled gain output in response to the output from the transducer and a respective controlled gain signal; and
a common control generating said controlled gain signals from said controlled gain outputs in a feedback loop manner, for controlling said controlled gain amplifier circuits of all of said color pixel processors in tandem.
6. A color sensor as defined in claim 5, further comprising, for each color pixel processor, a ratio generating circuit for generating a color vector output representative of a difference between amplitudes of the outputs of said controlled gain amplifier circuits, said color vector corresponding to said ratio of outputs.
7. A color sensor as defined in claim 5, further comprising, for each color pixel processor, a brightness value generating circuit for generating a brightness value corresponding to the sum of the controlled gain outputs generated by the respective controlled gain amplifier circuits.
8. A color sensor as defined in claim 7, wherein each color pixel processor further comprises:
a neural director for receiving the color value and brightness value and generating in response an output vector having an increased dimensionality which will aid in distinguishing between similar patterns in the input image; and
a multi-king-of-the-mountain circuit receiving the output vector of the neural director and generating a number of MKOM output vectors, each of which is associated with one dimension of the vector input thereto by the neural director, each component of the MKOM output vector having a value in a range of possible values from zero up to a maximum value related to the maximum positive element value of the input vector, the outputs associated with an input vector component having successively lower values being successively lower in value, forming a ranking of the vector components.
9. A color sensor as defined in claim 5, wherein the common control generates said controlled gain signals as a function of a peak output generated by respective ones of the controlled gain amplifier circuits of all of the color pixel processors.
10. A color sensor as defined in claim 9, wherein the common control generates said controlled gain signals as a function of a sum of the peak outputs.
12. A color comparator as defined in claim 11, wherein the comparator section further comprises:
a brightness difference circuit receiving the color feature vector for each of the images and generating a brightness difference vector;
a color value difference circuit receiving the color value for each of the images and generating a color value difference vector; and
a comparator feature fusion network array receiving the brightness difference vector and the color value difference vector and generating the comparison feature fusion vector.
14. The image processor of claim 13, wherein said bi-chromic information associated with said pixels is based, at least in part, on outputs of transducer pairs associated with said pixels.
15. The image processor of claim 13, wherein said bi-chromic information associated with said pixels is based, at least in part, on transducer output peaks at a first color associated with said pixels and transducer output peaks at a second color associated with said pixels.
16. The image processor of claim 13, wherein the color processing section comprises:
a plurality of pairs of controlled gain amplifier circuits associated with said pixels to apply a controlled signal gain to said bi-chromic information in response to associated controlled gain signals; and
a common control to generate said controlled gain signals from outputs of said controlled gain amplifier circuits in a feedback loop manner.
17. The image processor of claim 16, wherein said common control is further adapted to control said controlled gain amplifier circuits of all of said color pixel processors in tandem.
18. The image processor of claim 16, further comprising a plurality of brightness value generating circuits associated with said pixels for generating brightness values associated with a sum of outputs of associated ones of said controlled gain amplifier circuits.
20. The method of claim 19, further comprising obtaining said bi-chromic information associated with said pixels based, at least in part, on outputs of transducer pairs associated with said pixels.
21. The method of claim 19, further comprising obtaining said bi-chromic information associated with said pixels based, at least in part, on transducer output peaks at a first color associated with said pixels and transducer output peaks at a second color associated with said pixels.
22. The method of claim 19, further comprising:
applying a controlled signal gain to components of said bi-chromic information in response to associated controlled gain signals to provide controlled gain output signals; and
generating said controlled gain signals based, at least in part, on said controlled gain output signals.
23. The method of claim 22, further comprising controlling application of said controlled signal gain applied to bi-chromic information associated with all of said pixels in tandem.
24. The method of claim 22, further comprising generating brightness values associated with a sum of outputs of associated ones of said controlled gain output signals.
25. The method of claim 19, further comprising:
focusing a lens on an object to project said image onto a retina; and
adjusting an iris between said lens and said retina to adjust light comprising said image.
27. The apparatus of claim 26, further comprising means for obtaining said bi-chromic information associated with said pixels based, at least in part, on outputs of transducer pairs associated with said pixels.
28. The apparatus of claim 26, further comprising means for obtaining said bi-chromic information associated with said pixels based, at least in part, on transducer output peaks at a first color associated with said pixels and transducer output peaks at a second color associated with said pixels.
29. The apparatus of claim 26, further comprising:
means for applying a controlled signal gain to components of said bi-chromic information in response to associated controlled gain signals to provide controlled gain output signals; and
means for generating said controlled gain signals based, at least in part, on said controlled gain output signals.
30. The apparatus of claim 29, further comprising means for controlling application of said controlled signal gain applied to bi-chromic information associated with all of said pixels in tandem.
31. The apparatus of claim 29, further comprising means for generating brightness values associated with a sum of outputs of associated ones of said controlled gain output signals.
32. The apparatus of claim 26, further comprising:
means for focusing a lens on an object to project said image onto a retina; and
means for adjusting an iris between said lens and said retina to adjust light comprising said image.

The invention described herein may be manufactured by or for the Government of the United States of America for Governmental purposes without the payment of any royalties thereon or therefor.

This patent application is co-pending with related patent applications entitled NEURAL DIRECTORS (U.S. patent application Ser. No. 09/436,957), NEURAL SENSORS (U.S. patent application Ser. No. 09/436,956), STATIC MEMORY PROCESSOR (U.S. patent application Ser. No. 09/477,638), DYNAMIC MEMORY PROCESSOR (U.S. patent application Ser. No. 09/477,653), MULTIMODE INVARIANT PROCESSOR (U.S. patent application Ser. No. 09/641,395) and A SPATIAL IMAGE PROCESSOR (Ser. No. 09/853,932), by the same inventor as this patent application.

(1) Field of the Invention

The invention relates generally to the field of color sensors and more particularly to color sensors having neural networks with a plurality of hidden layers, or multi-layer neural networks, and further to a new neural network processor for sensing color in optical image data.

(2) Description of the Prior Art

Electronic neural networks have been developed to rapidly identify patterns in certain types of input data, or accurately to classify the input patterns into one of a plurality of predetermined classifications. For example, neural networks have been developed which can recognize and identify patterns, such as the identification of hand-written alphanumeric characters, in response to input data constituting the pattern of on and off picture elements, or “pixels”, representing the images of the characters to be identified. In such a neural network, the pixel pattern is represented by, for example, electrical signals coupled to a plurality of input terminals, which, in turn, are connected to a number of processing nodes, each of which is associated with one of the alphanumeric characters which the neural network can identify. The input signals from the input terminals are coupled to the processing nodes through certain weighting functions, and each processing node generates an output signal which represents a value that is a non-linear function of the pattern of weighted input signals applied thereto. Based on the values of the weighted pattern of input signals from the input terminals, if the input signals represent a character that can be identified by the neural network, the one of the processing nodes associated with that character will generate a positive output signal, and the others will not. On the other hand, if the input signals do not represent a character that can be identified by the neural network, none of the processing nodes will generate a positive output signal. Neural networks have been developed which can perform similar pattern recognition in a number of diverse areas.

The particular patterns that the neural network can identify depend on the weighting functions and the particular connections of the input terminals to the processing nodes. The weighting functions in, for example, the above-described character recognition neural network essentially represent the pixel patterns that define each particular character. Typically, each processing node will perform a summation operation in connection with values representing the weighted input signals provided thereto, to generate a sum that represents the likelihood that the character to be identified is the character associated with that processing node. The processing node then applies the non-linear function to that sum to generate a positive output signal if the sum is, for example, above a predetermined threshold value. The non-linear functions which processing nodes conventionally use in connection with the sum of weighted input signals are generally a step function, a threshold function, or a sigmoid; in all cases the output signal from the processing node will approach the same positive output signal asymptotically.
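For concreteness (no code appears in the source, and this passage describes conventional art), the weighted-sum-and-nonlinearity behavior of a single processing node can be sketched as follows; the weights, inputs and sigmoid choice are illustrative assumptions:

```python
import math

def processing_node(inputs, weights, threshold=0.0):
    """Weighted sum of input signals passed through a sigmoid
    non-linearity; the output approaches 1.0 asymptotically as the
    weighted sum rises above the threshold."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-(s - threshold)))

# Hypothetical node "tuned" to the 3-pixel pattern [1, 0, 1]:
print(processing_node([1, 0, 1], [1.0, -1.0, 1.0], threshold=1.0))  # ~0.73
```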

Before a neural network can be useful, the weighting functions for each of the respective input signals must be established. In some cases, the weighting functions can be established a priori. Normally, however, a neural network goes through a training phase, in which input signals representing a number of training patterns for the types of items to be classified, for example, the pixel patterns of the various hand-written characters in the character-recognition example, are applied to the input terminals, and the output signals from the processing nodes are tested. Based on the pattern of output signals from the processing nodes for each training example, the weighting functions are adjusted over a number of trials. After the neural network has been trained, during an operational phase it can generally accurately recognize patterns, with the degree of success based in part on the number of training patterns applied to the neural network during the training stage, and the degree of dissimilarity between patterns to be identified. Such a neural network can also typically identify patterns that are similar, but not necessarily identical, to the training patterns.

One of the problems with conventional neural network architectures as described above is that the training methodology, generally known as the “back-propagation” method, is often extremely slow in a number of important applications. In addition, under the back-propagation method, the neural network may produce erroneous results that require restarting of the training. Even after a neural network has been through a training phase, confidence that the best training has been accomplished may sometimes be poor. If a new classification is to be added to a trained neural network, the complete neural network must be retrained. In addition, the weighting functions generated during the training phase often cannot be interpreted in ways that readily provide an understanding of what they particularly represent.

Edwin H. Land's Retinex theory of color vision is based upon “three color” experiments performed before 1959. A simple “mishap” showed that three colors were not always required to see accurate color. Land used a short and a long record of brightness data (black and white transparencies) to produce color perceived by human eyes and not by photographic means. He demonstrated a perception of a full range of pastel colors using two light sources very similar in color, such as yellow at 579 nm and yellow-orange at 599 nm (“Experiments in Color Vision”, Edwin H. Land, Scientific American, Vol. 200, No. 5, May 1959). Land found that in some two record experiments not all colors present were perceived. Although Land demonstrated that two records provided color perceptions, he constructed his Retinex theory upon three records, such as his long, medium and short records (“An Alternative Technique for the Computation of the Designator in the Retinex Theory of Color Vision”, Edwin H. Land, Proceedings of the National Academy of Sciences, Vol. 83, 1986). The invention herein is related to the human color perception discovered during Land's color vision experiments as reported in 1959.

The “trichromatic” theory of human color vision has been accepted on and off since the time of Thomas Young in 1802 (“A Vision of the Brain”, S. Zeki, Blackwell Scientific Publishing, 1993). Still and video electronic camera designs are correctly based upon the trichromatic theory, but current designs are highly susceptible to color reproduction errors due to changes in the ambient light color temperatures and color filtrations. The device of this invention senses color using a new “bichromatic” theory, which includes a mechanism that ensures color constancy over a large range of ambient color temperatures. The use of two lightness records, as used by Land in 1959, is one key to this invention.

The bichromatic theory is based upon an interpretation of a biological color process that occurs in the eyes and brain of humans and in some animals. The bichromatic theory is defined as a system that functions together under the following assumptions, accepted principles and rules of procedure, for which FIGS. 4A and 4B are provided for support:

It is therefore an object of the invention to provide a new and improved neural network color sensor.

It is a further object to provide a neural network color sensor in which the weighting functions may be determined a priori.

Another object of the present invention is to provide a neural network color sensor, which can be trained with a single application of an input data set.

In brief summary, the color sensor generates color information defining colors of an image, comparison of colors illuminated under two or more light sources and boundaries between different colors. The color sensor includes an input section, a color processing section, a color comparison section, a color boundary processing section and a memory processing section. The input section includes an array of transducer pairs, each transducer pair defining one of a plurality of pixels of the input section. Each transducer pair comprises at least two transducers, each generating an output having a peak at a selected color, the selected color differing as between the two transducers, and each transducer having an output profile comprising a selected function of color. The color processing section includes a plurality of color pixel processors, each receiving the outputs from the two transducers comprising the transducer pair associated with a pixel. In response, the color processing section generates a color feature vector representative of the brightness of the light incident on the pixel and a color value corresponding to the ratio of outputs from the transducers comprising the transducer pair associated with the pixel. The color boundary processing section generates a plurality of color boundary feature vectors, each associated with a pixel, each representing the difference between the color value generated by the pixel color processor for the respective pixel and color values generated by the pixel color processor for pixels neighboring the respective pixel.

The color boundary sensor produces object shape feature vectors from a function of the differences in color. This color boundary sensor can sense a colored object shape in a color background where a black and white sensing retina could not detect differences in lightness between the background and the object. The color comparator processor can measure and compare the reflective color of two objects, even when each object is illuminated by two lights of different color temperatures. The memory processor section provides a process to recognize a color, a boundary of color and a comparison of colors.

A more complete understanding of the invention and many of the attendant advantages thereto will be readily appreciated as the same becomes better understood by reference to the following detailed description when considered in conjunction with the accompanying drawings wherein corresponding reference characters indicate corresponding parts throughout the several views of the drawings and wherein:

FIG. 1 is a functional block diagram of a color sensor constructed in accordance with the invention;

FIG. 1A is an expanded view of a transducer pair;

FIG. 2 is a functional block diagram of a color processor, which is useful in the color sensor depicted in FIG. 1;

FIG. 3 is a functional block diagram of a color boundary processor, which is useful in the color sensor of FIG. 1;

FIG. 4A is an example of the responses of two normalized photo transducers used in the color sensor; and

FIG. 4B is a schematic illustration of the theorems defining the workings of the color sensor.

FIG. 1 is a functional block diagram of a color sensor 10 constructed in accordance with the invention. By way of background, the color sensor 10 operates in accordance with a “bi-chromatic” mechanism of color recognition, which is theorized as being similar to the way in which human beings see and recognize color. In the conventional “tri-chromatic” color recognition mechanism, any color of light, either reflected or incident, can be generated by combining three different color illuminations. In the reverse direction, i.e., color recognition, any input color can be represented or analyzed as a combination of three base colors. Accordingly, three transducers, each sensing one of the three base colors, can be used to determine the contribution of each of the base colors to the input color. In the bi-chromatic mechanism, colors can be distinguished using two color transducers, which have peak sensitivity at different colors and provide a known output signal response as a function of the input color. The color sensor 10 determines, for an input image, the distribution of colors over the image, using two color transducers to identify the color at each point (that is, for each pixel or picture element) in the image. The color boundary process produces object shape features relative to the boundaries between different colors. The color comparator process produces comparative features relative to a “true reflective color” in ambient lights of different color temperatures. Reading a “true reflective color” in an ambient light of one color temperature and reading the same “true reflective color” in an ambient light of a second color temperature is a process that mimics human color constancy.
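As a rough illustration of the bi-chromatic mechanism just described, consider two transducers with known response profiles peaking at different colors; the ratio (here, a normalized difference) of their outputs identifies an input color independently of its intensity. The Gaussian profiles and wavelength values below are assumptions for illustration, not parameters of the invention:

```python
import math

# Assumed Gaussian response profiles; the invention requires only that the
# two transducers peak at different colors with known output profiles.
PEAK_1, PEAK_2, WIDTH = 480.0, 600.0, 80.0  # nanometers (illustrative)

def transducer(peak, wavelength, intensity=1.0):
    """Output of one transducer for monochromatic light of the given
    wavelength and intensity."""
    return intensity * math.exp(-((wavelength - peak) / WIDTH) ** 2)

def color_value(out1, out2):
    """Normalized difference of the two outputs; like the difference
    circuit described below, it encodes the ratio of the pair's outputs
    and cancels the overall intensity."""
    return (out1 - out2) / (out1 + out2 + 1e-12)

for wl in (450, 520, 580, 640):  # sample input colors
    print(wl, round(color_value(transducer(PEAK_1, wl),
                                transducer(PEAK_2, wl)), 3))
```

Each sampled wavelength maps to a distinct color value, which is the sense in which two transducers suffice to distinguish colors.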

With reference to FIG. 1, the color sensor 10 includes an input section 11, a color processing section 12 and a color boundary processing section 13, a color comparison processor 19 and a memory processor 29. The color processing section 12 and a color boundary processing section 13 both generate color and color boundary feature vectors, which may be provided to, for example, a memory processing section 14. The input section 11 receives an image of an object and generates, for each point, or pixel, color information signals representative of the color at the particular point of the image. The input section 11 includes a “retina” 15, which comprises an array of transducer pairs 15(1) through 15(M) (generally identified by reference numeral 15(m) and shown in the expanded view of FIG. 1A), which define the pixels of the image. Each transducer pair comprises two transducers, which have output peaks at two different frequencies and which provide a predetermined output value as a function of a color wave band. Preferably, all of the pixels will have one transducer 15(m)(1), which has a peak output at one frequency identified as λ1, and a second transducer 15(m)(2), having a peak output at a second frequency identified as λ2. The input section 11 further includes a lens 26, which focuses an image of the object onto the retina 15, and an iris 17, which controls the intensity of light incident on the retina 15.

The color processing section 12 uses the color information signals from the input section to generate, for each pixel, a local color feature vector representative of the color of the pixel. The color processing section 12 consists of a color processor array 20 and a feature fusion network array 23. The structure and operation of the color processing section 12 will be described in detail below in connection with FIG. 2. Similarly, the color boundary processing section 13 generates, for each pixel, a local color gradient feature vector that represents the gradient of the color at the pixel. The structure and operation of the color boundary processing section 13 will be described in detail below in connection with FIG. 3. The memory processor 29 is as described in STATIC MEMORY PROCESSOR, U.S. patent application Ser. No. 09/477,638. The parallel memory processors 16 and 18 are as described for the memory processor of the MULTIMODE INVARIANT PROCESSOR (U.S. patent application Ser. No. 09/641,395). The multi-mode invariant image processor, without its input sensor, is used for both parallel memory processors 16 and 18. The possible multiple outputs of the parallel memory processor 18 are the colored input object(s) classifications. The output vector array of the parallel memory processor 16 is a Positional King Of the Mountain (PKOM) array mapped to the pixels 15(m) in the retina, which becomes a map of color classifications of each pixel. It is noted that the PKOM array is a neural network array internal to the parallel memory processor 16 and the remaining neural circuits to the normal output of the MULTIMODE INVARIANT PROCESSOR are not used. The memory processor 29 is a static memory processor and provides an output classification as a degree of color comparison.
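The internal PKOM array referenced above is defined in the cited MULTIMODE INVARIANT PROCESSOR application; purely as a hedged stand-in, a positional king-of-the-mountain can be sketched as a winner-take-all that marks the position of the largest vector component, yielding the pixel's color classification:

```python
def pkom(vec):
    """Positional King Of the Mountain as a winner-take-all: output a
    one-hot vector marking the position of the largest component, i.e.
    the pixel's color classification (ties go to the first index)."""
    k = max(range(len(vec)), key=lambda i: vec[i])
    return [1.0 if i == k else 0.0 for i in range(len(vec))]

print(pkom([0.2, 0.7, 0.1]))  # -> [0.0, 1.0, 0.0]
```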

The local color feature vectors and the local color gradient feature vectors generated for all of the pixels are processed by the processing section 14 to, for example, classify the image into one of a plurality of image classes. The processing section 14 may comprise any of a plurality of processing elements for processing the vectors generated by the color processors 12, 13 and/or 19.

FIG. 2 is a functional block diagram of the color processing section 12 and the color comparison processor 19 as used in the color sensor of FIG. 1. With reference to FIG. 2, the color processing section 12 includes a plurality of pixel color processors 20(1) through 20(M), generally identified by reference numeral 20(m). The feature fusion network array 23 of color processing section 12 includes corresponding feature fusion neural directors 35(1) through 35(M) and Multi King Of the Mountain (MKOM) circuits 36(1) through 36(M), generally identified by reference numerals 35(m) and 36(m), respectively. The structures of all of the pixel color processors 20(m) are similar, and so FIG. 2 depicts the structure of only one pixel color processor and the corresponding feature fusion neural director 35(m) and MKOM 36(m). Each pixel color processor 20(m) processes the outputs generated by one of the transducer pairs in the retina 15. The color processing section 12 also includes a common control 21, which controls all of the pixel color processors 20(m) in parallel, controls the iris 17 and receives pixel data from each color processor 20(m).

Each pixel color processor 20(m) includes controlled gain amplifier (CGA) circuits 30(m)(1), 30(m)(2), which receive the color amplitude signals generated by the respective transducers 15(m)(1), 15(m)(2). Each CGA circuit 30(m)(1), 30(m)(2) generates an output adjusted by a gain control factor generated by the common control 21. The gain control factor is a function of the output of the transducer for each frequency having the highest amplitude, referred to as 15(H)(1) and 15(H)(2). The CGA circuits 30(m)(1), 30(m)(2) will normalize the respective outputs in relation to the highest amplitude output for their respective frequency. This allows each transducer pair 15(m) and their respective CGA circuit 30(m) to output differing values, which represent the color at each transducer pair 15(m) as well as the “color temperature” of the light incident on the object or retina 15. The common control 21 senses all transducer outputs for each frequency and uses the highest outputs 15(H)(1), 15(H)(2) to set each CGA circuit 30(m) in the color processor 12 to the same gain as the CGA circuits 30(H)(1), 30(H)(2) from the pixel(s) 15(m) that sensed the highest light energy in retina 15. The transducers 15(H)(1), 15(H)(2), the CGA circuits 30(H)(1), 30(H)(2) and the common control 21 operate as an automatic gain controlled loop normalizing the output signal at CGA circuit 30(H)(1). Therefore, the response of each transducer 15(m)(1) is normalized at the output of each CGA circuit 30(m)(1) relative to the output of CGA circuit 30(H)(1). It is to be noted that the transducers 15(H)(1), 15(H)(2) need not be from the same pixel 15(m), as the spectral light energy of a visual scene image at two separate frequencies is generally not the same everywhere on retina 15.
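A minimal sketch of this tandem gain control, assuming the common control simply scales each channel so that its highest-amplitude pixel normalizes to 1.0 (the dynamics of the actual feedback loop are omitted):

```python
def tandem_normalize(channel1, channel2):
    """Common-control sketch: one gain per color channel, chosen so the
    highest-amplitude pixel in that channel reads 1.0, applied to every
    pixel's CGA in tandem.  The two channel peaks need not come from
    the same pixel.  Inputs are lists of raw transducer amplitudes."""
    g1 = 1.0 / max(max(channel1), 1e-12)  # guard against a dark retina
    g2 = 1.0 / max(max(channel2), 1e-12)
    return [a * g1 for a in channel1], [a * g2 for a in channel2]

# Hypothetical 4-pixel retina; the channel peaks fall on different pixels.
print(tandem_normalize([0.2, 0.8, 0.4, 0.1], [0.5, 0.1, 0.9, 0.3]))
```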

The gain controlled output of each CGA circuit 30(m)(1), 30(m)(2) is provided to a number of elements, including a respective sum circuit 33(m), a difference circuit 32(m) and the common control 21. The outputs from the CGA circuits 30(m)(1), 30(m)(2) are coupled to the difference circuit, or difference generator, 32(m), which generates an output vector that is representative of the difference between the amplitudes of the outputs from the CGA circuits 30(m)(1), 30(m)(2). Accordingly, it will be appreciated that the output generated by the difference generator 32(m) corresponds to the ratio of the amplitudes of the automatic controlled gain signals from the respective transducers 15(H)(1), 15(H)(2) and the respective pixel transducer 15(m) outputs.

As noted above, the outputs from the CGA circuits 30(m)(1) and 30(m)(2) are also coupled to a sum circuit 33(m). The sum circuit 33(m) generates an output that corresponds to the sum of the amplitudes of the automatic controlled gain signal from the respective transducers 15(m)(1) and 15(m)(2), and thus represents the brightness of the light incident on the pixel defined by the transducers.
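Taken together, the difference circuit 32(m) and the sum circuit 33(m) reduce, per pixel, to the following sketch (variable names are illustrative):

```python
def pixel_color_processor(n1, n2):
    """Per-pixel outputs given the two normalized CGA amplitudes n1, n2:
    the difference circuit 32(m) yields the color value (standing in for
    the ratio of outputs) and the sum circuit 33(m) yields brightness."""
    return n1 - n2, n1 + n2

color, brightness = pixel_color_processor(0.7, 0.3)
print(color, brightness)  # color ~0.4, brightness 1.0
```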

The output vector from difference circuit 32(m) is coupled to the color boundary processor 13 (FIG. 1). The difference vector from difference circuit 32(m) and the brightness vector from sum circuit 33(m) are also both coupled to a neural director 35(m) that disperses these inputs into a local color feature vector. The neural director 35(m) is preferably similar to the neural directors as described in NEURAL DIRECTOR, U.S. patent application Ser. No. 09/436,957. Neural director 35(m) is preferably established to provide an output vector with an increased dimensionality, which will aid in distinguishing between similar patterns in the input vector.
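The neural director itself is specified in the referenced application; purely as an assumed stand-in, its dispersion of the two-component input into a higher-dimensional vector might resemble projection onto a fixed fan of directions:

```python
import math

def neural_director_2d(color, brightness, out_dim=8):
    """Assumed stand-in for the neural director: project the 2-D
    (color, brightness) input onto a fixed, evenly spaced fan of unit
    vectors, raising dimensionality so that similar inputs become
    easier to tell apart downstream."""
    return [color * math.cos(math.pi * k / out_dim)
            + brightness * math.sin(math.pi * k / out_dim)
            for k in range(out_dim)]

print([round(v, 3) for v in neural_director_2d(0.4, 1.0)])
```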

The output of the neural director 35(m) is coupled to bipolar MKOM 36(m), which is described in detail in STATIC MEMORY PROCESSOR, U.S. patent application Ser. No. 09/477,638. The bipolar MKOM 36(m) generates a number of positive and/or negative outputs M(1) through M(R), generally identified by reference numeral M(r), each of which is associated with one dimension of the feature vector input thereto. Each positive component M(r) of the output vector can have a range of values from zero up to a maximum value, which corresponds to, or is proportional to, the maximum positive element value of the input vector. The positive outputs M(r) that are associated with an input vector component having successively lower positive values, are themselves successively lower in value, thus forming a positive ranking of the vector components. Outputs M(r) that are associated with input vector components having negative values are also ranked as negative vector components in a similar manner to the positive components. The rankings for the respective input feature vectors may be global, for all of the components of the input feature vector, or they may be localized among a selected number of preferably contiguous input feature vector components. The feature vector generated by the bi-polar MKOM 36(m) is coupled to the memory processing section 14.
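A hedged sketch of the bipolar MKOM ranking described above, assuming linearly spaced rank values scaled to the extreme element of each sign (the actual circuit's value assignment may differ):

```python
def bipolar_mkom(vec):
    """Rank-preserving sketch of the bipolar MKOM: the largest positive
    component maps to the maximum positive element value, successively
    smaller positives map to successively smaller outputs, and negative
    components are ranked the same way toward the most negative value.
    Duplicate values share a rank in this sketch."""
    pos = sorted((v for v in vec if v > 0), reverse=True)
    neg = sorted(v for v in vec if v < 0)
    out = []
    for v in vec:
        if v > 0:
            out.append(pos[0] * (len(pos) - pos.index(v)) / len(pos))
        elif v < 0:
            out.append(neg[0] * (len(neg) - neg.index(v)) / len(neg))
        else:
            out.append(0.0)
    return out

print(bipolar_mkom([0.9, 0.2, -0.5, 0.4, -0.1]))
# -> approx [0.9, 0.3, -0.5, 0.6, -0.25]: each sign forms a ranking
```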

The outputs from CGA circuits 30(m)(1) and 30(m)(2) of all of the pixel color processors 20(m) are also coupled to the common control 21. The common control 21 includes peak sensing circuits 40(1), 40(2), each of which receives the output from the correspondingly-indexed CGA circuits 30(m)(1), 30(m)(2), and each generates an output which corresponds to the one of the outputs from the correspondingly-indexed CGA circuits 30(m)(1), 30(m)(2) with the largest signal value. The outputs from the peak circuits 40(1), 40(2) are also connected to control the gain of all of the correspondingly-indexed CGA circuits 30(m)(1), 30(m)(2).

The outputs from the CGA circuits 30(m)(1) and 30(m)(2) of all of the color pixel processors 20(m) are also connected to a sum circuit 41. The sum circuit 41 generates an output, which represents the sum of the outputs from all of the CGA circuits 30(m)(1), 30(m)(2) of all of the color pixel processors 20(m). The output provided by the sum circuit 41 represents the total intensity or power of the light incident on the retina 15. An iris control circuit 42 uses the sum circuit 41 output to control the iris 17, which normalizes the intensity of the light on retina 15.
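The iris feedback path can be sketched as simple proportional control driven by the sum circuit 41 output; the target level and adjustment rate below are assumed values:

```python
def iris_step(opening, cga_outputs, target=1.0, rate=0.1):
    """One update of iris control circuit 42: sum circuit 41 totals all
    CGA outputs, and the iris opening moves so as to pull that total
    toward a target level."""
    total = sum(cga_outputs)
    return max(0.0, opening + rate * (target - total))

opening = 0.5
for _ in range(3):  # overly bright scene: total incident power is 2.0
    opening = iris_step(opening, [0.5, 0.7, 0.8])
    print(round(opening, 3))  # iris closes: 0.4, 0.3, 0.2
```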

FIG. 3 is a functional block diagram of the color boundary processor 13, which is useful in the color sensor depicted in FIG. 1. The color boundary processor 13 can sense a colored object shape in a background of a different color. A black and white sensing retina often responds to different colors as equal lightness; therefore, it may not sense an object of one color against a different background color. As noted above, the color boundary processor 13 receives the color vector signals from the difference circuits 32(m) of all of the pixel color processors 20(m). Color boundary processor 13 then generates an output for each pixel 15(m) that represents a color gradient for the pixel 15(m). The outputs of each difference circuit 32(m) are spatially arranged in exactly the same spatial orientation as each associated pixel 15(m) in retina 15. The array of difference circuit 32(m) outputs becomes a virtual retina 55, shown in FIG. 3 to aid in the visualization of the spatial interconnections between the array of color processors 20 and color boundary processor 13. The color boundary processor 13 comprises a plurality of window difference networks 50(1) through 50(M), generally identified by reference numeral 50(m), each associated with one of the pixels 15(m) and an associated window 57(m). Color boundary processor 13 further comprises a like plurality of neural directors 51(m).

Each window difference network 50(m) receives a local window array 57(m) of difference vectors generated by the correspondingly-indexed pixel color processor 20(m). Each window difference network 50(m), in turn, generates an output vector which represents a color acceleration vector between the difference vectors provided by the correspondingly-indexed pixel color processor 20(m) and color vectors for pixels within a predetermined area around the pixel 15(m), illustrated in FIG. 3 as local window 57(m). Local window 57(m) may consist of any chosen pattern of pixels surrounding pixel 15(m), e.g., a star pattern or a box pattern. Each neural director 51(m) receives the color acceleration vector from the correspondingly-indexed window difference network 50(m). As with neural director 35(m), each neural director 51(m) is preferably established to provide an output local color boundary feature vector with the same or an increased dimensionality, which will aid in distinguishing between similar patterns in the input vector.
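A minimal sketch of one window difference network, assuming scalar color values and a box-shaped window (the text permits other window patterns, e.g., a star):

```python
def window_difference(color_values, x, y, radius=1):
    """One window difference network 50(m): collect the differences
    between the color value of pixel (x, y) and those of its neighbors
    in a box window on the virtual retina 55, forming a local
    color-boundary feature vector."""
    h, w = len(color_values), len(color_values[0])
    center = color_values[y][x]
    return [color_values[y + dy][x + dx] - center
            for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)
            if (dx or dy) and 0 <= y + dy < h and 0 <= x + dx < w]

# Hypothetical 3x3 virtual retina with a color edge at its right column.
virtual_retina = [[0.1, 0.1, 0.9],
                  [0.1, 0.1, 0.9],
                  [0.1, 0.1, 0.9]]
print(window_difference(virtual_retina, 1, 1))  # large right-side differences
```

The large differences toward the edge column are exactly the boundary signal a lightness-only retina would miss when the two colors have equal lightness.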

In a modification to the invention 10, each pixel can be a three transducer set 15(m). Each transducer of the set 15(m) is matched to the response of one of the human retinal color cones. The three transducer set 15(m) produces two “transducer pairs” for each pixel 15(m), and with two color processing sections 12 a color retina is produced. The retina and two parallel memory processors 16 will sense color matched to human color perception over a wide range of ambient lighting conditions.

With reference again to FIG. 1, the local color feature vectors generated by the pixel color processing section 12, the outputs of the array of color comparators 19 and the local color boundary feature vectors generated by color boundary processor 13 for all of the pixels 15(m) are coupled to the memory processing section 14. The memory processing section 14 may perform a variety of individual or combined operations in connection with the feature vectors input thereto, including object recognition and the like, based on preselected object classification patterns or the like.

The invention provides a number of advantages. In particular, the invention provides a system for receiving an image of an object and generating, for an array of pixels of the image, color and color gradient/boundary information, in the form of feature vectors, which may be processed to, for example, classify the object into one of a plurality of object classes. The system generates the color and color gradient/boundary information using only two transducers for each pixel, in accordance with a bi-chromatic color recognition scheme, with the transducers having peak responses at selected colors λ1 and λ2 and a known output profile as a function of color, instead of the non-color-constancy process produced in accordance with the tri-chromatic color recognition scheme.

It will be appreciated that numerous modifications may be made to the system 10. For example, the memory processing section 14 may perform processing in connection with comparisons generated for two images, using output color feature vectors generated either by the same color sensor 10 at two points in time, or output comparator vectors which are generated by two color sensors (the second being denoted by 11′ and 12′) for respective pixels 15(m) of the respective images. In that case, and with reference to FIG. 2, the color processing section 12, in particular the pixel color processors 20(m), may provide outputs for the two images to the respective difference circuits 60(m), 61(m) of color comparison processor 19, each of which generates a difference vector representing the difference between the difference vectors and brightness vectors generated by the color processors 12 for the respective images. The difference vectors of 60(m) and 61(m) are input to comparator feature fusion network array 62, which operates in a manner similar to feature fusion network array 23. Similar difference circuits (not shown) may also be provided for the local color boundary feature vectors generated by the color boundary processors 13 for the respective images.

In addition, the peak detector circuits 40(1), 40(2) of the common control 21 may be replaced with summing circuits that generate a sum output for controlling the CGA circuits 30(m)(1), 30(m)(2).

Preferably, the iris control 42 will generally rapidly adjust the iris in response to changes in the light intensity levels incident on the retina 15, so as to maintain the light levels incident on the transducers within a predetermined operating range. In that case, the CGA circuits 30(m)(1), 30(m)(2) may have a relatively slower response to changes in the automatic gain control signals from the control circuit 21. These differences in response will allow the slower response of normalization via the CGA circuits to maintain a steady color constancy in a scene of rapid brightness changes.
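The two loop speeds can be illustrated with two exponential-smoothing time constants, a fast one for the iris and a slow one for the CGA normalization; the constants below are assumptions:

```python
def smooth(prev, target, alpha):
    """One step of exponential smoothing; alpha near 1 responds quickly."""
    return prev + alpha * (target - prev)

iris, cga_gain = 0.5, 1.0
for brightness in (1.0, 4.0, 4.0, 4.0):  # a sudden brightness jump
    iris = smooth(iris, 1.0 / brightness, alpha=0.9)          # fast iris loop
    cga_gain = smooth(cga_gain, 1.0 / brightness, alpha=0.1)  # slow CGA loop
    print(round(iris, 3), round(cga_gain, 3))
```

The iris value settles within a step or two while the gain drifts slowly, which is how the slower normalization maintains a steady color constancy through rapid brightness changes.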

The described components of invention 10 provide the necessary components for a uniquely designed photographer's exposure and color temperature meter. A calibration of the common control network 21 provides values for exposure and color temperature data. The meter may be an independent device, i.e., a hand held meter, or it may be integrated in a camera body, either electronic or film, to provide automatic exposure and color temperature corrections. The device may also be integrated into color printers or printing presses as a color ink control.

It will be apparent that variations and modifications may be made to the invention herein described and illustrated, by those skilled in the art with the attainment of some or all of the advantages of the invention. It is also understood that the color sensor described herein may be connected to the various devices described in the referenced patent applications, wherein all the devices act in concert in a manner similar to the human eye. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.

Woodall, Roger L.

Cited By
Patent Priority Assignee Title
10665011, May 31 2019 Universite Laval Dynamically estimating lighting parameters for positions within augmented-reality scenes based on global and local features
Patent Citations
Patent Priority Assignee Title
4057708, Apr 07 1975 Motorola Inc. Minimum miss distance vector measuring system
4396903, May 29 1981 Westinghouse Electric Corp.; Westinghouse Electric Corporation Electro-optical system for correlating and integrating image data from frame-to-frame
4963981, Jun 21 1988 Hitachi, Ltd. Image sensor device capable of electronic zooming
5245672, Mar 09 1992 UNITED STATES OF AMERICA REPRESENTED BY THE SECRETARY OF COMMERCE Object/anti-object neural network segmentation
5263097, Jul 24 1991 Texas Instruments Incorporated; TEXAS INSTRUMENTS INCORPORATED, A CORPORATION OF DE Parameter normalized features for classification procedures, systems and methods
5263122, Apr 22 1991 Hughes Electronics Corporation Neural network architecture
5311600, Sep 29 1992 BOARD OF TRUSTEES OF, THE LELAND STANFORD JUNIOR UNIVERSITY Method of edge detection in optical images using neural network classifier
5440662, Dec 11 1992 GOOGLE LLC Keyword/non-keyword classification in isolated word speech recognition
5446828, Mar 18 1993 The United States of America as represented by the Secretary of the Navy Nonlinear neural network oscillator
5524065, Feb 07 1992 Canon Kabushiki Kaisha Method and apparatus for pattern recognition
5613037, Dec 21 1993 GOOGLE LLC Rejection of non-digit strings for connected digit speech recognition
5621863, Jul 28 1994 in2H2 Neuron circuit
5629870, May 31 1994 SIEMENS INDUSTRY, INC Method and apparatus for predicting electric induction machine failure during operation
5666467, Mar 03 1993 U.S. Philips Corporation Neural network using inhomogeneities in a medium as neurons and transmitting input signals as an unchannelled wave pattern through the medium
5680481, May 26 1992 Ricoh Company, LTD Facial feature extraction method and apparatus for a neural network acoustic and visual speech recognition system
5687291, Jun 27 1996 The United States of America as represented by the Secretary of the Army Method and apparatus for estimating a cognitive decision made in response to a known stimulus from the corresponding single-event evoked cerebral potential
5712959, Jul 07 1995 SORENTINO CAPITAL LLC Neural network architecture for non-Gaussian components of a mixture density function
5719480, Oct 27 1992 MINISTER OF NATIONAL DEFENCE OF HER MAJESTY S CANADIAN GOVERNMENT Parametric control device
5724487, Jul 07 1995 Neural network for maximum likelihood classification with supervised and unsupervised training capability
5745382, Aug 31 1995 Arch Development Corporation Neural network based system for equipment surveillance
5790758, Jul 07 1995 SORENTINO CAPITAL LLC Neural network architecture for gaussian components of a mixture density function
5842194, Jul 28 1995 GOVERNMENT OF JAPAN AS REPRESENTED BY THE MINISTRY OF ECONOMY, TRADE AND INDUSTRY, THE Method of recognizing images of faces or general images using fuzzy combination of multiple resolutions
5845271, Jan 26 1996 Non-algorithmically implemented artificial neural networks and components thereof
5850470, Aug 30 1995 Siemens Corporation Neural network for locating and recognizing a deformable object
5852815, Jan 26 1996 Neural network based prototyping system and method
5852816, Jan 26 1996 Neural network based database scanning system
5887087, Apr 13 1994 FUJIFILM Corporation Image reading apparatus
5974163, Dec 13 1995 NEC Corporation Fingerprint classification system
6014653, Jan 26 1996 Non-algorithmically implemented artificial neural networks and components thereof
6028608, Dec 20 1996 HANGER SOLUTIONS, LLC System and method of perception-based image generation and encoding
6038338, Feb 03 1997 NAVY, UNITED STATES OF AMERICA AS REPRESENTED BY THE SECRETARY OF THE NAVY Hybrid neural network for pattern recognition
6105015, Feb 03 1997 NAVY, THE UNITED STATES OF AMERICA, AS REPRESENTED BY THE SECRETARY OF THE Wavelet-based hybrid neurosystem for classifying a signal or an image represented by the signal in a data system
6192360, Jun 23 1998 Microsoft Technology Licensing, LLC Methods and apparatus for classifying text and for building a text classifier
6278799, Mar 10 1997 Hierarchical data matrix pattern recognition system
6301572, Dec 02 1998 Lockheed Martin Corporation Neural network based analysis system for vibration analysis and condition monitoring
6429812, Jan 27 1998 Mobile communication device
6469804, Nov 06 1997 Heidelberger Druckmaschinen AG Method of obtaining colorimetric values
6560582, Jan 05 2000 The United States of America as represented by the Secretary of the Navy Dynamic memory processor
6594382, Nov 04 1999 The United States of America as represented by the Secretary of the Navy Neural sensors
6618713, Nov 04 1999 The United States of America as represented by the Secretary of the Navy Neural directors
6694049, Aug 17 2000 The United States of America as represented by the Secretary of the Navy Multimode invariant processor
6735579, Jan 05 2000 The United States of America as represented by the Secretary of the Navy Static memory processor
6768815, May 10 2001 The United States of America as represented by the Secretary of the Navy Color sensor
6801655, May 10 2001 The United States of America as represented by the Secretary of the Navy Spatial image processor
Executed on: Apr 26 2001
Assignor: WOODALL, ROGER L.
Assignee: The United States of America as represented by the Secretary of the Navy
Conveyance: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS)
Reel/Frame: 020223/0077
Date Maintenance Fee Events
Sep 23 2011M1552: Payment of Maintenance Fee, 8th Year, Large Entity.
Dec 29 2015M1553: Payment of Maintenance Fee, 12th Year, Large Entity.


Date Maintenance Schedule
Mar 29 2014: 4 years fee payment window open
Sep 29 2014: 6 months grace period start (w surcharge)
Mar 29 2015: patent expiry (for year 4)
Mar 29 2017: 2 years to revive unintentionally abandoned end (for year 4)
Mar 29 2018: 8 years fee payment window open
Sep 29 2018: 6 months grace period start (w surcharge)
Mar 29 2019: patent expiry (for year 8)
Mar 29 2021: 2 years to revive unintentionally abandoned end (for year 8)
Mar 29 2022: 12 years fee payment window open
Sep 29 2022: 6 months grace period start (w surcharge)
Mar 29 2023: patent expiry (for year 12)
Mar 29 2025: 2 years to revive unintentionally abandoned end (for year 12)