An image-capturing apparatus for capturing an image by using a solid-state image-capturing device may include a face detector configured to detect a face of a human being on the basis of an image signal in a period until an image signal obtained by image capturing is recorded on a recording medium; an expression evaluation section configured to evaluate the expression of the detected face and to compute an expression evaluation value indicating the degree to which the detected face is close to a specific expression in relation to expressions other than the specific expression; and a notification section configured to notify an image-captured person of notification information corresponding to the computed expression evaluation value.
1. An image-capturing apparatus for capturing an image using a solid-state image-capturing device, the image-capturing apparatus comprising:
a face detector configured to detect a face of a human being based on an image signal in a period until an image signal obtained by image capturing is recorded on a recording medium; and
an expression evaluation section configured to evaluate the expression of the detected face and to compute an expression evaluation value indicating the degree to which the detected face is closer to a specific expression than to expressions other than the specific expression;
wherein the expression evaluation section has pre-stored therein information on a determination axis obtained by performing linear discriminant analysis based on a group of sample images of faces each corresponding to one of the specific expression and expressions other than the specific expression, and computes the expression evaluation value based on the magnitude of projection components of a vector based on an image signal of the face detected by the face detector with respect to a vector on the determination axis, and
the determination axis is computed based on data such that data of a vector based on the sample images is dimension-compressed by principal component analysis, whereas the magnitude of projection components of a vector based on an image signal is computed without performing principal component analysis.
3. An image-capturing apparatus for capturing an image using a solid-state image-capturing device, the image-capturing apparatus comprising:
a face detector configured to detect a face of a human being based on an image signal in a period until an image signal obtained by image capturing is recorded on a recording medium; and
an expression evaluation section configured to evaluate the expression of the detected face and to compute an expression evaluation value indicating the degree to which the detected face is closer to a specific expression than to expressions other than the specific expression;
wherein the expression evaluation section has pre-stored therein information on a determination axis obtained by performing linear discriminant analysis based on a group of sample images of faces each corresponding to one of the specific expression and expressions other than the specific expression, and computes the expression evaluation value based on the magnitude of projection components of a vector based on an image signal of the face detected by the face detector with respect to a vector on the determination axis,
the determination axis is computed based on data such that data of a vector based on the sample images is dimension-compressed by principal component analysis, and
the expression evaluation section has pre-stored therein data of a vector in the determination axis as data of a vector having a dimension before being dimension-compressed, and computes the expression evaluation value by computing an inner product of the vector having a dimension before being dimension-compressed and a vector based on an image signal of the face detected by the face detector.
5. An image-capturing apparatus for capturing an image using a solid-state image-capturing device, the image-capturing apparatus comprising:
a face detector configured to detect a face of a human being based on an image signal in a period until an image signal obtained by image capturing is recorded on a recording medium; and
an expression evaluation section configured to evaluate the expression of the detected face and to compute an expression evaluation value indicating the degree to which the detected face is closer to a specific expression than to expressions other than the specific expression;
wherein the expression evaluation section has pre-stored therein information on a determination axis obtained by performing linear discriminant analysis based on a group of sample images of faces each corresponding to one of the specific expression and expressions other than the specific expression, and computes the expression evaluation value based on the magnitude of projection components of a vector based on an image signal of the face detected by the face detector with respect to a vector on the determination axis,
the determination axis is computed based on data such that data of a vector based on the sample images is dimension-compressed by principal component analysis, and
the expression evaluation section normalizes the image signal of the face detected by the face detector to an image size of the sample image used when the determination axis was determined, and computes the expression evaluation value using the image signal after the normalization.
2. The image-capturing apparatus according to
4. The image-capturing apparatus according to
6. The image-capturing apparatus according to
7. The image-capturing apparatus according to
This application is a divisional of U.S. application Ser. No. 11/881,989, filed on Jul. 30, 2007, which claims priority from Japanese Patent Application No. JP 2006-211000 filed in the Japanese Patent Office on Aug. 2, 2006, the entire content of which is incorporated herein by reference.
1. Field of the Invention
The present invention relates to an image-capturing apparatus and method for capturing an image by using a solid-state image-capturing device, to an expression evaluation apparatus for evaluating the expression of an image-captured face, and to a program for performing the processing thereof.
2. Description of the Related Art
In image-capturing apparatuses, a so-called self-timer function, which automatically releases the shutter after a fixed period of time has elapsed from when the shutter button is depressed, is typically installed not only in silver-halide cameras but also in digital still cameras. However, the timing at which the shutter is released by the self-timer function is determined in advance. Therefore, it is difficult to guarantee that the person being image-captured has a desired expression on their face at the time the shutter is released, and there is a problem in that unsatisfactory photographs may often be taken.
On the other hand, in recent years, image processing technologies for performing digital computation processing on the basis of an image signal have progressed rapidly, and as an example thereof, a technology for detecting the face of a human being from an image is known. In one known face detection technique, for example, the difference in luminance between two pixels in a face image is learnt as a feature amount, an estimated value indicating whether or not a predetermined region in an input image is a face is computed on the basis of the feature amount, and whether or not the image in the region is a face is finally determined on the basis of one or more such estimated values (refer to, for example, Japanese Unexamined Patent Application Publication No. 2005-157679, Paragraph Nos. [0040] to [0052], FIG. 1).
The development of such face detection technologies has progressed to a level at which they can be installed into a digital image-capturing apparatus that performs image capturing using a solid-state image-capturing device, such as a digital still camera. Recently, furthermore, technology for determining the expression of a detected face has attracted attention. It has been considered that, for example, the expression of the face of an image-captured person could be evaluated for each captured image in an image signal in which a plurality of frames are continuously captured, so that an appropriate image can be selected on the basis of those evaluations (refer to, for example, Japanese Unexamined Patent Application Publication No. 2004-46591, Paragraph Nos. [0063] to [0071], FIG. 3).
In recent years, as competition among makers of digital image-capturing apparatuses has intensified, there has been a strong demand to advance such image-capturing apparatuses in order to increase their product value. As in the problem of the above-described self-timer function, a captured image is not necessarily satisfactory to the image-capturing person and the image-captured person. Therefore, a function that assists the image-capturing operation so as to increase this degree of satisfaction is very important for increasing product value. It has been desired that, in particular, such a function be implemented using advanced image processing technology. However, a function of assisting an image-capturing operation in real time, while the image-capturing operation is being performed, has yet to be implemented.
The present invention has been made in view of such problems. It may be desirable to provide an image-capturing apparatus and method capable of capturing an image with high satisfaction for an image-captured person and an image-capturing person.
It may also be desirable to provide an expression evaluation apparatus capable of capturing an image with high satisfaction for an image-captured person or an image-capturing person, and a program for performing the processing thereof.
According to an embodiment of the present invention, there is provided an image-capturing apparatus for capturing an image by using a solid-state image-capturing device. The image-capturing apparatus may include a face detector configured to detect a face of a human being on the basis of an image signal in a period until an image signal obtained by image capturing is recorded on a recording medium; an expression evaluation section configured to evaluate the expression of the detected face and to compute an expression evaluation value indicating the degree to which the detected face is close to a specific expression in relation to expressions other than the specific expression; and a notification section configured to notify an image-captured person of notification information corresponding to the computed expression evaluation value.
In such an image-capturing apparatus, in a period of time until an image signal obtained by image capturing is recorded on a recording medium, the face detector may detect the face of a person from the image signal. The expression evaluation section may evaluate the expression of the face detected by the face detector, and compute an expression evaluation value indicating how close the face expression is to a specific expression in relation to expressions other than the specific expression. The notification section may notify the image-captured person of notification information corresponding to the computed expression evaluation value.
In the image-capturing apparatus of the present invention, in a period of time until a captured image signal obtained in image capturing is recorded on a recording medium, the face of a person may be detected from the captured image, the expression of the face may be evaluated, and an expression evaluation value indicating the degree to which the detected expression is close to a specific expression in relation to expressions other than the specific expression may be computed. Then, notification information corresponding to the expression evaluation value may be notified or supplied to the image-captured person. Therefore, it is possible to allow the image-captured person to recognize whether his/her own expression is appropriate for image capturing, and as a result, to prompt the image-captured person to form a better expression. It therefore becomes possible to reliably record, on a recording medium, an image with high satisfaction for the image-captured person and the image-capturing person.
Embodiments of the present invention will now be described below with reference to the drawings.
[First Embodiment]
The image-capturing apparatus shown in
The optical block 11 includes a lens for collecting light from a subject into the image-capturing device 12, a driving mechanism for moving the lens in order to perform focusing and zooming, a shutter mechanism, an iris mechanism, and the like. On the basis of a control signal from the microcomputer 19, the driver 11a controls driving of each mechanism inside the optical block 11.
The image-capturing device 12 is, for example, a CCD (Charge Coupled Device) type or CMOS (Complementary Metal Oxide Semiconductor) type solid-state image-capturing device, is driven on the basis of a timing signal output from the TG 12a, and converts incident light from the subject into an electrical signal. Under the control of the microcomputer 19, the TG 12a outputs a timing signal.
The AFE circuit 13 samples and holds the image signal output from the image-capturing device 12 so as to satisfactorily maintain the S/N (Signal/Noise) ratio by CDS (Correlated Double Sampling) processing, controls gain by AGC (Auto Gain Control) processing, performs A/D conversion, and outputs digital image data.
The camera signal processing circuit 14 performs, on the image data from the AFE circuit 13, detection processing for AF (Auto Focus), AE (Auto Exposure), and various other kinds of image-quality correction, as well as an image-quality correction process corresponding to a signal output from the microcomputer 19 on the basis of the detection information. As will be described later, in this embodiment, the camera signal processing circuit 14 has a face detection function and a function of extracting data of a face region.
The graphic processing circuit 15 converts image data output from the camera signal processing circuit 14 into a signal to be displayed on the display device 16, and supplies the signal to the display device 16. Furthermore, the graphic processing circuit 15 combines information, such as an expression score (to be described later), in an image in response to a request from the microcomputer 19. The display device 16 is formed of, for example, an LCD (Liquid Crystal Display), and displays an image on the basis of the image signal from the graphic processing circuit 15.
The image encoder 17 compresses and codes the image data output from the camera signal processing circuit 14 and outputs the coded data to the recording apparatus 18. More specifically, the image encoder 17 compresses and codes image data for one frame, which is processed by the camera signal processing circuit 14, in accordance with a coding method such as JPEG (Joint Photographic Experts Group), and outputs the coded data of a still image. Not only a still image but also data of a moving image may also be compressed and coded.
The recording apparatus 18 is an apparatus for recording coded data from the image encoder 17 as an image file, and is implemented, for example, as an apparatus for driving a portable recording medium such as a magnetic tape or an optical disc, or an HDD (Hard Disk Drive).
The microcomputer 19 includes a CPU (Central Processing Unit), and memory such as a ROM (Read Only Memory) and a RAM (Random Access Memory), and centrally controls the image-capturing apparatus by executing a program stored in the memory.
The input section 20 outputs, to the microcomputer 19, a control signal corresponding to operation input to various kinds of input switches by a user. As the input switches, for example, a shutter release button, cross keys used to select various kinds of menus and to set an operation mode, and others are provided.
On the basis of a control signal from the microcomputer 19, the LED light-emitting section 21 allows LEDs provided on the exterior surface of the image-capturing apparatus to be turned on. Examples of the LED include those indicating that a self-timer function is being operated.
On the basis of a control signal from the microcomputer 19, the sound output section 22 outputs sound, such as operation confirmation sound. When an audio data encoder/decoder is provided, reproduction sound when the audio data is reproduced may be output.
In this image-capturing apparatus, signals that are photoreceived and photoelectrically converted by the image-capturing device 12 are sequentially supplied to the AFE circuit 13, whereby a CDS process and an AGC process are performed, and thereafter the signals are converted into digital image data. The camera signal processing circuit 14 performs an image quality correction process on the image data supplied from the AFE circuit 13, and supplies the image data after processing to the graphic processing circuit 15, whereby the image data is converted into an image signal for display. As a result, an image (camera through image) currently being captured is displayed on the display device 16, so that the image-capturing person can confirm the angle of view by viewing the image.
In this state, when an instruction for recording an image is made to the microcomputer 19 as a result of the shutter release button of the input section 20 being depressed, the image data from the camera signal processing circuit 14 is supplied to the image encoder 17, whereby a compression and coding process is performed, and the image data is recorded by the recording apparatus 18. When a still image is to be recorded, image data for one frame is supplied from the camera signal processing circuit 14 to the image encoder 17. When a moving image is to be recorded, processed image data is continuously supplied to the image encoder 17.
Next, a description will be given of an image-capturing operation mode provided in the image-capturing apparatus. The image-capturing apparatus has a mode in which, when a still image is to be captured, the face of an image-captured person is detected from the captured image, the expression of the face is evaluated, and information indicating the degree of the evaluation is notified to the image-captured person, and a mode in which a shutter is released automatically in response to the degree of the evaluation and still image data is recorded in the recording apparatus 18. Hereinafter, the former mode will be referred to as an “expression evaluation mode”, and the latter mode will be referred to as an “expression response recording mode”.
The expression evaluation mode serves the role of evaluating the expression of a face when the face is detected from the captured image, notifying the image-captured person of information corresponding to the evaluation, and prompting the image-captured person to form an expression more appropriate for image capturing; for example, the degree to which the expression is a smile is evaluated. Furthermore, in the expression response recording mode, when the evaluation value exceeds a predetermined value, it is determined that the expression of the image-captured person's face has become appropriate for image capturing, and still image data is automatically recorded. This helps ensure that an image with a high degree of satisfaction for the image-captured person can be recorded. This embodiment has been described as having two modes, that is, an expression evaluation mode and an expression response recording mode. Alternatively, this embodiment may have only the expression response recording mode.
As shown in
At this point, the operation of each function shown in
In the expression evaluation mode, at first, on the basis of image data that is obtained by image capturing using the image-capturing device 12 and that is transmitted through the camera signal processing circuit 14, the face detector 31 detects the face of an image-captured person from the image (step S1). Then, detection information indicating the region of the detected face is output to the face image generator 32. As in this embodiment, when a notification is made by displaying information corresponding to the expression evaluation value on the display device 16, the detection information of the face from the face detector 31 is also supplied to the notification controller 42 of the microcomputer 19.
As a technique for detecting a face, a well-known technique can be used; for example, the technique disclosed in Japanese Unexamined Patent Application Publication No. 2005-157679 can be used. In this technique, first, the difference in luminance between two pixels in a face image is learnt and stored in advance as a feature amount. Then, as shown in step S1 of
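The publication describes this luminance-difference technique only at a high level, so the following is merely an illustrative stand-in: a minimal sketch using OpenCV's bundled Haar-cascade detector, which likewise scans candidate regions of the input image and accepts or rejects each as a face. The function name and parameters are ours, not the patent's.

```python
import cv2

def detect_faces(frame_bgr):
    """Return a list of (x, y, w, h) detection frames, analogous to the
    position/size information output by the face detector 31."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # detectMultiScale scans the image at several scales and keeps regions
    # that pass the cascade of feature tests.
    return list(cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5))
```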
Next, the face image generator 32 extracts data of a region Af of the detected face (step S2). Then, the face image generator 32 converts the extracted image data into image data of a fixed size, normalizes it, and supplies the image data to the expression evaluation section 41 (step S3).
At this point, in this embodiment, as examples of detection information of the face, which is output from the face detector 31, the position (for example, the coordinate of the left end, hereinafter referred to as “position information of the face”) of a detection frame of a rectangle surrounding the periphery of the face, and the size of the detection frame (for example, the number of pixels in each of the horizontal and vertical directions, hereinafter referred to as “size information of the face”) are assumed to be output. In this case, the face image generator 32 accesses the memory (RAM) in which image data for which the face is to be detected is temporarily stored, and reads only the data of the region corresponding to the position information and the size information of the face from the face detector 31.
The extracted image data is normalized by being subjected to resolution conversion to image data of a fixed size (resolution). The image size after the normalization is the size that serves as the processing unit when the expression evaluation section 41 evaluates the expression of the face. In this embodiment, as an example, the size is set at 48×48 pixels.
As the image extraction function and the resolution conversion function provided in the face image generator 32, the same functions that are typically provided for the camera signal processing circuit 14 of the related art for the purpose of detection and generation of an output image can also be used.
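A minimal sketch of the extraction and normalization performed by the face image generator 32, assuming NumPy and OpenCV; only the 48×48 target size comes from the text, and the rest is illustrative.

```python
import cv2
import numpy as np

FACE_SIZE = 48  # processing-unit size used by the expression evaluation section

def normalize_face(image_bgr, x, y, w, h):
    """Read only the region given by the face's position and size information,
    then resolution-convert it to the fixed evaluation size."""
    crop = image_bgr[y:y + h, x:x + w]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, (FACE_SIZE, FACE_SIZE)).astype(np.float64)
```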
Next, on the basis of the normalized image data of the face from the face image generator 32 and the determination axis information 44 that is stored in advance, the expression evaluation section 41 performs an operation for evaluating the degree of appropriateness of the expression of the face and outputs the expression evaluation value (step S4). The expression evaluation value indicates the degree to which the expression of the face is close to one of the two expressions. For example, as two expressions, “smile” and “usual expression” are used. The higher the expression evaluation value, the higher the degree to which the expression is estimated to be a “smile” rather than “usual expression”. The method of computing the expression evaluation value will be described later.
Next, the notification controller 42 notifies the image-captured person of information corresponding to the expression evaluation value output from the expression evaluation section 41 (step S5). For example, the information corresponding to the expression evaluation value is displayed via the graphic processing circuit 15 on the display device 16 oriented toward the image-captured person side. In this case, display may be performed such that the face being evaluated is identified within the display device 16 on the basis of the position and size information on the face supplied from the face detector 31. Differences in the expression evaluation value may also be indicated by changes in the luminance, blinking speed, or color of the LED light-emitting section 21. Alternatively, a notification may be made by outputting, via the sound output section 22, sound that differs according to the expression evaluation value.
In the following description, the expression evaluation section 41 is assumed to evaluate, as an example, the degree to which the expression of the face is a smile rather than expressionless. In this embodiment, in particular, the information corresponding to the expression evaluation value is notified to the image-captured person by displaying it on the display device 16, the display screen of which is oriented toward the image-captured person side. In
As shown in
At this point, when the mode is set to an “expression response recording mode”, the expression evaluation section 41 performs control so that, when the expression evaluation value exceeds a predetermined threshold value, the shutter is released automatically, that is, the captured image is recorded. In
As a result, when a face is detected from the captured image and the expression of the face is evaluated to be appropriate for image capturing (here, when the degree to which the face expression is close to a smile becomes high), the captured image at that time is automatically recorded. Therefore, when compared with the self-timer function of the related art (that is, the function of recording a captured image after a fixed period of time has elapsed from when the shutter release button is depressed), it becomes possible to reliably capture an image in which the image-captured person has a satisfactory expression, and it is possible to increase the degree of satisfaction of the image-captured person and the image-capturing person.
Next, a description will be given below of an example of a specific display screen for a smile score on the display device 16.
In
In the example of the screen display of
In the example of
In the expression response recording mode, the threshold value of the expression evaluation value at which a captured image is recorded may be set by the user as desired, so that the user can freely determine how close to a smile the expression of the image-captured person's face must be before a captured image is recorded. In the example of
The changing of the threshold value of the expression evaluation value is not limited to the above-described method, and may be performed from a dedicated setting screen selected from the menu screen. Alternatively, a dedicated operation key may be provided to change the threshold value of the expression evaluation value. When the display device 16 is of a touch panel type, for example, the threshold value may be changed by allowing a finger to contact a key image displayed on the display device 16. Furthermore, a threshold value may be changed by moving a finger in the side-to-side direction with the finger in contact with the boundary display icon 205 of
When a plurality of faces are detected from within the image-capture screen, an expression evaluation value may be computed for each of those faces, and information corresponding to those values may be displayed on the display device 16.
In the example of
Also, in the example of
In the manner described above, by performing notification of information corresponding to an expression evaluation value using a display device, the image-captured person can be notified in various ways: a smile score corresponding to the expression evaluation value can be displayed as a bar graph or a numeric value; the line type, color, brightness, and the like of a face display frame can be changed in accordance with the expression evaluation value; or a character prompting the image-captured person to smile can be displayed in the vicinity of the face in accordance with the expression evaluation value. In particular, in the case of a digital video camera, a notification can be made using the display device with variable display-screen orientation that such cameras already provide, so it is possible to reliably record an image with high satisfaction for the user without a large increase in development/manufacturing cost due to changes to the basic configuration of the camera.
In the foregoing, a digital video camera is used as an example of a device in which a display device with a variable display-screen orientation is installed. In some digital still cameras, the display device provided for confirming the angle of view, on the side opposite to the image-capturing lens, also has a variable screen orientation. If its display screen can be oriented toward the image-captured person side, the above-described display image can be displayed, making it possible to notify the image-captured person of information corresponding to the expression evaluation value.
Next, a description will be given of an expression evaluation method used in this image-capturing apparatus.
In this embodiment, as the method for evaluating an expression, so-called Fisher linear discriminant analysis is used. In this method, first, many sample images of faces, each having one of two expressions, are provided in advance. On the basis of the data of these sample images, the problem is treated as a two-class problem between the two expressions, and a determination axis Ad that satisfactorily discriminates the two expressions is formed in advance by linear discriminant analysis (LDA). Then, when an expression evaluation is to be made, the inner product of the input face-image data and the determination axis Ad is determined to compute the expression evaluation value.
As shown in
In this PCA process, first, axes are obtained so that the variance among the M input sample images (for example, M = 300), each treated as a vector of N dimensions (N = 48 × 48), becomes a maximum. Such axes are determined as solutions (eigenvectors) of an eigenvalue problem of the covariance matrix of the image group, and by extracting, as principal components, only the vector components having comparatively large coefficients, the data can be compressed into data of N′ dimensions (N ≫ N′) consisting only of vector components suitable for representing the features of the face. It is known that, for example, by setting N′ to approximately 40, sufficient accuracy can be maintained for the determination of the face expression. By excluding several of the largest-coefficient components from among the principal components obtained by the PCA process, the number of dimensions can be reduced further and the burden of the subsequent LDA process can be reduced while maintaining the expression determination accuracy.
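A minimal sketch of this dimension compression, assuming NumPy: M flattened sample images of N = 48 × 48 = 2304 dimensions are reduced to N′ ≈ 40 principal components. The names and the eigh-based decomposition are illustrative.

```python
import numpy as np

def pca_compress(samples, n_components=40):
    """samples: (M, 2304) array, one flattened 48x48 face per row.
    Returns the sample mean and the N' leading eigenvectors of the covariance."""
    mean = samples.mean(axis=0)
    cov = np.cov(samples - mean, rowvar=False)   # (N, N) covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    basis = eigvecs[:, ::-1][:, :n_components]   # principal axes, largest first
    return mean, basis                           # project with (x - mean) @ basis
```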
As shown in
Referring back to
In the LDA process, a determination axis is generally determined such that the inter-class variance relative to the intra-class variance of the projections onto the axis becomes a maximum. That is, the eigenvector corresponding to the maximum eigenvalue of the generalized eigenvalue problem formed by the inter-class covariance matrix R_B and the intra-class covariance matrix R_W is determined, and this eigenvector is set as the vector (Fisher vector) on the determination axis Ad. The relations between the covariance matrices and the eigenvalue and eigenvector are shown in equations (1) and (2).
$$R_B \vec{\mu} = \lambda R_W \vec{\mu} \qquad (1)$$

$$R_W^{-1} R_B \vec{\mu} = \lambda \vec{\mu} \qquad (2)$$
For the computation of the inverse matrix, the eigenvalue, and the eigenvector on the left side of equation (2), an LU (Lower-Upper) decomposition method, a QR decomposition method (Q: orthogonal matrix, R: upper triangular matrix), and a Gaussian elimination method can be used, respectively. The expression evaluation section 41 prestores information, such as the coefficient of each component of the Fisher vector, as information (determination axis information 44) on the determination axis Ad obtained in the above-described manner, in a ROM or the like.
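A minimal sketch of solving equation (2) numerically, assuming SciPy. Instead of explicitly forming the inverse, `scipy.linalg.eigh` solves the generalized symmetric eigenproblem of equation (1) directly; the small ridge term keeping R_W positive definite is our addition, not the patent's.

```python
import numpy as np
from scipy.linalg import eigh

def fisher_axis(class_a, class_b):
    """class_a, class_b: (M_i, N') PCA-projected sample groups, e.g. smiles
    and usual expressions. Returns the Fisher vector, i.e. the eigenvector
    of R_W^{-1} R_B with the largest eigenvalue."""
    mu_a, mu_b = class_a.mean(axis=0), class_b.mean(axis=0)
    d = (mu_a - mu_b)[:, None]
    r_b = d @ d.T                                    # inter-class scatter
    r_w = np.cov(class_a, rowvar=False) * (len(class_a) - 1) \
        + np.cov(class_b, rowvar=False) * (len(class_b) - 1)  # intra-class scatter
    r_w += 1e-6 * np.eye(r_w.shape[0])               # keep R_W positive definite
    _, vecs = eigh(r_b, r_w)                         # solves R_B u = lambda R_W u
    return vecs[:, -1]                               # eigenvalues ascend: take last
```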
The basic procedure of expression determination using the above-described determination axis Ad is as follows. First, the image data of a face detected from a captured image is subjected to a PCA process, and the principal components are extracted. Then, the expression of the face image, as shown in the PCA space S_PCA of
The information on the Fisher vector can also be converted into information in the pixel space S_pxl (the space with the dimensionality possessed by the original image data before the PCA process). Equation (3) shows an input face image vector as a vector in the pixel space S_pxl, and equation (4) shows the Fisher vector as a vector in the pixel space S_pxl.
In equation (5-2), since the result of the subtraction between the Fisher vector components and the constant C in the pixel space S_pxl can be computed in advance, the expression evaluation section 41 stores that subtraction result and the constant C in advance as the determination axis information 44. Then, when the vector of the face image detected from the captured image is given, the inner-product computation of equation (5-2) is performed without performing a PCA process on the vector. In the evaluation-value computation for one face using equation (5-2), subtractions, multiplications, and additions are each performed at most 48 × 48 times. Moreover, in practice, only the coefficients corresponding to the 40 or so principal components μ_1 to μ_N′ are computed. Therefore, compared with performing the inner-product computation of vectors in the PCA space S_PCA, the number of computations can be greatly reduced without decreasing the accuracy of expression evaluation, and the expression evaluation value E_exp can easily be computed in real time, while the angle of view is being adjusted, before the captured image is recorded.
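Since the bodies of equations (3) through (5-2) are not reproduced in this text, their exact form is assumed here. A minimal sketch of the real-time evaluation: the determination axis is pre-stored as a pixel-space vector together with the constant C, so each frame needs only one inner product.

```python
import numpy as np

def evaluate_expression(face_48x48, fisher_pxl, constant_c):
    """face_48x48: normalized face image; fisher_pxl: the determination-axis
    vector pre-converted into the 2304-dimensional pixel space S_pxl.
    No PCA process is performed on the input (assumed form of equation (5-2))."""
    f = face_48x48.ravel()                    # face image as a pixel-space vector
    return float(f @ fisher_pxl - constant_c)
```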
With such a computation method, even when compared with, for example, evaluating an expression by matching the detected face image against templates of many face images, expression evaluation can be performed with a low processing load and with high accuracy. When matching using templates is performed, it is usually necessary to further extract parts, such as the eyes or the mouth, from the detected face image and to perform a matching process for each part. In the method of this embodiment, by contrast, after the data of the detected face image is normalized to a fixed size, the face image is replaced with vector information and can be applied to the inner-product computation as it is (or with only a partial mask applied), and the inner-product computation reduces to a simple computation composed of subtractions, multiplications, and additions of approximately 40 dimensions, as described above.
In this embodiment, as an example, on the basis of the result of the PCA process for the sample images, the average of each of the distributions of the smile and usual-expression face images in the PCA space is determined, and the projection points of these averages onto the determination axis Ad are determined in advance. Then, using the middle point between the projection points of the two averages as a reference, the expression evaluation value E_exp is converted into a numeric value. That is, as shown in
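The figure showing this numeric conversion is not reproduced, so the exact scaling is assumed; the sketch below maps the midpoint between the two class-average projections to 0, the smile-side average to +1, and the usual-expression average to -1.

```python
def scale_evaluation(projection, smile_avg_proj, usual_avg_proj):
    """Assumed linear conversion of a raw projection onto the determination
    axis Ad into an expression evaluation value E_exp."""
    mid = 0.5 * (smile_avg_proj + usual_avg_proj)       # reference point
    half_span = 0.5 * (smile_avg_proj - usual_avg_proj)
    return (projection - mid) / half_span
```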
Next, a description will be given of the processing procedure of an image-capturing apparatus operating in the expression response recording mode, the processing procedure being summarized in a flowchart.
[Step S11] The face detector 31 detects a face from the data of a captured image and outputs the position information and the size information of all the detected faces to the face image generator 32 and the notification controller 42.
[Step S12] On the basis of the position information and the size information of the face from the face detector 31, the face image generator 32 extracts data in a region of each face detected from the data of the captured image.
[Step S13] The face image generator 32 normalizes the data of each of the extracted face regions into data of a predetermined number of pixels (here, 48×48 pixels), applies masking to a region for which an expression does not need to be detected, and outputs the image data after processing to the expression evaluation section 41.
[Step S14] The expression evaluation section 41 reads the determination axis information 44, and computes the inner product of the vector obtained from one face image supplied from the face image generator 32 and the vector components of the determination axis in order to compute an expression evaluation value. The computed expression evaluation value is, for example, temporarily stored in a RAM or the like.
[Step S15] The expression evaluation section 41 determines whether or not the expression evaluation process has been performed on all the detected faces. When all the detected faces have not been processed, step S14 is performed again on another face, and when all the detected faces have been processed, step S16 is performed.
[Step S16] On the basis of the expression evaluation value computed in step S15 and the position information and the size information of the face corresponding to the expression evaluation value, the notification controller 42 outputs the expression information such as the smile score and the display frame to the graphic processing circuit 15 and displays them in such a manner as to be combined on the display device 16.
[Step S17] The expression evaluation section 41 determines whether or not the expression evaluation values for all the faces, which were computed in step S14, exceed the threshold value. When there is an expression evaluation value that does not exceed the threshold value, the process returns to step S11, where the expression evaluation section 41 instructs the camera signal processing circuit 14 to detect a face, thereby starting the next face detection and expression evaluation. When all the expression evaluation values exceed the threshold value, step S18 is performed.
[Step S18] The expression evaluation section 41 requests the recording operation controller 43 to record the data of the captured image in the recording apparatus 18. As a result, a recording process is performed on the captured image, and the coded image data after the processing is recorded in the recording apparatus 18.
As a result of the above processing, expression evaluation values are computed for all the detected faces, and information corresponding to the expression evaluation value is notified as display information to the image-captured person, thereby making it possible to prompt the image-captured person to form an expression appropriate for image capturing. When all the image-captured persons have formed expressions appropriate for image capturing, the data of the captured image is automatically recorded. Therefore, it is possible to reliably record an image with a high degree of satisfaction for the image-captured person and the image-capturing person.
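Tying steps S11 through S18 together, a hedged sketch of the expression-response recording loop, reusing the helper functions sketched earlier; `display_scores` stands in for the notification path through the graphic processing circuit 15 and is hypothetical.

```python
def expression_response_loop(capture_frame, record_frame, display_scores,
                             fisher_pxl, constant_c, threshold):
    """One illustrative pass per captured frame, mirroring steps S11-S18."""
    while True:
        frame = capture_frame()                        # camera-through image
        faces = detect_faces(frame)                    # S11: face detector 31
        scores = [evaluate_expression(normalize_face(frame, x, y, w, h),
                                      fisher_pxl, constant_c)
                  for (x, y, w, h) in faces]           # S12-S15
        display_scores(frame, faces, scores)           # S16: notification
        if scores and all(s > threshold for s in scores):   # S17
            record_frame(frame)                        # S18: record captured image
            return
```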
The determination criterion in step S17 is only an example; control need not necessarily be performed such that image data is recorded only when all the expression evaluation values exceed the threshold value. For example, image data may be recorded when the expression evaluation values of a fixed ratio of the detected faces exceed the threshold value. Alternatively, image data may be recorded when the expression evaluation values of a fixed number of faces exceed the threshold value, so that expression evaluation is prevented from being held up by an unwanted, inadvertently image-captured face.
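A minimal sketch of the step S17 decision covering both criteria, with illustrative names; `required_ratio=1.0` reproduces the all-faces criterion, and smaller values give the fixed-ratio variant just described.

```python
def should_record(evaluation_values, threshold, required_ratio=1.0):
    """Record when a sufficient fraction of detected faces exceeds the threshold."""
    if not evaluation_values:
        return False
    passing = sum(1 for v in evaluation_values if v > threshold)
    return passing / len(evaluation_values) >= required_ratio
```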
In the expression response recording mode, a captured image is automatically recorded when an expression evaluation value exceeds a predetermined threshold value. In addition, for example, when the shutter release button is depressed by the image-capturing person, the expression of the image-captured person may be evaluated after a fixed period of time has elapsed, and the captured image may be recorded automatically when the image-captured person forms an expression appropriate for image capturing. In this case, for example, when the depression of the shutter release button is detected, the microcomputer 19 need only start counting time and start the processing illustrated in
In the foregoing description, two expressions, a "smile" and a "usual expression", are defined, and the degree to which an expression is close to a smile is evaluated. Alternatively, a determination may be made as to where an expression lies between a "smile" and expressions other than a smile. Expressions other than a smile may include a plurality of expressions, such as a serious expression, a weeping expression, and an angry expression. In this case, a group of such expressions, hereafter "non-smiles", is determined on the basis of the average of sample images of faces corresponding to the plurality of expressions, and a determination axis for an LDA process is computed on the basis of the group of "non-smiles" and the group of "smiles".
Furthermore, the expression evaluation value does not necessarily need to be a measure of closeness to one particular expression, such as a “smile”. For example, by considering a plurality of specific expressions, such as a “smile” and a “serious expression”, to be expressions appropriate for image capturing, the expression evaluation value may indicate how close the expression is to any of the plurality of expressions. In this case, also, a group of “expressions appropriate for image capturing” may be determined on the basis of the average of sample images of faces corresponding to the plurality of expressions, and a determination axis for an LDA process may be computed on the basis of the group of “expressions appropriate for image capturing” and the group of “expressions inappropriate for image capturing”.
[Second Embodiment]
In this embodiment, information corresponding to an expression evaluation value is notified to an image-captured person by using a part of the LED light-emitting section 21 within the configuration illustrated in
Furthermore, in the LED light-emitting section 21a, the LED (the LED 21f in
[Third Embodiment]
In an image-capturing apparatus 120 shown in
When the image-capturing apparatus 120 has a self-timer function of the related art, an LED used when the self-timer operates can also be used in the operation of evaluating an expression. For example, when the self-timer operates, the blinking speed of the LED is gradually increased as time passes from when the shutter release button is depressed until recording is performed. Then, in the expression evaluation mode and in the expression response recording mode, the higher the expression evaluation value, the more the blinking speed of the LED is increased. With such a configuration, it is possible to notify the image-captured person of information corresponding to the expression evaluation value without changing the basic configuration and the exterior of the image-capturing apparatus of the related art. The double-function light-emitting section is not limited to the function of a self-timer section and can also be used as a light-measuring light-emitting section during exposure control. However, in this case, it is necessary for the section to be capable of emitting visible light at least at the time of expression evaluation.
[Fourth Embodiment]
In each of the above-described embodiments, information corresponding to an expression evaluation value is notified visually. In contrast, in this embodiment, information corresponding to an expression evaluation value is notified as sound by using the sound output section 22 shown in
[Fifth Embodiment]
The expression evaluation function, the function of notifying information on the basis of an expression evaluation value, and the function of automatically recording an image on the basis of an expression evaluation value in each of the above-described embodiments can also be implemented in various kinds of computers as in a PC 140 shown in
In such a computer, the above-described functions are implemented in the computer by the computer executing a program describing processing content of each of the functions. The program describing the processing content can be recorded in advance on a computer-readable recording medium. Examples of computer-readable recording media include a magnetic disk, an optical disc, a magneto-optical recording medium, and a semiconductor memory.
When this program is to be distributed, for example, portable recording media, such as optical discs on which programs are recorded, are sold. Alternatively, a program can be stored in advance in a storage apparatus of a server computer, and a program can also be transferred from a server computer to another computer.
The computer that executes the program stores, for example, the program recorded on a portable recording medium or the program transferred from the server computer in its own storage apparatus. The computer then reads the program from its storage apparatus and performs processing in accordance with it. The computer can also read the program directly from the portable recording medium and perform processing in accordance with it, or perform processing in accordance with a received program each time a program is transferred from the server computer.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims.
References Cited
- U.S. Pat. No. 5,659,624 (priority Sep 1, 1995), Key Technology, Inc., "High speed mass flow food sorting apparatus for optically inspecting and sorting bulk food products"
- U.S. Pat. No. 6,684,130 (priority Oct 11, 2000), Sony Corporation, "Robot apparatus and its control method"
- U.S. Pat. No. 7,203,346 (priority Apr 27, 2002), Samsung Electronics Co., Ltd., "Face recognition method and apparatus using component-based face descriptor"
- U.S. Pat. No. 7,652,695 (priority Oct 15, 2004), RPX Corporation, "System and a method for improving the captured images of digital still cameras"
- U.S. Pat. No. 7,667,736 (priority Feb 11, 2005), Hewlett-Packard Development Company, L.P., "Optimized string table loading during imaging device initialization"
- U.S. Pat. No. 7,734,087 (priority Dec 4, 2003), Samsung Electronics Co., Ltd., "Face recognition apparatus and method using PCA learning per subgroup"
- U.S. Pat. No. 7,856,171 (priority Feb 4, 2005), Samsung Electronics Co., Ltd., "Method and apparatus for expressing amount of camera motion"
- U.S. Pat. No. 7,889,886 (priority Jul 26, 2005), Canon Kabushiki Kaisha, "Image capturing apparatus and image capturing method"
- U.S. Patent Application Publication Nos. 2005/0141063, 2005/0201594, 2005/0201595, 2005/0280809, 2006/0110014, 2006/0115157, 2007/0025722
- CN 101163199; EP 1650711
- JP 2003-219218; JP 2004-046591; JP 2004-294498; JP 2005-157679; JP 4197019
- WO 2005/008593