The present invention relates to a system and method for detecting a face that is capable of quickly and correctly deciding whether an input facial image is occluded, regardless of the type of facial image inputted. The present invention is characterized in that eigenvectors and weights are extracted from the input facial image using principal component analysis (PCA) and the extracted eigenvectors and weights of the user image are substituted into an occluding-decision algorithm, whereby it can be determined whether the facial image is occluded.
1. A system for detecting a face, comprising:
a memory unit for storing eigenvectors and weights extracted from a plurality of training images;
a facial image recognition unit for extracting eigenvectors and weights of respective face components from an input facial image; and
a facial image decision unit for deriving an algorithm for deciding whether input facial images are occluded using the eigenvectors and weights of the training images stored in the memory unit, and for deciding whether the input facial image is occluded by substituting the eigenvectors and weights of the input image extracted in the facial image recognition unit into the derived algorithm,
wherein the derived algorithm is expressed as the following equation:
f(x)=sgn(Σi yiλiK(x,xi)+b)
where yi, λi and b are factors obtained from the training images, and K(x,xi) is a polynomial kernel computed from the eigenvectors and weights extracted from the input facial image, and
wherein the facial image decision unit is configured to decide the input facial image to be normal if a result value obtained by substituting the eigenvectors and weights into the decision algorithm is 1 and to decide the input facial image to be occluded if the result value is −1.
2. The system according to
3. The system according to
4. The system according to claim 1, wherein the facial image recognition unit comprises:
a monochrome part for converting an input color image into a monochrome image;
a facial image detection part for detecting a facial region from the converted monochrome image;
a facial image normalization part for normalizing the detected facial region;
a facial image division part for dividing the normalized facial region into higher and lower regions; and
an eigenvector/weight extraction part for extracting the eigenvectors and weights of the respective facial components using a principal component analysis (PCA) according to the divided facial regions.
5. The system according to
6. The system according to
7. The system according to
8. The system according to
9. A method for detecting a face, comprising the steps of:
(a) extracting eigenvectors and weights of respective facial components from an input facial image; and
(b) obtaining an occluding-decision algorithm for deciding whether input facial images are occluded using eigenvectors and weights of a plurality of training images, and deciding whether the input facial image is occluded by substituting the extracted eigenvectors and weights of the input image into the occluding-decision algorithm,
wherein the occluding-decision algorithm is expressed as the following equation:
f(x)=sgn(Σi yiλiK(x,xi)+b)
where yi, λi and b are factors obtained from the training images, and K(x,xi) is a polynomial kernel computed from the eigenvectors and weights extracted from the input facial image, and
wherein step (b) comprises the step of:
deciding the input facial image to be normal if a result value obtained by substituting the eigenvectors and weights into the decision algorithm is 1, and to be occluded if the result value is −1.
10. The method according to claim 9, wherein step (a) comprises the steps of:
(a1) converting the input facial image into a monochrome image;
(a2) detecting a facial region from the converted monochrome image;
(a3) normalizing the detected facial region;
(a4) dividing the normalized facial region into higher and lower regions; and
(a5) extracting the eigenvectors and weights of the respective facial components using principal component analysis (PCA) according to the divided facial regions.
11. The method according to
12. The method according to
13. The method according to
14. The method according to claim 9, wherein the step of obtaining the occluding-decision algorithm comprises the steps of:
extracting the eigenvectors and weights of the respective facial components from the training images in which normal and occluded facial images are included and setting values of normal and occluded facial image classes to be different from each other; and
deriving the occluding-decision algorithm using the extracted values of the image classes, eigenvectors and weights of the training images.
15. The method according to
16. The method according to claim 9, further comprising the steps of:
transmitting a warning message if it is determined that the input facial image is occluded, and deciding again whether the input facial image is occluded; and
rejecting authentication if it is determined that the input facial image is occluded three times or more.
This application claims the priority of Korean Patent Application No. 10-2002-0067974 filed on Nov. 4, 2002, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
1. Field of Invention
The present invention relates to a system and method for detecting a face, and more particularly, to a system and method for detecting a face that is capable of quickly and correctly deciding whether an input facial image is occluded, using an occluding-decision algorithm.
2. Description of the Related Art
As the information society advances, automatic teller machines have rapidly come into wide use, and financial crimes in which money is illegally withdrawn using credit cards or passwords of other persons have also increased. To deter these crimes, CCTV cameras are installed in automatic teller machines to identify criminals. However, criminals often commit these crimes while wearing sunglasses or caps so as not to be photographed by the CCTV, making it difficult to identify their faces.
Korean Patent No. 0306355 (entitled “User identification system and automatic teller machine using the same”) discloses a user identification system that identifies the face of a user by acquiring a facial image of the user, obtaining a facial region through filtering of only a skin color region, extracting an eye position from the obtained region, setting a range for the mouth and nose on the basis of the eye position, and checking whether confirmable characteristic points exist. However, because the mouth and nose positions are extracted on the basis of the extracted eye positions, it is difficult to extract the mouth and nose positions if the eye positions cannot be extracted. Further, the image data of the user is not searched; the presence of facial components (e.g., eyes, nose, mouth, and the like) is merely checked to identify the user. Thus, the face of a user cannot be correctly detected.
In addition, Korean Patent No. 0293897 (entitled “Method for recognizing face of user of bank transaction system”) discloses a method for recognizing the face of a user, which comprises the steps of determining facial candidate entities matching an input user image using chain tracking, extracting contour points and comparing brightness values of the contour points to search for graphics corresponding to the eyes and mouth of the user, calculating a recognition index for the face, extracting only a single facial candidate entity, and comparing the recognition index for the face of the extracted facial candidate entity with a reference recognition index. However, the presence of the eyes and mouth is determined according to whether their contour points have been extracted. Thus, there is a possibility of misrecognizing sunglasses as eyes in a case where the sunglasses a user wears are similar in shape and size to the user's eyes.
Furthermore, since conventional face detection technology uses color images, it is difficult to detect feature points for the nose and mouth of the user under varying illumination. Consequently, these feature points may not be detected at all, and a legal user may be misrecognized as an illegal user.
The present invention is conceived to solve the aforementioned problems. It is an object of the present invention to provide a system and method for detecting the face of a user that is capable of quickly and correctly deciding whether an input facial image is occluded, even though a variety of facial images under different conditions are inputted.
According to an aspect of the present invention for achieving the object, there is provided a system and method for detecting the face of a user, wherein it can be determined whether a facial image is occluded by extracting eigenvectors and weights from an input facial image of the user using PCA (Principal Component Analysis) and applying SVM (Support Vector Machines) to derive an algorithm for determining whether the image is occluded.
The above and other objects and features of the present invention will become apparent from the following description of an embodiment given in conjunction with the accompanying drawings, in which:
An embodiment of the present invention will now be described with reference to the accompanying drawings.
The facial image recognition unit 200 extracts eigenvectors and weights from an input facial image using PCA so that normal and occluded faces can be classified using Support Vector Machines (SVM). It comprises a monochrome part 210, a facial image detection part 220, a facial image normalization part 230, a facial image division part 240, and an eigenvector/weight extraction part 250.
The monochrome part 210 converts an input color image into a monochrome image. The reason for this is that, since color and brightness components are mixed together in a color image configured in the RGB (Red, Green, Blue) mode, error due to brightness changes may be generated upon the extraction of the eigenvectors.
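For illustration only, a minimal sketch of such a conversion in NumPy (the BT.601 luma weights are an assumption; the patent does not specify the conversion coefficients):

```python
import numpy as np

def to_monochrome(rgb):
    """Convert an H x W x 3 RGB image to a single luminance channel,
    so that brightness changes do not mix with color information
    during eigenvector extraction."""
    # BT.601 luma weights are assumed; the patent does not specify
    # the conversion coefficients.
    weights = np.array([0.299, 0.587, 0.114])
    return rgb.astype(np.float64) @ weights
```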
The facial image detection part 220 divides the input image into background and facial regions, detecting the facial region from the input image using Gabor filters. Here, the facial region is detected by applying a set of Gabor filters having various directionalities and frequencies to the input image and then locating the facial region in accordance with their response values.
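As an illustrative sketch (the filter parameters below are assumptions, not the patent's values), such a bank of Gabor filters can be built and applied with OpenCV:

```python
import cv2
import numpy as np

def gabor_response_map(gray):
    """Apply a bank of Gabor filters with several orientations and
    wavelengths and keep the peak response at each pixel; the facial
    region is then located from this response map."""
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):   # 4 orientations
        for lambd in (4.0, 8.0, 16.0):             # 3 spatial frequencies
            kernel = cv2.getGaborKernel((21, 21), 4.0, theta, lambd, 0.5, 0)
            responses.append(cv2.filter2D(gray, cv2.CV_64F, kernel))
    return np.max(np.stack(responses), axis=0)
```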
The facial image normalization part 230 normalizes the facial region by performing corrections for image brightness due to illumination, facial image size due to the distance from the camera, inclination of the facial image, and the like.
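A minimal sketch of such a normalization, assuming histogram equalization for the brightness correction and the 60×60 output size used later in the embodiment (inclination correction is omitted):

```python
import cv2

def normalize_face(gray, face_box):
    """Crop the detected facial region, correct brightness, and scale
    to the fixed 60 x 60 size used later in the embodiment.
    Assumes an 8-bit single-channel image; inclination correction
    is omitted for brevity."""
    x, y, w, h = face_box
    face = gray[y:y + h, x:x + w]
    face = cv2.equalizeHist(face)        # assumed brightness correction
    return cv2.resize(face, (60, 60))    # size correction
```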
The facial image division part 240 divides the normalized facial region into a higher region centered on the eyes and a lower region centered on the nose and mouth. The facial region is divided into higher and lower regions in order to extract the eigenvectors of the respective facial components quickly and correctly by restricting the size of the search region for each component; if the search region is too wide, eigenvectors may be extracted from the wrong region. Further, since peripheral regions are eliminated, noise components can be reduced.
The eigenvector/weight extraction part 250 extracts eigenvectors and weights of the eyes, nose and mouth, which are major components of the face, using PCA according to the divided facial regions. Here, eigenvectors of the eyes, nose and mouth can be simultaneously extracted, because the facial region in which the eyes, nose and mouth are located is restricted upon extraction of the eigenvectors.
Hereinafter, the mathematical expressions used for extracting the eigenvectors and weights using PCA will be described.
Γ=[r1, r2, r3, . . . , rIJ−1, rIJ] (1)
Γ′=[r1′, r2′, r3′, . . . , rIJ−1′, rIJ′] (2)
where Γ in Equation (1) represents facial images having a size of I×J stored in the normal facial image class of the memory unit 100, and Γ′ in Equation (2) represents facial images having a size of I×J stored in the occluded facial image class of the memory unit 100.
Ψ=(1/N)(Γ1+Γ2+ . . . +ΓN), Ψ′=(1/M)(Γ1′+Γ2′+ . . . +ΓM′) [Formula 1]
Formula 1 is used to obtain an average facial image of the normal facial images and an average facial image of the occluded facial images, respectively. Here, N is the total number of normal facial images, and M is the total number of occluded facial images.
First, a method of extracting eigenvectors and weights based on the normal facial images will be explained.
Γ is applied to Ψ in Formula 1 so as to obtain the average facial image, and a vector Φi is then calculated by subtracting the average facial image (Ψ) from each facial image (Γi).
That is, Φi=Γi−Ψ.
Using the vector Φi calculated as such, a covariance matrix is produced in accordance with Formula 2 below.
C=(1/N)(Φ1Φ1T+Φ2Φ2T+ . . . +ΦNΦNT) [Formula 2]
Eigenvalues (λi) and eigenvectors (ui) can be calculated using Formula 2. In such a case, the eigenvalues are first calculated using the equation, Cx=λx, and the eigenvectors are then calculated.
Thereafter, the weights can be calculated using the eigenvectors calculated as such, in accordance with Formula 3 below.
wf=(Γ−Ψ)·ui [Formula 3]
Using Formula 3, the weights (wf) are calculated.
Although only the method of extracting eigenvectors and weights from the normal facial images has been described above, the method of extracting eigenvectors and weights from the partly occluded facial images is performed in the same manner. Further, eigenvectors and weights are extracted from the higher and lower regions of the facial region, respectively.
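The computation described by Formulas 1 to 3 might be sketched as follows in NumPy; this is an eigenface-style illustration, not the patent's implementation:

```python
import numpy as np

def extract_eigenvectors_and_weights(images, k):
    """images: an N x (I*J) matrix, one flattened facial region per row.
    Returns the top-k eigenvectors u_i and each image's weights along
    them, following Formulas 1 to 3."""
    psi = images.mean(axis=0)                 # Formula 1: average face
    phi = images - psi                        # Phi_i = Gamma_i - Psi
    cov = phi.T @ phi / len(images)           # Formula 2: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # solves C u = lambda u
    order = np.argsort(eigvals)[::-1][:k]     # keep the k largest eigenvalues
    u = eigvecs[:, order]                     # eigenvectors u_i
    weights = phi @ u                         # Formula 3: w_f = (Gamma - Psi) . u_i
    return u, weights
```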
The facial image decision unit 300 decides whether the input facial image is occluded, using the occluding-decision algorithm that has been obtained from the training images stored in the memory unit 100. The occluding-decision algorithm, based on Support Vector Machines, is expressed as the following Formula 4.
f(x)=sgn(Σi yiλiK(x,xi)+b) [Formula 4]
If the value obtained from the occluding-decision algorithm in Formula 4 is 1, the facial image is decided to be a normal one. On the other hand, if the value is −1, the facial image is decided to be a partly occluded one. This is because the occluding-decision algorithm has been configured by setting the normal and partly occluded facial images stored in the memory unit 100 to have class values of 1 and −1, respectively, and then training on them.
In Formula 4 above, yi, λi and b are set by substituting one set consisting of the class value of the normal training images (1 in the present invention) and their eigenvectors and weights, which are stored in the memory unit 100, and another set consisting of the class value of the partly occluded facial images (−1 in the present invention) and their eigenvectors and weights, into Formula 4, respectively. These values may vary as the training images stored in the memory unit 100 are updated.
The formula for calculating the polynomial kernel K is expressed as follows:
K(x,xi)=(x·xi)^d
where K is calculated from the inner product of x and xi (i.e., x·xi=|x||xi|cos(θ)), x is the weights, xi is the eigenvectors, and d is a constant.
Thus, the class value f(x) of a facial image, which determines whether the facial image is occluded, can be obtained by applying yi, λi and b obtained from the eigenvectors and weights of the training images, and then applying the eigenvectors and weights extracted from the facial image to the polynomial kernel K(x,xi).
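A sketch of evaluating Formula 4 with this polynomial kernel, assuming the factors yi, λi, b and the support vectors xi have already been obtained from training:

```python
import numpy as np

def poly_kernel(x, xi, d=2):
    """K(x, xi) = (x . xi)^d; the exponent d is a constant whose
    value the patent does not specify (d=2 is assumed here)."""
    return float(np.dot(x, xi)) ** d

def decide_occlusion(x, support_vectors, y, lam, b, d=2):
    """Formula 4: f(x) = sgn( sum_i y_i * lambda_i * K(x, x_i) + b ).
    Returns 1 for a normal facial image, -1 for an occluded one."""
    s = sum(yi * li * poly_kernel(x, xi, d)
            for yi, li, xi in zip(y, lam, support_vectors))
    return 1 if s + b >= 0 else -1
```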
Hereinafter, a process of calculating yi, λi and b of the occluding-decision algorithm will be described by way of example.
In a case where 5 frames of normal facial images and 8 frames of partly occluded facial images are stored in the memory unit 100, a method of deriving the factors (yi, λi and b) of the algorithm in Formula 4 is expressed as follows:
First image of the normal facial images: f(x)=1
Second image of the normal facial images: f(x)=1
. . .
Fifth image of the normal facial images: f(x)=1
First image of the partly occluded facial images: f(x)=−1
Second image of the partly occluded facial images: f(x)=−1
. . .
Eighth image of the partly occluded facial images: f(x)=−1
As described above, the class value f(x) is set to 1 for the normal facial images and to −1 for the partly occluded facial images, and the eigenvectors and weights of the respective facial images are then applied to K(x,xi) so as to calculate yi, λi and b satisfying these formulas.
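As an illustration of this training step, the following sketch uses scikit-learn's SVC (an assumed tool; the patent names no particular SVM implementation) with placeholder features; its dual coefficients correspond to the products yiλi and its intercept to b:

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder features standing in for the eigenvector weights of
# 5 normal and 8 partly occluded training images.
rng = np.random.default_rng(0)
normal_weights = rng.normal(size=(5, 10))
occluded_weights = rng.normal(size=(8, 10))

X = np.vstack([normal_weights, occluded_weights])
y = np.array([1] * 5 + [-1] * 8)            # class values 1 and -1

# gamma=1.0 and coef0=0.0 make the kernel exactly (x . xi)^d.
clf = SVC(kernel="poly", degree=2, gamma=1.0, coef0=0.0)
clf.fit(X, y)

# clf.dual_coef_ corresponds to the products y_i * lambda_i,
# clf.support_vectors_ to the x_i, and clf.intercept_ to b.
print(clf.predict(X[:1]))                   # class of the first training image
```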
Therefore, it can be correctly determined whether the facial images are occluded, by extracting eigenvectors and weights from the input facial images and substituting the extracted eigenvectors and weights into the occluding-decision algorithm.
A facial region is detected from an input training image (S150). Here, the input training images include normal facial images to which illumination changes, facial expression changes, beards, scaling, shift and rotation changes are applied, and partly occluded facial images to which illumination changes, expression changes, scaling, shift and rotation changes are applied.
The detected facial region is normalized (S152), and the normalized facial region is then divided into a higher region centered on the eyes and a lower region centered on the nose and mouth (S154). Eigenvectors and weights of the respective facial components are extracted using PCA according to the divided facial regions and are stored in the memory unit 100 (S156). Since the extraction of the eigenvectors and weights from the facial region has been illustrated in detail with reference to Formulas 1 to 3, the detailed explanation thereof will be omitted.
Next, the factors of the algorithm for determining whether the face is occluded are set on the basis of the stored eigenvectors and weights (S158).
Steps S150 to S158 serve to derive the algorithm for determining whether the face is occluded. After the occluding-decision algorithm has been derived, steps S150 to S158 are not performed any longer.
Hereinafter, the process of determining whether an input user image is occluded will be discussed as follows.
If a user image is inputted (S100), the input color image is converted into a monochrome image (S102) and a facial region is then detected from the monochrome image using the Gabor filter response (S104). Thus, the use of a monochrome facial image can reduce improper recognition problems involved with facial color due to makeup, skin color, etc.
Then, the size, brightness, inclination and the like of the facial image are corrected, and the facial region is thereby normalized (S106). The normalized facial region is divided into a higher region centered on the eyes and a lower region centered on the nose and mouth (S108). Consequently, the sizes of the search regions for the respective facial components (the eyes, nose and mouth) are limited.
Preferably, an entire facial image (320×240) is normalized to 60×60; in the normalized facial image, an eye region is set to (0,5) to (60,30), i.e., 60×25, and a nose-and-mouth region is set to (0,25) to (60,60), i.e., 60×35.
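These coordinates translate directly into array slices; a sketch assuming row-major [row, column] indexing of the normalized image:

```python
def divide_face(face60):
    """Split the normalized 60 x 60 face (indexed [row, column]) into
    the higher region (0,5)-(60,30) centered on the eyes and the lower
    region (0,25)-(60,60) centered on the nose and mouth."""
    higher = face60[5:30, 0:60]    # 60 x 25 eye region
    lower = face60[25:60, 0:60]    # 60 x 35 nose-and-mouth region
    return higher, lower
```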
Furthermore, since the respective facial components are extracted from the restricted region, the size of the search region is limited. Thus, time needed for extracting the eigenvectors of the facial components can be reduced and noise components such as hair and background can also be reduced.
Using PCA on the basis of the divided facial regions, the eigenvectors and weights of the eyes are extracted in the higher region and the eigenvectors and weights of the nose and mouth are similarly extracted in the lower region (S110).
Next, the extracted eigenvectors and weights of the user image are applied to the occluding-decision algorithm that has been derived in steps S150 to S158 (S112). That is, the detected eigenvectors and weights of the user image are applied to K(x,xi), and the class values f(x) are then calculated using the values of yi, λi and b obtained from the eigenvectors and weights of the training images. Here, the class value f(x) is obtained for the higher and lower regions, respectively.
Thereafter, it is determined whether the class values f(x) of the higher and lower regions are 1 or −1, in order to determine whether the facial image is occluded (S114 and S118). Thus, if the value obtained through the occluding-decision algorithm is 1, it is determined that the facial image is normal (S116). On the other hand, if the value obtained through the algorithm is −1, it is determined that the facial image is partly occluded (S120).
In the meantime, the determination of whether the facial image is occluded is performed simultaneously for the higher and lower regions of the user image. If it is determined that either of the higher and lower regions contains an occluded facial image, the user's facial image is determined to be occluded. Since it is simultaneously determined whether the higher and lower regions are occluded, the overall determination can be made more quickly.
Accordingly, if the facial image of the user is determined to be normal, user authentication is performed (S202). If it is determined that the facial image of the user is occluded, a warning message is transmitted to the user (S206), and the operations of detecting the user's facial image and deciding whether it is occluded are performed again (S200). When it is determined three or more times that the facial image of the user is occluded (S208), user authentication is rejected (S210).
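A sketch of this decision-and-authentication flow, reusing the `divide_face` and `decide_occlusion` sketches above; `weights_of` (PCA projection of a region) and `capture` (image acquisition) are assumed helpers, not parts of the patent:

```python
def is_occluded(face60, model):
    """The face counts as occluded if either the higher or the lower
    region is classified as -1 by the occluding-decision algorithm."""
    higher, lower = divide_face(face60)
    # weights_of: assumed helper projecting a region onto its eigenvectors
    return (decide_occlusion(weights_of(higher), *model) == -1 or
            decide_occlusion(weights_of(lower), *model) == -1)

def authenticate(capture, model, max_attempts=3):
    """Warn and retry on occlusion; reject authentication once the
    face has been judged occluded three times (S200 to S210)."""
    for _ in range(max_attempts):
        if not is_occluded(capture(), model):
            return True        # normal face: proceed with authentication
        print("Warning: the facial image appears occluded; please try again.")
    return False               # occluded three times or more: reject
```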
Tables 1 and 2 below show examples in which the performance of the algorithm for deciding whether a facial image is occluded according to the present invention is tested using the memory unit 100, in which 3200 normal facial images, 2900 facial images of users wearing sunglasses, 4500 facial images of users wearing masks and mufflers, and additional images without any facial images are stored.
Table 1 represents test results performed by extracting 100 eigenvectors respectively from the normal facial image class and the occluded facial image class.
TABLE 1
                Number of Support Vectors   Number of Errors Occurring
Higher Region             250                            0
Lower Region              266                            1
That is, the determination of whether a facial image is occluded has been made on the basis of more than 200 support vectors extracted for each region. As a result, it can be seen that a low error generation rate is obtained.
Table 2 below shows the results of the same test performed on occluded facial images, under conditions where the determination for normal images achieves a 98% search rate. The test covers 370 facial images of users wearing sunglasses (i.e., the higher region) and 390 facial images of users wearing mufflers (i.e., the lower region).
TABLE 2
                Search Rate   Improper Recognition Rate
Higher Region      95.2%               2.4%
Lower Region       98.8%               0%
As a result of the test, it can be seen that the determination of whether a user's facial region is occluded is made at a search rate of 95% or above, and that improper recognition due to occlusion of a user's facial features is extremely low.
Since the training images include facial images under virtually all conditions that may be produced, an occluding-decision algorithm that can be employed for facial images under various conditions can be derived. Thus, a high search rate can be obtained when determining whether the facial image of the user is occluded.
According to the present invention constructed as such, there is an advantage in that it can be correctly and quickly determined through the occluding-decision algorithm whether input facial images of the users are occluded, even though a variety of facial images are inputted.
Further, there are advantages in that since the input facial image is divided into a higher region and a lower region to restrict the search regions of the respective facial components, processing time needed for extracting eigenvectors and weights of the respective facial components can be reduced, noise components such as hair and background in the images can also be reduced, and eigenvectors and weights of the eyes, nose and mouth in the respective relevant regions can be simultaneously extracted.
Furthermore, there is another advantage in that any influence of the background or illumination can be reduced because of the use of monochrome images, and thus, improper recognition problems, which may occur upon extraction of the eigenvectors, can also be reduced.
Although the present invention has been described in connection with the preferred embodiment thereof shown in the accompanying drawings, they are mere examples of the present invention. It can also be understood by those skilled in the art that various changes and modifications thereof can be made thereto without departing from the scope and spirit of the present invention defined by the claims. Therefore, the true scope of the present invention should be defined by the technical spirit of the appended claims.
Inventors: Kee, Seok-cheol; Yoon, Sang-Min