An object area tracking apparatus has: a face detection unit for detecting a face area from a supplied image on the basis of a feature amount of a face; a person's body detection unit for detecting an area of a person's body on the basis of a feature amount of the person's body; and a main object determination unit for obtaining a priority for each object by using the detection results of the face detection unit and the person's body detection unit and determining a main object having a high priority. For an object detected only by the person's body detection unit, the priority is changed in accordance with a past detection result of that object by the face detection unit.
1. An object area tracking apparatus for detecting a target object area from a supplied image and tracking the detected object area, comprising:
a first detection unit configured to detect a predetermined object area from the image on the basis of a first feature amount;
a second detection unit configured to detect a predetermined object area from the image on the basis of a second feature amount different from the first feature amount; and
a main object determination unit configured to obtain a priority for each of the objects by using a detection result of the first detection unit and a detection result of the second detection unit and to determine a main object having a high priority from among the objects,
wherein, for an object which is not detected by the first detection unit but is detected by the second detection unit, the main object determination unit changes the priority in accordance with a past detection result of the object by the first detection unit.
11. A control method of an object area tracking apparatus for detecting a target object area from a supplied image and tracking the detected object area, comprising:
a first detection step of detecting a predetermined object area from the image on the basis of a first feature amount;
a second detection step of detecting a predetermined object area from the image on the basis of a second feature amount different from the first feature amount; and
a main object determination step of obtaining a priority for each of the objects by using a detection result in the first detection step and a detection result in the second detection step and determining a main object having a high priority from among the objects,
wherein, in the main object determination step, for an object which is not detected in the first detection step but is detected in the second detection step, the priority is changed in accordance with a past detection result of the object in the first detection step.
12. A non-transitory computer-readable storage medium storing a program for causing a computer to execute a control method of an object area tracking apparatus for detecting a target object area from a supplied image and tracking the detected object area,
the control method comprising:
a first detection step of detecting a predetermined object area from the image on the basis of a first feature amount;
a second detection step of detecting a predetermined object area from the image on the basis of a second feature amount different from the first feature amount; and
a main object determination step of obtaining a priority for each of the objects by using a detection result in the first detection step and a detection result in the second detection step and determining a main object having a high priority from among the objects,
wherein, in the main object determination step, for an object which is not detected in the first detection step but is detected in the second detection step, the priority is changed in accordance with a past detection result of the object in the first detection step.
2. An apparatus according to
wherein the main object determination unit provides a priority for each of the detected objects shown by the detection result integrated by the detection result integration unit.
3. An apparatus according to
4. An apparatus according to
5. An apparatus according to
6. An apparatus according to
7. An apparatus according to
8. An apparatus according to
9. An image pickup apparatus comprising:
an image pickup unit configured to supply a picked-up image;
the object area tracking apparatus according to claim 1; and
a control unit configured to control an image pickup condition in the image pickup unit in accordance with information of the object which is output by the object area tracking apparatus.
10. A display apparatus comprising:
a display unit configured to display an image which is supplied;
the object area tracking apparatus according to
a control unit configured to control a display condition of the image in the display unit in accordance with information of the object which is output by the object area tracking apparatus.
1. Field of the Invention
The present invention relates to an object area tracking apparatus, a control method therefor, and a program.
2. Description of the Related Art
An image processing technique for automatically detecting a specific object pattern from an image is useful and can be used, for example, to specify the area of a person's face. Such a technique can be used in many fields such as communication conference systems, man-machine interfaces, security systems, monitoring systems for tracking a person's face, image compression, and the like. In an image pickup apparatus such as a digital camera or a digital video camera, a specific object is detected from a photographed image and the detection result is set as a control target, thereby optimizing focus and exposure. For example, the Official Gazette of Japanese Patent Application Laid-Open No. 2005-318554 discloses a photographing apparatus which detects the position of a person's face in an image, focuses on the face, and photographs the image under exposure optimum for the face. The Official Gazette of Japanese Patent Application Laid-Open No. 2009-211311 also proposes an image processing apparatus and method for detecting the upper half of a person's body from an image and counting the number of persons.
However, when a person's face is detected from an image, the face cannot be detected if its features cannot sufficiently be obtained, for example because the person is facing away from the camera. When the upper half of a person's body is detected from an image, the upper half can be detected even if the person is facing away; however, it cannot be detected if the person is in an unusual posture or if a portion of the upper half of the body does not appear in the image. That is, the situations in which an object can be detected differ depending on the detection method. Therefore, in order to reduce the situations where the object cannot be detected and to improve the detection rate, using different detection methods together is conceivable. For example, by using the result of the person's body detection for an object whose face could not be detected, the detection rate of the object can be improved.
There is also a main object determination technique for selecting, from the detected objects, the object (main object) to be used in photographing control such as focusing. When a plurality of objects are detected, such a technique is used, for example, to select the one main object to which the adjustment of focus and exposure is applied. The main object determination technique is required to automatically determine the object which the user intends to photograph.
By detecting the person's body from the image, an object can be detected even in a state where it faces away and its face cannot be seen. However, even when an object can be detected in this way, there are cases where the user does not want to select it as the main object. For example, in a scene in which a child is constantly moving, the user wants the detection data obtained while the person faces away to also be used as data of the main object, in order to improve the stability of detection and tracking. On the other hand, when a show in which a mascot is dancing is photographed, a spectator facing away from the camera may happen to appear in the picture. In such a scene, even if the spectator facing away can be detected, the user does not want to select that object as the main object.
The invention is made in consideration of the foregoing problems, and it is an aspect of the invention to enable, in an object area tracking apparatus, a main object to be appropriately selected from objects detected from an image by a plurality of different detection methods.
According to an aspect of the invention, an object area tracking apparatus for detecting a target object area from a supplied image and tracking the detected object area comprises: a first detection unit configured to detect a predetermined object area from the image on the basis of a first feature amount; a second detection unit configured to detect a predetermined object area from the image on the basis of a second feature amount different from the first feature amount; and a main object determination unit configured to obtain a priority for each of the objects by using a detection result of the first detection unit and a detection result of the second detection unit and to determine a main object having a high priority from among the objects, wherein, for an object which is not detected by the first detection unit but is detected by the second detection unit, the main object determination unit changes the priority in accordance with a past detection result of the object by the first detection unit.
According to another aspect of the invention, there is provided an object area tracking apparatus for detecting a target object area from a supplied image and tracking the detected object area, comprising: a first detection unit configured to detect a face area of an object from the image; a second detection unit configured to detect a predetermined area of the object different from the face area from the image; and a main object determination unit configured to determine a main object on the basis of a detection result of the first detection unit and a detection result of the second detection unit, wherein the main object determination unit selects the main object from among the objects whose face areas have been detected by the first detection unit, including objects whose face areas were detected in the past.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the invention and, together with the description, serve to explain the principles of the invention.
Various exemplary embodiments, features, and aspects of the present invention will be described in detail below with reference to the drawings.
In the image pickup apparatus 100, light forming an object image is converged by an image pickup optical system 101 including an image pickup lens and enters an image pickup element 102. The image pickup element 102 is, for example, a Charge Coupled Device (CCD) image sensor or a Complementary Metal Oxide Semiconductor (CMOS) image sensor. The image pickup element 102 has a plurality of pixels, each having a photoelectric conversion element, and outputs an electric signal corresponding to the intensity of the incident light on a pixel basis. That is, the electric signal output from the image pickup element 102 is obtained by photoelectrically converting the incident light forming the object image, and is an analog image signal representing the image, including the object image, picked up by the image pickup element 102.
An analog signal processing unit 103 performs analog signal processing such as Correlated Double Sampling (CDS) on the image signal output from the image pickup element 102. An AD (analog/digital) conversion unit 104 converts the analog image signal output from the analog signal processing unit 103 into digital data. The digital image signal converted by the AD conversion unit 104 is input to a control unit 105 and an image processing unit 106.
The control unit 105 is a Central Processing Unit (CPU), a microcontroller, or the like and centrally controls the operation of the image pickup apparatus 100. The control unit 105 loads program code stored in a Read Only Memory (ROM) (not shown) into a work area in a Random Access Memory (RAM) (not shown) and sequentially executes it, thereby controlling each functional unit of the image pickup apparatus 100.
The image processing unit 106 performs image processing such as gamma correction and white balance processing on the input digital image signal. In addition to such ordinary image processing, the image processing unit 106 has a function of executing image processing using information regarding a specific object area in the image, which is supplied from a main object determination unit 112 described hereinafter.
A display unit 107 is, for example, a Liquid Crystal Display (LCD) or an organic Electroluminescence (EL) display and displays an image on the basis of the image signal supplied from the image processing unit 106. By sequentially displaying the images picked up by the image pickup element 102 on the display unit 107 in a time-sequential manner, the image pickup apparatus 100 enables the display unit 107 to function as an electronic viewfinder (EVF). The display unit 107 can also display the position or the like of the object area selected by the main object determination unit 112, described hereinafter. The image signal output from the image processing unit 106 can be recorded on a recording medium 108. The recording medium 108 is, for example, a memory card detachable from the image pickup apparatus 100. The recording medium on which the image signal is recorded may instead be a memory built into the image pickup apparatus 100 or an external apparatus communicably connected to the image pickup apparatus 100.
The face detection unit 109 is an example of a first detection unit. The face detection unit 109 receives the image signal supplied from the image processing unit 106, detects a predetermined target object in the image, and specifies an object area. The face detection unit 109 specifies a person's face area as the object area from the image. If the faces of a plurality of persons exist in the image, as many areas as the number of persons are detected. As the detecting method of the face detection unit 109, a well-known face detecting method may be applied. Related arts regarding face detection include, for example, a method using knowledge about the face (flesh-color information; parts such as the eyes, nose, mouth, and the like) and a method in which a discriminator for face detection is constructed by a learning algorithm represented by a neural network. In face detection, the recognition rate is generally improved by performing face recognition with a combination of those methods. For example, as disclosed in the Official Gazette of Japanese Patent Application Laid-Open No. 2002-251380, a method of performing face detection by using a wavelet transform and an image feature amount can be mentioned. The detection result of the face detection unit 109 is supplied to a detection result integration unit 111, described hereinafter.
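The specific detection algorithm is left open above. As a minimal sketch of the kind of output the face detection unit 109 supplies (number of detections, position, size, and reliability), the following uses OpenCV's bundled Haar-cascade detector as a stand-in; this is not the wavelet-based method of the cited gazette, and the fixed reliability value is an assumption, since this API does not report one.

```python
# A stand-in for face detection unit 109, using OpenCV's Haar cascade
# rather than the wavelet-based method of JP 2002-251380 (not
# reproduced here). The output record format is assumed.
import cv2

def detect_faces(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Each detection is an (x, y, w, h) rectangle of a face area.
    rects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # This API reports no reliability; a fixed value stands in for it.
    return [{"pos": (int(x), int(y)), "size": (int(w), int(h)),
             "reliability": 1.0} for (x, y, w, h) in rects]
```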
The person's body detection unit 110 is an example of a second detection unit. The person's body detection unit 110 receives the image signal supplied from the image processing unit 106 and detects, as the target object area, the area of the upper half of a person's body in the image. If a plurality of persons exist in the image, as many areas as the number of persons are detected. As the detecting method of the person's body detection unit 110, for example, the method disclosed in the Official Gazette of Japanese Patent Application Laid-Open No. 2009-211311 can be applied. In the embodiment, the person's body detection unit 110 uses the edge intensity of the local contour of the upper half of the person's body as a local feature amount and detects the area of the upper half of the body. There are various methods of extracting such a feature amount from an image, such as the Sobel filter, the Prewitt filter, and the Haar filter. A person discriminator then discriminates the upper half of a person's body from everything else on the basis of the extracted local feature amount. The discrimination in the person discriminator is based on machine learning such as AdaBoost. The detection result of the person's body detection unit 110 is supplied to the detection result integration unit 111, described hereinafter.
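As a rough illustration of the feature extraction step, the sketch below computes Sobel gradient magnitudes over a window, which is one way to realize the edge-intensity local feature named above. The trained AdaBoost person discriminator is only stubbed, since its learned weak classifiers are not part of this description.

```python
# Sketch of the local feature for upper-body detection: edge intensity
# via a Sobel filter. The AdaBoost-trained discriminator is stubbed.
import cv2

def edge_intensity_feature(gray_window):
    gx = cv2.Sobel(gray_window, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_window, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)   # per-pixel edge intensity
    return magnitude.flatten()          # local feature vector

def is_upper_body(feature_vector):
    # Placeholder for the person discriminator; a real implementation
    # would evaluate AdaBoost-learned weak classifiers here.
    raise NotImplementedError
```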
The detection result integration unit 111 compares the detection result of the face detection unit 109 with that of the person's body detection unit 110, integrates the detection results belonging to the same object, and supplies the result of the integration to the main object determination unit 112. The detection result integration unit 111 also associates detection results obtained at different times with the same object. That is, by identifying the detection results of the same object along the time axis, the detection result integration unit 111 plays the role of tracking the object. The detection result integration unit 111 will be described in detail hereinafter.
On the basis of the detection results supplied from the detection result integration unit 111, the main object determination unit 112 determines, from among the detected objects, the object to be mainly handled (main object).
Information regarding the determined main object is supplied to, for example, the control unit 105 and the image processing unit 106. The main object determination unit 112 determines whether or not an object detected only by the person's body detection unit 110 is selected as a candidate of the main object, in accordance with the present and past face detection results for that object. If a plurality of candidates of the main object exist, the main object determination unit 112 selects the main object on the basis of the position and size of each detected object in the image. The processing of the main object determination unit 112 will be described in detail hereinafter.
The control unit 105 can control image pickup conditions such as the focusing state and the exposure state at the time when an object is picked up by the image pickup element 102. For example, the control unit 105 controls a focus control mechanism (not shown) and an exposure control mechanism (not shown) of the image pickup optical system 101 on the basis of the image signal output from the AD conversion unit 104. The focus control mechanism is, for example, an actuator for driving the image pickup lens included in the image pickup optical system 101, and the exposure control mechanism is, for example, an actuator for driving an iris and a shutter included in the image pickup optical system 101.
In controlling the focus control mechanism and the exposure control mechanism, the control unit 105 can use the information of the object area supplied from the main object determination unit 112. For example, the control unit 105 can perform focus control using a contrast value of the object area and exposure control using a luminance value of the object area. The image pickup apparatus 100 therefore has a function of executing image pickup processing under image pickup conditions that take a specific object area in the picked-up image into consideration. The control unit 105 also controls the output timing, output pixels, and other read-out conditions of the image pickup element 102. In the construction illustrated in
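As an illustration of such area-based control measures, the sketch below derives a contrast value and a luminance value from the main object area. The specific measures (variance of the Laplacian as a sharpness proxy, mean brightness) are common choices assumed here, not ones the description fixes.

```python
# Hedged sketch: contrast and luminance measures over the object area
# supplied by unit 112, for focus and exposure control respectively.
import cv2
import numpy as np

def focus_and_exposure_measures(gray, rect):
    # rect = (x, y, w, h): the main object area in the image.
    x, y, w, h = rect
    roi = gray[y:y + h, x:x + w]
    # Contrast value for focus control: variance of the Laplacian is
    # one common sharpness proxy (the patent does not specify one).
    contrast = cv2.Laplacian(roi, cv2.CV_64F).var()
    # Luminance value for exposure control: mean brightness of the area.
    luminance = float(np.mean(roi))
    return contrast, luminance
```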
The face detection result obtaining unit 201 obtains the result detected by the face detection unit 109: the number of persons detected, the position and size of each detected area in the image, the reliability of each detection result, and the like. The person's body detection result obtaining unit 202 likewise obtains the result detected by the person's body detection unit 110: the number of persons detected, the position and size of each detected area in the image, the reliability of each detection result, and the like. The area estimation unit 203 estimates, from the detection result obtained by the person's body detection result obtaining unit 202, the partial area corresponding to the area detected by the face detection unit 109. As the estimating method, it is assumed that, for example, the partial area is estimated by a linear conversion based on the relation between the detection area of the face detection unit 109 and that of the person's body detection unit 110.
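A minimal sketch of that linear conversion follows, assuming the face area sits at a fixed fractional offset and scale within the detected upper-body rectangle. The specific coefficients are illustrative; a real implementation would fit them from the relation between the two detectors' outputs.

```python
def estimate_face_from_body(body_rect,
                            scale=0.5, dx_frac=0.25, dy_frac=0.05):
    # body_rect = (x, y, w, h) of the detected upper-body area.
    # The face area is estimated by a linear conversion: a fixed scale
    # and offset relative to the body rectangle. The coefficients here
    # are illustrative assumptions, not fitted values.
    x, y, w, h = body_rect
    fw, fh = w * scale, h * scale
    fx = x + w * dx_frac
    fy = y + h * dy_frac
    return (fx, fy, fw, fh)
```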
The intra-frame correlation determination unit 204 specifies the detection results belonging to the same object on the basis of the similarity of positions and sizes between the detection results obtained by the face detection result obtaining unit 201 and those obtained by the person's body detection result obtaining unit 202 and estimated by the area estimation unit 203. It is assumed that the face detection unit 109 and the person's body detection unit 110 each detect the target object area from images picked up at the same time. For an object detected by the face detection unit 109, the intra-frame correlation determination unit 204 uses the face detection result. For an object detected only by the person's body detection unit 110, it uses as the detection result the partial area corresponding to the face area estimated by the area estimation unit 203.
The inter-frame correlation determination unit 205 compares the current detection result of the intra-frame correlation determination unit 204 with the immediately preceding result of the detection result integration unit 111 and specifies the detection results belonging to the same object. As with the intra-frame correlation determination unit 204, the inter-frame correlation determination unit 205 specifies detection results of the same object on the basis of the similarity of their positions and sizes. By associating detection results along the time axis in this way, the inter-frame correlation determination unit 205 makes it possible to refer to the past detection results of each object.
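A sketch of this inter-frame association follows. The description fixes neither the similarity measure nor the matching scheme, so an intersection-over-union overlap and a greedy assignment stand in here; the dictionary field names are likewise assumptions.

```python
def iou(a, b):
    # a, b = (x, y, w, h); intersection-over-union as a stand-in for
    # the position/size similarity (the patent fixes no formula).
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def match_to_previous(current, previous, threshold=0.3):
    # Greedy association of current detections to the immediately
    # preceding integrated results; unmatched detections become new
    # tracked objects. Each entry is assumed to carry a "rect" field.
    matches = {}
    for i, cur in enumerate(current):
        best_j, best_s = None, threshold
        for j, prev in enumerate(previous):
            s = iou(cur["rect"], prev["rect"])
            if s > best_s and j not in matches.values():
                best_j, best_s = j, s
        matches[i] = best_j   # previous index, or None if new object
    return matches
```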
A flow for the processing mainly regarding the main object determination in the image pickup apparatus 100 according to the present embodiment will be described with reference to
Subsequently, the detection result integration unit 111 integrates the two types of detection results by intra-frame correlation determination between the detection result of the face detection unit 109 and that of the person's body detection unit 110 obtained in the same frame (images photographed at the same time) (S304). The integration processing of the detection results by the intra-frame correlation determination unit 204 of the detection result integration unit 111 will now be described with reference to
The image 401 indicates the face detection result obtained by the face detection result obtaining unit 201. As shown by the solid-line rectangles 406 and 407, the face areas of the objects A and B are detected; for the object C, which faces away, it is assumed that no face area is detected. The image 402 indicates the person's body detection result obtained by the person's body detection result obtaining unit 202. As shown by the solid-line rectangles 408 and 409, the person's body areas of the objects B and C are detected; for the object A, it is assumed that no person's body area is detected due to the influence of its posture.
The image 403 indicates the result of the area estimation unit 203 estimating, from the result of the person's body detection result obtaining unit 202, the partial areas corresponding to the face areas detected by the face detection unit 109. The broken-line rectangle 410 indicates the result estimated from the person's body area shown by the solid-line rectangle 408, and the broken-line rectangle 411 indicates the result estimated from the person's body area shown by the solid-line rectangle 409.
The image 404 indicates the state of the processing in the intra-frame correlation determination unit 204. The solid-line rectangles 412 and 413 are the detection results obtained by the face detection result obtaining unit 201, and the broken-line rectangles 414 and 415 are the detection results obtained by the person's body detection result obtaining unit 202 and estimated by the area estimation unit 203. Between the detection results shown by the solid-line rectangles 412 and 413 and those shown by the broken-line rectangles 414 and 415, the intra-frame correlation determination unit 204 calculates a degree of similarity of the area shapes and position coordinates. If the calculated degree of similarity is equal to or larger than a predetermined threshold value, the intra-frame correlation determination unit 204 determines that they are detection results of the same object; if it is less than the threshold value, they are determined to be detection results of different objects. In the example of the image 404 illustrated in
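The same-object test can be written compactly by reusing the `iou` helper from the inter-frame sketch above; the threshold value here is illustrative, since the description only calls it "a predetermined threshold value".

```python
SIMILARITY_THRESHOLD = 0.5  # illustrative; the patent only says
                            # "a predetermined threshold value"

def same_object(face_rect, estimated_face_rect):
    # Reuses iou() from the inter-frame sketch above. Results at or
    # above the threshold are treated as detections of the same
    # object; below it, as detections of different objects.
    return iou(face_rect, estimated_face_rect) >= SIMILARITY_THRESHOLD
```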
The image 405 indicates the result of integrating the face detection data and the person's body detection data as the outcome of the processing in the intra-frame correlation determination unit 204. For the objects A and B, since a face detection result exists, the face detection result is used as the integration data. For the object C, since only the person's body detection result exists, the result of estimating the face area from the person's body detection result is used as the integration data. Therefore, the detection results shown by the solid-line rectangles 416 and 417 are the detection results obtained by the face detection result obtaining unit 201, and the detection result shown by the solid-line rectangle 418 is the face area estimated by the area estimation unit 203.
Returning to
Subsequently, the main object determination unit 112 executes a main object determination processing of selecting candidates of the main object from the detection data obtained by the detection result integration unit 111 and determining the main object (S306). The image pickup apparatus 100 executes photographing control and image processing on the basis of the information of the determined main object (S307). Each time a new photographed image is obtained, the foregoing processes of steps S301 to S307 are repeated.
A flow for the main object determination processing by the main object determination unit 112 in step S306 shown in
For an object detected only by the person's body detection of the person's body detection unit 110 (YES in S502), the main object determination unit 112 discriminates whether or not the past detection results of the object include a face detection by the face detection unit 109 (S503). For a detected object with no past face detection result (NO in S503), the main object determination unit 112 sets the priority of the object to 0 (S504), and the processing routine returns to step S501. On the other hand, for a detected object with a past face detection result (YES in S503), the main object determination unit 112 calculates the priority of the object on the basis of its position, size, and the like (S505). If the detection in the present frame is not based only on the person's body detection of the person's body detection unit 110 (NO in S502), the main object determination unit 112 likewise calculates the priority of the object on the basis of its position, size, and the like (S505). After the priority is calculated, the processing routine returns to step S501.
In other words, if a detected object has face detection data from the face detection unit 109 in the present or the past, the main object determination unit 112 treats it as a candidate of the main object and calculates its priority. If a detected object has no face detection data in either the present or the past, the main object determination unit 112 sets its priority to 0 to indicate that it is excluded from the candidates of the main object. The priority is assumed to be non-negative, with 0 as its minimum value.
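Putting the branching of steps S501 to S505 into code gives the sketch below. The field names and the tracking-history flag are assumptions about the integrated detection record, and `calc_priority` is the hypothetical priority function sketched next.

```python
def assign_priorities(objects):
    # objects: integrated detection results, one entry per tracked
    # object. Field names are assumed for illustration.
    for obj in objects:
        body_only = obj["detected_by_body"] and not obj["detected_by_face"]
        if body_only and not obj["face_detected_in_past"]:
            # Out of the main-object candidates (S504): minimum priority.
            obj["priority"] = 0.0
        else:
            # Candidate of the main object (S505); see the priority
            # sketch below for calc_priority.
            obj["priority"] = calc_priority(obj)
    return objects
```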
An example of the calculation of the priority in step S505 will now be described with reference to
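The figure detailing this calculation is not reproduced in the text above; a plausible formulation, assuming only what the description states (priority based on position, size, and the like, with larger objects nearer the image center scoring higher), is:

```python
import math

def calc_priority(obj, image_size=(640, 480)):
    # Hypothetical priority: larger objects nearer the image center
    # score higher. The actual formula accompanies a figure not
    # reproduced here; this is an assumption for illustration.
    x, y, w, h = obj["rect"]
    iw, ih = image_size
    cx, cy = x + w / 2.0, y + h / 2.0
    # Normalised distance from the image center (0 = centered).
    dist = math.hypot((cx - iw / 2.0) / iw, (cy - ih / 2.0) / ih)
    size_term = (w * h) / float(iw * ih)       # relative area
    return max(0.0, size_term * (1.0 - dist))  # priority >= 0
```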
Returning to
The determination processing of step S503 has been described on the assumption that the determination is made on the basis of the presence or absence of a past face detection by the face detection unit 109. In addition to the presence or absence of the face detection, the reliability, continuity, position, size, or the like of the face detection may be added to the determination conditions. The invention has been described with respect to the example in which, for a detected object with no past face detection result in step S503, the priority is set to 0. However, the invention is not limited to this. It is also possible to construct the apparatus such that, for a detected object with no past face detection result, the priority is obtained in step S505 in the same manner as for an object with a past face detection result, and a value obtained by multiplying this priority by a coefficient less than 1, such as 0.5, is set as the final priority. With such a construction, in which the priority is merely reduced for a detected object with no past face detection result, even an object that has faced away all along can be selected as the main object when the other objects are also facing away.
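The demoting variant just described changes only the S504 branch; a sketch, reusing the hypothetical `calc_priority` from above:

```python
DEMOTION_COEFFICIENT = 0.5   # "a coefficient such as 0.5 or the like"

def assign_priorities_demoting(objects):
    # Variant of the earlier sketch: instead of forcing priority 0,
    # objects with no past face detection are demoted by a coefficient
    # less than 1, so one can still become the main object when every
    # other object is also facing away.
    for obj in objects:
        obj["priority"] = calc_priority(obj)
        body_only = obj["detected_by_body"] and not obj["detected_by_face"]
        if body_only and not obj["face_detected_in_past"]:
            obj["priority"] *= DEMOTION_COEFFICIENT
    return objects
```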
As described above, according to the embodiment, when the person's body detection is used as an auxiliary to the face detection in the object area tracking apparatus and detection data based on the person's body detection is used, the main object determination takes the face detection results of the object into consideration. Errors in which an object the user does not intend is selected as the main object can thus be reduced, the main object is properly selected from the detected objects, and the determination accuracy of the main object is improved. More specifically, in object detection from an image using a plurality of different detection methods, for an object which is not detected by the first detection unit but is detected by the second detection unit, the priority of the object is changed in accordance with the past detection results of the first detection unit. The main object can thus be properly selected from the detected objects in consideration of those past detection results, and the determination accuracy of the main object is improved.
Other Embodiments of the Invention
Although the embodiment has been described with respect to the case, as an example, where the object area tracking apparatus is applied to an image pickup apparatus, the apparatus to which it is applied is not limited to an image pickup apparatus. For example, the object area tracking apparatus may be applied to a display apparatus for displaying an image (reproduction data) supplied from an external apparatus, a recording medium, or the like. In the display apparatus, the object area tracking processing is executed with the reproduction data as its input. On the basis of the information of the object extracted by the object area tracking processing (the position, size, and the like of the object in the image), a control unit such as a microcontroller in the display apparatus controls the display conditions of the image. Specifically, information indicating the object, such as a frame, is superimposed at the position of the object in the image, or the brightness, color tone, and the like of the displayed image are controlled according to the luminance and chrominance information of the object area.
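For the superimposed frame in that display application, a minimal sketch follows; the OpenCV drawing call, color, and thickness are illustrative choices, not part of the description.

```python
import cv2

def draw_object_frame(frame_bgr, rect, color=(0, 255, 0)):
    # Superimposes a frame showing the tracked object's position, as
    # described for the display apparatus. Color and thickness are
    # illustrative; rect = (x, y, w, h) from the tracking result.
    x, y, w, h = [int(v) for v in rect]
    cv2.rectangle(frame_bgr, (x, y), (x + w, y + h), color, thickness=2)
    return frame_bgr
```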
The invention is also realized by executing the following processing: software (a program) realizing the functions of the embodiments described above is supplied to a system or apparatus through a network or various kinds of storage media, and a computer (a CPU, an MPU, or the like) of the system or apparatus reads out and executes the program.
For example, the object area tracking apparatus shown in the embodiments mentioned above has a computer function 700 as illustrated in
The CPU 701 executes software stored in the ROM 702 or the HD 711, or software supplied from the STD 712, thereby performing overall control of each constituent unit connected to the system bus 704. That is, the CPU 701 reads out, from the ROM 702, HD 711, or STD 712, a processing program for executing the operations described above and executes it, thereby realizing the operations of the embodiments described above. The RAM 703 functions as a main memory, a work area, or the like of the CPU 701.
The CONSC 705 controls instruction input from the CONS 709. The DISPC 706 controls the display of the DISP 710. The DCONT 707 controls access to the HD 711 and the STD 712, which store a boot program, various applications, user files, a network management program, the processing program of the foregoing embodiments, and the like. The NIC 708 bidirectionally exchanges data with other apparatuses connected to a network 713.
The foregoing embodiments are merely examples for carrying out the present invention, and the technical scope of the invention must not be interpreted restrictively because of them. That is, the invention can be embodied in various forms without departing from its technical idea or principal features.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2012-286000, filed on Dec. 27, 2012, which is hereby incorporated by reference herein in its entirety.
Patent | Priority | Assignee | Title
5563678 | Jun 06 1994 | Nikon Corporation | Focus detection device and method
6038554 | Sep 12 1995 | | Non-Subjective Valuing© the computer aided calculation, appraisal and valuation of anything and anybody
7315630 | Jun 26 2003 | FotoNation Limited | Perfecting of digital image rendering parameters within rendering devices using face detection
7929042 | Sep 22 2006 | Sony Corporation | Imaging apparatus, control method of imaging apparatus, and computer program
8223216 | Mar 03 2008 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method
8330849 | Oct 22 2008 | Canon Kabushiki Kaisha | Auto focusing apparatus and auto focusing method, and image sensing apparatus for stable focus detection
8355047 | Jul 06 2007 | Nikon Corporation | Tracking device, focus adjustment device, image-capturing device, and tracking method
8564661 | Oct 24 2000 | MOTOROLA SOLUTIONS, INC | Video analytic rule detection system and method
9094610 | Jun 15 2009 | Canon Kabushiki Kaisha | Image capturing apparatus and image capturing apparatus control method
20060203107 | | |
20070247321 | | |
20090009606 | | |
20120201450 | | |
JP 2002-251380 | | |
JP 2005-318554 | | |
JP 2009-211311 | | |