Techniques for capturing images are described. In one scenario, one or more processors of an imaging device can obtain geodata information and time information associated with the imaging device. The geodata and time information is obtained prior to capturing a first image. Next, one or more second images can be identified from a database of images based on the geodata information and the time information. The processor(s) can be communicatively coupled to the database via a communications network. Additionally, the one or more processors can determine one or more image capture conditions associated with the one or more second images; and automatically modify one or more settings of the imaging device to be used for capturing the first image. The settings can be modified based on at least one of the one or more image capture conditions associated with the one or more second images. Other embodiments are possible.
1. A computer-implemented method for image capture, comprising:
obtaining one or more of geodata information or time information associated with an imaging device, prior to capturing a first image with the imaging device;
identifying a plurality of second images external to the imaging device based on the one or more of the geodata information or the time information;
determining an image capture condition associated with the plurality of second images based on a first characteristic of a first one of the plurality of second images and a second characteristic of a second one of the plurality of second images; and
modifying a setting of the imaging device to be used for capturing the first image based on the determined image capture condition.
15. An electronic device for image capture, comprising:
an imaging device;
memory storing instructions that, when executed, cause one or more processors to:
obtain one or more of geodata information or time information associated with the imaging device, prior to capturing a first image with the imaging device;
identify a plurality of second images external to the imaging device based on the one or more of the geodata information or the time information;
determine an image capture condition associated with the plurality of second images based on a first characteristic of a first one of the plurality of second images and a second characteristic of a second one of the plurality of second images; and
modify a setting of the imaging device to be used for capturing the first image based on the determined image capture condition.
8. A non-transitory computer readable medium storing a program for image capture, the program comprising instructions that, when executed by one or more processors, cause the one or more processors to:
obtain one or more of geodata information or time information associated with an imaging device, prior to capturing a first image with the imaging device;
identify a plurality of second images external to the imaging device based on the one or more of the geodata information or the time information;
determine an image capture condition associated with the plurality of second images based on a first characteristic of a first one of the plurality of second images and a second characteristic of a second one of the plurality of second images; and
modify a setting of the imaging device to be used for capturing the first image based on the determined image capture condition.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
wherein the determined image capture condition is associated with the third image, and wherein the setting of the imaging device is modified based on the image capture condition associated with the third image.
9. The non-transitory computer readable medium of
10. The non-transitory computer readable medium of
11. The non-transitory computer readable medium of
12. The non-transitory computer readable medium of
13. The non-transitory computer readable medium of
14. The non-transitory computer readable medium of
wherein the determined image capture condition is associated with the third image, and wherein the setting of the imaging device is modified based on the image capture condition associated with the third image.
16. The electronic device of
17. The electronic device of
18. The electronic device of
19. The electronic device of
20. The electronic device of
21. The electronic device of
wherein the determined image capture condition is associated with the third image, and wherein the setting of the imaging device is modified based on the image capture condition associated with the third image.
This disclosure relates generally to the field of image manipulation. More particularly, but not by way of limitation, it relates to techniques for enhancing and repairing images using data from other images.
Photography has been an innovative field since the earliest crude photographs were produced, developing from camera obscura and pinhole cameras to chemically-developed film cameras in the 19th century to digital cameras in the late 20th century. With digital photography has come an ability to manipulate images, providing capabilities not practical or possible with film. Individuals may easily create or collect libraries of thousands of digital images, using software to organize and manipulate images in those libraries. In addition to standalone imaging devices such as traditional cameras, imaging devices are now ubiquitous in a wide variety of other devices, including smartphones and tablet computers.
However, as any photographer knows, not every photograph is a good one. Sometimes the photograph includes an image of a person whose face is blocked, blurry, or whose eyes were closed when the photograph was taken. Sometimes the image may show a spot or temporary blemish on the person's nose or somewhere else visible in the image.
Other times, a photographer may take a picture at some location, and for some reason the photograph was taken with an incorrect exposure setting or out of focus, or the photographer inadvertently moved the camera while taking the photograph. Sometimes a photograph may contain some blurry object, such as a random person walking between the camera and the subject of the photograph just as the photograph was taken.
Image manipulation techniques exist that may be used to manipulate areas of a photograph, such as the ability to remove the “red-eye” effect, caused by reflection of light from a person's eyes when using a photographic flash unit. Other techniques provide cloning tools or healing brush tools that use sample data from other points on the photograph to correct imperfections, causing them to blend with the surrounding area of the image. While useful, these existing tools are not always capable of repairing or correcting a photograph as much as would be desired.
An image manipulation technique allows a user to correct an image using samples obtained from other images. These samples may be obtained from one or more other images in a library of images. Matching techniques may identify an image that best matches the image to be corrected, or may aggregate or average multiple images that are identified as containing an area corresponding to the area to be corrected. Identification of the image or images to use as the source of the samples may be automatic or manual. The images may be from a library of images under the control of the user or from a library of images maintained by another person or service provider. Application of the samples to correct the image may be manually or automatically directed.
An image modification method is disclosed. The method includes selecting a first region of a first image; identifying automatically, from a database of images, one or more second images, wherein each of the one or more second images has a region corresponding to the first region; and modifying automatically the first region based, at least in part, on the corresponding region from at least one of the one or more second images.
A non-transitory program storage device is disclosed. The storage device is readable by a programmable control device and has instructions stored thereon to cause the programmable control device to receive an indication that identifies a first region of a first image; identify, from a database of images, one or more second images, wherein each of the one or more second images has a region corresponding to the first region; and modify the first region based, at least in part, on the corresponding region from at least one of the one or more second images.
A programmable device is disclosed. The device includes a programmable control device; a memory coupled to the programmable control device; and software stored in the memory. The software includes instructions causing the programmable control device to receive an indication that identifies a first region of a first image; identify automatically one or more second images from a collection of images, wherein each of the one or more second images has a region corresponding to the first region; and modify the first region automatically based, at least in part, on the corresponding region from at least one of the one or more second images.
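The three summaries above describe the same core flow: take a region of a destination image, identify second images that contain a corresponding region, and modify the region using their data. The following is a minimal sketch of that flow, not an implementation from the disclosure: images are toy dicts of pixel rows plus a single numeric descriptor, and the similarity measure and all key names are illustrative assumptions.

```python
from statistics import mean

def descriptor_similarity(a, b):
    # Toy similarity: 1.0 for identical descriptors, falling off with distance.
    return 1.0 / (1.0 + abs(a - b))

def modify_region(dest, region, library, similarity_threshold=0.9):
    """Sketch: find library images whose descriptor matches the destination
    closely enough, then replace the selected region with the average of
    the corresponding regions from those matches."""
    # Identify "second images" whose descriptor resembles the destination's.
    matches = [img for img in library
               if descriptor_similarity(dest["descriptor"],
                                        img["descriptor"]) >= similarity_threshold]
    if not matches:
        return dest  # nothing suitable; leave the image unmodified
    # Modify the region: average the corresponding pixels across matches.
    (r0, c0), (r1, c1) = region
    for r in range(r0, r1):
        for c in range(c0, c1):
            dest["pixels"][r][c] = mean(img["pixels"][r][c] for img in matches)
    return dest
```

In practice the descriptor would be a location identifier, timestamp, or facial-feature signature, and the per-pixel average would be replaced by the aggregation or best-match selection described above.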
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without these specific details. In other instances, structure and devices are shown in block diagram form in order to avoid obscuring the invention. References to numbers without subscripts or suffixes are understood to reference all instances of subscripts and suffixes corresponding to the referenced number. Moreover, the language used in this disclosure has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter. Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment of the invention, and multiple references to “one embodiment” or “an embodiment” should not be understood as necessarily all referring to the same embodiment.
By using samples from libraries of other images to correct or enhance an image automatically, additional capability becomes available to the photographer. Other images of a person may be used to correct or enhance an image of that person. Other images of a place may be used to enhance or correct an image of that place. The size of the sample data relative to the image to be corrected or enhanced is not limited. For example, a photograph of a landmark and the photographer's family might be manipulated by replacing the entirety of the photograph with another photograph, overlaying the photographer's family on the other photograph. Sample areas obtained from a source image may be included into a destination image at a resolution different from that of the destination image.
In block 140, if more than one other source image is included in the search results, a resulting source image may be selected using any desired technique, including techniques such as those described below regarding
Once one or more source images are selected, the selected source image or images may be processed to find corresponding facial features, and a source facial region corresponding to the destination facial region may be extracted as sample data in block 150. The destination facial region may be replaced, at least in part, with the sample data from the one or more source images. Any desired techniques may be used to fit the sample data over the existing facial region and to smoothly stitch the sample data into the original image, such as morphing or healing brush techniques known in the art, which are not further described herein.
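Morphing and healing-brush stitching are noted above as known in the art. Purely as a toy illustration of the "smooth stitching" idea, and not of those actual techniques, a one-dimensional linear feather can blend a row of sample pixels into a destination row so the seam fades in rather than jumping:

```python
def feather_blend(dest_row, sample_row, start):
    """Blend sample_row into dest_row beginning at index `start`, mixing
    each sample pixel with the underlying destination pixel using a
    weight that ramps up across the patch."""
    n = len(sample_row)
    out = list(dest_row)
    for i, s in enumerate(sample_row):
        # Weight grows linearly from near 0 at the leading edge to 1 at the end.
        alpha = (i + 1) / n
        out[start + i] = round(alpha * s + (1 - alpha) * out[start + i])
    return out
```

A real implementation would feather in two dimensions (and in from every edge of the patch), but the principle of weighting sample data against the surrounding image is the same.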
Where the source image and the destination image were exposed with different contrast ratios, the sample data may be adjusted to the contrast ratio of the destination image. Alternately, the contrast ratio of the sample data may be left unadjusted, allowing a destination image with a less than optimum contrast ratio to be enhanced by sample data from a source image with a better contrast ratio. Other image modification techniques, such as gamma correction, may be used to match the sample data to the destination image if desired.
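As a hedged sketch of the gamma correction mentioned above: one simple heuristic (an assumption for illustration, not taken from the disclosure) is to choose a gamma that maps the sample's mean intensity onto the destination's mean intensity, then apply the standard power-law transform to every sample pixel.

```python
import math

def match_gamma(sample, dest_mean, eps=1e-6):
    """Adjust 8-bit sample pixels so their mean maps to dest_mean via
    out = 255 * (in / 255) ** gamma, solving for gamma from the means."""
    s_mean = sum(sample) / len(sample)
    # Solve 255 * (s_mean / 255) ** gamma == dest_mean for gamma.
    gamma = math.log(max(dest_mean, eps) / 255) / math.log(max(s_mean, eps) / 255)
    return [round(255 * (p / 255) ** gamma) for p in sample]
```

Contrast-ratio matching would follow the same pattern with a linear stretch instead of a power law.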
The technique 100 may be performed automatically, identifying the facial region, selecting the source image or images, and performing the modification of the destination facial region without user input. A user interface may provide an auto-enhance button or other similar user interaction element to trigger performance of the technique 100.
Alternately, user interaction may be provided at any stage, allowing the user to approve or disapprove any or all of the identification of the facial region, the selection of the source image or images, the extraction of the sample data, and the modification of the destination image based on the sample data. User-configurable settings may provide additional constraints (e.g., allowable age ranges) as well as specifying which library of images to use.
As described above, the entire facial region may be modified; alternately, more limited facial regions may be modified, such as a distinct facial feature (e.g., a nose), as well as other less distinct regions, such as simple skin regions.
Additional predetermined criteria may further constrain facial modification using any other desired information. For example, the technique may limit the available images in the library to those within a predefined age range, to avoid making inappropriate modifications. For example, the technique may be configured to avoid modifying an image of an 80-year-old woman with facial data from an image of that woman in her teens. Other criteria may be used, such as lighting information (e.g., do not use an image taken in fluorescent light to modify an image taken in incandescent light) and resolution (e.g., use only images taken with approximately similar resolution).
Other user-specifiable criteria may include shadowing, contrast, and exposure information, and information about the imaging device used to capture the images (e.g., the destination image was taken on an IPHONE® smartphone; only use source images taken by an IPHONE smartphone or an IPAD® tablet) (IPHONE and IPAD are registered trademarks of Apple Inc.). Temporal windows may provide constraints such as a constraint that limits source images to those taken at roughly the same age as the person represented in the destination image. Temporal windows may also be used to limit the source images to those taken in a narrow time range, such as a single photography session.
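The criteria described above — age range, resolution, and a temporal window — amount to a filter over candidate source images. The following sketch shows one way such a filter might look; every key name (`subject_age`, `resolution`, `taken`) and every default threshold is a hypothetical placeholder, not taken from the disclosure.

```python
from datetime import datetime, timedelta

def filter_sources(candidates, dest, max_age_gap_years=10,
                   max_resolution_ratio=2.0, time_window=None):
    """Keep only candidate source images that satisfy the age-range,
    resolution-similarity, and temporal-window constraints relative to
    the destination image."""
    kept = []
    for img in candidates:
        # Age-range constraint: skip sources far from the destination subject's age.
        if abs(img["subject_age"] - dest["subject_age"]) > max_age_gap_years:
            continue
        # Resolution constraint: only approximately similar resolutions.
        ratio = img["resolution"] / dest["resolution"]
        if not (1 / max_resolution_ratio <= ratio <= max_resolution_ratio):
            continue
        # Temporal window, e.g. limiting sources to a single photography session.
        if time_window is not None and abs(img["taken"] - dest["taken"]) > time_window:
            continue
        kept.append(img)
    return kept
```

Device-based criteria (e.g., matching the capture device model) would be one more equality or membership test in the same loop.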
As described above, the technique 100 is used for correcting or enhancing a destination image. However, the same technique may be used for other purposes to modify facial regions of an image. The source images may be images of the person represented in the destination image, but may also be images of other people. For example, a plastic surgeon may use the technique to provide a patient an idea of how the patient may look after surgery, and the selection process may either use a library of images of the patient (e.g., when doing reconstructive surgery, using images of the patient before the injury) or of other people (e.g., modifying the destination image to give a cosmetic surgery patient an idea of how they may look after the surgery).
The number of images in the library may affect the usefulness of the technique. For example, a collection of images of a person with only a handful of images is less likely to produce as good a result as a library of hundreds or thousands of images. The library of images may be a personal library, or may be a library to which multiple people have submitted multiple images of multiple people.
The techniques described herein are not limited to modification of facial regions, but may be used for modifying any object represented in the source image, including other bodily features and inanimate objects.
The technique 200 would be useful for enhancing or correcting images of locations. For example, an image may be captured at a popular location. The image has bad exposure and not enough data to correct well using conventional techniques. Or there may be a blurry object in the image that the user may wish to remove; the user may wish to remove non-blurry objects as well (e.g., the user may wish to remove an image of a bystander from a picture). In this example, many other people have taken a photo at that same location at various times, and uploaded those photos to a shared library of images coupled to a server. The destination image may be automatically or manually enhanced or corrected based on sample data from the other photos that have been uploaded to the server. In this example, source images may be selected that were taken at approximately the same location. A location identifier, such as geodata information associated with the destination image, may be ascertained, then the search of the library of images may identify the source images by matching their location identifiers with the location identifier of the destination image. The sample data extracted from the source images may allow various kinds of enhancements, including removal of blurry obstructions (e.g., walking passersby) and exposure correction (e.g., the destination image was underexposed, but the library has sample data on hundreds or thousands of correctly exposed photos of the same location at the same time of day, and that sample data may be used to repair the photo). The modifications may include replacing data in the photo with better data from one or more other source images in the library. In a variant, the techniques may be used to inform an imaging device as to the best possible imaging attributes, such as exposure or level adjustments, to apply to the imaging, before the image is captured.
(For example, a person standing at a very popular place on the edge of the Grand Canyon may be able to take a better image by obtaining sample data from thousands of other images captured at that time of day at that spot.)
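Location matching of the kind described above can be sketched as a great-circle distance test plus a time-of-day window. The sketch below is illustrative only: the key names (`lat`, `lon`, `taken`), the 0.5 km radius, and the one-hour tolerance are assumptions, not values from the disclosure.

```python
import math
from datetime import datetime

def find_location_matches(dest, library, max_km=0.5, hour_tolerance=1):
    """Keep library images whose geodata falls within max_km of the
    destination image and whose capture time is at roughly the same
    time of day."""
    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two latitude/longitude points.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
        return 6371 * 2 * math.asin(math.sqrt(a))

    matches = []
    for img in library:
        near = haversine_km(dest["lat"], dest["lon"], img["lat"], img["lon"]) <= max_km
        same_hour = abs(img["taken"].hour - dest["taken"].hour) <= hour_tolerance
        if near and same_hour:
            matches.append(img)
    return matches
```

A production version would also handle the midnight wraparound in the time-of-day comparison and would likely use an indexed geospatial query on the server rather than a linear scan.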
The amount of the destination image that can be modified using sample data from the source images is not limited. Any portion or the complete image may be modified. For example, a simple snapshot taken at a popular tourist site may be enhanced, corrected, or even potentially completely replaced by sample data extracted from one or more images captured at the same popular tourist site.
Implementation in a Programmable Device
Storage device 414 may store media (e.g., image and video files), software (e.g., for implementing various functions on device 400), preference information, device profile information, and any other suitable data. Storage device 414 may include one or more storage media for tangibly recording image data and program instructions, including, for example, a hard drive, permanent memory such as ROM, semi-permanent memory such as RAM, or cache. Program instructions may comprise a software implementation encoded in any desired language (e.g., C or C++).
Memory 412 may include one or more different types of memory which may be used for performing device functions. For example, memory 412 may include cache, ROM, and/or RAM. Communications bus 422 may provide a data transfer path for transferring data to, from, or between at least storage device 414, memory 412, and processor 416. Although referred to as a bus, communications bus 422 is not limited to any specific data transfer technology. User interface 418 may allow a user to interact with the programmable device 400. For example, the user interface 418 can take a variety of forms, such as a button, keypad, dial, a click wheel, or a touch screen.
In one embodiment, the programmable device 400 may be any programmable device capable of processing and displaying media, such as image and video files. For example, the programmable device 400 may be a device such as a mobile phone, personal digital assistant (PDA), portable music player, monitor, television, laptop computer, desktop computer, tablet computer, or other suitable personal device.
The storage device 414 may provide storage for the library of images described above. Alternately, an external library of images may be communicatively coupled to the programmable device 400, as illustrated below in
Implementation in a Networked System
It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-described embodiments may be used in combination with each other. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention therefore should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
Dellinger, Richard R., Sabatelli, Alessandro F.
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Feb 11 2019 | Apple Inc. | (assignment on the face of the patent) | / |