Embodiments of imaging devices of the present disclosure automatically utilize simultaneous image captures in an image processing pipeline. In one embodiment, control processing circuitry initiates simultaneous capture of the first image by the first image sensor and the second image by the second image sensor; and image processing circuitry generates an enhanced monoscopic image comprising at least portions of the first image and the second image.
10. An image processing method, comprising:
recording a first image captured by a first image sensor;
recording a second image captured by a second image sensor, wherein the first image and the second image are simultaneously captured;
comparing at least portions of the first image and the second image, wherein the comparing comprises:
identifying, using a mask, pixels in the second image that are similar to pixels in the first image;
removing the identified pixels in the second image from the second image;
processing the first image in a first processing path;
processing remaining pixels of the second image in a second processing path; and
responsive to outputs of the first processing path and the second processing path, generating an enhanced monoscopic image.
1. An image capture device, comprising:
a first image sensor for recording a first image of a scene;
a second image sensor for recording a second image of the scene;
control processing circuitry to initiate simultaneous capture of the first image of the scene by the first image sensor and the second image of the scene by the second image sensor; and
image processing circuitry to generate an enhanced monoscopic image of the scene comprising at least portions of the first image and the second image,
wherein the image processing circuitry is configured to process first pixels of the first image less than all of which are also contained in the second image in a first processing path and to process second pixels of the second image that are not contained in the first image in a second processing path.
17. A non-transitory computer readable medium having an image processing program that, when executed by a hardware processor, causes the hardware processor to:
record a first image captured by a first image sensor;
record a second image captured by a second image sensor, wherein the first image and the second image are simultaneously captured;
compare at least portions of the first image and the second image, wherein the comparing comprises:
identifying, using a mask, pixels in the second image that are similar to pixels in the first image;
removing the identified pixels in the second image from the second image;
process the first image in a first processing path;
process remaining pixels of the second image in a second processing path; and
responsive to outputs of the first processing path and the second processing path, generate an enhanced monoscopic image.
2. The image capture device of
wherein the first portion comprises at least one of having an exposure level that is different from the second portion or having a depth of field that is different from the second portion.
3. The image capture device of
4. The image capture device of
5. The image capture device of
the control processing circuitry is configured to operate in a stereoscopic mode utilizing simultaneous operation of the first image sensor and the second image sensor to generate a stereoscopic image; a monoscopic mode utilizing singular operation of the first image sensor to generate a monoscopic image; and an enhanced monoscopic mode utilizing simultaneous operation of the first image sensor and the second image sensor to generate the enhanced monoscopic image; and
the control processing circuitry is configured to automatically switch between modes of operation comprising at least two of the monoscopic mode, the stereoscopic mode, or the enhanced monoscopic mode.
6. The image capture device of
7. The image capture device of
8. The image capture device of
9. The image capture device of
identify the first pixels of the first image that are also contained in the second image;
remove the identified pixels in the second image from the second image; and
process remaining pixels of the second image in the second processing path, wherein the remaining pixels comprise the second pixels.
11. The image processing method of
12. The image processing method of
13. The image processing method of
14. The image processing method of
the defect comprises a lens shading defect, and
the first image sensor comprises an image sensor that is configured to pass red, green, or blue light to sensor pixels and the second image sensor comprises a different type of image sensor than the first image sensor.
15. The image processing method of
switching operation of the image capture device between a stereoscopic mode utilizing simultaneous operation of the first image sensor and the second image sensor to generate a stereoscopic image; a monoscopic mode utilizing singular operation of the first image sensor to generate a monoscopic image; and an enhanced monoscopic mode utilizing simultaneous operation of the first image sensor and the second image sensor to generate the enhanced monoscopic image.
16. The image processing method of
18. The non-transitory computer readable medium of
switch operation of the image capture device between a stereoscopic mode utilizing simultaneous operation of the first image sensor and the second image sensor to generate a stereoscopic image; a monoscopic mode utilizing singular operation of the first image sensor to generate a monoscopic image; and an enhanced monoscopic mode utilizing simultaneous operation of the first image sensor and the second image sensor to generate the enhanced monoscopic image.
19. The non-transitory computer readable medium of
20. The non-transitory computer readable medium of
21. The non-transitory computer readable medium of
This application claims priority to copending U.S. provisional application entitled, “Image Capture Device Systems and Methods,” having Ser. No. 61/509,747, filed Jul. 20, 2011, which is entirely incorporated herein by reference.
This application is related to copending U.S. utility patent application entitled “Multiple Image Processing” filed Sep. 19, 2011 and accorded Ser. No. 13/235,975, which is entirely incorporated herein by reference.
Some types of image processing, such as high dynamic range (HDR) image processing, involve combining a single camera's sequential still image captures (e.g., each with a differing exposure) into one still image with a higher dynamic range (i.e., an image with a larger range of luminance variation between light and dark image areas). This approach is often called exposure bracketing and can be found in conventional cameras.
Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
This disclosure pertains to a device, method, computer usable medium, and processor programmed to automatically utilize simultaneous image captures in an image processing pipeline in a digital camera or digital video camera. One of ordinary skill in the art would recognize that the techniques disclosed may also be applied to other contexts and applications as well.
For cameras in embedded devices, e.g., digital cameras, digital video cameras, mobile phones, personal data assistants (PDAs), tablets, portable music players, and desktop or laptop computers, techniques such as those disclosed herein can produce more visually pleasing images and improve image quality without incurring significant computational overhead or power costs.
To acquire image data, a digital imaging device may include an image sensor that provides a number of light-detecting elements (e.g., photodetectors) configured to convert light detected by the image sensor into an electrical signal. An image sensor may also include a color filter array that filters light captured by the image sensor to capture color information. The image data captured by the image sensor may then be processed by image processing pipeline circuitry, which may apply a number of various image processing operations to the image data to generate a full color image that may be displayed for viewing on a display device, such as a monitor.
Conventional image processes, such as conventional high dynamic range (HDR) image processing, require multiple images to be captured sequentially and then combined to yield an HDR image with enhanced image characteristics. In conventional HDR image processing, multiple images are captured sequentially by a single image sensor at different exposures and are combined to produce a single image with a higher dynamic range than is possible with capture of a single image. For example, capture of an outdoor nighttime shot with a neon sign might result in either over-exposure of the neon sign or under-exposure of the other portions of the scene. However, capturing both an over-exposed image and an under-exposed image and combining the two can yield an HDR image with adequate exposure for both the sign and the scene. This approach is often called exposure bracketing, but a requirement is that the sequentially captured images must be substantially similar to prevent substantial introduction of blurring or ghosting.
Embodiments of the present disclosure provide enhanced image processing by utilizing multiple images that are captured simultaneously. Referring to
One prospective use of an imaging device 150 with multiple cameras or image sensors is to increase the number of dimensions represented in a displayed image. An example of this type of functionality is a stereoscopic camera, which typically has two cameras (e.g., two image sensors). Embodiments of the present disclosure, however, may have more than two cameras or image sensors. Further, embodiments of an imaging device 150 may have modes of operation such that one mode allows the imaging device 150 to capture a 2-dimensional (2D) image; a second mode allows the imaging device to capture a multi-dimensional image (e.g., a 3D image); and a third mode allows the imaging device to simultaneously capture multiple images and use them to produce one or more 2D enhanced images to which an image processing effect has been applied. Accordingly, some embodiments of the present disclosure encompass a configurable and adaptable multi-imager camera architecture which operates in a stereoscopic (3D) mode, a monoscopic (single-imager 2D) mode, or a combinational monoscopic (multiple-imager 2D) mode. In one embodiment, mode configuration involves user selection, while adaptation can be automatic or prompted mode operation. For example, the monoscopic mode may be used where a single imager suffices, with a switch to combinational monoscopic operation when the need is detected by control logic 105.
In some embodiments, the image processing circuitry 100 may include various subcomponents and/or discrete units of logic that collectively form an image processing “pipeline” for performing each of various image processing steps. These subcomponents may be implemented using hardware (e.g., digital signal processors or ASICs (application-specific integrated circuits)) or software, or via a combination of hardware and software components. The various image processing operations may be provided by the image processing circuitry 100.
The image processing circuitry 100 may include front-end processing logic 103, pipeline processing logic 104, and control logic 105, among others. The image sensor(s) 101 may include a color filter array (e.g., a Bayer filter) and may thus provide both light intensity and wavelength information captured by each imaging pixel of the image sensors 101 to provide for a set of raw image data that may be processed by the front-end processing logic 103.
The front-end processing logic 103 may receive pixel data from memory 108. For instance, the raw pixel data may be sent to memory 108 from the image sensor 101. The raw pixel data residing in the memory 108 may then be provided to the front-end processing logic 103 for processing.
Upon receiving the raw image data (from image sensor 101 or from memory 108), the front-end processing logic 103 may perform one or more image processing operations. The processed image data may then be provided to the pipeline processing logic 104 for additional processing prior to being displayed (e.g., on display device 106), or may be sent to the memory 108. The pipeline processing logic 104 receives the “front-end” processed data, either directly from the front-end processing logic 103 or from memory 108, and may provide for additional processing of the image data in the raw domain, as well as in the RGB and YCbCr color spaces, as the case may be. Image data processed by the pipeline processing logic 104 may then be output to the display 106 (or viewfinder) for viewing by a user and/or may be further processed by a graphics engine. Additionally, output from the pipeline processing logic 104 may be sent to memory 108 and the display 106 may read the image data from memory 108. Further, in some implementations, the pipeline processing logic 104 may also include an encoder 107, such as a compression engine, etc., for encoding the image data prior to being read by the display 106.
The encoder 107 may be a JPEG (Joint Photographic Experts Group) compression engine for encoding still images, or an H.264 compression engine for encoding video images, or some combination thereof. Also, it should be noted that the pipeline processing logic 104 may also receive raw image data from the memory 108.
The control logic 105 may include a processor 620 (
Referring now to
In one embodiment, the first process element 201 of an image signal processing pipeline could perform a particular image process such as noise reduction, defective pixel detection/correction, lens shading correction, lens distortion correction, demosaicing, image sharpening, color uniformity, RGB (red, green, blue) contrast, saturation boost process, etc. As discussed above, the pipeline may include a second process element 202. In one embodiment, the second process element 202 could perform a particular and different image process such as noise reduction, defective pixel detection/correction, lens shading correction, demosaicing, image sharpening, color uniformity, RGB contrast, saturation boost process etc. The image data may then be sent to additional element(s) of the pipeline as the case may be, saved to memory 108, and/or input for display 106.
In one embodiment, an image process performed by a process element 201, 202 in the image signal processing pipeline is an enhanced high dynamic range process. A mode of operation for the enhanced high dynamic range process causes simultaneous images to be captured by image sensors 101. By capturing multiple images simultaneously, the object being photographed is captured at the same moment in each image. Under the mode of operation for the enhanced high dynamic range process, the multiple images are captured at different exposure levels (e.g., different gain settings) or with some other differing characteristic and are then combined to produce an image having an enhanced range for the particular characteristic. For example, an enhanced image may be produced with one portion having a low exposure, another portion having a medium exposure, and another portion having a high exposure, depending on the number of images that have been simultaneously captured. In a different scenario, simultaneous images may be captured at different focus levels.
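One way such an exposure combination could work is sketched below in Python. This is an illustrative toy (the function name `hdr_merge`, the 1-D pixel lists, and the mid-gray weighting are all assumptions for demonstration, not the claimed implementation): each pixel contributes in proportion to how well-exposed it is, so clipped or crushed samples from one capture are filled in by the other simultaneous capture.

```python
def hdr_merge(img_lo, img_hi, mid=128.0):
    """Fuse two simultaneously captured exposures of the same scene by
    weighting each pixel by how well-exposed it is (closer to mid-gray)."""
    merged = []
    for a, b in zip(img_lo, img_hi):
        wa = 1.0 - abs(a - mid) / mid  # near 1 when well exposed
        wb = 1.0 - abs(b - mid) / mid  # near 0 when crushed or clipped
        total = wa + wb
        merged.append((wa * a + wb * b) / total if total > 1e-9 else (a + b) / 2.0)
    return merged
```

Because the captures are simultaneous, no registration for subject motion is needed before merging, which is the advantage over sequential exposure bracketing.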
In another embodiment, a different image process performed by a process element 201, 202 in the image signal processing pipeline is an enhanced autofocusing process that can be utilized in many contexts, including enhanced continuous autofocusing. A mode of operation for the enhanced autofocusing process causes simultaneous images to be captured by image sensors 101. One of the image sensors 101 (in an assistive role) may be caused to focus on an object and then scan an entire focusing range to find an optimum focus. The optimum focus is then used by a primary image sensor to capture an image of the object. In one scenario, the primary image sensor 101 may be capturing video of the object or a scene involving the object. Accordingly, the optimum focus determined by the second, or assistive, image sensor 101 may change as the scene changes, and therefore the focus used by the primary image sensor 101 may be adjusted as the video is captured.
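A minimal sketch of such an assistive focus sweep follows. This is a hedged illustration: the contrast metric, the `simulated_capture` stand-in for the assistive imager, and the discrete focus positions are hypothetical, but they show how a second sensor can sweep the range while the primary keeps shooting.

```python
def contrast_score(pixels):
    # Focus metric: sum of absolute neighbor differences (higher = sharper).
    return sum(abs(pixels[i + 1] - pixels[i]) for i in range(len(pixels) - 1))

def scan_for_focus(capture_at, positions):
    """Sweep the assistive sensor through the focus range and return the
    position whose capture scores sharpest; the primary sensor then uses it."""
    return max(positions, key=lambda p: contrast_score(capture_at(p)))

def simulated_capture(pos):
    # Hypothetical stand-in for the assistive imager: detail peaks at pos 3.
    detail = 10 - 3 * abs(pos - 3)
    return [0, detail, 0, detail]
```

For continuous autofocus, `scan_for_focus` would simply be rerun as the scene changes, updating the primary sensor's focus without interrupting its video capture.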
In an additional embodiment, an image process performed by a process element in the image signal processing pipeline is an enhanced depth of field process. A mode of operation for the enhanced process causes simultaneous images to be captured by image sensors 101. Focusing of the image sensors 101 may be independently controlled by control logic 105. Accordingly, one image sensor may be focused or zoomed closely on an object in a scene, and a second image sensor may be focused at a different level on a different aspect of the scene. Image processing in the image signal processing pipeline may then take the captured images and combine them to produce an enhanced image with a greater depth of field. Accordingly, multiple images may be combined to effectively extend the depth of field. Also, some embodiments may utilize images from more than two imagers or image sensors 101.
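The depth-of-field combination described above can be sketched as a per-pixel selection: keep whichever capture is locally sharper at each position. The sketch below is illustrative only (1-D pixel rows and a simple gradient sharpness measure are assumptions; a real pipeline would use registered 2-D images and a robust focus measure).

```python
def local_sharpness(img, i):
    # Gradient magnitude around pixel i as a crude in-focus measure.
    lo, hi = max(i - 1, 0), min(i + 1, len(img) - 1)
    return abs(img[hi] - img[lo])

def extend_dof(near_img, far_img):
    """Per pixel, keep the sample from whichever simultaneous capture is
    locally sharper (more in focus) at that position."""
    return [n if local_sharpness(near_img, i) >= local_sharpness(far_img, i) else f
            for i, (n, f) in enumerate(zip(near_img, far_img))]
```

With more than two imagers, the same selection generalizes to taking the sharpest of N samples at each position.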
In various embodiments, multiple image sensors 101 need not be focused on the same object in a scene. For example, an order may be applied to the image sensors 101 or imagers, where a primary imager captures a scene and a secondary imager captures the scene at a different angle, a different exposure, a different gain, etc., and the second image is used to correct or enhance the primary image. Exemplary operations include, but are not limited to, HDR capture and enhanced denoise operations that use one frame to help denoise the other. To illustrate, in one implementation, a scene captured in two simultaneous images may be enhanced by averaging the pixel values of both images, which improves the signal-to-noise ratio for the captured scene. Also, by having multiple images captured simultaneously at different angles, a lens shading curve may be calculated (using the location difference of the same object(s) between the captures of the two (or more) image sensors) and used to correct affected pixels.
Accordingly, in an additional embodiment, an image process performed by a process element 201, 202 in the image signal processing pipeline is a corrective process. A mode of operation for the enhanced process causes simultaneous images to be captured by image sensors 101. The lenses of the respective imagers 101 may have different angles of view. Therefore, in the image process, images captured at the different angles of view may be compared to determine a difference between the two images. For example, defective hardware or equipment may cause a defect to be visible in a captured image. Due to the different angles of view, the defect will not be in the same position in the images captured by the multiple image sensors 101. There will be a small difference, and the image signal processing pipeline is able to differentiate the defect from the real image content and apply some form of correction.
In an additional embodiment, an image process performed by a process element 201, 202 in the image signal processing pipeline is an enhanced image resolution process. A mode of operation for the enhanced process causes simultaneous images to be captured by image sensors 101 at a particular resolution (e.g., 10 megapixels). Image processing in the image signal processing pipeline may then take the captured images and combine them to produce an enhanced image with an increased or super resolution (e.g., 20 megapixels). Further, in some embodiments, one of the captured images may be used to improve another captured image and vice versa. Accordingly, multiple enhanced monoscopic images may be produced from the simultaneous capture of images.
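As a greatly simplified illustration of how two same-resolution captures can yield a higher-resolution output, the toy sketch below interleaves samples from two hypothetically half-pixel-offset captures. This is an assumption-laden demonstration, not the disclosed algorithm; a practical super-resolution pipeline would register the images sub-pixel accurately and deconvolve.

```python
def super_resolve(img_a, img_b):
    """Interleave samples from two simultaneously captured, half-sample
    offset images, doubling the sample count (toy 1-D sketch)."""
    out = []
    for a, b in zip(img_a, img_b):
        out.extend([a, b])
    return out
```

The point of the sketch is only that two 10-megapixel simultaneous captures carry more scene samples than one, which is what the combination exploits.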
In an additional embodiment, an image process performed by a process element in the image signal processing pipeline is an enhanced low light process. A mode of operation for the enhanced process causes simultaneous video streams of images to be captured by image sensors 101 during low lighting conditions.
Consider that camera image quality often suffers during low light conditions. Ambient lighting is often low and not adequate for image sensor arrays designed for adequate lighting conditions. Thus, such sensor arrays receive insufficient photons to capture images with good exposure, leading to dark images. Attempting to correct this via analog or digital gain may help somewhat but also tends to over-amplify the underlying noise (which is more dominant in low lighting conditions). One possible solution is to extend the exposure time, but this may not be feasible, as hand shaking may introduce blurring. Another conventional solution is to add a larger aperture lens and an external flash. The former is a very expensive and size-consuming proposition, while the latter may not be allowed (such as in museums) or may not be effective (such as for distance shots). Flash systems are also costly and consume a lot of power.
Select embodiments of the present disclosure utilize a combination of different image sensors 101 (e.g., infrared, RGB, panchromatic, etc.). For example, one image sensor may advantageously compensate for image information not provided by the other image sensor and vice versa. Accordingly, the image sensors may capture images simultaneously where a majority of image information is obtained from a primary image sensor and additional image information is provided from additional image sensor(s), as needed.
In one embodiment, low light image sensors 101 or panchromatic image sensors 101 are used in concert with a standard RGB (Bayer pattern) image sensor array. Panchromatic sensors receive up to three times the photons of a single RGB sensor due to having a smaller imager die size, but rely on their RGB neighbors for color identification. Such a sensor array design is outperformed by an ordinary RGB sensor at higher lighting levels due to the larger imager die size. One embodiment of an imaging device 150 utilizes an RGB type CMOS or CCD sensor array for high lighting situations and a second, low light type of sensor designed for low lighting conditions (e.g., fully panchromatic black-and-white luma only, or interspersed panchromatic). The imaging device 150 then automatically switches between the two sensors to best capture images under the current lighting conditions. Further, in one embodiment, simultaneous images may be captured during low lighting. In particular, by capturing multiple images using a panchromatic imager 101 and a normal lighting imager 101, the captured images can be correlated and combined to produce a more vivid low light image.
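One plausible way to combine the two captures is sketched below: take brightness (luma) from the more light-sensitive panchromatic capture and color ratios from the RGB capture. The function name and the simple rescaling are illustrative assumptions, not the disclosed correlation method.

```python
def fuse_low_light(pan_luma, rgb_pixels):
    """Fuse a panchromatic (luma-only) capture with an RGB capture of the
    same scene: brightness from panchromatic, color ratios from RGB."""
    fused = []
    for y, (r, g, b) in zip(pan_luma, rgb_pixels):
        s = r + g + b
        if s == 0:
            fused.append((y, y, y))  # no chroma information: neutral gray
        else:
            # Rescale RGB so its color ratios are kept but the average
            # brightness matches the panchromatic sample.
            k = 3.0 * y / s
            fused.append((r * k, g * k, b * k))
    return fused
```

In low light the RGB values are noisy but their ratios still carry usable hue, while the panchromatic sample carries a much cleaner luminance, so the fused pixel is brighter and less noisy than the RGB capture alone.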
As an example, a panchromatic image sensor 101 may be used to capture a video stream at a higher frame rate under low lighting conditions, while the chroma data is sampled at only half that rate. This is a temporal-compression counterpart to the spatial approach of treating chroma with a lesser resolution than luma. Output of the process element 201, 202 may be a single frame sequence or may actually comprise two separate streams for post-processing access.
In another scenario, motion blur can be reduced using the panchromatic imager 101 and a normal lighting imager 101. Motion blur occurs when an object is moving in front of the imaging device 150; in a low light condition, for example, an exposure chosen for the low light may capture motion of the object being shot or shaking of the imaging device 150 itself. Accordingly, the panchromatic imager is used to capture an image at a shorter exposure than that of a second image captured by the normal lighting imager. The captured images can be correlated and combined to produce an image with the motion blur corrected.
Embodiments of the imaging device 150 are not limited to having two image sensors and can be applied to a wide number of image sensors 101. For example, a tablet device could possibly have two imagers in the front and two imagers in the back of the device, where images (including video) from each of the imagers are simultaneously captured and combined into a resulting image.
Referring next to
Referring to
In some embodiments, the images generated by the first and second paths may be stored in memory 108 and made available for subsequent use by other procedures and elements that follow. Accordingly, in one embodiment, while a main image is being processed in a main path of the pipeline, a downsized or scaled version of that image or of a previous image may be read by the main path. This may enable more powerful processing in the pipeline, such as during noise filtering.
Also, in some embodiments, similar pixels in the multiple images may be processed once, and only the disparate pixels processed separately. It is noted that images captured simultaneously by two image sensors in close proximity to one another will be quite similar. Therefore, pixels of a first captured image may be processed in a main path of the pipeline. Additionally, similar pixels in a second captured image may be identified with a similarity mask, where the similar pixels are also contained in the first captured image (and are already being processed). After removal of the similar pixels from the second captured image, the remaining pixels may be processed in a secondary path of the pipeline. By removing redundant processing, significant power savings in the image signal processing pipeline may be realized.
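The masking scheme above can be sketched as follows. This is a minimal illustration (a 1-D pixel model, a fixed similarity tolerance, and a placeholder `process` function are all assumptions): the second image reuses the first path's results wherever the mask marks its pixels as similar, so only the disparate pixels are processed twice.

```python
def similarity_mask(img1, img2, tol=2):
    # True where the second image's pixel matches the first within tol.
    return [abs(a - b) <= tol for a, b in zip(img1, img2)]

def split_processing(img1, img2, process):
    """Process img1 fully in the main path; in the secondary path, process
    only the pixels of img2 the mask marks as dissimilar, reusing img1's
    results for the similar (redundant) ones."""
    mask = similarity_mask(img1, img2)
    out1 = [process(p) for p in img1]
    out2 = [out1[i] if mask[i] else process(img2[i]) for i in range(len(img2))]
    return out1, out2
```

The power savings follow directly: the expensive `process` step runs once per similar pixel instead of twice.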
Further, in some embodiments, the images generated by the first and second paths may be simultaneously displayed. For example, one display portion of a display 106 can be used to show a video (e.g., output from the first path) and a second display portion of the display 106 can be used to show a still image or "snap-shot" from the video (e.g., output from the second path) that is responsive to a pause button on an interface of the imaging device 150. Alternatively, one image frame may be shown in a left section of a split-screen display and another image frame may be shown in the right section. The imaging device may be configured to allow a user to select a combination of frames (e.g., the frames being displayed in the split screen), which are then compared and combined by processing logic 103, 104, 105 to generate an enhanced image having improved image quality and resolution.
As previously mentioned, embodiments of the imaging device 150 may employ modes of operation that are selectable from interface elements of the device. Interface elements may include graphical interface elements selectable from a display 106 or mechanical buttons or switches selectable or switchable from a housing of the imaging device 150. In one embodiment, a user may activate a stereoscopic mode of operation, in which processing logic 103, 104, 105 of the imaging device 150 produces a 3D image, using captured images, that is viewable on the display 106 or capable of being saved in memory 108. The user may also activate a 2D mode of operation, where a single image is captured and displayed or saved in memory 108. Further, the user may activate an enhanced 2D mode of operation, where multiple images are captured and used to produce a 2D image with enhanced characteristics (e.g., improved depth of field, enhanced focus, HDR, super-resolution, etc.) that may be viewed or saved in memory 108.
In processing an image, binning allows charges from adjacent pixels to be combined which can provide improved signal-to-noise ratios albeit at the expense of reduced spatial resolution. In various embodiments, different binning levels can be used in each of the multiple image sensors. Therefore, better resolution may be obtained from the image sensor having the lower binning level and better signal-to-noise ratio may be obtained from the image sensor having the higher binning level. The two versions of a captured scene or image may then be combined to produce an enhanced version of the image.
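A minimal sketch of the binning trade-off follows (illustrative 1-D model; real sensors bin 2x2 or larger neighborhoods on-chip): summing the charge of adjacent pixels raises the signal per output sample while dividing the spatial resolution by the binning factor.

```python
def bin_pixels(img, factor):
    """Combine the charge of each group of `factor` adjacent pixels:
    higher collected signal (better SNR), 1/factor the spatial resolution."""
    return [sum(img[i:i + factor]) for i in range(0, len(img), factor)]
```

Running one sensor unbinned and the other at a higher binning level thus yields a detailed-but-noisier version and a cleaner-but-coarser version of the same scene, which the pipeline can combine.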
In particular, in one embodiment, multiple image sensors 101 capture multiple images, each with a different exposure level. A process element 201, 202 of the image signal processing pipeline correlates and performs high dynamic range processing on different combinations of the captured images. The resulting images from the different combinations may be displayed to a user and offered for selection of the desired final image, which may be saved and/or displayed. In some embodiments, a graphical interface slide-bar (or other user interface control element) may also be presented that allows gradual or stepwise shifting among differing weighting combinations of the images having different exposures. For video, such a setting may be maintained across all frames.
Multiplexing of the image signal processing pipeline is also implemented in an embodiment utilizing multiple image sensors 101. For example, consider a stereoscopic imaging device (e.g., one embodiment of imaging device 150) that delivers a left image and a right image of an object to a single image signal processing pipeline, as represented in
Therefore, instead of processing one of the images in its entirety after the other has been processed in its entirety, the front-end processing logic 103 can process the images concurrently by switching processing between them as processing time allows. This reduces latency by not delaying processing of one image until completion of the other, and processing of the two images finishes more quickly.
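The multiplexed scheduling described above can be sketched as interleaving the pipeline work items of the left and right images. The sketch is illustrative (work items are modeled as opaque labels; real scheduling would be driven by pipeline-stage availability):

```python
def interleave(left_tasks, right_tasks):
    """Alternate pipeline time slices between the left and right images
    instead of finishing one image before starting the other."""
    order, i = [], 0
    while i < len(left_tasks) or i < len(right_tasks):
        if i < len(left_tasks):
            order.append(left_tasks[i])   # slice of the left image
        if i < len(right_tasks):
            order.append(right_tasks[i])  # slice of the right image
        i += 1
    return order
```

With this ordering, the first results for both images become available early, rather than the second image waiting for the first to complete end to end.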
Keeping the above points in mind,
Regardless of its form (e.g., portable or non-portable), it should be understood that the electronic device 650 may provide for the processing of image data using one or more of the image processing techniques briefly discussed above, among others. In some embodiments, the electronic device 650 may apply such image processing techniques to image data stored in a memory of the electronic device 650. In further embodiments, the electronic device 650 may include multiple imaging devices, such as an integrated or external digital camera or imager 101, configured to acquire image data, which may then be processed by the electronic device 650 using one or more of the above-mentioned image processing techniques.
As shown in
Before continuing, it should be understood that the system block diagram of the device 605 shown in
Referring next to
Beginning in step 702, control logic 105 triggers or initiates simultaneous capture of multiple images from image sensors 101, where the multiple images include at least a first image and a second image. The first image contains an imaging characteristic or setting that is different from an imaging characteristic of the second image. Possible imaging characteristics include exposure levels, focus levels, depth of field settings, angle of views, etc. In step 704, processing logic 103, 104 combines at least the first and second images or portions of the first and second images to produce an enhanced image having qualities of the first and second images. The enhanced image, as an example, may contain portions having depths of field from the first and second images, exposure levels from the first and second images, combined resolutions of the first and second images, etc. The enhanced image is output from an image signal processing pipeline of the processing logic and is provided for display, in step 706.
Next, referring to
In
Correspondingly, in step 904, control logic 105 activates a 2D or monoscopic mode of operation for the imaging device 150, where a single image is captured and displayed or saved in memory 108. In one embodiment, a user may generate a command for the control logic 105 to activate the 2D mode of operation. In an alternative embodiment, the control logic 105 may be configured to automatically activate the 2D mode of operation without user prompting.
Further, in step 906, control logic 105 activates an enhanced 2D or monoscopic mode of operation for the imaging device 150, where multiple images are captured and used to produce a 2D image with enhanced characteristics (e.g., improved depth of field, enhanced focus, HDR, super-resolution, etc.) that may be viewed or saved in memory 108. Additionally, in various embodiments, one of the outputs of the image processing may not be an enhanced image and may be image information, such as depth of field information, for the enhanced image. In one embodiment, a user may generate a command for the control logic 105 to activate the enhanced 2D mode of operation. In an alternative embodiment, the control logic 105 may be configured to automatically activate the enhanced 2D mode of operation without user prompting.
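The claimed mask-based comparison, in which pixels of the second image that are similar to pixels of the first are identified and removed so that only the remaining pixels continue down the second processing path, can be sketched as follows. The absolute-difference similarity test and its threshold are assumptions for illustration.

```python
def build_similarity_mask(first_image, second_image, threshold=8):
    """Mark each pixel of the second image that is similar to the
    co-located pixel of the first image."""
    return [abs(a - b) <= threshold
            for a, b in zip(first_image, second_image)]

def remaining_pixels(second_image, mask):
    """Remove the masked (similar) pixels; keep (index, value) pairs so the
    second processing path knows where each surviving pixel belongs."""
    return [(i, px) for i, (px, similar) in enumerate(zip(second_image, mask))
            if not similar]

first = [100, 100, 50, 200]
second = [102, 160, 50, 90]
mask = build_similarity_mask(first, second)
survivors = remaining_pixels(second, mask)
```

Only the dissimilar pixels of the second image remain to be processed in the second path, reducing redundant work relative to processing both full images.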
Any process descriptions or blocks in flow charts should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process, and alternate implementations are included within the scope of embodiments of the present disclosure in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
In the context of this document, a “computer readable medium” can be any means that can contain, store, communicate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. More specific examples (a nonexhaustive list) of the computer readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). In addition, the scope of certain embodiments includes embodying the functionality of the embodiments in logic embodied in hardware or software-configured mediums.
It should be emphasized that the above-described embodiments are merely possible examples of implementations, merely set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
Fridental, Ron; Plowman, David; Brisedoux, Laurent; Sewell, Benjamin; Patuck, Naushir; Harding, Cressida