A method and device for capturing a mixed structured-light image and regular image using an integrated image sensor are disclosed, where the structured-light image is captured using a shorter frame period than the regular image. In order to achieve a shorter frame period for the structured-light image, the structured-light image may correspond to an image captured with reduced dynamic range, reduced spatial resolution, or a combination of them. The capturing process comprises applying reset signals to a pixel array to reset rows of pixels of the pixel array, reading-out analog signals from the rows of pixels of the pixel array and converting the analog signals from the rows of pixels of the pixel array into digital outputs for the image using one or more analog-to-digital converters.
|
1. A method of capturing images of a scene using a camera comprising an image sensor, the method comprising:
projecting, by a structured light source, a first structured light to a scene in a field of view of the image sensor;
capturing, by the image sensor, a first structured-light image formed on a common image plane during a first frame period by applying first reset signals to the image sensor to reset rows of pixels of the image sensor, exposing the rows of pixels of the image sensor to structured light to cause first analog signals from the rows of pixels and converting the first analog signals from the rows of pixels of the image sensor into first digital outputs to form the first structured-light image using one or more analog-to-digital converters;
capturing, by the image sensor, a regular image formed on a same image plane as the common image plane using the image sensor during a second frame period by applying second reset signals to the image sensor to reset the rows of pixels of the image sensor, exposing the rows of pixels to non-structured light to cause second analog signals from the rows of pixels and converting the second analog signals from the rows of pixels into second digital outputs to form the regular image using said one or more analog-to-digital converters; and
wherein the first frame period is shorter than the second frame period and wherein the first structured-light image is captured before or after the regular image to derive depth or shape information for the regular image.
31. A method of capturing images of a scene using a camera comprising an image sensor, the method comprising:
projecting, by a structured light source, a structured light to a scene in a field of view of the image sensor;
capturing, by the image sensor, a structured-light image using the image sensor during a first frame period by applying first reset signals to the image sensor to reset rows of pixels of the image sensor, exposing the rows of pixels of the image sensor to structured light to cause first analog signals from the rows of pixels and converting the first analog signals from the rows of pixels of the image sensor into first digital outputs to form the structured-light image using one or more analog-to-digital converters;
capturing, by the image sensor, a first regular image using the image sensor during a second frame period by applying second reset signals to the image sensor to reset the rows of pixels of the image sensor, exposing the rows of pixels to non-structured light to cause second analog signals from the rows of pixels and converting the second analog signals from the rows of pixels into second digital outputs to form the first regular image using said one or more analog-to-digital converters; and
capturing, by the image sensor, a second regular image during a third frame period by applying third reset signals to the image sensor to reset the rows of pixels of the image sensor, exposing the rows of pixels to the non-structured light to cause third analog signals from the rows of pixels and converting the third analog signals from the rows of pixels into third digital outputs to form the second regular image using said one or more analog-to-digital converters; and
combining the first regular image and the second regular image to form a combined regular image; and
wherein the structured-light image is captured between the first regular image and the second regular image, and the first frame period is shorter than a sum of the second frame period and the third frame period to derive depth or shape information for the combined regular image.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
9. The method of
10. The method of
11. The method of
12. The method of
13. The method of
14. The method of
15. The method of
16. The method of
18. The method of
19. The method of
20. The method of
21. The method of
22. The method of
23. The method of
24. The method of
25. The method of
26. The method of
27. The method of
28. The method of
29. The method of
capturing, by the image sensor, a second structured-light image formed on the same image plane as the common image plane during a third frame period by applying third reset signals to the image sensor to reset the rows of pixels of the image sensor, exposing the rows of pixels to a second structured light to cause third analog signals from the rows of pixels and converting the third analog signals from the rows of pixels into third digital outputs for the second structured-light image using said one or more analog-to-digital converters, wherein the third frame period is shorter than the second frame period; and
wherein the regular image is captured between, before or after the first structured-light image and the second structured-light image, and both the first structured-light image and the second structured-light image are used to derive depth or shape information for the regular image.
30. The method of
32. The method of
33. The method of
34. The method of
35. The method of
36. The method of
37. The method of
38. The method of
|
The present invention relates to a single image sensor capable of capturing structured-light images and regular images, where the structured-light images are used to derive depth or shape information related to the corresponding regular images.
Devices for imaging body cavities or passages in vivo are known in the art and include endoscopes and autonomous encapsulated cameras. Endoscopes are flexible or rigid tubes that pass into the body through an orifice or surgical opening, typically into the esophagus via the mouth or into the colon via the rectum. An image is formed at the distal end using a lens and transmitted to the proximal end, outside the body, either by a lens-relay system or by a coherent fiber-optic bundle. A conceptually similar instrument might record an image electronically at the distal end, for example using a CCD or CMOS array, and transfer the image data as an electrical signal to the proximal end through a cable. Endoscopes allow a physician control over the field of view and are well-accepted diagnostic tools.
The capsule endoscope is an alternative in vivo endoscope developed in recent years. In a capsule endoscope, a camera is housed in a swallowable capsule, along with a radio transmitter for transmitting data, primarily images recorded by the digital camera, to a base-station receiver or transceiver and data recorder outside the body. The capsule may also include a radio receiver for receiving instructions or other data from a base-station transmitter. Instead of radio-frequency transmission, lower-frequency electromagnetic signals may be used. Power may be supplied inductively from an external inductor to an internal inductor within the capsule or from a battery within the capsule.
An autonomous capsule camera system with on-board data storage was disclosed in U.S. Pat. No. 7,983,458, entitled “In Vivo Autonomous Camera with On-Board Data Storage or Digital Wireless Transmission in Regulatory Approved Band,” granted on Jul. 19, 2011. The capsule camera with on-board storage archives the captured images in on-board non-volatile memory. The capsule camera is retrieved upon exiting the human body. The images stored in the non-volatile memory of the retrieved capsule camera are then accessed through an output port on the capsule camera.
While the two-dimensional images captured by endoscopes have been shown useful for diagnosis, it is desirable to capture gastrointestinal (GI) tract images with depth information (i.e., three-dimensional (3D) images) to improve the accuracy of diagnosis or to ease the diagnosis process. In the field of 3D imaging, 3D images may be captured using a regular camera for the texture information in the scene and a separate depth camera (e.g., a Time-of-Flight camera) for the depth information of the scene in the field of view. 3D images may also be captured using multiple cameras, often arranged in a planar configuration to capture a scene from different view angles; point correspondence is then established among the multiple views for 3D triangulation. Nevertheless, such multi-camera systems may not be easily adapted to the GI tract environment, where space is very limited. Over the past twenty years, structured light technology has been developed to derive the depth or shape of objects in a scene using a single camera. In a structured light system, a light source, often a projector, is used to project known geometric pattern(s) onto objects in the scene. A regular camera can be used to capture images with and without the projected patterns. The images captured with the structured light can be used to derive the shapes of the objects in the scene. The depth or shape information is then used with regular images, which are captured under non-structured floodlit light, to create a 3D textured model of the objects. Structured light technology is well known in the field. For example, in “Structured-light 3D surface imaging: a tutorial” (Geng, Advances in Optics and Photonics, Vol. 3, Issue 2, pp. 128-160, Mar. 31, 2011), structured light techniques using various patterns are described and their performance is compared.
In another example, various design, calibration and implementation issues are described in “3-D Computer Vision Using Structured Light: Design, Calibration and Implementation Issues” (DePiero et al., Advances in Computers, Volume 43, Jan. 1, 1996, pages 243-278). Accordingly, the details of the structured light technology are not repeated here.
While structured light technology may be more suitable for 3D imaging of the GI tract than other technologies, there are still issues for the intended GI tract application. For example, most structured light applications are intended for stationary objects, so there is no object movement between the captured structured-light image and the regular image. In the capsule camera application for GI tract imaging, however, both the capsule camera and the GI organs (e.g., the small intestines and colon) may be moving, so there will be relative movement between the structured-light image and the regular image if they are captured consecutively. Furthermore, the capsule camera operates in a very power-sensitive environment, and the use of structured light consumes system power in addition to that needed for capturing the regular images. Besides, if one image with structured light is taken after each regular image, the useful frame rate drops to half. If the same frame rate of regular images is to be maintained, the system would have to capture images at twice the regular frame rate and consume twice the power in image capture. Accordingly, it is desirable to develop structured light technology for the GI tract that can overcome the issues mentioned here.
A method and device for capturing a mixed structured-light image and regular image using an integrated image sensor are disclosed, where the structured-light image is captured using a shorter frame period than the regular image. In order to achieve a shorter frame period for the structured-light image, the structured-light image may correspond to an image captured with reduced dynamic range, reduced spatial resolution, or a combination of the two. The capturing process comprises applying reset signals to a pixel array to reset rows of pixels of the pixel array, reading out analog signals from the rows of pixels of the pixel array and converting the analog signals from the rows of pixels of the pixel array into digital outputs for the image using one or more analog-to-digital converters.
The reduced dynamic range may correspond to reduced resolution of the analog-to-digital converters. The reduced dynamic range may also correspond to a reduced ramping period for a ramp reference voltage, where the ramp reference voltage is used by said one or more analog-to-digital converters (ADCs) to compare with an input analog voltage. When successive approximation ADCs are used, the reduced dynamic range may correspond to a reduced number of successive approximations for refining a reference voltage supplied to the successive-approximation ADCs to compare with an input analog voltage. The reduced dynamic range may also correspond to a reduced integration time for the image sensor to accumulate electronic charges. In this case, the analog gain of the first analog signals from the rows of pixels of the image sensor can be increased for the structured-light image.
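For the ramp-ADC case, the conversion time scales with the number of ramp steps, i.e., 2^N for an N-bit conversion, so reducing the resolution shortens the ramping period geometrically. The following is a minimal illustrative sketch of this trade-off, not circuitry from the disclosure; all function names and voltage values are assumptions for illustration:

```python
def ramp_adc_convert(v_in, v_ref, bits):
    """Model a single-slope ramp ADC: step a ramp through 2**bits levels
    until it crosses the input. Returns (code, steps_used); the conversion
    time is proportional to steps_used."""
    levels = 2 ** bits
    step = v_ref / levels
    for code in range(levels):
        ramp = (code + 1) * step
        if v_in < ramp:              # comparator trips: ramp crossed the input
            return code, code + 1
    return levels - 1, levels        # input at or above full scale

# A 9-bit conversion may need up to 512 ramp steps; a 6-bit one at most 64,
# so the worst-case ramping period shrinks by a factor of 8.
_, steps_9 = ramp_adc_convert(0.99, 1.0, 9)
_, steps_6 = ramp_adc_convert(0.99, 1.0, 6)
```

This ignores settling time, but it captures why a lower-resolution conversion for the structured-light image finishes sooner than the full-resolution conversion for the regular image.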
The method may further comprise projecting the structured light with a first intensity onto the scene during the integration time of the structured-light image frame period and projecting non-structured light with a second intensity onto the scene during an integration time of the regular image frame period. The structured light can be generated using multiple light sources with at least two different colors or patterns. In one embodiment, a first light spectrum associated with one of the multiple light sources, or a combination of the multiple light sources, is substantially distinct from a second spectrum associated with images of an anticipated scene under ambient light or illuminated by the non-structured light. The image sensor may correspond to a color image sensor comprising at least first pixels for a first color and second pixels for a second color arranged in a mosaic pattern, and the first spectrum is concentrated on the first color associated with the first pixels.
The non-structured light can be generated using narrowband illumination or fluoroscopic excitation. The first intensity can be substantially higher than the second intensity. In another case, the period of the first intensity can be substantially shorter than the human visual retention time.
In one embodiment, a minimum row reset time among the rows of pixels of the image sensor for the structured-light image is substantially shorter than a minimum row reset time among the rows of pixels of the image sensor for the regular image.
The method may further include a step of generating a first control signal to trigger a structured light for capturing the structured-light image. The structured light can be applied during the integration period for the structured-light image; in particular, the structured light can be applied before the first-row integration ends and after the last-row integration starts. The method may further include a step of generating a second control signal to trigger a second light for capturing the second regular image.
The method may further include a step of providing the structured-light image to derive depth or shape information for the regular image. The steps of applying the first reset signals, reading out the first analog signals and converting the first analog signals can be applied to selected rows of the pixel array only, to reduce the structured-light image period. In this case, the captured structured-light image has reduced vertical resolution compared to the regular image. The structured-light image period mentioned above is equal to the sum of the first reset time, the first integration time and the first readout time for capturing the structured-light image. The selected rows of the pixel array may correspond to one row out of every N rows of the pixel array, where N is an integer greater than 1. The time period for the structured-light image can be further reduced by applying sub-sampling to each selected row. In one embodiment, the process of capturing a tandem image including a structured-light image and a regular image can be repeated to obtain a structured-light image sequence and a regular image sequence, where the structured-light image sequence is used for deriving depth or shape information and the regular image sequence is used for viewing.
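The structured-light image period (reset time + integration time + readout time) shrinks roughly in proportion to the number of rows read when only one row out of every N is processed. A simplified timing model follows; the per-row timing figures and sensor size are assumptions chosen purely for illustration, not values from the disclosure:

```python
def frame_period_us(rows, row_reset_us, integration_us, row_readout_us,
                    subsample_n=1):
    """Rough rolling-shutter frame-period model: per-row reset, one shared
    integration window, then per-row readout, reading only one row out of
    every subsample_n rows."""
    read_rows = (rows + subsample_n - 1) // subsample_n   # ceil division
    return (read_rows * row_reset_us
            + integration_us
            + read_rows * row_readout_us)

# Hypothetical 1080-row sensor, 2 us row reset, 10 ms integration,
# 15 us row readout: reading 1 of every 4 rows roughly halves the period.
full = frame_period_us(1080, 2.0, 10_000, 15.0, subsample_n=1)
sub = frame_period_us(1080, 2.0, 10_000, 15.0, subsample_n=4)
```

Because the integration term is shared, row subsampling attacks only the reset and readout terms; combining it with a shorter integration time (as in the reduced-dynamic-range embodiments) shortens the period further.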
In another embodiment, reduced spatial resolution for the structured-light image can be performed regardless of whether lower dynamic range is applied. In other words, the structured-light image may also be captured with regular dynamic range, but with reduced spatial resolution. For example, the image sensor can be configured to output selected rows only. Alternatively, the image sensor can be configured to output sub-sampled pixels for every row. Furthermore, the image sensor can be configured to output sub-sampled pixels for selected rows only. In yet another embodiment, the first structured-light image can be captured with a reduced image area in a vertical direction, horizontal direction or both compared to the regular image.
In one embodiment, the integrated image sensor is inside a sealed capsule housing for imaging the gastrointestinal tract of a human body, and the method further comprises generating a first control signal to trigger a first light inside the sealed capsule housing for capturing the structured-light image and generating a second control signal to trigger a second light inside the sealed capsule housing for capturing the regular image.
In another embodiment, the first structured-light image may correspond to multiple structured-light images captured by generating a sequence of control signals to cause a sequence of structured lights. The structured lights may come from different light sources located in different directions with respect to the objects in the scene, from the same projector but with different patterns, or from a combination of both pattern and location.
More 3D points may be derived from multiple structured light images than a single structured light image. Accordingly, in one embodiment of the present invention, multiple structured-light images may be captured consecutively.
In another embodiment, the method generates a combined regular image by capturing a first regular image and a second regular image with a structured-light image in between. The first and second regular images are then combined to generate the combined regular image. The frame period of the structured-light image is shorter than the sum of the frame period of the first regular image and the frame period of the second regular image. The first integration time of the first regular image can be longer or shorter than the second integration time of the second regular image to cause, respectively, higher or lower weighting of the first regular image in the combined regular image. For example, if the first integration time is three times as long as the second integration time, the combined image has the effect of a weighted sum corresponding to ¾ of the first regular image and ¼ of the second regular image. The first regular image and the second regular image may also have the same integration time to cause equal weighting in the combined regular image.
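The weighting arithmetic above can be sketched as a pixel-wise sum, where a 3:1 integration ratio yields the ¾/¼ weighting. This is an illustrative sketch only; the pixel values are made up, and real sub-images would be 2D arrays:

```python
def combine_sub_images(img1, img2):
    """Combine two sub-image readouts pixel-wise; each sub-image's relative
    weight in the result follows its share of the total integration time."""
    return [a + b for a, b in zip(img1, img2)]

# Integration times in a 3:1 ratio: for a constant scene, the first
# sub-image contributes 3/4 of each combined pixel value.
t1, t2 = 3.0, 1.0
w1 = t1 / (t1 + t2)                                  # weight of sub-image 1
combined = combine_sub_images([96, 30], [32, 10])    # pixel-wise sums
```

The combination is a plain sum because each sub-image's signal is already proportional to its integration time; no explicit scaling is needed to realize the weighting.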
In yet another embodiment, dual exposure control for a single sensor to capture structured-light images and regular images is disclosed. The system captures one or more structured-light images in a field of view of an image sensor under structured light from a structured-light source by applying a first exposure control to adjust the structured-light source, a first gain or a first integration time associated with the image sensor, or a combination thereof for said one or more structured-light images. The system also captures one or more regular images in the field of view of the image sensor by applying a second exposure control to the image sensor to adjust a second gain or a second integration time associated with the image sensor, or a combination thereof for said one or more regular images. The regular images are captured in an interwoven fashion with the structured-light images. The first exposure control can be determined based on the second exposure control, or the second exposure control can be determined based on the first exposure control. The second exposure control may further include adjusting a non-structured light source for the regular images.
It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the systems and methods of the present invention, as represented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. References throughout this specification to “one embodiment,” “an embodiment,” or similar language mean that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, etc. In other instances, well-known structures, or operations are not shown or described in detail to avoid obscuring aspects of the invention. The illustrated embodiments of the invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of apparatus and methods that are consistent with the invention as claimed herein.
Endoscopes are normally inserted into the human body through a natural opening such as the mouth or anus. Therefore, endoscopes are preferably small in size so as to be minimally invasive. To derive or capture depth or shape information while capturing live images or videos of the GI tract with endoscopes, it is crucial to maintain the small form factor. Besides, with its small size and the capability to capture depth information along with the corresponding images or video, such a camera also finds use in other applications requiring a compact size, such as wearable devices.
One technique for capturing depth information is to place a color filter with a reasonably narrow passband on top of selected sensor pixels and capture the color information and depth information simultaneously. Environmental light sources with spectra in the filter passband will project a negligible amount of energy onto the sensor. In the case of RGB pixels, a fourth type of pixel may be added to capture light with a spectrum in the passband of the filter placed on top of these pixels. Then, structured light having a spectrum substantially in the passband can be projected onto the scene. However, this approach reduces the spatial resolution of the images or video captured using such an image sensor.
Another technique is to obtain the depth information as well as the 3D topology by projecting structured light patterns that are visible to the RGB sensors. However, the real-time image and/or video will be confounded by the structured light superimposed on it. This invention describes methods that use a single camera to derive depth information with the structured light approach while taking images or real-time video using the same camera.
As mentioned before, a conventional structured light approach with a single camera would incur several drawbacks. For example, consider a camera with a frame rate of 30 frames per second. A conventional approach would take live video with interleaved images corresponding to images with and without the structured light. One issue is that the depth information is 1/30 second away from the corresponding images to be viewed. If there is any movement in the scene, the depth information may not accurately represent the 3D topology of the corresponding images 1/30 second away. In addition, the effective frame rate for the video to be viewed drops to 15 frames per second in this example.
In some video applications, the frame rate is crucial for the intended application. For example, a high frame-rate camera with a frame rate of hundreds of frames per second or more is required to capture video of fast-moving objects such as a travelling bullet. In this case, the use of structured light would cut the frame rate in half and may hinder the intended application. For a capsule camera, the video of the gastrointestinal (GI) tract is normally a few frames per second, and the camera could be operated at twice the original frame rate to compensate for the reduction of effective frame rate due to capturing structured-light images. However, this would result in twice as much power consumption, which is not desirable in the power-limited capsule environment.
Each frame rate has a corresponding frame period. During the frame period, the sensor spends a subset of the frame period accumulating charges generated in response to light incident on the sensor. The integration time must be sufficiently short that the image is substantially stationary, to avoid motion blur in the captured image.
There are several factors determining how fast a pixel can accumulate electronic charges and how fast the signal can be read out. As shown in the example of
There are other variations to implement the ADC, such as the successive approximation ADC. For the successive approximation ADC, the reference voltage starts at a coarse level. Depending on whether the input voltage is higher or lower than the reference voltage, the reference voltage is refined by increasing or decreasing the previous reference voltage by half of the previous voltage interval. The refined reference voltage is used as the current reference voltage for the next successive comparison. The process terminates when the desired resolution is achieved. In each round of successive approximation, one bit is used to indicate whether the input voltage is higher or lower than the reference voltage. Accordingly, the ADC resolution corresponds to the number of successive approximations of the successive-approximation ADC. In general, the higher the dynamic range, the longer the readout will be. Not only are more comparisons required, but the voltage also takes longer to settle because the accuracy requirements on the ramp-up voltage or reference voltages are higher. The sensor array has a large intrinsic RC constant, which takes time to settle to within the limits required by the accuracy. In the case of a high dynamic range, the conductor line carrying the reference voltage (i.e., the ramp reference signal) requires more time to settle due to the inductance, along with the R (resistance) and C (capacitance), of the conductor line. The length of the conductor line for the sensor array is usually on the order of thousands of micrometers (μm), which may result in an inductance of around a few nH (nanohenries). Unlike resistance, the inductance does not scale down inversely proportional to the conductor cross section. A high dynamic range is an important factor for image/video quality, providing detailed shades of the objects in the scene.
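The successive-approximation procedure described above can be sketched as follows. This is an idealized software model of the binary search, not sensor circuitry; the settling-time effects discussed above are not modeled:

```python
def sar_adc_convert(v_in, v_ref, bits):
    """Successive-approximation ADC model: each round halves the search
    interval and produces one output bit (1 if v_in is at or above the
    trial reference voltage). Conversion time grows with `bits`."""
    code = 0
    for bit in range(bits - 1, -1, -1):
        # Trial reference: midpoint of the current search interval.
        trial = (code | (1 << bit)) * v_ref / (1 << bits)
        if v_in >= trial:
            code |= 1 << bit
    return code

# Fewer rounds give a coarser code but a shorter conversion:
fine = sar_adc_convert(0.7, 1.0, 8)     # 8 comparisons
coarse = sar_adc_convert(0.7, 1.0, 4)   # 4 comparisons, reduced dynamic range
```

Each loop iteration corresponds to one comparison round in the text, which is why dropping from, say, 9 bits to 6 bits directly removes three comparison (and settling) periods per pixel.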
On the other hand, the images of the structured light pattern are mainly used to derive depth/shape information based on the geometric information of known patterns, such as grids. The important information to be derived is related to the locations of the grid lines. Accordingly, the requirement on the dynamic range is substantially lower than that for the regular images to be viewed by human eyes.
Since the required dynamic range for the structured-light image is much less than that for a regular image, the present invention takes advantage of the different dynamic range requirements to shorten the frame period for the structured-light image.
The ADC circuit(s) is capable of operating at a first dynamic range and a second dynamic range, where the first dynamic range is smaller than the second dynamic range. For example, the first dynamic range may correspond to 6 bits and the second dynamic range to 9 bits. Individual ADCs with different dynamic ranges may be used. Since the structured-light image and the regular image are captured in series instead of in parallel, a single ADC with a configurable dynamic range may also be used. For example, an adaptively configurable ADC is disclosed in U.S. Pat. No. 8,369,458, issued to Wong et al. on Feb. 5, 2013. The timing/control circuits may include a row scan circuit and a column scan circuit. The timing/control circuits are also responsible for generating various control signals such as the reset signals. In the following, preferred embodiments are provided for configuring the image sensor to capture structured-light images and regular images.
While the main intended application is for the GI tract, the use of short-duration, high-intensity structured light can also benefit non-GI applications. For example, the present invention may also be applied to conventional photography to capture mixed regular images and structured-light images of natural scenes and use the depth or shape information derived from the structured-light images to render 3D images of the scene. In order to derive more reliable depth or shape information using the structured-light images, it is desirable to select a structured-light source having a light spectrum very different from the color spectrum of the underlying scene captured under ambient light or illuminated by one or more non-structured lights.
While there are readout schemes that may start to read the higher (i.e., most significant) bits during integration, a readout with a larger dynamic range will take longer to complete, due to the additional comparisons and longer reference voltage settling time needed. Accordingly, reducing the dynamic range for the structured-light image reduces the row processing duration. This is also true for image sensors operated in the global shutter mode. Accordingly, the settling time associated with a reference voltage, provided to the analog-to-digital converters to compare with an input analog voltage, is shorter for the first structured-light image than for the regular image. Because less accuracy is required, the reset signal does not need to be held as long to reset the pixels and/or related circuits to an optimal level.
As shown in
In one embodiment, the structured light is generated using multiple light sources with at least two different colors or patterns. By using multiple colors, a color or a combination of colors can be selected so that the light spectrum of the selected color(s) is substantially different from the color spectrum associated with regular images of an anticipated scene illuminated by the non-structured light or under ambient light. The spectrum associated with the structured light can be substantially distinct from the spectrum associated with regular images of an anticipated scene. The image sensor may correspond to a color image sensor comprising at least first and second color pixels arranged in a mosaic pattern, and the spectrum associated with the structured light can be substantially concentrated on the spectrum of the first or second color pixels. The structured-light image can then be captured at reduced spatial resolution by reading out only the digital outputs related to the first or second pixels whose spectrum substantially corresponds to the structured-light spectrum.
For capsule applications, the integrated image sensor may be inside a sealed capsule housing for imaging the gastrointestinal tract of the human body. Since there is no ambient light in the GI tract, the capsule device has to provide both the structured light and the illumination for regular images. In this case, the structured-light sources for structured-light images and the illumination light sources for regular images can be sealed in the housing.
In the embodiments disclosed above, a structured-light image is captured temporally close to a regular image so as to provide more accurate depth/shape information for the associated regular image. In another embodiment, a two-session capture is disclosed, where the regular image is split into two sub-images with a structured-light image captured in between. The regular integration time for the regular image is split between the two sub-images, and the digital outputs for the two regular sub-images are combined to form the regular image output. This approach has several advantages. First, each sub-image is converted into digital outputs using the same ADC that is used for the regular image, so each sub-image has the same dynamic range as the regular image of a one-session approach. When the digital outputs of the two sub-images are combined, the final regular image still preserves the full dynamic range. For example, assume a pixel with the full integration time would produce an analog signal digitized to 128. With half the integration time in each session, the pixel produces half the analog signal and is digitized to 64, and the sum of the two sub-images restores 128. Halving the integration time matters because integration time may be a substantial component of the total frame period; using only half the integration time for each session therefore keeps each session shorter than a single full-exposure capture.
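The combining step of the two-session scheme can be sketched as a simple digital sum of the two sub-images; the pixel values below follow the 128/64 example in the paragraph above, with a hypothetical second pixel added to show that the sum preserves per-pixel detail.

```python
import numpy as np

# Two half-integration sub-images captured before and after the
# intervening structured-light frame (values are illustrative).
sub1 = np.array([64, 30])   # first half-exposure, digitized
sub2 = np.array([64, 32])   # second half-exposure, digitized

# Summing the digital outputs restores the full-exposure signal,
# so the first pixel recovers the 128 it would have reached in one session.
combined = sub1 + sub2
print(combined.tolist())  # [128, 62]
```

Note that each sub-image is digitized at the full regular-image dynamic range, so the sum can exceed the per-session code range; the output word of the combined image is one bit wider.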
In
In another application of structured-light images, multiple structured-light images are used to derive more 3D points than a single structured-light image for one or more associated regular images. For example, multiple structured-light images may be captured consecutively by a capsule camera while traversing the human gastrointestinal (GI) tract. The regular image can be captured between, before or after the multiple structured-light images. The captured structured-light images can be used to derive a 3D model of the GI tract. This 3D GI-tract model can be useful for examining associated regular images of the GI tract.
For two-session regular image capturing with intervening structured-light image, the means for reducing the frame period for the structured-light image as mentioned before can be used. For example, the structured-light image can be captured with a reduced dynamic range of the image sensor compared to the first regular image and the second regular image. The structured-light image may also be captured at lower spatial resolution than the first regular image and the second regular image. Furthermore, the structured-light image can be captured with a reduced image area in a vertical direction, horizontal direction or both compared to the first regular image and the second regular image.
In some cases, the depth or shape information is of interest only for a selected image area. In these cases, the structured-light image can be captured for the selected image area only, which serves as an alternative means to reduce the frame period of the structured-light image. The reduced image area may correspond to a reduced image area in the vertical direction, horizontal direction or both compared to the regular image. This technique may also be combined with other means, such as reducing the dynamic range or reducing the spatial resolution, to further reduce the frame period of the structured-light image.
Reducing the spatial resolution by itself can be used as a technique to reduce the frame period for the structured-light images. For example, the structured-light image can be captured with reduced vertical resolution by only retaining selected rows of pixels and skipping remaining rows of pixels of the image sensor.
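The row-skipping readout just described can be sketched in a few lines; keeping every other row is one possible selection pattern (the patent does not fix a specific pattern), and the frame size here is arbitrary.

```python
import numpy as np

# Stand-in for a 6-row sensor readout (values are just indices).
frame = np.arange(24).reshape(6, 4)

# Reduced vertical resolution: retain rows 0, 2, 4 and skip the rest,
# halving the number of rows that must be reset, exposed, and converted.
sl_frame = frame[::2, :]
print(sl_frame.shape)  # (3, 4)
```

Since each retained row still needs a full reset-expose-convert cycle, halving the row count roughly halves the row-sequential portion of the frame period.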
For an endoscope application, including a capsule endoscope application, there is no ambient light and the lighting from the endoscope is the only light source. Therefore, the integration time of each row need not be the same as long as the duration of the light exposure is the same for every row. In the endoscope environment, the lower dynamic range of the structured-light image relative to the regular image also benefits from the temporal proximity between the structured-light image and the regular image. Therefore, the structured-light image according to the present invention should carry more accurate depth or shape information correlated with the regular image.
For power-sensitive applications such as the capsule endoscope and wearable devices, a lower dynamic range also saves power due to fewer comparison operations and a shorter integration time, which in turn requires less structured-light energy. On the other hand, since the signal-to-noise ratio is not as important for the structured-light image, its gain can be set substantially higher to further save energy.
A camera system usually includes an exposure control function to control the operating parameters of the image sensor so that the overall intensity of the image taken is at the right level, within a certain range conducive to viewing. The image intensity is derived from the pixel intensity, and the detailed control is often subject to the preference of the camera system designer. For example, the image intensity may be determined as the average pixel intensity of a central portion of the image. In another example, multiple areas of the image are used instead of the central portion. If the intensity is found to be too high, the gain or the integration time can be reduced; if the intensity is too low, the gain or the integration time can be increased. Furthermore, the amount of adjustment from one image to the next can depend on how much the intensity deviates from the preferred level or range.
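A minimal form of the control step described above can be sketched as follows. The central-window intensity measure, the target level, and the proportional correction are all assumptions chosen for illustration; the patent leaves the detailed control to the designer.

```python
import numpy as np

def exposure_update(image, setting, target=128.0, tol=16.0):
    """One exposure-control step (a sketch): measure the mean intensity of a
    central window and scale the gain/integration setting proportionally to
    the deviation from the target level."""
    h, w = image.shape
    center = image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    intensity = center.mean()
    if abs(intensity - target) <= tol:
        return setting                      # within the acceptable range
    return setting * target / intensity     # proportional correction

# A uniformly dark frame (2x too dark) doubles the setting.
img = np.full((8, 8), 64.0)
print(exposure_update(img, setting=10.0))  # 20.0
```

A real controller would typically clamp or damp the correction to avoid oscillation between frames; the proportional form here only illustrates the deviation-dependent adjustment named above.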
A camera system may also provide lighting to augment the ambient light. The lighting from the camera system may also be the sole lighting source, as in a regular endoscope or a capsule endoscope; for a camera used for pipe examination or for deep-sea exploration, the lighting from the camera is likewise the sole lighting source. In such a system, the exposure control will adjust the gain, integration time, lighting intensity and/or energy, or a combination of them. If an image has too strong an intensity, the value of (gain×integration×light energy) will be reduced for the subsequent image or images. On the other hand, if an image has too weak an intensity, the value of (gain×integration×light energy) will be increased for the subsequent image or images. The amount of adjustment from one image to the next may depend on how much the intensity deviates from the preferred level or range.
An embodiment of the present invention addresses dual exposure controls for capturing structured-light images and regular images using a single image sensor. Based on this embodiment, there are two exposure control loops for the same image sensor, one for the structured-light image and the other for the regular image. In the case that the regular-image lighting is substantially dependent on the light controlled by the camera system (e.g., negligible or no ambient light), the exposure condition is very similar for both the structured light and the regular light, since the distance to the scene is practically the same in both cases. Accordingly, one exposure control loop can be used and the other exposure control made dependent on it. For example, (gain×integration×light energy) of the structured light can be linearly dependent on (gain×integration×light energy) of the regular-light image, or vice versa. In other embodiments, other dependences are used; for example, a gamma-type dependence or a dependence on the intensity distribution may also be employed.
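The linear dependence between the two control loops can be sketched as below. The dependence factor is hypothetical; the patent only states that one exposure product can be linearly dependent on the other, not what the factor is.

```python
# Assumed linear dependence between the two exposure control loops:
# with no ambient light, the scene distance is the same for both captures,
# so the structured-light exposure product can be tied to the regular one.
K_SL_TO_REGULAR = 0.25  # hypothetical linear factor, not from the patent

def sl_exposure_product(regular_product: float) -> float:
    """Derive the structured-light (gain x integration x light energy)
    product from the regular-image product via the assumed linear factor."""
    return K_SL_TO_REGULAR * regular_product

# Example regular-image product: gain 8, integration 10 ms, light energy 2.
regular = 8.0 * 10.0 * 2.0
print(sl_exposure_product(regular))  # 40.0
```

Only the product is constrained here; the controller remains free to trade gain, integration time, and light energy against each other within that product, as the paragraph above describes.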
In the case where there is ambient light, the structured light needs to be sufficiently strong to make the structured-light pattern discernible in the structured-light image for analysis. In this case, the light intensity in the above analysis is composed of the ambient light and the light or lights projected onto the scene under the control of the camera system's exposure control. There might then be no need for the camera to project light for the regular image if the ambient light is sufficient. However, the structured light has the additional constraint that the projected structured light must be strong enough to show its pattern and/or color in the structured-light image. If the spectrum of the structured light is substantially concentrated in the spectrum of one particular color of the image sensor, the intensity of that particular color of the structured-light image and/or the overall intensity are considered. In one embodiment, if the structured-light sources are capable of generating multiple colors, then the intensity of each color component in the regular image is considered, and the structured-light color corresponding to the weaker color in the regular image is chosen in order to make the structured color stand out, or to have a statistically higher signal-to-background ratio for easy analysis.
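The color-selection rule at the end of the paragraph above can be sketched as picking the channel with the smallest mean in the regular image. The scene values below are hypothetical stand-ins for a reddish scene such as GI mucosa.

```python
import numpy as np

def pick_sl_color(regular_rgb: np.ndarray) -> str:
    """Choose the structured-light color as the weakest channel of the
    regular image, so the projected pattern stands out against the scene."""
    means = regular_rgb.reshape(-1, 3).mean(axis=0)
    return ("R", "G", "B")[int(np.argmin(means))]

# Hypothetical reddish scene: blue is the weakest channel.
scene = np.zeros((4, 4, 3))
scene[..., 0] = 180.0   # strong red
scene[..., 1] = 90.0    # moderate green
scene[..., 2] = 30.0    # weak blue
print(pick_sl_color(scene))  # B
```

A mean over the whole frame is the simplest statistic; a per-region or percentile-based measure would serve the same purpose of maximizing the signal-to-background ratio of the projected pattern.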
The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. Therefore, the scope of the invention is indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Xu, Yi, Wu, Chenyu, Wang, Kang-Huai, Wilson, Gordon C.