Systems and methods are presented for operating a vehicle camera system to detect, identify, and mitigate camera lens contamination. An image is received from a camera mounted on the vehicle and one or more metrics is calculated based on the received image. The system determines whether a lens of the camera is contaminated based on the one or more calculated metrics and, if so, determines a type of contamination. A specific mitigation routine is selected from a plurality of mitigation routines based on the determined type of contamination and is applied to the received image to create an enhanced image. The enhanced image is analyzed to determine whether the contamination is acceptably mitigated after application of the selected mitigation routine and a fault condition signal is output when the contamination is not acceptably mitigated.
1. A method for operating a vehicle camera system, the method comprising:
receiving an image from a camera mounted on a vehicle;
calculating one or more metrics based on the received image;
determining whether a lens of the camera is contaminated based on the one or more calculated metrics;
determining a type of contamination;
selecting one of a plurality of mitigation routines to enhance the image based on the determined type of contamination;
applying the selected mitigation routine to create an enhanced image;
analyzing the enhanced image to determine whether the contamination is acceptably mitigated after application of the selected mitigation routine; and
outputting a fault condition signal when the contamination is not acceptably mitigated after application of the selected mitigation routine.
2. The method of
3. The method of
4. The method of
displaying the enhanced image on a display mounted in an interior of the vehicle when the fault condition signal is not output; and
displaying a notification on the display indicating that the image received from the camera is degraded when the fault condition signal is output.
5. The method of
6. The method of
7. The method of
8. The method of
9. The method of
10. The method of
wherein analyzing the enhanced image to determine whether the contamination is acceptably mitigated after application of the selected mitigation routine includes identifying any individual sub-regions where the contamination is not acceptably mitigated,
wherein outputting the fault condition signal includes transmitting a message to the vehicle assistance system identifying any individual sub-regions in which the contamination is not acceptably mitigated,
wherein the vehicle assistance system is configured to operate based on image data from only a sub-set of the plurality of sub-regions, and
wherein the vehicle assistance system is disabled in response to the output fault condition signal only when the output fault condition signal identifies one or more of the sub-regions from the sub-set of sub-regions as being contaminated and not acceptably mitigated.
11. The method of
wherein calculating one or more metrics based on the received image includes:
comparing the received image to at least one previously received image from the camera, and
calculating pixel variation in a plurality of individual pixels from the received images, and
wherein determining the type of contamination includes:
determining that the contamination is condensation when the pixel variation for one or more of the plurality of individual pixels is greater than zero and less than a first threshold.
12. The method of
13. The method of
determining a mask pattern indicative of image distortion based on a plurality of received camera images, and
subtracting the mask pattern from the received image from the camera.
14. The method of
comparing the enhanced image to at least one previously enhanced image from the camera,
calculating pixel variation in a plurality of individual pixels from the enhanced images, and
determining that the contamination is not acceptably mitigated after application of the selected mitigation routine when the calculated pixel variation from the enhanced images is less than a second threshold.
15. The method of
16. The method of
determining a date and a time when the enhanced image is determined to be not acceptably mitigated; and
storing the date and the time to a memory.
This application claims the benefit of U.S. Provisional Application No. 62/003,303, filed May 27, 2014 and entitled “SYSTEM AND METHOD FOR DETECTION AND MITIGATION OF LENS CONTAMINATION FOR VEHICLE MOUNTED CAMERAS,” the entire contents of which are incorporated herein by reference.
Cameras are increasingly being used in vehicle applications to improve the driver's situational awareness and to support various safety-related functions of the vehicle (e.g., lane departure warnings). Further, vehicle camera applications often require digital processing of the captured imagery. Cameras mounted to the exterior of the vehicle are susceptible to accumulating dust, dirt, road salt residue, insects, and other contaminants on the camera lens.
The present invention relates to systems and methods for addressing contamination of camera lenses, particularly contamination of camera lenses in vehicle-mounted camera systems.
Camera lenses can become contaminated (e.g., obstructed or dirtied) by various foreign objects, materials, or conditions including, for example, fingerprints, scratches, condensation, ice, or frost. Because such contaminants obstruct the field of view of a camera system, the image quality suffers. In some cases, contaminants are not necessarily imaged by the camera because of their location on the lens. For example, contaminating particles may build up inside the focal point of the lens, such that the contaminant is not in focus at the detector array. However, such contaminants still negatively affect (i.e., degrade) image quality by scattering light. Light scattering can create image regions of reduced contrast and/or resolution, as well as distortion.
When the lens is contaminated, true image content may still be available in the captured image data, but can be blurred or otherwise degraded. Further, since the contaminant itself is not clearly imaged by the camera system (e.g., on the detector array), determining which regions of the image are degraded, and to what extent, can become a challenge. Additionally, in some situations, contamination can become so severe that the image is no longer useful for visual or automated use and the camera should therefore be considered non-functional.
In various embodiments, the invention provides a camera lens contamination detection and mitigation system for use in a vehicle. The system detects camera lens contamination and mitigates the reduced image quality arising from the lens contamination using software. In some embodiments, the system also estimates the likely type of contamination. Further, if the system determines that the lens contamination is too severe to be able to restore the image quality (i.e., that the camera is "blind"), the system issues one or more warnings to the driver.
In one embodiment, the invention provides a method for operating a vehicle camera system. An image is received from a camera mounted on the vehicle and one or more metrics is calculated based on the received image. The system determines whether a lens of the camera is contaminated based on the one or more calculated metrics and, if so, determines a type of contamination. A specific mitigation routine is selected from a plurality of mitigation routines based on the determined type of contamination and is applied to the received image to create an enhanced image. The enhanced image is analyzed to determine whether the contamination is acceptably mitigated after application of the selected mitigation routine and a fault condition signal is output when the contamination is not acceptably mitigated.
In some embodiments, the fault condition signal causes the system to display a message on a display in the interior of the vehicle informing the operator that the captured camera image is distorted. In some embodiments, the fault condition signal partially or entirely disables one or more vehicle assistance systems (e.g., an automated parking system or a lane detection system) when the fault condition signal indicates that the image is unacceptably distorted due to lens contamination.
Other aspects of the invention will become apparent by consideration of the detailed description and accompanying drawings.
Before any embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways.
The camera 101 captures images and sends the image data to a processor 103. The processor 103 executes instructions stored on a non-transitory computer readable memory 105 (e.g., a Flash memory, a hard disk, or other type of ROM or RAM memory unit) to provide various functionality including, for example, the methods described below. The processor 103 may also store image data on the memory 105. The processor 103 may output the image data from the camera 101 to a user interface/display 107 for display to the user. Furthermore, in some implementations, the processor 103 provides the image data or other information to one or more other vehicle assistance/automation systems 109 (for example, an automated parking assist system, a lane monitoring system, or an adaptive cruise control system).
As described in further detail below, the processor 103 applies a detection algorithm to detect contaminants on the camera lens. The captured image is divided into a plurality of sub-regions and, within each sub-region, a set of image quality metrics is determined. In some implementations, the image quality metrics include, for example, image sharpness (e.g., via edge detection or MTF estimation), color variation (e.g., by generating local histograms of each color band), and spatial uniformity (e.g., by analyzing contrast ratio or optical flow). The processor 103 determines the absolute value of each metric and monitors the change in each metric over time. By monitoring the change in each of the determined metrics over time (i.e., by monitoring a series of images captured within a certain time frame), the system can flag regions where lens contamination is likely.
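As one illustration of how such metrics might be computed, the following Python sketch divides an 8-bit RGB frame (as a NumPy array) into a grid of sub-regions and derives sharpness, color-variation, and contrast figures for each tile. The grid size, histogram bin count, and percentile choices are arbitrary tuning values, and the gradient-magnitude measure stands in for the edge-detection or MTF-based sharpness estimation mentioned above.

```python
import numpy as np

def iter_sub_regions(image: np.ndarray, rows: int = 4, cols: int = 4):
    """Yield ((row, col), tile) over a rows x cols grid of sub-regions."""
    h, w = image.shape[:2]
    for r in range(rows):
        for c in range(cols):
            yield (r, c), image[r * h // rows:(r + 1) * h // rows,
                                c * w // cols:(c + 1) * w // cols]

def quality_metrics(tile: np.ndarray) -> dict:
    """Compute sharpness, color-variation, and contrast figures for one tile."""
    gray = tile.mean(axis=2)                         # collapse RGB to luminance
    gy, gx = np.gradient(gray)
    sharpness = float(np.hypot(gx, gy).mean())       # edge-strength proxy for sharpness
    hists = [np.histogram(tile[..., ch], bins=32, range=(0, 255))[0]
             for ch in range(3)]                     # local histogram per color band
    color_variation = float(np.mean([h.std() for h in hists]))
    lo, hi = np.percentile(gray, [5, 95])
    contrast = float((hi - lo) / (hi + lo + 1e-6))   # spatial-uniformity measure
    return {"sharpness": sharpness,
            "color_variation": color_variation,
            "contrast": contrast}
```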
Different types of lens contamination affect the amount of true image content that remains in the degraded image. In a condensation-degraded image, for example, much of the true image content may still be present, but blurred and reduced in contrast by scattered light.
By estimating the type of contamination, the system is able to employ an appropriate compensating algorithm to restore image quality. For light-scattering contamination such as condensation, for example, the system can determine a mask pattern indicative of the image distortion based on a plurality of received camera images and subtract that mask pattern from the current image.
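The mask determination is not spelled out beyond the description above, so the following sketch adopts one plausible interpretation: because scatter from condensation changes slowly relative to the scene, the per-pixel temporal minimum over a buffer of recent frames is used as an additive haze estimate. The function names and the temporal-minimum heuristic are assumptions, not the patent's prescribed method.

```python
import numpy as np

def estimate_mask(frames: list[np.ndarray]) -> np.ndarray:
    """Estimate a distortion mask from a plurality of received images."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    # Scene content changes frame to frame while the scatter component does
    # not, so the per-pixel temporal minimum approximates the additive haze.
    return stack.min(axis=0)

def subtract_mask(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Subtract the mask pattern from the received image and restretch."""
    enhanced = frame.astype(np.float32) - mask
    enhanced -= enhanced.min()
    peak = enhanced.max()
    if peak > 0:
        enhanced *= 255.0 / peak                     # restore full 8-bit range
    return enhanced.astype(np.uint8)
```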
The mask-based method of mitigating image degradation due to lens contamination is just one example of an image enhancement technique that can be applied by the camera system to mitigate a specifically identified type of lens contamination. The systems and methods described herein are not necessarily limited to solutions for condensation or other light-scattering contamination. Accordingly, other approaches and algorithms can be used to restore images degraded by other types of lens contamination.
Although some of the systems and methods described herein are configured to provide some form of image enhancement to mitigate lens contamination, in some cases, lens contamination can become so severe that the compensating algorithm cannot effectively restore the original camera image for practical use. In such cases, even the enhanced image could not be effectively relied upon by vehicle assistance/automation systems such as a parking assistance system.
In some such cases, the processor 103 can be configured to identify situations where the contamination is so severe that the output image cannot be relied upon. In some implementations, the processor 103 monitors the image metrics (e.g., pixel variation) of the restored image as a function of time, as opposed to monitoring the metrics of the original camera image. In some implementations, the system applies the same contamination detection metric to the enhanced image as was applied to the raw image data to initially detect the lens contamination. If the system determines that the image quality has not improved enough for practical usage, the system concludes that the lens contamination is too severe to effectively restore image quality and that the camera is effectively "blind" in one or more sub-regions of the output image.
As described in further detail below, when the system determines that the camera is effectively "blind," the processor 103 issues a warning to the driver (e.g., visually through the user interface 107 or by other mechanisms including, for example, haptic feedback or an audible tone or message). In some implementations, the warning includes information indicating that (a) the camera is no longer operational due to an excessive and unmitigatable degree of lens contamination, (b) vehicle assistance and automation functions depending on the camera may no longer be available, and (c) the "blind" state of the camera system will persist until the driver cleans or replaces the camera lens. Furthermore, in some implementations, the system is configured to save to the non-transitory memory 105 the date and time at which the camera system is determined to be "blind," to notify the driver, and to cease displaying video output on the user interface/display 107.
If contamination is identified in one or more of the sub-regions (step 407), the system determines the type of contamination (step 413) and applies an appropriate mitigation technique to enhance the image and remove the distortion caused by the lens contamination (step 415). The system then determines one or more metrics for the enhanced image sub-region (step 417). If the restoration brings the image to a level of acceptable quality (step 419), the system replaces the sub-region in the image with the enhanced sub-region image data and moves on to the next sub-region for analysis. However, if the mitigation technique is unable to sufficiently improve the image quality and the enhanced image still fails to meet acceptable levels (step 419), the system disables or limits the functionality provided by vehicle systems that rely upon image data (step 421) and sends a notification to the user (step 423) indicating that the camera system has been rendered at least partially "blind" by camera lens contamination.
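The per-sub-region control flow just described might be organized as in the following sketch, with the detection, classification, mitigation, and acceptance routines passed in as callables. All names are illustrative, and the mitigation routine is assumed to return a tile of the same shape it receives.

```python
from typing import Callable
import numpy as np

def process_frame(
    frame: np.ndarray,
    detect: Callable[[np.ndarray], bool],               # step 407: contaminated?
    classify: Callable[[np.ndarray], str],              # step 413: contamination type
    mitigate: Callable[[np.ndarray, str], np.ndarray],  # step 415: enhance tile
    acceptable: Callable[[np.ndarray], bool],           # steps 417-419: quality check
    rows: int = 4,
    cols: int = 4,
):
    """Return the (partially) enhanced frame and any still-"blind" sub-regions."""
    out = frame.copy()
    blind: list[tuple[int, int]] = []
    h, w = frame.shape[:2]
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            tile = frame[ys, xs]
            if not detect(tile):
                continue
            enhanced = mitigate(tile, classify(tile))
            if acceptable(enhanced):
                out[ys, xs] = enhanced               # swap in the restored tile
            else:
                blind.append((r, c))                 # drives steps 421-423 upstream
    return out, blind
```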
As also discussed above, if the camera lens is contaminated with water or condensation, the affected pixels will exhibit some variation over time, but that variation will be slow. Therefore, if the system determines that one or more pixels in a given sub-region exhibit some variation over time, but the degree of variation is below a given threshold (step 509), the system may conclude that the camera lens is contaminated with condensation (step 511).
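A minimal sketch of that classification, assuming a short history of grayscale tiles for one sub-region: the variance thresholds are invented tuning values, and the majority-vote aggregation of per-pixel decisions is only one plausible rule; neither is specified by the patent.

```python
import numpy as np

OPAQUE_EPS = 1.0         # assumed: at/below this variance a pixel is effectively frozen
CONDENSATION_MAX = 25.0  # assumed: slow, damped variation stays under this bound

def classify_sub_region(tiles: list[np.ndarray]) -> str:
    """Classify one sub-region from a short history of grayscale tiles."""
    stack = np.stack([t.astype(np.float32) for t in tiles])
    variance = stack.var(axis=0)                     # per-pixel variation over time
    if (variance <= OPAQUE_EPS).mean() > 0.5:
        return "opaque contamination"                # most pixels not changing at all
    slow = (variance > OPAQUE_EPS) & (variance < CONDENSATION_MAX)
    if slow.mean() > 0.5:                            # step 509: nonzero but below threshold
        return "condensation"                        # step 511
    return "clean"
```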
As discussed above, the system may be configured with various different image processing techniques, each optimized to enhance an image based on a specific type of detected contamination.
The system then analyzes the quality of the enhanced image to determine whether the mitigation effectively restored the image quality to an acceptable level. In this example, the system is configured to apply the same metric and evaluation to the enhanced image as was applied to the raw image data. Specifically, the system analyzes variation of individual pixels within the enhanced image over time (step 605). If the enhanced image meets acceptable image quality standards (i.e., the pixel variation is greater than a threshold indicative of a quality image) (step 607), then the system concludes that the mitigation is acceptable (step 609). The enhanced image is displayed to the user and is made available to other vehicle assistance and automation systems. However, if the enhanced image still fails to meet acceptable image quality standards (i.e., the pixel variation is greater than zero, but fails to exceed the threshold indicative of a quality image) (step 607), then the system concludes that the camera system is at least partially blind in the given sub-region (step 611).
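Re-applying the same temporal-variation metric to the enhanced tiles might look like the following sketch; the quality threshold and the use of the median as the aggregate statistic are assumptions, not values taken from the patent.

```python
import numpy as np

QUALITY_THRESHOLD = 40.0  # assumed variance a usable, uncontaminated tile should exceed

def mitigation_acceptable(enhanced_tiles: list[np.ndarray]) -> bool:
    """True when pixel variation across enhanced tiles indicates a usable image."""
    stack = np.stack([t.astype(np.float32) for t in enhanced_tiles])
    variance = stack.var(axis=0)                     # step 605: variation over time
    # Step 607: the sub-region counts as acceptably mitigated only if its
    # typical per-pixel variance clears the quality bar; otherwise the camera
    # is considered partially "blind" here (steps 609/611).
    return float(np.median(variance)) > QUALITY_THRESHOLD
```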
Various implementations can be configured to respond differently to partially blind image data depending on the requirements of specific vehicle automation/assistance systems employed on the vehicle and user preference. For example, in some implementations, if one or more sub-regions of the image fail to meet acceptable standards, the image may still be displayed to the user on the user interface/display 107. However, the system may be configured to indicate which sub-regions fail to meet quality standards by highlighting the border of the particular sub-region(s) in a different color. In other implementations, the system may be configured to omit "blind" sub-regions from the displayed image data and to display image data only in sub-regions that meet the minimum quality standards.
Some vehicle assistance/automation systems require varying degrees of image quality in order to perform effectively. For example, some systems may be configured to monitor adjacent lanes and to provide a warning to the driver when nearby vehicles are detected. These systems may require a lesser degree of image quality as there is less risk associated with an incorrect determination. In contrast, some systems are configured to automatically operate the vehicle to perform parallel parking for a driver. These systems may require a greater degree of image quality as there is substantial risk associated with incorrect operation of such a fully automated vehicle system. As such, the threshold used to determine whether a specific sub-region of an image is “blind” may be varied for individual systems that will utilize the image output. In other words, a given sub-region may be identified as “blind” and unusable for the automated parking system, but the same sub-region may be identified as acceptable for the lane monitoring/warning system. As such, in some situations, one vehicle system may be disabled due to the current amount of lens contamination while another vehicle system is allowed to continue operating.
Similarly, different vehicle systems and functionality may require different fields of view. For example, a vehicle system may only analyze information regarding the road surface immediately behind the vehicle. For such systems, lens contamination that only affects the upper sub-regions of the camera output image would be of no concern. Therefore, some systems may be configured to disable certain specific functionality depending on which specific sub-regions of the camera system are determined to be effectively “blind.”
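One way to express this per-system gating is a table mapping each assistance function to the sub-regions it consumes and to its own quality threshold, as sketched below. The system names, region sets, and numeric thresholds are hypothetical.

```python
# Hypothetical per-system profiles: the sub-regions each function consumes
# (as (row, col) grid indices) and that function's own quality threshold.
SYSTEM_PROFILES = {
    "lane_monitor": {"regions": {(3, 0), (3, 1), (3, 2), (3, 3)}, "threshold": 30.0},
    "auto_park":    {"regions": {(2, 1), (2, 2), (3, 1), (3, 2)}, "threshold": 60.0},
}

def enabled_systems(region_scores: dict[tuple[int, int], float]) -> list[str]:
    """Return the systems whose required sub-regions all meet their own bar.

    region_scores maps (row, col) -> post-mitigation quality score per tile.
    """
    return [
        name
        for name, profile in SYSTEM_PROFILES.items()
        if all(region_scores.get(rc, 0.0) >= profile["threshold"]
               for rc in profile["regions"])
    ]
```

Under such a scheme, a sub-region flagged as "blind" disables only the systems that actually consume it, which matches the behavior described above for the automated parking and lane monitoring examples.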
Thus, various embodiments of the invention provide, among other things, a camera system that analyzes image data to detect camera lens contamination, determines a specific type of lens contamination, applies image enhancement routines to mitigate the specific type of detected lens contamination, and disables certain vehicle functionality if the degree of lens contamination cannot be sufficiently mitigated. Various features and advantages of the invention are set forth in the following claims.