A system and method of determining a classification of at least one underwater object is provided. The method includes generating at least one low-resolution imagery of an object from a plurality of frequencies produced by a detection device and extracting at least 5 characteristics of the object within the low-resolution imagery at each of the plurality of frequencies. The method further includes generating at least 15 features from the at least 5 characteristics and classifying the features to determine the identity of the at least one underwater object.
1. A method of determining a classification of at least one underwater object comprising:
generating at least one low-resolution imagery of an object from a plurality of frequencies produced by a detection device;
extracting at least 5 characteristics of the object within the low-resolution imagery at each of the plurality of frequencies;
generating at least 15 features from the at least 5 characteristics; and
classifying the features to determine an identity of the at least one underwater object.
2. The method of claim 1, wherein the plurality of frequencies comprises at least three different frequencies.
3. The method of claim 1, wherein extracting the at least 5 characteristics comprises:
measuring a height of the object within the low-resolution imagery at each of the plurality of frequencies;
measuring a width of the object within the low-resolution imagery at each of the plurality of frequencies at a first predetermined level of the height;
measuring a depth of the object within the low-resolution imagery at each of the plurality of frequencies at a first predetermined level of the height;
measuring a width of the object within the low-resolution imagery at each of the plurality of frequencies at a second predetermined level of the height; and
measuring a depth of the object within the low-resolution imagery at each of the plurality of frequencies at a second predetermined level of the height.
4. The method of claim 1, wherein generating the at least 15 features comprises utilizing the mathematical formulation: yi=xi(f1)/xi(f2), y(i+5)=xi(f1)/xi(f3), and y(i+10)=xi(f2)/xi(f3), i=1 to 5,
wherein y1-y15 are the at least 15 features and x1-x5 are the 5 characteristics.
5. The method of claim 1, further comprising:
extracting at least four additional characteristics of the object within the low-resolution imagery at each of the plurality of frequencies when the at least one object includes a distal side that produces specular returns;
generating at least four additional features from the at least four additional characteristics; and
classifying the four additional features to determine the identity of the at least one underwater object.
6. The method of claim 5, wherein extracting the at least four additional characteristics comprises:
measuring a height of the object within the low-resolution imagery at each of the plurality of frequencies;
measuring the width of the object within the low-resolution imagery at each of the plurality of frequencies at a first predetermined level of the height;
measuring a depth of the object within the low-resolution imagery at each of the plurality of frequencies at a first predetermined level of the height; and
measuring a range difference between specular returns from the distal side of the underwater object and specular returns from a near side of the underwater object.
7. The method of claim 6, wherein generating the at least four additional features comprises utilizing the mathematical formulation: y(i+10)=xi(f1)/x(i-5)(f1), i=6 to 8, and y19=x9(f1), wherein y16-y19 are the at least four additional features and x6-x9 are the four additional characteristics.
8. The method of
9. The method of
10. A system for determining a classification of at least one underwater object comprising:
a first processor capable of generating at least one low-resolution imagery of an object from a plurality of frequencies produced by a detection device;
a second processor capable of extracting at least 5 characteristics of the object within the low-resolution imagery at each of the plurality of frequencies;
a third processor capable of generating at least 15 features from the at least 5 characteristics; and
a fourth processor capable of classifying the features to determine an identity of the at least one underwater object.
11. The system of claim 10, wherein the at least 15 features are generated utilizing the mathematical formulation: yi=xi(f1)/xi(f2), y(i+5)=xi(f1)/xi(f3), and y(i+10)=xi(f2)/xi(f3), i=1 to 5,
wherein y1-y15 are the at least 15 features and x1-x5 are the at least 5 characteristics.
12. The system of
13. The system of
14. The system of claim 13, wherein the at least four additional features are generated utilizing the mathematical formulation: y(i+10)=xi(f1)/x(i-5)(f1), i=6 to 8, and y19=x9(f1), wherein y16-y19 are the at least four additional features and x6-x9 are the four additional characteristics.
15. A method to determine a classification of at least one underwater object comprising:
generating at least one low-resolution imagery of an object from a plurality of frequencies produced by a detection device;
extracting at least 5 characteristics of the object within the low-resolution imagery at each of the plurality of frequencies;
generating at least 15 ratios from the at least 5 characteristics; and
classifying the ratios to determine an identity of the at least one underwater object.
16. The method of claim 15, further comprising:
extracting at least four additional characteristics of the object within the low-resolution imagery at each of the plurality of frequencies corresponding to specular returns from a distal side of the at least one underwater object;
generating at least four additional ratios from the at least four additional characteristics; and
classifying the at least four additional ratios to determine the identity of the at least one underwater object.
The invention described herein was made in the performance of official duties by employees of the Department of the Navy and may be manufactured, used, licensed by or for the Government for any governmental purpose without payment of any royalties thereon.
The present teachings relate to a system and method for classifying underwater objects located in coastal regions as either targets or clutter (non-targets) using low-resolution imagery.
Typically, mines designed for use in coastal regions are smaller than mines designed for use at deep ocean depths. Therefore, when mining these coastal regions, a larger number of smaller mines needs to be deployed. One of the challenges encountered during littoral mine countermeasures (MCM) is distinguishing seafloor clutter from mines present in coastal regions. This seafloor clutter, which can range from man-made debris to rock outcrops, can appear to have mine-like characteristics when viewed in sonar imagery. Accordingly, the ability to accurately distinguish between actual mines and seafloor clutter in sonar imagery is of utmost importance.
One known method of detecting underwater objects in coastal regions is to install sensors capable of operating at multiple narrowband frequencies on a variety of unmanned underwater vehicles (UUVs). An example of a UUV having a multiple-narrowband-frequency sensor is a seafloor crawler carrying an IMAGENEX 881A rotating-head sonar. Crawling vehicles operate in close proximity to the seafloor; therefore, objects imaged by the sensor produce no acoustic shadow. Additionally, the use of low-resolution imagery results in images of objects that are poorly defined or have no discernible shape. Yet it is often the shape and size of the object and its acoustic shadow that yield some of the most salient features used by classifiers in determining whether a detected underwater object is a target or clutter. With no defined shape or acoustic shadow, detected underwater objects appear simply as bright spots on a screen.
There exists a need for a method and system, relatively inexpensive and easy to deploy, that can accurately classify the objects detected by low-resolution sonar systems.
The present teachings disclose a system and method for determining the class of each underwater object that is detected by a sonar system operating at a plurality of frequencies.
According to the present teachings, the method includes generating at least one low-resolution imagery of an object from a plurality of frequencies produced by a detection device and extracting at least 5 characteristics of the object within the low-resolution imagery at each of the plurality of frequencies. The method further includes generating at least 15 features from the at least 5 characteristics and classifying the features of each underwater object.
The present teachings also provide a system for determining the class of each underwater object. The system includes four processors. The first processor is capable of generating at least one low-resolution imagery of an object from a plurality of frequencies produced by a detection device. The second processor is capable of extracting at least 5 characteristics of the object within the low-resolution imagery at each of the plurality of frequencies. The third processor is capable of generating at least 15 features from the at least 5 characteristics, and the fourth processor is capable of classifying the features of each underwater object.
Additional features and advantages of various embodiments will be set forth, in part, in the description that follows, and, in part, will be apparent from the description, or may be learned by practice of various embodiments. The objectives and other advantages of various embodiments will be realized and attained by means of the elements and combinations particularly pointed out in the description herein.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only, and are intended to provide an explanation of various embodiments of the present teachings.
The present teachings are directed to a method of processing low-resolution imagery of an object in order to determine the class of each underwater object. At an input step 20, low-resolution images 22, 24, and 26 of the underwater object, formed at frequencies f1, f2, and f3, respectively, are provided to an algorithm stage 30.
Preferably, in an exemplary embodiment, the algorithm stage 30 can include at least two steps. The first step in the algorithm stage 30 can be a feature extracting step 35 and the second step can be a feature generating step 70. Generally, the feature extracting step 35 of the present teachings can include the extraction of at least 5 characteristics of the object within the low-resolution imagery from input step 20. These characteristics can then be forwarded to the feature generating step 70. The feature generating step 70 of the present teachings can include the generation of at least 15 features from the characteristics extracted at the feature extracting step 35.
In an exemplary embodiment, each of the images 22, 24, and 26 of the underwater object can be sent to its corresponding feature extraction step 40, 50, and 60, respectively. Preferably, the feature extraction process at 40, 50, and 60 includes extracting at least 5 characteristics, xi(fn), of each of images 22, 24, and 26, wherein x is the characteristic, i ranges from 1-5, and fn is the frequency. Additionally, at low frequencies, such as at frequency f1, which corresponds to image 22, the distal side of the underwater object can produce a specular return. In that case, the first feature extraction 40 can extract four additional characteristics, x6(f1)-x9(f1), of image 22.
In an exemplary embodiment, higher frequencies generally do not produce a specular return from the distal side of the underwater object. Therefore, the second and third feature extractions 50 and 60, which respectively correspond to images 24 and 26 formed at the higher frequencies (f2 and f3), can extract the 5 characteristics x1(fn)-x5(fn).
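The present teachings do not prescribe a particular image-processing technique for these measurements, so the following Python fragment is a purely illustrative sketch. It treats the object as a bright spot in a 2-D intensity image and reads the 5 characteristics as the peak intensity (the "height") plus the bearing and range extents at two levels of that peak; the function name, the 50% and 25% level values, and the peak-intensity interpretation of "height" are all assumptions, not part of the disclosure.

```python
import numpy as np

def extract_characteristics(img, level1=0.5, level2=0.25):
    """Extract 5 characteristics x1-x5 from one low-resolution image.

    `img` is a 2-D array of echo intensity indexed as (range, bearing).
    The level values are illustrative; the teachings only state that
    widths and depths are measured at two predetermined levels of the
    height.
    """
    x1 = img.max()  # x1: height (peak intensity) of the bright spot
    r_peak, b_peak = np.unravel_index(img.argmax(), img.shape)

    def extent(profile, level):
        # Number of samples in `profile` at or above `level` -- a crude
        # measure of the object's extent along that axis.
        return int(np.count_nonzero(profile >= level))

    x2 = extent(img[r_peak, :], level1 * x1)  # width (bearing extent) at level 1
    x3 = extent(img[:, b_peak], level1 * x1)  # depth (range extent) at level 1
    x4 = extent(img[r_peak, :], level2 * x1)  # width at level 2
    x5 = extent(img[:, b_peak], level2 * x1)  # depth at level 2
    return np.array([x1, x2, x3, x4, x5], dtype=float)
```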
However, the characteristics extracted at feature extraction step 35 are range dependent. For example, the width of an object in degrees will vary with the distance between the sonar and the object. To avoid or minimize errors arising from this range dependence, the characteristics can be input into the feature generating step 70 of the algorithm stage 30. At the feature generating step 70, ratios of the at least 5 characteristics extracted in feature extraction step 35 can be calculated to generate at least 15 features. These at least 15 features can then be presented to the classifier 80. It is noted that at least 19 features are generated when the feature extraction step 35 provides the feature generating step 70 with 9 characteristics; the 9 characteristics include the four additional characteristics of image 22 when the underwater object produces a specular return from its distal side.
In an exemplary embodiment, the feature generating step 70 can include two sub-steps, 73 and 74. The process of sub-step 73 generates the first 15 features (y1-y15) as ratios of each characteristic across pairs of frequencies, utilizing the mathematical formulation: yi=xi(f1)/xi(f2), y(i+5)=xi(f1)/xi(f3), and y(i+10)=xi(f2)/xi(f3), i=1 . . . 5.
Generally, the process of sub-step 74 generates the features associated with the four additional characteristics of the portion of image 22 corresponding to the distal side of the underwater object. Preferably, the process of sub-step 74 generates features 16-19 (y16-y19) and can include two sets of algorithms. The first set of algorithms 75 can generate features 16-18 (y16-y18) and the second set of algorithms 76 can generate feature 19 (y19). The second set of algorithms 76 does not involve a ratio computation because the range difference x9 is inherently range independent. In an exemplary embodiment, the first set of algorithms 75 includes the mathematical formulation: y(i+10)=xi(f1)/x(i-5)(f1), i=6 . . . 8, and the second set of algorithms 76 includes the mathematical formulation: y19=x9(f1). Once the 19 features are generated, they are advanced to the classifier 80.
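For illustration only, a minimal Python sketch of the feature generating step 70 is given below, using the index mapping reconstructed above; the function name and the list-based data layout are hypothetical, and the formulation should be read as one consistent realization of the ratio scheme rather than the definitive one.

```python
import numpy as np

def generate_features(x_f1, x_f2, x_f3):
    """Form range-independent features from the extracted characteristics.

    x_f1 holds x1(f1)..x9(f1) when the distal side produced a specular
    return (otherwise only x1(f1)..x5(f1)); x_f2 and x_f3 hold x1..x5
    at f2 and f3.
    """
    y = []
    y += [x_f1[i] / x_f2[i] for i in range(5)]  # y1-y5:   xi(f1)/xi(f2)
    y += [x_f1[i] / x_f3[i] for i in range(5)]  # y6-y10:  xi(f1)/xi(f3)
    y += [x_f2[i] / x_f3[i] for i in range(5)]  # y11-y15: xi(f2)/xi(f3)
    if len(x_f1) == 9:  # distal-side specular return was observed at f1
        y += [x_f1[i] / x_f1[i - 5] for i in range(5, 8)]  # y16-y18: xi(f1)/x(i-5)(f1)
        y.append(x_f1[8])  # y19 = x9(f1), inherently range independent
    return np.array(y)
```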
Any known classifier 80 can be used in the present teachings. Some exemplary classifiers include the self-organizing map (SOM) classifier, the multi-layered perceptron (MLP) classifier, and/or the fuzzy classifier. When an SOM classifier is utilized, the SOM takes the N-dimensional input features and projects them onto a 2-dimensional arrangement of output neurons known as the Kohonen layer. The projections in the Kohonen layer are also clustered by the SOM according to the statistical properties of the input features. Preferably, the clustering can be performed in such a way as to preserve the topological order of the input feature space. In an exemplary embodiment, when an input feature is presented to the SOM, a winning neuron is found by determining which Kohonen layer neuron's connection weights most closely match the input feature.
Once the network is trained, the connection weights are frozen and the clusters can then be identified. For this process, an input feature with known class is presented to the network. The winning neuron is found, marked, and the process is repeated for all input features with a known class. For a properly trained network, all input features from one class should occupy the same topological region of the Kohonen layer.
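As an illustrative sketch only, the winning-neuron search and the post-training labeling pass described above might look like the following in Python. The training procedure itself (weight initialization and neighborhood updates) is omitted, and all names are hypothetical.

```python
import numpy as np

def winning_neuron(weights, feature):
    """Return the (row, col) of the Kohonen-layer neuron whose
    connection weights most closely match `feature`.

    `weights` has shape (rows, cols, n_features); closeness is
    Euclidean distance."""
    d = np.linalg.norm(weights - feature, axis=2)
    return np.unravel_index(d.argmin(), d.shape)

def label_map(weights, features, classes):
    """After training (weights frozen), mark each neuron with the
    classes of the known feature vectors that project onto it."""
    labels = {}
    for f, c in zip(features, classes):
        labels.setdefault(winning_neuron(weights, f), set()).add(c)
    return labels
```

For a properly trained network, each neuron in the returned map should carry only one class label, since features from one class should occupy the same topological region.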
To evaluate the performance of the algorithm, nine targets were placed on the seafloor (in the surf zone) at a depth of approximately 3 meters. The targets were positioned in a circular pattern with the sonar placed at the middle of the circle. The position, orientation, and range of the targets and sonar were periodically varied.
Each target was insonified at the 3 operating frequencies of f1=310 kHz, f2=675 kHz, and f3=900 kHz with multiple (redundant) scans at each frequency. Multiple scans were acquired to validate the robustness of the technique under variable conditions (turbulent surging water disturbing targets and sonar, schools of fish entering the target field, etc.).
For one particular instance, 10 images at each of the 3 operating frequencies were acquired. Each of the 10 sets of 3 images was then provided to the feature extraction algorithm and used to train the self-organizing map. After training, the projected location of each feature vector and its corresponding class was labeled in the Kohonen layer.
In an attempt to further improve clustering, pairs of the 10 raw images were then averaged together. This averaging helped to mitigate noise and spurious information in the images (e.g., turbidity, wave-induced target motion, aquatic life, etc.). The process was then repeated for the 5 sets of averaged images.
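A minimal sketch of this averaging step, assuming the scans are co-registered numpy arrays and that consecutive scans are paired (the text does not specify the pairing scheme):

```python
import numpy as np

def average_pairs(images):
    """Average consecutive pairs of co-registered scans to suppress
    noise; 10 raw images yield 5 averaged images."""
    return [(a + b) / 2.0 for a, b in zip(images[0::2], images[1::2])]
```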
The self-organizing map was trained on a set of data and then tested on data from a different instance of the target field. The differences between the two target field instances included the aspect angle between targets and sonar (in the horizontal plane), the vertical orientation of the sonar (which drastically affects signal return strength), and the presence of new targets that were not present in the training data set.
Those skilled in the art can appreciate from the foregoing description that the present invention can be implemented in a variety of forms. Therefore, while these teachings have been described in connection with particular embodiments and examples thereof, the true scope of the present teachings should not be so limited. Various changes and modifications may be made without departing from the scope of the teachings herein.