A method, system, and computer program product for detecting objects. Members of a general class of objects are searched for in a number of images. Members of a specific class of objects are searched for in a number of regions in the number of images, where the number of regions contains at least a portion of the members of the general class. A member in the members of the specific class is a potential threat to a rotorcraft.
1. A method for detecting objects, the method comprises:
searching for members of a general class of objects in a number of images; and
searching for members of a specific class of objects in a number of regions in the number of images where the number of regions contains at least a portion of the members of the general class, wherein a member in the members of the specific class is a potential threat to a rotorcraft, wherein the searching steps are performed using a cognitive swarm.
10. A method for detecting threats against a rotorcraft, the method comprises:
receiving a number of images from a sensor associated with the rotorcraft;
searching for humans in the number of images using agents configured to search the number of images for the humans;
searching for a pose that is a potential threat to the rotorcraft in a number of regions in the number of images where the number of regions contains at least a portion of the humans; and
initiating an action when the pose that is the potential threat to the rotorcraft is detected.
14. A threat detection system comprising:
a computer system configured to receive a number of images from a sensor; search for members of a general class of objects in the number of images; and search for members of a specific class of objects in a number of regions in the number of images where the number of regions contains at least a portion of the members of the general class, wherein a member in the members of the specific class is a potential threat to a rotorcraft, wherein the computer system is configured to use a cognitive swarm to search for the members of the general class of objects in the number of images.
20. A computer program product comprising:
a non-transitory computer readable storage media;
first program code for searching for members of a general class of objects in a number of images, the searching using a cognitive swarm; and
second program code for searching for members of a specific class of objects in a number of regions in the number of images where the number of regions contains at least a portion of the members of the general class, wherein a member in the members of the specific class is a potential threat to a rotorcraft and the first program code and the second program code are stored on the computer readable storage media.
2. The method of
receiving the number of images from a sensor associated with the rotorcraft.
3. The method of
4. The method of
initiating an action when the member is present from searching for the members of the specific class.
5. The method of
6. The method of
7. The method of
8. The method of
9. The method of
searching for the members of the general class of objects in the number of images using agents configured to search the number of images for the members of the general class of objects.
12. The method of
13. The method of
15. The threat detection system of
the sensor, wherein the sensor is selected from one of an optical sensor, a visible light camera, an infrared camera, and a light detection and ranging unit.
16. The threat detection system of
the rotorcraft, wherein the sensor and the computer system are connected to the rotorcraft.
17. The threat detection system of
18. The threat detection system of
19. The threat detection system of
21. The computer program product of
third program code for initiating an action when the member is present from searching for the members of the specific class, wherein the action comprises at least one of illuminating the member, displaying an alert, projecting a light on the member, generating a sound directed at the member, obscuring a vision of the member, indicating a location of the member, sending a message to a ground unit,
displaying a location of the member on a personal head-worn display, and displaying the location of the member on a display device.
22. The computer program product of
23. The computer program product of
This application is a continuation-in-part of patent application U.S. Ser. No. 12/456,558, filed Jun. 18, 2009, entitled “Multi-Stage Method for Object Detection Using Cognitive Swarms and System for Automated Response to Detected Objects”, now issued as U.S. Pat. No. 8,515,126, which is incorporated herein by reference.
The present disclosure relates generally to object detection and in particular to detecting objects in images. Still more particularly, the present disclosure relates to a method and apparatus for detecting objects that are threats to a rotorcraft.
Object detection involves determining whether particular objects are present in an image or a sequence of images. These images may be generated by sensors. For example, a camera may generate images in a video stream. One or more of these images may be processed to identify objects in the images.
In object detection, an image is processed to determine whether an object of a particular class is present. For example, a process may be employed to determine if humans are in the images. As another example, the process may determine if buildings, cars, airplanes, and/or other objects are present. Each of these types of objects represents a class. An object that is identified as being in the class is a member of the class.
Current approaches for detecting objects may have issues in identifying members of a class. For example, one object in an image may be falsely identified as a member of a class. In another example, current approaches may fail to identify an object in an image as a member of the class. Some current approaches may attempt to increase the speed at which objects are identified. However, in increasing the speed, accuracy in correctly identifying the objects may decrease. In other examples, the amount of time needed to correctly identify the object may be greater than desired.
Accordingly, it would be advantageous to have a method and apparatus that take into account one or more of the issues discussed above, as well as possibly other issues.
In one advantageous embodiment, a method is present for detecting objects. Members of a general class of objects are searched for in a number of images. Members of a specific class of objects are searched for in a number of regions in the number of images where the number of regions contains at least a portion of the members of the general class. A member in the members of the specific class is a potential threat to a rotorcraft.
In another advantageous embodiment, a method is present for detecting threats against a rotorcraft. A number of images are received from a sensor associated with the rotorcraft. Humans are searched for in the number of images using agents configured to search the number of images for the humans. A pose that is a potential threat to the rotorcraft is searched for in a number of regions in the number of images. The number of regions contains at least a portion of the humans. An action is initiated when at least one pose that is a potential threat to the rotorcraft is detected.
In yet another advantageous embodiment, a threat detection system comprises a computer system. The computer system is configured to receive a number of images from a sensor. The computer system is configured to search for members of a general class of objects in the number of images. The computer system is configured to search for members of a specific class of objects in a number of regions in the number of images. The number of regions contains at least a portion of the members of the general class. A member in the members of the specific class is a potential threat to a rotorcraft.
In still yet another advantageous embodiment, a computer program product comprises a computer readable storage media. First program code is present for searching for members of a general class of objects in a number of images. Second program code is present for searching for members of a specific class of objects in a number of regions in the number of images. The number of regions contains at least a portion of the members of the general class. A member in the members of the specific class is a potential threat to a rotorcraft. The first program code and the second program code are stored on the computer readable storage media.
The features, functions, and advantages can be achieved independently in various embodiments of the present disclosure or may be combined in yet other embodiments in which further details can be seen with reference to the following description and drawings.
The novel features believed characteristic of the advantageous embodiments are set forth in the appended claims. The advantageous embodiments, however, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an advantageous embodiment of the present disclosure when read in conjunction with the accompanying drawings, wherein:
The different advantageous embodiments recognize and take into account a number of different considerations. For example, the different advantageous embodiments recognize and take into account that it may be desirable to identify a threat before the threat has occurred. For example, weapons fired at a rotorcraft can pose a threat to the rotorcraft. Thus, it may be desirable to identify the threat before the weapon is fired.
Several advantages are present with detecting threats before a weapon has been fired. Advance detection of threats allows actions to be taken before the threat occurs. For example, an alert may be generated if the threat of a weapon is detected before the weapon is fired. If an alert is sounded only after an attack occurs, then actions may be limited to defending against or evading the attack before responding to the threat. However, advance detection can provide additional options for responding to the threat. For example, the attack may be prevented.
Current methods of object detection focus on detection of objects by searching the entire image. The different advantageous embodiments recognize and take into account that these types of methods may fail in detecting objects such as a person in a pose with their arms held vertically upward. The different advantageous embodiments recognize and take into account that this type of detection may use a feature detection algorithm tailored to detecting a person in such a pose.
The different advantageous embodiments recognize and take into account object detection algorithms that currently exist for detecting a human form. The different advantageous embodiments recognize and take into account that tailoring these algorithms to detect a human form engaged in a specific position can be very complex. The complexity can increase the processing time of an image without a guarantee of accurate results. The different advantageous embodiments recognize and take into account that such a method is also impractical if the object detection task changes. The changes may require creation of a different specific object detection algorithm.
As an alternative to creating a specific detection algorithm, the different advantageous embodiments recognize and take into account that one could search for the double vertical signatures created by the arm position of the object. However, such a method will likely yield a high false alarm rate since this method will detect all similar vertical signatures in the scene, including signatures from trees, buildings, and other irrelevant objects.
Thus, the different advantageous embodiments provide a method, system, and computer program product for detecting an object. Members of a general class of objects are searched for in a number of images. Members of a specific class of objects are searched for in a number of regions in the number of images where the number of regions contains at least a portion of the members of the general class. A member in the members of the specific class is a potential threat to a rotorcraft. As used herein, a number of items may be one or more items.
With reference now to
In this example, object detection system 108 searches object detection environment 100 to detect objects that may pose a threat to rotorcraft 102. Object detection system 108 first identifies the individual persons in crowd 106 as members of a general class of objects that may pose a threat to rotorcraft 102. Once crowd 106 has been identified, object detection system 108 then searches for specific individuals in crowd 106 that may pose a threat to rotorcraft 102.
For example, object detection system 108 searches for individuals in crowd 106 in a pose that may be threatening to rotorcraft 102. In this example, person 110 is in a pose with arms raised, pointing a weapon at rotorcraft 102. Object detection system 108 identifies the pose of person 110 as a potential threat to rotorcraft 102. In another example, object detection system 108 may also identify the weapon pointed at rotorcraft 102 as a potential threat.
Once person 110 has been identified as a potential threat, rotorcraft 102 may take action. For example, rotorcraft 102 may issue warnings or take other actions to prevent an attack by person 110. Rotorcraft 102 may also communicate the potential threat and the location of person 110 to ground vehicles 104. Ground vehicles 104 can then respond to the potential threat.
By first narrowing the set of objects in object detection environment 100 that may pose a threat to rotorcraft 102 down to crowd 106, object detection system 108 can apply tailored algorithms to detect threats in crowd 106. Object detection system 108 does not need to spend time applying tailored algorithms to all objects in object detection environment 100.
Instead, object detection system 108 first identifies crowd 106 as a general class of objects that may be a threat to rotorcraft 102. Then, object detection system 108 can spend time searching for specific individuals in crowd 106 that may have a pose or weapon that may be a potential threat to rotorcraft 102.
In this example, object detection system 108 is positioned beneath the fuselage of rotorcraft 102. However, in other advantageous embodiments, object detection system 108 may be positioned in other locations on rotorcraft 102. For example, without limitation, object detection system 108 may be located on the mast, cowl, tail boom, skids, and/or any other suitable locations on rotorcraft 102.
The illustration of object detection environment 100 is not meant to imply physical or architectural limitations to the manner in which different advantageous embodiments may be implemented. Other components in addition to and/or in place of the ones illustrated may be used. Some components may be unnecessary in some advantageous embodiments. Also, the blocks are presented to illustrate some functional components. One or more of these blocks may be combined and/or divided into different blocks when implemented in different advantageous embodiments.
For example, although rotorcraft 102 is illustrated as a helicopter, advantageous embodiments may be applied to other types of aircraft that hover. For example, without limitation, advantageous embodiments may be applied to vertical take-off vehicles, tiltrotors, tiltwing aircraft, and/or any other type of aircraft utilizing powered lift.
In other advantageous embodiments, object detection system 108 may be located remotely from rotorcraft 102. In yet other advantageous embodiments, rotorcraft 102 may respond to the potential threat of person 110 in crowd 106 without assistance from ground vehicles 104.
With reference now to
Platform 202 exists in object detection environment 200. In this example, platform 202 is rotorcraft 204. In other examples, platform 202 may take other forms. In this example, rotorcraft 204 is associated with computer system 206. Computer system 206 is one or more computers.
A first component may be considered to be associated with a second component by being secured to the second component, included in the second component, fastened to the second component, and/or connected to the second component in some other suitable manner. The first component also may be connected to the second component through using a third component. The first component may also be considered to be associated with the second component by being formed as part of and/or an extension of the second component.
In these advantageous embodiments, computer system 206 identifies objects 208 that may be potential threat 210 to rotorcraft 204. Objects 208 are anything that may exist in object detection environment 200. Sensor system 214 generates number of images 212 of objects 208 in object detection environment 200. For example, without limitation, sensor system 214 may include at least one of an optical sensor, a visible light camera, an infrared camera, and a light detection and ranging unit.
As used herein, the phrase “at least one of”, when used with a list of items, means that different combinations of one or more of the listed items may be used and only one of each item in the list may be needed. For example, “at least one of item A, item B, and item C” may include, for example, without limitation, item A or item A and item B. This example also may include item A, item B, and item C, or item B and item C.
In this example, sensor system 214 includes camera 216. Camera 216 generates image 218 of objects 208. Object detection module 220 is connected to camera 216. Object detection module 220 receives image 218 from camera 216. Object detection module 220 is hardware, software, or a combination of hardware and software in computer system 206. Object detection module 220 processes image 218 to detect objects 208 that may be potential threat 210.
In these advantageous embodiments, object detection module 220 searches image 218 to identify objects 208 that are members 222 of general class 224. For example, general class 224 is humans 226. Members 222 of general class 224 are humans 226 that are present in object detection environment 200.
Once members 222 of general class 224 have been identified, object detection module 220 further processes image 218 and members 222. Object detection module 220 processes the image of members 222 in image 218 to identify members 228 of specific class 230. Specific class 230 may be objects 208 that present potential threat 210 to rotorcraft 204. Members 228 of specific class 230 are members 222 of general class 224 that present potential threat 210 to rotorcraft 204. For example, member 232 of members 228 may have pose 234 controlling weapon 236 in orientation 245. In these examples, controlling a weapon means that the member may be holding a weapon, touching the weapon, standing near the weapon, and/or operating the weapon in some manner. For example, weapon 236 may be located on a platform, such as, for example, a vehicle or a building. Weapon 236 may also be operated by member 232 remotely. For example, without limitation, weapon 236 may be a rocket propelled grenade, a missile launcher, a gun, and/or any other type of weapon that could present potential threat 210 to rotorcraft 204.
In these illustrative examples, pose 234 is a pose that may be threatening to rotorcraft 204. For example, pose 234 may be a human with arms raised toward rotorcraft 204. Thus, object detection module 220 searches members 222 of general class 224 for member 232 in pose 234 having arms raised. In another example, object detection module 220 searches members 222 of general class 224 for member 232 controlling weapon 236 in orientation 245 that may be potential threat 210 to rotorcraft 204. Weapon 236 in orientation 245, if fired, may threaten rotorcraft 204. In these illustrative examples, object detection module 220 may also search for member 232 controlling weapon 236 in which orientation 245 is one in which weapon 236 is pointed towards rotorcraft 204.
When potential threat 210 has been identified, object detection module 220 sends notification 238 to response system 240. Object detection module 220 may also include location 242 of potential threat 210 in notification 238. For example, computer system 206 may include a global positioning system for identifying the location of rotorcraft 204. Object detection module 220 can identify a position of potential threat 210 relative to rotorcraft 204 from image 218 to form location 242. In other examples, sensor system 214 may use radar, sonar, lidar, infrared sensors, or some form of proximity sensor to identify location 242 of potential threat 210.
Response system 240 is associated with rotorcraft 204. Response system 240 takes action 243 and responds to potential threat 210. Response system 240 may take actions without user or human input to prevent potential threat 210 from attacking rotorcraft 204. For example, response system 240 may generate an audible warning via speaker 244. In another example, response system 240 may use light system 246. For example, spot light 248 may illuminate member 232 to warn member 232 that member 232 has been identified. In another example, response system 240 may use dazzler 250 to disorient member 232. Dazzler 250 is a device that sends concentrated beams of light towards an individual to disorient the individual.
Response system 240 may also signal location 242 of potential threat 210. For example, spot light 248 can illuminate member 232. Response system 240 may also display location 242 on display device 252.
In these examples, response system 240 signals location 242 so that potential threat 210 can be neutralized. Response system 240 may signal location 242 to an operator of rotorcraft 204. Rotorcraft 204 can evade potential threat 210. Rotorcraft 204 may also fire weapon 254 to respond to potential threat 210. Response system 240 may signal location 242 to support units 256. Support units 256 may include, for example, without limitation, ground forces and/or air forces. Support units 256 can respond to potential threat 210 to assist rotorcraft 204. In one example, response system 240 may signal location 242 to a personal head-worn display of one or more of support units 256.
The illustration of object detection environment 200 is not meant to imply physical or architectural limitations to the manner in which different advantageous embodiments may be implemented. Other components in addition to and/or in place of the ones illustrated may be used. Some components may be unnecessary in some advantageous embodiments. Also, the blocks are presented to illustrate some functional components. One or more of these blocks may be combined and/or divided into different blocks when implemented in different advantageous embodiments.
Although the different advantageous embodiments have been described with respect to rotorcraft 204, the different advantageous embodiments also recognize that some advantageous embodiments may be applied to other types of platforms. For example, without limitation, other advantageous embodiments may be applied to a mobile platform, a stationary platform, a land-based structure, an aquatic-based structure, a space-based structure, and/or some other suitable object. More specifically, the different advantageous embodiments may be applied to, for example, without limitation, a submarine, a bus, a personnel carrier, a tank, a train, an automobile, a spacecraft, a space station, a satellite, a surface ship, a power plant, a dam, a manufacturing facility, a building, and/or some other suitable object.
In other advantageous embodiments, portions of computer system 206 and/or sensor system 214 may be located remotely from rotorcraft 204. For example, a command center may detect and/or respond to threats for rotorcraft 204. The command center may be mobile or in a fixed location. The command center may process images, such as, for example, satellite images of areas surrounding rotorcraft 204 to detect threats to rotorcraft 204. Information regarding potential threat 210 and location 242 may be sent to rotorcraft 204. The command center may also take action 243 to respond to potential threat 210 for rotorcraft 204.
With reference now to
In this illustrative example, object detection module 300 includes agents 302 that process image 304. Agents 302 are software processes that analyze portions of an image. For example, agents 302 include algorithms for detecting a vector for groups of pixels in image 304. Agents 302 use the vector for the groups of pixels to identify locations and poses of objects in image 304.
In these examples, agents 302 include swarms 306. Swarms 306 are separate processes that process windows 308 in number of regions 310 in image 304. Windows 308 are smaller portions of image 304. Windows 308 in image 304 may be the same size or different sizes. Swarms 306 can process different windows 308 simultaneously. For example, different swarms of swarms 306 may process different objects in image 304. Different swarms of swarms 306 may also process different portions of an object in image 304.
In these illustrative examples, swarms 306 may be cognitive swarms 312. Cognitive swarms 312 are a type of swarms 306 that search for and recognize objects using an objective function that provides a confidence value for correctly identifying objects. Object detection module 300 can utilize cognitive swarms 312 to improve the accuracy of identifying objects in image 304 that may be a threat.
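For illustration only, the following Python sketch shows one way that swarms 306 might score windows 308 in parallel, with a placeholder confidence function standing in for a trained classifier. The names, window layout, and scoring threshold are assumptions made for this sketch, not the implementation of object detection module 300.

```python
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass

import numpy as np

@dataclass
class Window:
    x: int      # top-left column of the window in the image
    y: int      # top-left row of the window in the image
    size: int   # edge length of the window in pixels

def classifier_confidence(patch: np.ndarray) -> float:
    """Placeholder for a trained classifier; returns a confidence in [0, 1]."""
    return float(patch.mean() / 255.0)

def score_window(image: np.ndarray, win: Window):
    patch = image[win.y:win.y + win.size, win.x:win.x + win.size]
    return win, classifier_confidence(patch)

def search_image(image: np.ndarray, size: int = 64, stride: int = 32):
    """Score every window; separate worker processes act as separate swarms."""
    h, w = image.shape[:2]
    windows = [Window(x, y, size)
               for y in range(0, h - size + 1, stride)
               for x in range(0, w - size + 1, stride)]
    with ProcessPoolExecutor() as pool:
        scored = list(pool.map(score_window, [image] * len(windows), windows))
    return [(win, conf) for win, conf in scored if conf > 0.8]
```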
With reference now to
The process begins by searching for members of a general class of objects in a number of images (operation 402). In operation 402, the members of the general class of objects may be humans in the number of images. The process then searches for members of a specific class of objects in a number of regions in the number of images (operation 404) with the process terminating thereafter.
In operation 404, the number of regions contains at least a portion of the members of the general class. For example, the number of regions may contain all or a part of a human. Also in operation 404, a member in the members of the specific class is a potential threat to a rotorcraft. For example, the member may have a pose that is threatening to the rotorcraft. In another example, the member may be holding a weapon. The process may identify that the weapon is pointed towards the rotorcraft.
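A minimal sketch of operations 402 and 404 follows, assuming hypothetical detector functions and bounding boxes that expose top, bottom, left, and right attributes. In the advantageous embodiments, these searches are performed by cognitive swarms rather than the placeholders shown here.

```python
def detect_threats(images, general_detector, specific_detector):
    """Two-stage search: operation 402, then operation 404 per region."""
    threats = []
    for image in images:
        # Operation 402: search for members of the general class (e.g., humans).
        for box in general_detector(image):
            # Operation 404: search only the region containing the member.
            region = image[box.top:box.bottom, box.left:box.right]
            if specific_detector(region):  # e.g., a threatening pose is found
                threats.append(box)
    return threats
```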
With reference now to
The process begins by receiving a number of images from a sensor associated with a rotorcraft (operation 502). In operation 502, the sensor may be a camera located on the rotorcraft. In other examples, the sensor may be located remotely from the rotorcraft.
The process then searches for humans in the number of images using agents (operation 504). In operation 504, the agents reduce the number of objects in an image to a general class of objects. Thereafter, the process forms a general class (operation 506). In operation 506, the general class is humans that have been identified in the number of images.
The process then identifies a pose of a member of the general class (operation 508). In operation 508, the agents may process images of one or more members of the general class to identify the pose of the member.
Thereafter, the process compares the pose with poses that are potential threats to the rotorcraft (operation 510). In operation 510, the process narrows the general class of members to a specific class of members that present a potential threat to the rotorcraft. For example, the process searches for poses of members that have their arms raised. In other examples, the process searches for weapons positioned in an orientation that is threatening to the rotorcraft. For example, the weapon may be held by a human and pointed at the rotorcraft. In other examples, the weapon may be located on a platform, such as a vehicle or a building. The weapon may be controlled by the human.
The process then determines whether the pose is a potential threat (operation 512). In operation 512, the process may determine that the member has arms raised or is controlling a weapon threatening the rotorcraft. If the process determines that the pose is not a potential threat, the process proceeds to operation 518 discussed below. If, however, the process determines that the pose is a potential threat, the process identifies the location of the potential threat (operation 514). In operation 514, the process may send notifications of the location of the potential threat to operators of the rotorcraft and/or ground units to respond to the potential threat.
Thereafter, the process initiates an action (operation 516). For example, without limitation, in operation 516, the action may be to illuminate the member identified as the potential threat, display an alert, project a light on the member, generate a sound directed at the member, obscure a vision of the member, indicate a location of the member, send a message to a ground unit, display a location of the member on a personal head-worn display, display the location of the member on a display device, fire a weapon to neutralize the potential threat, and/or any other type of action to respond to the potential threat.
The process then determines whether additional members of the general class should be processed (operation 518). In operation 518, the process determines whether the potential threat identified is the only potential threat in the number of images. For example, the process may determine whether additional members are present in the specific class of objects that present a potential threat to the rotorcraft. If the process determines that additional members of the general class should be processed, the process returns to operation 508 to identify a pose of another member of the general class. If, however, the process determines that additional members of the general class should not be processed, the process terminates.
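The loop in operations 502 through 518 may be sketched as follows, under the assumption that pose identification returns a label that can be compared against a set of threatening poses. All helper names and the pose labels are hypothetical.

```python
THREAT_POSES = {"arms_raised", "aiming_weapon"}  # assumed pose labels

def process_images(images, find_humans, identify_pose, initiate_action):
    for image in images:                         # operation 502: images received
        humans = find_humans(image)              # operations 504-506: general class
        for member in humans:                    # operation 518: loop over members
            pose = identify_pose(image, member)      # operation 508
            if pose in THREAT_POSES:                 # operations 510-512
                location = member.location           # operation 514
                initiate_action(location)            # operation 516
```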
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various advantageous embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, function, and/or a portion of an operation or step. For example, one or more of the blocks may be implemented as program code in hardware or a combination of program code and hardware. When implemented in hardware, the hardware may, for example, take the form of integrated circuits that are manufactured or configured to perform one or more operations illustrated in the flowcharts or block diagrams.
In some alternative implementations, the function or functions noted in the block may occur out of the order noted in the figures. For example, in some cases, two blocks shown in succession may be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order depending upon the functionality involved. Also, other blocks may be added in addition to the illustrated blocks in a flowchart or block diagram.
It should be noted that additional search stages beyond the two stages shown may be used if greater search specificity is required. A determination is then made as to whether members of the specific-class are found in the image portions (operation 607). If members of the specific-class are found in the image portions, an operator is cued or alerted as to the location of the objects (operation 608) with the process terminating thereafter. If members of the specific-class are not found in the image portions, the image is further searched for potential general-class members (operation 602). The advantage of the two-stage approach is three-fold: (1) only regions already identified as containing members of the general-class will be searched for members of the specific-class which reduces the false alarm rate; (2) the classifier which detects the specific-class has a more narrowly defined task which improves the detection accuracy; and (3) searching for members of the specific-class only within regions containing members of the general-class uses far fewer computational resources than searching for the members of the specific-class directly over the entire image.
Compared to conventional methods, the two-stage object detection method provides much faster object detection capabilities, as well as the ability to detect an object based on the context of its surroundings. For example, the system is capable of detecting the pose of a person holding a small object when the object itself is too small to be detected directly. The conventional processing flow for recognition of objects in images or video using computer vision consists of three steps, as shown in
First, an analysis window is defined to select the portion of video image 600 that is to be analyzed for the presence or absence of the object of interest (operation 610). In these examples, the analysis window may be an example of windows 308 in
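For contrast with the swarm-based search described next, a sketch of this conventional exhaustive flow is given here. The window size, stride, and confidence threshold are illustrative assumptions, and the feature and classifier functions are placeholders.

```python
import numpy as np

def sliding_window_detect(image: np.ndarray, extract_features, classify,
                          size: int = 64, stride: int = 16):
    """Conventional three-step flow: window, feature values, classification."""
    detections = []
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, stride):        # step 1: analysis window
        for x in range(0, w - size + 1, stride):
            patch = image[y:y + size, x:x + size]
            features = extract_features(patch)      # step 2: feature values
            confidence = classify(features)         # step 3: classification
            if confidence > 0.5:
                detections.append((x, y, confidence))
    return detections
```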
The search or window positioning stage of the specific object detection system in operation 610 can be implemented using cognitive swarms, which is based on particle swarm optimization (PSO). Particle swarm optimization is known in the art and was described by Kennedy, J., Eberhart, R. C., and Shi, Y. in “Swarm Intelligence,” San Francisco: Morgan Kaufmann Publishers, 2001. Particle Swarm Optimization was also described by R. C. Eberhart and Y. Shi in “Particle Swarm Optimization: Developments, Applications, and Resources,” 2001.
Cognitive swarms are a new variation and extension of particle swarm optimization. Cognitive swarms search for and recognize objects by combining particle swarm optimization with an objective function that is based on the recognition confidence. In these examples, the cognitive swarms may be examples of implementations of cognitive swarms 312 in
Particle swarm optimization is a relatively simple optimization method that has its roots in artificial life in general and in bird flocking and swarming theory in particular. Conceptually, it includes aspects of genetic algorithms and evolutionary programming. A population of potential solutions is maintained as the positions of a set of particles in a solution space where each dimension represents one solution component. Each particle is assigned a velocity vector, and the particles then cooperatively explore the solution space in search of the objective function optima. Each particle keeps track of the coordinates in multi-dimensional space that are associated with the best solution (p_i) it has observed so far. A global best parameter (p_g) is used to store the best location among all particles. The velocity of each particle is then changed towards p_i and p_g in a probabilistic way according to:
v_i(t+1) = w·v_i(t) + c_1·φ_1·[p_i(t) − x_i(t)] + c_2·φ_2·[p_g(t) − x_i(t)],
x_i(t+1) = x_i(t) + χ·v_i(t+1)
where x_i(t) and v_i(t) are the position and velocity vectors at time t of the i-th particle, and c_1 and c_2 are parameters that weight the influence of their respective terms in the velocity update equation. The term w is a decay constant which allows the swarm to converge to a solution more quickly, φ_1 and φ_2 are random numbers between 0 and 1 that introduce a degree of random exploration, and χ is a parameter that controls the convergence properties of the swarm.
The above particle swarm optimization dynamics reflect a socio-psychological model where individual particles change their beliefs in accordance with a combination of their own experience and the best experience of the group. This is in contrast to other models of cognition where an individual changes his beliefs to become more consistent with his own experience only. The random element introduces a source of noise which enables an initial random search of the solution space. The search then becomes more directed after a few iterations as the swarm starts to concentrate on more favorable regions.
This type of search is much more efficient than exhaustive or gradient-based search methods. Particle swarm optimization relies on the fact that in most practical problems, the optimum solution usually has better than average solutions residing in a volume around it. These good solutions tend to attract the particles to the region where the optimum lies. The swarm becomes more and more concentrated until the optimum is found (e.g., pg no longer changes). In cognitive swarms, the particle swarm optimization objective function is the confidence level of an object classifier. The cognitive swarm locates objects of interest in the scene by maximizing the classifier confidence.
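A direct sketch of the velocity and position updates above follows in Python. The objective function stands in for the classifier confidence that a cognitive swarm maximizes, and the parameter values are typical choices rather than values specified in this disclosure.

```python
import numpy as np

def particle_swarm_optimize(objective, dims, n_particles=30, iterations=100,
                            w=0.7, c1=1.5, c2=1.5, chi=1.0):
    rng = np.random.default_rng()
    x = rng.uniform(-1.0, 1.0, (n_particles, dims))    # particle positions
    v = np.zeros_like(x)                               # particle velocities
    p = x.copy()                                       # per-particle best positions
    p_scores = np.array([objective(xi) for xi in x])
    p_g = p[p_scores.argmax()].copy()                  # global best position

    for _ in range(iterations):
        phi1 = rng.random((n_particles, dims))
        phi2 = rng.random((n_particles, dims))
        # v_i(t+1) = w v_i(t) + c_1 phi_1 [p_i - x_i] + c_2 phi_2 [p_g - x_i]
        v = w * v + c1 * phi1 * (p - x) + c2 * phi2 * (p_g - x)
        x = x + chi * v                                # x_i(t+1) = x_i(t) + chi v_i(t+1)
        scores = np.array([objective(xi) for xi in x])
        improved = scores > p_scores                   # update per-particle bests
        p[improved] = x[improved]
        p_scores[improved] = scores[improved]
        p_g = p[p_scores.argmax()].copy()              # update global best
    return p_g, float(p_scores.max())
```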
The feature extraction and feature value calculation stage 612 can be implemented using various types of features known in the art. As a non-limiting example,
The parameter tval controls how the continuously-valued version of the Gabor wavelet is converted into a thresholded version that assumes values of −1, 0, or 1 only. The thresholded Gabor wavelet has computational advantages because multiplication is not required to calculate the feature values. All of the adjustable parameters in the Gabor wavelet equation are optimized for a high recognition rate during the classifier development process.
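A sketch of a thresholded Gabor wavelet follows, assuming a standard Gabor formulation, since the exact equation referenced above is not reproduced in this text. Because the thresholded kernel takes only the values −1, 0, and 1, the feature value reduces to additions and subtractions of pixel values, with no multiplication.

```python
import numpy as np

def thresholded_gabor(size=15, sigma=3.0, theta=0.0, wavelength=8.0, tval=0.3):
    """Build a Gabor kernel and threshold it to values of -1, 0, or 1."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinates
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    g = (np.exp(-(x_r**2 + y_r**2) / (2 * sigma**2))
         * np.cos(2 * np.pi * x_r / wavelength))
    return np.where(g > tval, 1, np.where(g < -tval, -1, 0))

def feature_value(patch: np.ndarray, kernel: np.ndarray) -> float:
    """Sum pixels under the +1 cells and subtract pixels under the -1 cells."""
    return float(patch[kernel == 1].sum() - patch[kernel == -1].sum())
```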
A third possible feature set is Fuzzy Edge Symmetry Features which is known in the art and shown in
The feature sets used can be selected by sorting against their importance for a classification task using any of a number of techniques known in the art including, but not limited to, metrics such as mutual information or latent feature discovery models. The feature sets used for training the first-stage classifiers are different from those used to train the second-stage classifiers. For example, wavelet feature sets for a human/non-human classification stage are shown in
Once an object has been identified as a member of the human class, it is sent to a second-stage classifier to identify the object as a member of a predetermined specific-class. In this case, the predetermined specific-class is a human holding his arms vertically upward, as would a football referee signaling a touchdown.
In order to achieve both high accuracy and speed, a classifier cascade as exemplified in
Potential objects that pass this stage are then fed through false alarm mitigation classifier 1210 and window diversity test 1212 before being output as detected object 1214. By using a less accurate but fast object classifier 1206 in the early stages of the cascade, images that do not contain objects of interest can be quickly rejected. Only candidate images with a higher probability of being true objects of interest are passed on to the later, more accurate, but also more complex classifiers 1206, 1208, and 1210.
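A classifier cascade with early rejection may be sketched as follows; the stage functions and thresholds here are hypothetical placeholders. Cheap stages run first and discard most candidate windows, so only promising candidates ever reach the slower, more accurate stages.

```python
def cascade_classify(window, stages):
    """stages: list of (classifier, threshold) pairs, ordered fast to slow."""
    for classifier, threshold in stages:
        if classifier(window) < threshold:
            return False            # rejected early; later stages never run
    return True                     # passed every stage: report a detection

# Example wiring with placeholder stage classifiers:
# stages = [(fast_coarse_classifier, 0.3),
#           (accurate_classifier, 0.6),
#           (false_alarm_mitigation_classifier, 0.8)]
```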
Plots of an experimental detection rate versus false alarm rate for the two-stage classification method of the present disclosure are shown in
The detailed algorithm flow for the two-stage detection system is shown in
Humans in a specific pose are then detected (operation 1410) using local pose-classifier cognitive swarms that search the local regions around the humans or, alternatively, by a simple scanning search performed in the local vicinity of the humans. The local search method that is used depends on the localization accuracy of the human-classifier cognitive swarms. If the uncertainty is relatively large, then local pose-classifier cognitive swarms should be used. An image must pass through both the human and pose detection stages in order to be recognized as a specific-class object. A visual, audio, or tactile alert is issued when a specific-class object is detected (operation 1412).
With reference now to
The present disclosure further incorporates the methods described above into a system for display of, and automated response to, detected objects.
One possible array of cameras is an array of six cameras, each with a 60-degree field of view, to provide full coverage across 360 degrees. The images sensed by the cameras are fed into processing subsystem 1604 where the computer vision algorithms described above are used to detect and identify objects of interest 1602. If multiple sensors are used, the system will also require a networked array of multiple computers in network subsystem 1606 processing the camera images in parallel. Multiple computers in network subsystem 1606 can be connected with a master processor for coordinating results from the network of data processors. When objects of interest are detected, their locations are sent to the two output subsystems (i.e., automated response subsystem 1608 and display subsystem 1610). In this example, automated response subsystem 1608 is an example of one embodiment of response system 240 in
Automated response subsystem 1608 is an optional component of this system. There may be situations in which a response to identified objects is needed faster than human operators are able to react to an alert. Non-limiting examples of automated responses include automatically redirecting cameras to the location of a recent play in a sporting event, automatically contacting law enforcement upon detection of unauthorized personnel, or automatically locking or unlocking doors in a building.
Another system component, as shown in
One display option is to illuminate the object with a spotlight. Another display option is to use Augmented Reality technologies to display the object location through personal head-worn displays 1700 worn by the operators, as shown in
A third and more conventional display approach is to display the object information in a head-down display or on a small flat panel display, such as a personal digital assistant (PDA). Possible implementations include two-dimensional (2-D) and three-dimensional (3-D) versions of such a display, as shown in
Turning now to
Processor unit 1904 serves to process instructions for software that may be loaded into memory 1906. Processor unit 1904 may be a number of processors, a multi-processor core, or some other type of processor, depending on the particular implementation. Further, processor unit 1904 may be implemented using a number of heterogeneous processor systems in which a main processor is present with secondary processors on a single chip. As another illustrative example, processor unit 1904 may be a symmetric multi-processor system containing multiple processors of the same type.
Memory 1906 and persistent storage 1908 are examples of storage devices 1916. A storage device is any piece of hardware that is capable of storing information, such as, for example, without limitation, data, program code in functional form, and/or other suitable information either on a temporary basis and/or a permanent basis. Memory 1906, in these examples, may be, for example, a random access memory or any other suitable volatile or non-volatile storage device. Persistent storage 1908 may take various forms, depending on the particular implementation.
For example, persistent storage 1908 may contain one or more components or devices. For example, persistent storage 1908 may be a hard drive, a flash memory, a rewritable optical disk, a rewritable magnetic tape, or some combination of the above. The media used by persistent storage 1908 also may be removable. For example, a removable hard drive may be used for persistent storage 1908.
Communications unit 1910, in these examples, provides for communications with other data processing systems or devices. In these examples, communications unit 1910 is a network interface card. Communications unit 1910 may provide communications through the use of either or both physical and wireless communications links.
Input/output unit 1912 allows for input and output of data with other devices that may be connected to data processing system 1900. For example, input/output unit 1912 may provide a connection for user input through a keyboard, a mouse, and/or some other suitable input device. Further, input/output unit 1912 may send output to a printer. Display 1914 provides a mechanism to display information to a user.
Instructions for the operating system, applications, and/or programs may be located in storage devices 1916, which are in communication with processor unit 1904 through communications fabric 1902. In these illustrative examples, the instructions are in a functional form on persistent storage 1908. These instructions may be loaded into memory 1906 for processing by processor unit 1904. The processes of the different embodiments may be performed by processor unit 1904 using computer implemented instructions, which may be located in a memory, such as memory 1906.
These instructions are referred to as program code, computer usable program code, or computer readable program code that may be read and processed by a processor in processor unit 1904. The program code in the different embodiments may be embodied on different physical or tangible computer readable media, such as memory 1906 or persistent storage 1908.
Program code 1918 is located in a functional form on computer readable media 1920 that is selectively removable and may be loaded onto or transferred to data processing system 1900 for processing by processor unit 1904. Program code 1918 and computer readable media 1920 form computer program product 1922 in these examples. In one example, computer readable media 1920 may be computer readable storage media 1924 or computer readable signal media 1926. Computer readable storage media 1924 may include, for example, an optical or magnetic disk that is inserted or placed into a drive or other device that is part of persistent storage 1908 for transfer onto a storage device, such as a hard drive, that is part of persistent storage 1908. Computer readable storage media 1924 also may take the form of a persistent storage, such as a hard drive, a thumb drive, or a flash memory, that is connected to data processing system 1900. In some instances, computer readable storage media 1924 may not be removable from data processing system 1900. In these illustrative examples, computer readable storage media 1924 is a non-transitory computer readable storage medium.
Alternatively, program code 1918 may be transferred to data processing system 1900 using computer readable signal media 1926. Computer readable signal media 1926 may be, for example, a propagated data signal containing program code 1918. For example, computer readable signal media 1926 may be an electromagnetic signal, an optical signal, and/or any other suitable type of signal. These signals may be transmitted over communications links, such as wireless communications links, optical fiber cable, coaxial cable, a wire, and/or any other suitable type of communications link. In other words, the communications link and/or the connection may be physical or wireless in the illustrative examples.
In some advantageous embodiments, program code 1918 may be downloaded over a network to persistent storage 1908 from another device or data processing system through computer readable signal media 1926 for use within data processing system 1900. For instance, program code stored in a computer readable storage medium in a server data processing system may be downloaded over a network from the server to data processing system 1900. The data processing system providing program code 1918 may be a server computer, a client computer, or some other device capable of storing and transmitting program code 1918.
In this example, program code 1918 may include program code for detecting threats against a rotorcraft. For example, program code 1918 may include software that is a part of object detection module 220 in
In some advantageous embodiments, processor unit 1904 is configured to receive images and output locations of detected objects. Data processing system 1900 may be further configured to perform the acts of the method of the present disclosure, including: searching for members of a predetermined general-class of objects in an image, detecting members of the general-class of objects in the image, selecting regions of the image containing detected members of the general-class of objects, searching for members of a predetermined specific-class of objects within the selected regions, detecting members of the specific-class of objects within the selected regions of the image, and outputting the locations of detected objects to an operator display unit and optionally to an automatic response system. While data processing system 1900 can be configured for specific detection of humans in certain poses, data processing system 1900 also can be configured for generic object detection.
The different components illustrated for data processing system 1900 are not meant to provide architectural limitations to the manner in which different embodiments may be implemented. The different advantageous embodiments may be implemented in a data processing system including components in addition to, or in place of, those illustrated for data processing system 1900. Other components shown in
In another illustrative example, processor unit 1904 may take the form of a hardware unit that has circuits that are manufactured or configured for a particular use. This type of hardware may perform operations without needing program code to be loaded into a memory from a storage device to be configured to perform the operations.
For example, when processor unit 1904 takes the form of a hardware unit, processor unit 1904 may be a circuit system, an application specific integrated circuit (ASIC), a programmable logic device, or some other suitable type of hardware configured to perform a number of operations. With a programmable logic device, the device is configured to perform the number of operations. The device may be reconfigured at a later time or may be permanently configured to perform the number of operations. Examples of programmable logic devices include, for example, a programmable logic array, programmable array logic, a field programmable logic array, a field programmable gate array, and other suitable hardware devices. With this type of implementation, program code 1918 may be omitted, because the processes for the different embodiments are implemented in a hardware unit.
In still another illustrative example, processor unit 1904 may be implemented using a combination of processors found in computers and hardware units. Processor unit 1904 may have a number of hardware units and a number of processors that are configured to run program code 1918. With this depicted example, some of the processes may be implemented in the number of hardware units, while other processes may be implemented in the number of processors.
As another example, a storage device in data processing system 1900 is any hardware apparatus that may store data. Memory 1906, persistent storage 1908, and computer readable media 1920 are examples of storage devices in a tangible form.
In another example, a bus system may be used to implement communications fabric 1902 and may be comprised of one or more buses, such as a system bus or an input/output bus. Of course, the bus system may be implemented using any suitable type of architecture that provides for a transfer of data between different components or devices attached to the bus system. Additionally, a communications unit may include one or more devices used to transmit and receive data, such as a modem or a network adapter. Further, a memory may be, for example, memory 1906, or a cache, such as found in an interface and memory controller hub that may be present in communications fabric 1902.
The description of the different advantageous embodiments has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. Further, different advantageous embodiments may provide different advantages as compared to other advantageous embodiments. The embodiment or embodiments selected are chosen and described in order to best explain the principles of the embodiments, the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.
Owechko, Yuri, Kim, Kyungnam, Medasani, Swarup S.
Patent | Priority | Assignee | Title
6178141 | Nov 20 1996 | Raytheon BBN Technologies Corp | Acoustic counter-sniper system
6434271 | Feb 06 1998 | Hewlett-Packard Development Company, L.P. | Technique for locating objects within an image
6621764 | Apr 30 1997 | | Weapon location by acoustic-optic sensor fusion
7046187 | Aug 06 2004 | Humatics Corporation | System and method for active protection of a resource
7066427 | Feb 26 2004 | Chang Industry, Inc. | Active protection device and associated apparatus, system, and method
7104496 | Feb 26 2004 | Chang Industry, Inc. | Active protection device and associated apparatus, system, and method
7110569 | Sep 27 2001 | Koninklijke Philips Electronics N.V. | Video based detection of fall-down and other events
7135992 | Dec 17 2002 | iRobot Corporation | Systems and methods for using multiple hypotheses in a visual simultaneous localization and mapping system
7139222 | Jan 20 2004 | ShotSpotter, Inc. | System and method for protecting the location of an acoustic event detector
7151478 | Feb 07 2005 | Raytheon Company | Pseudo-orthogonal waveforms radar system, quadratic polyphase waveforms radar, and methods for locating targets
7181046 | Nov 01 2000 | Koninklijke Philips Electronics N.V. | Person tagging in an image processing system utilizing a statistical model based on both appearance and geometric features
7190633 | Aug 24 2004 | Raytheon BBN Technologies Corp | Self-calibrating shooter estimation
7266045 | Jan 22 2004 | ShotSpotter, Inc. | Gunshot detection sensor with display
7292501 | Aug 24 2004 | Raytheon BBN Technologies Corp | Compact shooter localization system and method
7359285 | Aug 23 2005 | Raytheon BBN Technologies Corp | Systems and methods for determining shooter locations with weak muzzle detection
7408840 | Aug 24 2004 | Raytheon BBN Technologies Corp | System and method for disambiguating shooter locations
7558762 | Aug 14 2004 | HRL Laboratories, LLC | Multi-view cognitive swarm for object recognition and 3D tracking
7586812 | Jan 24 2003 | ShotSpotter, Inc. | Systems and methods of identifying/locating weapon fire including return fire, targeting, laser sighting, and/or guided weapon features
7599252 | Oct 10 2006 | ShotSpotter, Inc. | Acoustic location of gunshots using combined angle of arrival and time of arrival measurements
7599894 | Mar 04 2005 | HRL Laboratories, LLC | Object recognition using a cognitive swarm vision framework with attention mechanisms
7636700 | Feb 03 2004 | HRL Laboratories, LLC | Object recognition system incorporating swarming domain classifiers
7672911 | Aug 14 2004 | HRL Laboratories, LLC | Graph-based cognitive swarms for object group recognition in a 3N or greater-dimensional solution space
8213709 | Nov 03 2009 | HRL Laboratories, LLC | Method and system for directed area search using cognitive swarm vision and cognitive Bayesian reasoning
8285655 | Feb 03 2004 | HRL Laboratories, LLC | Method for object recognition using multi-layered swarm sweep algorithms
8488877 | Dec 02 2009 | HRL Laboratories, LLC | System for object recognition in colorized point clouds
8515126 | May 03 2007 | HRL Laboratories, LLC | Multi-stage method for object detection using cognitive swarms and system for automated response to detected objects
8589315 | Aug 14 2004 | HRL Laboratories, LLC | Behavior recognition using cognitive swarms and fuzzy graphs
8649565 | Jun 18 2009 | HRL Laboratories, LLC | System for automatic object localization based on visual simultaneous localization and mapping (SLAM) and cognitive swarm recognition
20050182518
20050238200
20070019865
20070090973
20080033645
20090244309
20090290019
20100111374
WO 03093947