A method and apparatus to identify an object include extracting first location information, second location information, and motion information of an object from a polarimetric radar signal that is reflected from the object. Each of the first location information, the second location information, and the motion information corresponds to each of the polarized waves. The method and apparatus also include generating a first image and a second image, combining the first image and the second image to generate first composite images, each corresponding to each of the polarized waves, and identifying the object using a neural network based on the first composite images. The first image corresponds to each of the polarized waves and includes the first location information and the second location information, and the second image corresponds to each of the polarized waves and includes the first location information and the motion information.
1. A method of identifying an object, the method comprising:
extracting first location information, second location information, and motion information of an object from a polarimetric radar signal that is reflected from the object, wherein each of the first location information, the second location information, and the motion information corresponds to each of polarized waves in a multi-polarization;
generating a first image comprising the first location information and the second location information for each of the polarized waves;
generating a second image comprising the first location information and the motion information for each of the polarized waves;
combining the first image and the second image for each of the respective polarized waves to generate first composite images; and
identifying the object using a neural network based on the first composite images.
11. An apparatus to identify an object, the apparatus comprising:
a processor configured to
extract first location information, second location information, and motion information of an object from a polarimetric radar signal reflected from the object, wherein each of the first location information, the second location information, and the motion information corresponds to each of polarized waves in a multi-polarization;
generate a first image comprising the first location information and the second location information for each of the polarized waves;
generate a second image comprising the first location information and the motion information for each of the polarized waves;
combine the first image and the second image for each of the respective polarized waves to generate first composite images; and
identify the object using a neural network based on the first composite images.
2. The method of claim 1, further comprising:
inputting a second composite image generated by combining the first composite images to the neural network to identify the object.
3. The method of claim 1, wherein:
the first location information is range information,
the second location information is angle information,
the first image is a range-angle image, and
the second image is a range-velocity image.
4. The method of claim 1, wherein:
the first location information is vertical direction information,
the second location information is horizontal direction information,
the first image is a vertical direction-horizontal direction image, and
the second image is a vertical direction-velocity image.
5. The method of claim 1, further comprising:
in response to determining that a signal-to-noise ratio (SNR) of the polarimetric radar signal satisfies a criterion, determining the first location information to be range information, determining the second location information to be angle information, determining the first image to be a range-angle image, and determining the second image to be a range-velocity image, and
in response to determining that the SNR does not satisfy the criterion, determining the first location information to be vertical direction information, determining the second location information to be horizontal direction information, determining the first image to be a vertical direction-horizontal direction image, and determining the second image to be a vertical direction-velocity image.
6. The method of claim 1, wherein the polarimetric radar signal comprises a vertical/vertical (V/V) polarization signal, a vertical/horizontal (V/H) polarization signal, a horizontal/vertical (H/V) polarization signal, and a horizontal/horizontal (H/H) polarization signal.
7. The method of claim 1, wherein the polarimetric radar signal comprises a left-handed circular polarization (LHCP)/right-handed circular polarization (RHCP) signal, an LHCP/LHCP signal, an RHCP/LHCP signal, and an RHCP/RHCP signal.
8. The method of claim 1, further comprising:
generating differential images between the first composite images; and
inputting a third composite image generated by combining the differential images to the neural network to identify the object.
9. The method of claim 1, further comprising:
generating cross-correlation images between the first composite images; and
inputting a fourth composite image generated by combining the cross-correlation images to the neural network to identify the object.
10. A non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the method of claim 1.
12. The apparatus of claim 11, wherein the processor is further configured to input a second composite image generated by combining the first composite images to the neural network to identify the object.
13. The apparatus of claim 11, wherein:
the first location information is range information,
the second location information is angle information,
the first image is a range-angle image, and
the second image is a range-velocity image.
14. The apparatus of claim 11, wherein:
the first location information is vertical direction information,
the second location information is horizontal direction information,
the first image is a vertical direction-horizontal direction image, and
the second image is a vertical direction-velocity image.
15. The apparatus of claim 11, wherein:
the processor is further configured to determine whether a signal-to-noise ratio (SNR) of the polarimetric radar signal satisfies a predetermined criterion,
in response to the SNR being determined to satisfy the criterion, the processor determines the first location information to be range information, the second location information to be angle information, the first image to be a range-angle image, and the second image to be a range-velocity image, and
in response to the SNR being determined not to satisfy the criterion, the processor determines the first location information to be vertical direction information, the second location information to be horizontal direction information, the first image to be a vertical direction-horizontal direction image, and the second image to be a vertical direction-velocity image.
16. The apparatus of claim 11, wherein the polarimetric radar signal comprises a vertical/vertical (V/V) polarization signal, a vertical/horizontal (V/H) polarization signal, a horizontal/vertical (H/V) polarization signal, and a horizontal/horizontal (H/H) polarization signal.
17. The apparatus of claim 11, wherein the polarimetric radar signal comprises a left-handed circular polarization (LHCP)/right-handed circular polarization (RHCP) signal, an LHCP/LHCP signal, an RHCP/LHCP signal, and an RHCP/RHCP signal.
18. The apparatus of claim 11, wherein the processor is further configured to generate differential images between the first composite images, and to input a third composite image generated by combining the differential images to the neural network to identify the object.
19. The apparatus of claim 11, wherein the processor is further configured to generate cross-correlation images between the first composite images, and to input a fourth composite image generated by combining the cross-correlation images to the neural network to identify the object.
20. The apparatus of claim 11, further comprising:
a transmission antenna configured to radiate the polarimetric radar signal to the object; and
a reception antenna configured to receive the reflected polarimetric radar signal from the object.
This application claims the benefit under 35 USC § 119(a) of Korean Patent Application No. 10-2017-0108469, filed on Aug. 28, 2017, in the Korean Intellectual Property Office, the entire disclosure of which is incorporated herein by reference for all purposes.
The following description relates to an apparatus and a method of identifying an object using a neural network.
The quantity of data transmitted by a polarimetric RADAR is four times that of a single-polarization RADAR. In a polarimetric RADAR, the required data is used selectively, depending on the purpose of use. For example, a range-Doppler map includes range information and velocity information, and a micro-Doppler analysis uses velocity information.
This Summary is provided to introduce a selection of concepts that are further described below in the Detailed Description in simplified form. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In accordance with an embodiment, there is provided a method of identifying an object, the method including: extracting first location information, second location information, and motion information of an object from a polarimetric RADAR signal that is reflected from the object, wherein each of the first location information, the second location information, and the motion information corresponds to each of polarized waves; generating a first image and a second image, wherein the first image corresponds to each of the polarized waves and may include the first location information and the second location information, and the second image corresponds to each of the polarized waves and may include the first location information and the motion information; combining the first image and the second image to generate first composite images, each corresponding to each of the polarized waves; and identifying the object using a neural network based on the first composite images.
The method may further include: inputting a second composite image, generated by combining the first composite images, to the neural network to identify the object.
The first location information may be range information, the second location information may be angle information, the first image may be a range-angle image, and the second image may be a range-velocity image.
The first location information may be vertical direction information, the second location information may be horizontal direction information, the first image may be a vertical direction-horizontal direction image, and the second image may be a vertical direction-velocity image.
The method may further include: in response to determining that a signal-to-noise ratio (SNR) of the polarimetric RADAR signal satisfies a criterion, determining the first location information to be range information, determining the second location information to be angle information, determining the first image to be a range-angle image, and determining the second image to be a range-velocity image, and in response to determining that the SNR does not satisfy the criterion, determining the first location information to be vertical direction information, determining the second location information to be horizontal direction information, determining the first image to be a vertical direction-horizontal direction image, and determining the second image to be a vertical direction-velocity image.
The polarimetric RADAR signal may include a vertical/vertical (V/V) polarization signal, a vertical/horizontal (V/H) polarization signal, a horizontal/vertical (H/V) polarization signal and a horizontal/horizontal (H/H) polarization signal.
The polarimetric RADAR signal may include a left-handed circular polarization (LHCP)/right-handed circular polarization (RHCP) signal, an LHCP/LHCP signal, an RHCP/LHCP signal, and an RHCP/RHCP signal.
The method may further include: generating differential images between the first composite images; and inputting a third composite image, generated by combining the differential images, to the neural network to identify the object.
The method may further include: generating cross-correlation images between the first composite images; and inputting a fourth composite image, generated by combining the cross-correlation images, to the neural network to identify the object.
In accordance with an embodiment, there is provided a non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the method described above.
In accordance with an embodiment, there is provided an apparatus to identify an object, the apparatus including: a processor configured to extract first location information, second location information, and motion information of an object from a polarimetric RADAR signal reflected from the object, wherein each of the first location information, the second location information, and the motion information corresponds to each of polarized waves in a multi-polarization; generate a first image and a second image, wherein the first image corresponds to each of the polarized waves and may include the first location information and the second location information, and the second image corresponds to each of the polarized waves and may include the first location information and the motion information; combine the first image and the second image to generate first composite images, each corresponding to each of the polarized waves; and identify the object using a neural network based on the first composite images.
The processor may be further configured to input a second composite image, generated by combining the first composite images, to the neural network to identify the object.
The first location information may be range information, the second location information may be angle information, the first image may be a range-angle image, and the second image may be a range-velocity image.
The first location information may be vertical direction information, the second location information may be horizontal direction information, the first image may be a vertical direction-horizontal direction image, and the second image may be a vertical direction-velocity image.
The processor may be further configured to determine whether a signal-to-noise ratio (SNR) of the polarimetric RADAR signal satisfies a predetermined criterion, in response to the SNR being determined to satisfy the criterion, the processor determines the first location information to be range information, the second location information to be angle information, the first image to be a range-angle image, and the second image to be a range-velocity image, and in response to the SNR being determined not to satisfy the criterion, the processor determines the first location information to be vertical direction information, the second location information to be horizontal direction information, the first image to be a vertical direction-horizontal direction image, and the second image to be a vertical direction-velocity image.
The polarimetric RADAR signal may include a vertical/vertical (V/V) polarization signal, a vertical/horizontal (V/H) polarization signal, a horizontal/vertical (H/V) polarization signal and a horizontal/horizontal (H/H) polarization signal.
The polarimetric RADAR signal may include a left-handed circular polarization (LHCP)/right-handed circular polarization (RHCP) signal, an LHCP/LHCP signal, an RHCP/LHCP signal, and an RHCP/RHCP signal.
The processor may be further configured to generate differential images between the first composite images, and to input a third composite image, generated by combining the differential images, to the neural network to identify the object.
The processor may be further configured to generate cross-correlation images between the first composite images, and to input a fourth composite image, generated by combining the cross-correlation images, to the neural network to identify the object.
The apparatus may further include: a transmission antenna configured to radiate the polarimetric RADAR signal to the object; and a reception antenna configured to receive the reflected polarimetric RADAR signal from the object.
Other features and aspects will be apparent from the following detailed description, the drawings, and the claims.
Throughout the drawings and the detailed description, the same reference numerals refer to the same elements. The drawings may not be to scale, and the relative sizes, proportions, and depictions of elements in the drawings may be exaggerated for the purpose of clarity, illustration, and convenience.
The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of this application. For example, the sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent after an understanding of the disclosure of this application, with the exception of operations necessarily occurring in a certain order. Also, descriptions of features that are known in the art may be omitted for increased clarity and conciseness.
The features described herein may be embodied in different forms, and are not to be construed as being limited to the examples described herein. Rather, the examples described herein have been provided merely to illustrate some of the many possible ways of implementing the methods, apparatuses, and/or systems described herein that will be apparent after an understanding of the disclosure of this application.
The following structural or functional descriptions of examples disclosed in the present disclosure are merely intended for the purpose of describing the examples and the examples may be implemented in various forms. The examples are not meant to be limited, but it is intended that various modifications, equivalents, and alternatives are also covered within the scope of the claims.
Although terms of “first” or “second” are used to explain various components, the components are not limited to the terms. These terms should be used only to distinguish one component from another component. For example, a “first” component may be referred to as a “second” component, and similarly, the “second” component may also be referred to as the “first” component, within the scope of the right according to the concept of the present disclosure.
It will be understood that when a component is referred to as being “connected to” another component, the component can be directly connected or coupled to the other component or intervening components may be present.
As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, components or a combination thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined herein, all terms used herein including technical or scientific terms have the same meanings as those generally understood. Terms defined in dictionaries generally used should be construed to have meanings matching with contextual meanings in the related art and are not to be construed as an ideal or excessively formal meaning unless otherwise defined herein.
Hereinafter, examples will be described in detail with reference to the accompanying drawings. In the following description, it should be noted that the same elements will be designated by the same reference numerals, wherever possible, even though they are shown in different drawings.
Referring to the drawings, the object identification apparatus 100 includes a transmission antenna 121, a reception antenna 123, and a processor 110. The transmission antenna 121 radiates or transmits a polarimetric RADAR signal to an object 130. The reception antenna 123 receives a polarimetric RADAR signal that is reflected from the object 130 and that returns to the object identification apparatus 100. The processor 110 identifies a location or shape of the object 130 based on the received polarimetric RADAR signal. In the present disclosure, the term “RADAR” is an acronym for “radio detection and ranging.”
The object identification apparatus 100 generates a single-channel image from pieces of location information for each polarized wave based on the polarimetric RADAR signal, and generates a two-channel image by combining the single-channel image with velocity information. The object identification apparatus 100 generates a single image by combining two-channel images for each polarized wave, inputs the generated image to a pre-trained neural network, and identifies an object represented by the image.
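The following is a minimal sketch, in Python/NumPy, of the channel-composition flow described above, assuming four linear-polarization pairs and per-polarization 2D maps that have already been extracted; the array shapes, dictionary keys, and function name are illustrative assumptions, not an API defined by this disclosure.

```python
import numpy as np

def compose_polarimetric_input(location_maps, velocity_maps):
    """Stack a single-channel location image with a velocity channel for each
    polarized wave, then concatenate the per-polarization results.

    location_maps, velocity_maps: dicts keyed by polarization pair (e.g. "VV"),
    each holding a 2D array of shape (H, W).
    """
    composites = []
    for pol in sorted(location_maps):
        # One 2-channel image per polarized wave: location map + velocity map.
        two_channel = np.stack([location_maps[pol], velocity_maps[pol]], axis=0)
        composites.append(two_channel)                 # (2, H, W)
    # Four 2-channel composites -> one multi-channel network input.
    return np.concatenate(composites, axis=0)          # (8, H, W)

# Example with random data standing in for real radar-derived maps.
rng = np.random.default_rng(0)
pols = ["VV", "VH", "HV", "HH"]
loc = {p: rng.standard_normal((64, 64)) for p in pols}
vel = {p: rng.standard_normal((64, 64)) for p in pols}
net_input = compose_polarimetric_input(loc, vel)
print(net_input.shape)  # (8, 64, 64)
```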
The neural network is referred to as an “artificial neural network.” The neural network is a recognition model or a recognition process or method implemented by hardware that mimics a calculation ability of a biological system using a large number of artificial neurons (or nodes). The neural network performs a human cognition or learning process using the artificial neurons. The neural network includes, for example, a deep neural network, a convolutional neural network or a recurrent neural network. For example, the neural network is trained using or based on training data acquired through an error backpropagation algorithm.
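The disclosure does not fix a particular network architecture, so the following is only a sketch of one plausible convolutional classifier, in PyTorch, assuming an 8-channel composite input and a hypothetical number of object classes.

```python
import torch
import torch.nn as nn

class ObjectIdNet(nn.Module):
    """Illustrative CNN that consumes a multi-channel polarimetric composite."""
    def __init__(self, in_channels=8, num_classes=5):   # both values assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                    # global pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)                 # (batch, 32)
        return self.classifier(h)                       # class logits

logits = ObjectIdNet()(torch.randn(1, 8, 64, 64))
print(logits.shape)  # torch.Size([1, 5])
```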
In an example, the polarimetric RADAR signal includes a vertical/vertical (V/V) polarization signal, a vertical/horizontal (V/H) polarization signal, a horizontal/vertical (H/V) polarization signal and a horizontal/horizontal (H/H) polarization signal. In another example, the polarimetric RADAR signal includes a left-handed circular polarization (LHCP)/right-handed circular polarization (RHCP) signal, an LHCP/LHCP signal, an RHCP/LHCP signal, and an RHCP/RHCP signal. Different types of polarized signals or waves do not affect each other.
The object identification apparatus 100 extracts range information, angle information, or velocity information from the polarimetric RADAR signal, for each polarized wave. The types of polarization include, for example, four pairings of vertical and horizontal polarizations, or four pairings of LHCPs and RHCPs.
In an example, the object identification apparatus 100 generates a two-dimensional (2D) range-angle image based on the range information and the angle information for each polarized wave. A range-angle image is, for example, a real aperture RADAR (RAR) image. In another example, the object identification apparatus 100 generates a 2D vertical distance-horizontal distance image based on vertical distance information and horizontal distance information for each polarized wave. A vertical distance-horizontal distance image is, for example, a synthetic-aperture RADAR (SAR) image.
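No specific signal-processing chain is prescribed for extracting these maps; the sketch below shows one conventional FMCW-style possibility, in which fast-time, slow-time, and across-antenna FFTs yield range, velocity, and angle axes. The cube shape and all sizes are assumptions.

```python
import numpy as np

def radar_maps(iq_cube):
    """Return (range-angle, range-velocity) magnitude maps for one polarization.

    iq_cube: complex array of shape (num_chirps, num_samples, num_antennas).
    """
    rng_fft = np.fft.fft(iq_cube, axis=1)                       # fast time -> range bins
    dop = np.fft.fftshift(np.fft.fft(rng_fft, axis=0), axes=0)  # slow time -> velocity bins
    ang = np.fft.fftshift(np.fft.fft(rng_fft, axis=2), axes=2)  # across antennas -> angle bins
    range_velocity = np.abs(dop).mean(axis=2)                   # (velocity, range)
    range_angle = np.abs(ang).mean(axis=0)                      # (range, angle)
    return range_angle, range_velocity

cube = np.random.randn(64, 128, 8) + 1j * np.random.randn(64, 128, 8)
ra, rv = radar_maps(cube)
print(ra.shape, rv.shape)  # (128, 8) (64, 128)
```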
Also, the object identification apparatus 100 generates a three-dimensional (3D) first composite image by synthesizing the velocity information with a 2D image. The object identification apparatus 100 inputs the 3D first composite image to a neural network trained through deep learning, and identifies an object represented in the 3D first composite image. Deep learning is an artificial intelligence (AI) function that imitates the workings of the human brain in processing data and creating patterns for use in decision making; it is a subset of machine learning in AI that uses networks capable of learning, without supervision, from data that is unstructured or unlabeled. An image processing algorithm according to a related art uses a three-channel image including RGB information and, as a result, data loss occurs when such an algorithm is applied to a polarimetric RADAR signal that provides four-channel information. The object identification apparatus 100, in accordance with an example of the present disclosure, processes a single-channel image using the neural network; thus, the object identification apparatus 100 makes it possible to select desired information from a polarimetric RADAR signal and to reduce data loss.
The object identification apparatus 100 generates a two-channel image by reflecting the velocity information in the single-channel image. By synthesizing the velocity information with the otherwise insufficient information in the single-channel image, the object identification apparatus 100 provides the neural network with a two-channel image that includes a more sufficient amount of information. Thus, the accuracy of the object identification result of the neural network is enhanced.
In another example, the object identification apparatus 100 increases the number of transmitters and the number of receivers of a polarimetric RADAR. The object identification apparatus 100 then extracts more diverse information from polarimetric RADAR signals using a multiple-input and multiple-output (MIMO) technology and a virtual array technology.
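As a brief illustration of the virtual-array idea mentioned above, each transmit/receive pair behaves like a single receive element located at the sum of the two element positions, so two transmitters and four receivers can emulate an eight-element array; the element spacings below are assumptions.

```python
import numpy as np

# Element positions in units of half-wavelengths (illustrative values only).
tx = np.array([0.0, 2.0])                  # 2 transmit elements
rx = np.array([0.0, 0.5, 1.0, 1.5])        # 4 receive elements

# Each Tx/Rx pair contributes a virtual element at the position sum.
virtual = (tx[:, None] + rx[None, :]).ravel()
print(np.sort(virtual))  # 8 uniformly spaced virtual elements from 2 Tx x 4 Rx
```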
Referring to the drawings, in operation 210, the object identification apparatus 100 extracts first location information, second location information, and motion information of an object from a polarimetric RADAR signal that is reflected from the object. Each of the first location information, the second location information, and the motion information corresponds to each of the polarized waves in a multi-polarization.
In operation 220, the object identification apparatus 100 generates a first image and a second image. The first image corresponds to each of the polarized waves, and includes the first location information and the second location information. The second image corresponds to each of the polarized waves, and includes the first location information and the motion information.
For example, when the polarimetric RADAR signal includes a vertical/vertical (V/V) polarization signal, a vertical/horizontal (V/H) polarization signal, a horizontal/vertical (H/V) polarization signal, and a horizontal/horizontal (H/H) polarization signal, the object identification apparatus 100 generates four first images, one for each of the polarized waves. In this example, each of the four first images has a single channel, instead of three channels, for example, RGB channels. The object identification apparatus 100 synthesizes velocity information with each of the four first images. As a result, four second images of two channels, that is, a channel of image information and a channel of velocity information, are generated.
In an example, when the first location information is range information and the second location information is angle information, the first image is a range-angle image and the second image is a range-velocity image. In this example, the first image is an RAR image.
In another example, when the first location information is vertical distance information and the second location information is horizontal distance information, the first image is a vertical distance-horizontal distance image, and the second image is a vertical direction-velocity image. In this example, the first image is an SAR image.
In operation 230, the object identification apparatus 100 generates first composite images each corresponding to each of the polarized waves by combining the first image and the second image.
In operation 250, the object identification apparatus 100 identifies the object using a neural network based on the first composite images.
The object identification apparatus 100 generates a second composite image by combining the first composite images. For example, the object identification apparatus generates a second composite image of eight channels by combining four first composite images of two channels. The object identification apparatus 100 inputs the second composite image to the neural network and identifies the object.
In operation 240, the object identification apparatus 100 generates differential images between the first composite images or cross-correlation images between the first composite images. The object identification apparatus 100 generates a third composite image by combining differential images. The object identification apparatus 100 inputs the third composite image to the neural network and identifies the object.
The object identification apparatus 100 generates a fourth composite image by combining cross-correlation images. The object identification apparatus 100 inputs the fourth composite image to the neural network and identifies the object. The neural network has a structure to receive a multi-channel image generated by synthesizing or combining polarization information, velocity information, or additional information.
Referring to the drawings, in operation 310, the object identification apparatus 100 extracts first location information, second location information, and motion information of an object from a polarimetric RADAR signal that is reflected from the object, for each of the polarized waves.
In operation 320, the object identification apparatus 100 determines whether a signal-to-noise ratio (SNR) of the polarimetric RADAR signal satisfies a predetermined criterion. For example, the object identification apparatus 100 determines whether a high-accuracy SAR image is required or whether a range-angle image that allows fast detection is required.
The object identification apparatus 100 generates an SAR image or a range-angle image based on situation information regarding the object. For example, the situation information includes a propagation environment or a number of targets, and the propagation environment is represented by the SNR of the polarimetric RADAR signal. The SAR image and the range-angle image differ from each other in resolution and processing speed: the object identification apparatus 100 uses a range-angle image-based algorithm to obtain a fast result, and uses an SAR image to obtain an accurate result.
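A minimal sketch of this SNR-based mode selection follows; the SNR estimator and the threshold value are assumptions, since the description defines the criterion only as a comparison of the SNR against a threshold.

```python
import numpy as np

def estimate_snr_db(signal_power, noise_power):
    # Standard decibel ratio; how the powers are measured is not specified.
    return 10.0 * np.log10(signal_power / noise_power)

def select_imaging_mode(snr_db, threshold_db=10.0):   # threshold is assumed
    # High SNR: fast range-angle (RAR) processing; low SNR: accurate SAR processing.
    return "range_angle" if snr_db >= threshold_db else "sar"

print(select_imaging_mode(estimate_snr_db(20.0, 1.0)))  # range_angle
print(select_imaging_mode(estimate_snr_db(2.0, 1.0)))   # sar
```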
In operation 331, in response to the SNR satisfying the criterion, the object identification apparatus 100 generates a range-angle image and a range-velocity image. The range-angle image corresponds to each of the polarized waves and includes range information and angle information that each correspond to each of the polarized waves. The range-velocity image corresponds to each of the polarized waves and includes the range information and the motion information.
The range-angle image is, for example, an RAR image. When the SNR satisfies the criterion, the range information is the first location information, the angle information is the second location information, the range-angle image is generated as a first image, and the range-velocity image is generated as a second image. The criterion is used to determine whether the SNR is less than a threshold. In operation 341, the object identification apparatus 100 generates first composite images each corresponding to each of the polarized waves by combining the range-angle image and the range-velocity image. In operation 351, the object identification apparatus 100 generates differential images or cross-correlation images between the first composite images. In operation 361, the object identification apparatus 100 identifies the object using a neural network based on the first composite images. For example, the object identification apparatus 100 inputs the first composite images to the neural network, and identifies the object.
In operation 333, in response to the SNR not satisfying the criterion, the object identification apparatus 100 generates a vertical distance-horizontal distance image and a vertical direction-velocity image. The vertical distance-horizontal distance image corresponds to each of the polarized waves, and includes vertical distance information and horizontal distance information that each correspond to each of the polarized waves. The vertical direction-velocity image corresponds to each of the polarized waves, and includes the vertical distance information and the motion information. The vertical distance-horizontal distance image is, for example, an SAR image. When the SNR does not satisfy the criterion, the vertical distance information is the first location information, the horizontal distance information is the second location information, the vertical distance-horizontal distance image is generated as a first image, and the vertical direction-velocity image is generated as a second image. The criterion is used to determine whether the SNR is less than a threshold.
In operation 343, the object identification apparatus 100 generates first composite images each corresponding to each of the polarized waves by combining the vertical distance-horizontal distance image and the vertical direction-velocity image. In operation 353, the object identification apparatus 100 generates differential images or cross-correlation images between the first composite images. In operation 363, the object identification apparatus 100 identifies the object using a neural network based on the first composite images. For example, the object identification apparatus 100 inputs the first composite images to the neural network, and identifies the object.
Although the first composite images 431, 433, and 435 are expressed in 2D in the drawings, each of the first composite images is, as described above, a 3D image in which the velocity information is synthesized with a 2D image as an additional channel.
An object identification apparatus generates a cross-correlation image between composite images. A cross-correlation is a measure of the correlation between different signals. A cross-correlation image is generated by setting, as a new pixel value, the correlation between corresponding pixels of two different images. The object identification apparatus generates a cross-correlation image between composite images corresponding to polarization pairs.
The object identification apparatus also generates a differential image between composite images. A differential image is generated by setting, as a new pixel value, the difference between the pixel values of two composite images. For example, the object identification apparatus generates a differential image between the composite images corresponding to each of the polarization pairs.
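The exact per-pixel correlation and the pairing order are not specified, so the sketch below makes simple assumptions: the differential image is a pixelwise difference, and the cross-correlation image is a pixelwise product after removing each image's mean, computed for every pair of the four composites.

```python
import numpy as np
from itertools import combinations

def differential_images(composites):
    """composites: (num_pols, C, H, W); returns one difference image per pair."""
    return np.stack([composites[i] - composites[j]
                     for i, j in combinations(range(len(composites)), 2)])

def cross_correlation_images(composites):
    """Per-pixel correlation of each image pair after removing each image's mean."""
    centered = composites - composites.mean(axis=(-2, -1), keepdims=True)
    return np.stack([centered[i] * centered[j]
                     for i, j in combinations(range(len(composites)), 2)])

comps = np.random.randn(4, 2, 64, 64)          # four 2-channel first composites
print(differential_images(comps).shape)        # (6, 2, 64, 64)
print(cross_correlation_images(comps).shape)   # (6, 2, 64, 64)
```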
The object identification apparatus 100, the transmission antenna 121, the reception antenna 123, the processor 110, and the other apparatuses, units, modules, devices, and other components described herein are implemented by hardware components. Examples of hardware components that may be used to perform the operations described in this application, where appropriate, include controllers, sensors, generators, drivers, memories, comparators, arithmetic logic units, adders, multipliers, and any other electronic components configured to perform the operations described in this application. In other examples, one or more of the hardware components that perform the operations described in this application are implemented by computing hardware, for example, by one or more processors or computers.
The methods illustrated in the drawings that perform the operations described in this application are performed by computing hardware, for example, by one or more processors or computers, implemented as described above, executing instructions or software to perform the operations described in this application that are performed by the methods.
Instructions or software to control a processor or computer to implement the hardware components and perform the methods as described above are written as computer programs, code segments, instructions or any combination thereof, for individually or collectively instructing or configuring the processor or computer to operate as a machine or special-purpose computer to perform the operations performed by the hardware components and the methods as described above. In one example, the instructions or software include machine code that is directly executed by the processor or computer, such as machine code produced by a compiler. In another example, the instructions or software include higher-level code that is executed by the processor or computer using an interpreter. Programmers of ordinary skill in the art can readily write the instructions or software based on the block diagrams and the flow charts illustrated in the drawings and the corresponding descriptions in the specification, which disclose algorithms for performing the operations performed by the hardware components and the methods as described above.
The instructions or software to control a processor or computer to implement the hardware components and perform the methods as described above, and any associated data, data files, and data structures, are recorded, stored, or fixed in or on one or more non-transitory computer-readable storage media. Examples of a non-transitory computer-readable storage medium include read-only memory (ROM), random-access programmable read only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), flash memory, non-volatile memory, CD-ROMs, CD-Rs, CD+Rs, CD-RWs, CD+RWs, DVD-ROMs, DVD-Rs, DVD+Rs, DVD-RWs, DVD+RWs, DVD-RAMs, BD-ROMs, BD-Rs, BD-R LTHs, BD-REs, Blu-ray or optical disk storage, hard disk drive (HDD), solid state drive (SSD), a card type memory such as multimedia card micro or a card (for example, secure digital (SD) or extreme digital (XD)), magnetic tapes, floppy disks, magneto-optical data storage devices, optical data storage devices, hard disks, solid-state disks, and any other device that is configured to store the instructions or software and any associated data, data files, and data structures in a non-transitory manner and providing the instructions or software and any associated data, data files, and data structures to a processor or computer so that the processor or computer can execute the instructions.
While this disclosure includes specific examples, it will be apparent to one of ordinary skill in the art that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents. Therefore, the scope of the disclosure is defined not by the detailed description, but by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure.