A method of determining a direction of a target in a ground referential, the method including: acquiring an image of a scene including the target and a control object using a camera; obtaining position data of the camera and control object using a geo-spatial positioning system; determining a control direction from the camera to the control object in the ground referential using the position data; estimating a camera attitude in the ground referential using the control direction; determining the target direction from the camera to the target using the estimated camera attitude and a pixel position of the target in the image.
1. A method of determining a direction of a target in a ground referential, the method comprising:
acquiring an image of a scene including the target and a control object using a camera;
obtaining position data of the camera and control object using a geo-spatial positioning system;
determining a control direction from the camera to the control object in the ground referential using said position data;
estimating a camera attitude in the ground referential using the control direction;
determining the target direction from the camera to the target using the estimated camera attitude and a pixel position of the target in the image.
12. A surveying module for determining a direction of a target in a ground referential, the surveying module comprising:
an image input unit configured for receiving an image of a scene including the target and a control object, the image being acquired using a camera;
a position data input unit configured for receiving position data of the camera and of the control object from a geo-spatial positioning system; and
a target direction processing unit configured for:
determining a control direction from the camera to the control object in the ground referential using said position data;
estimating a camera attitude in the ground referential using the control direction; and
determining the target direction from the camera to the target using the estimated camera attitude and a pixel position of the target in the image.
2. The method according to claim 1, further comprising leveling of the camera using a level instrument, and wherein estimating the camera attitude is performed using the leveling of the camera.
3. The method according to claim 1, wherein the image further includes at least one additional control object, and wherein the method further comprises:
obtaining additional position data regarding the at least one additional control object using the geo-spatial positioning system;
determining at least one additional control direction from the camera to the at least one additional control object in the ground referential using said additional position data;
and wherein estimating the camera attitude is performed using the at least one additional control direction.
4. The method according to claim 3, wherein estimating the camera attitude using the at least one additional control direction comprises refining the estimated camera attitude using the at least one additional control direction by:
estimating the at least one additional control direction based on the estimated camera attitude and on a pixel position of the additional control object on the image;
comparing the at least one estimated additional control direction to the at least one additional control direction determined based on the additional position data to obtain a direction disagreement vector;
computing a correction rotation matrix to minimize the direction disagreement vector.
5. The method according to claim 3, wherein estimating the camera attitude using the at least one additional control direction comprises solving the equations provided by the camera model at the two control objects.
6. The method according to claim 1, wherein acquiring the image comprises:
capturing a plurality of overlapping elementary images by modifying an orientation of the camera so as to scan a predetermined area of the scene, and
forming a panoramic image by mosaicing of the elementary images.
7. The method according to claim 6, wherein the control object and the target belong to different elementary images.
8. The method according to claim 6, wherein the image further includes at least one additional control object, and wherein the method further comprises:
obtaining additional position data regarding the at least one additional control object using the geo-spatial positioning system;
determining at least one additional control direction from the camera to the at least one additional control object in the ground referential using said additional position data;
and wherein:
estimating the camera attitude is performed using the at least one additional control direction, and
the at least one additional control object and the control object belong to different elementary images.
9. The method according to claim 1, wherein the ground referential is an East, North, Up referential centered on the camera position.
10. The method according to claim 1, wherein the geo-spatial positioning system is a Global Navigation Satellite System (GNSS).
11. The method according to claim 1, wherein the position data are provided without using differential correction and/or without using real time kinematic procedures.
13. The surveying module according to claim 12, wherein the target direction processing unit is further configured for, when the image received by the image input unit further includes at least one additional control object and the position data received by the position data input unit further include additional position data of the at least one additional control object:
determining at least one additional control direction from the camera to the at least one additional control object in the ground referential using said additional position data;
and wherein estimating the camera attitude is performed using the at least one additional control direction.
14. A surveying system comprising:
a camera including a geo-spatial position receiver configured for receiving position signals from a geo-spatial positioning system and a geo-spatial position processor for determining camera position data from said position signals; and
a surveying module according to claim 12.
15. The surveying system of claim 14, further comprising a control object including a geo-spatial position receiver configured for receiving position signals of the control object from the geo-spatial positioning system and a communication unit configured for transferring said position signals to the geo-spatial position processor, the geo-spatial position processor being further configured for determining position data of the control object from said control object position signals.
The present disclosure relates in general to surveying methods. More particularly, the present disclosure relates to a system and method for determining a direction from a camera to a target in a ground referential.
Target geolocation refers generally to the problem of determining the coordinates of a target in a predefined referential (reference frame) such as the World Geodetic System (WGS84). Surveyors generally use theodolites for target geolocation. In operation, the surveyor places the theodolite at a reference position and points it at the target by visually locating the target through an optical system. Thereafter, the surveyor measures an angle between the target and a reference direction, as well as a distance between the target and the theodolite. Typically, the reference direction and position of the theodolite are preliminarily determined by the theodolite observing a set of points whose locations are accurately known within the predefined referential. Using the coordinates of these locations, the reference direction of the theodolite in the predefined referential can be determined with a predetermined level of accuracy. The general process for target geolocation therefore comprises two main stages: a set-up stage, in which the reference position and direction of the theodolite are determined, and a measurement stage, in which the relative angle between the reference direction and the target direction and the distance between the reference position and the target position are measured.
Theodolites need to incorporate sophisticated mechanical components to allow accurate angular measurements, and they further require a set-up stage, which makes them difficult to use in a stand-alone environment where a set of accurately known points is not available. In certain situations, it is preferable to have an autonomous instrument that does not rely on mechanical components while maintaining a satisfactory accuracy.
The Applicant has found that the use of a camera, combined with satellite positioning technologies for georeferencing the camera, provides a simple system without the need for sophisticated mechanical components, while maintaining a satisfactory accuracy and improving the possibilities of use in stand-alone environments.
It is noted that target location accuracy depends on several factors, among which are the self-location accuracy of the instrument, the angular measurement accuracy from a reference direction to the target direction, and the range measurement accuracy from the instrument to the target. Angular accuracy is a particularly significant factor because the position error it induces grows with the range of the target, whereas the other factors mentioned above generally cause only a bias deviation.
Further, angular accuracy generally comprises both the accuracy of the referencing of the instrument, for example the determination of a Line Of Sight (LOS) to the north and the leveling, and the accuracy of the target direction, for example the angle measured to the target relative to the LOS to the north.
The accuracy of measuring relative angles using mechanical theodolites (employing high-quality gears) is generally of about a few tens of micro-radians (around 25 micro-radians). The total accuracy also depends on the accuracy of the absolute direction finding (“north-finding”). Standard (non-theodolite) high-end mechanical pan-tilt units (PTUs) have an accuracy of a few hundred micro-radians for measuring relative angles, but still require a gear. When adding the accuracy of north-finding using “simple” systems (e.g., a compass), typical total accuracies of such PTU units are in the range of 500-5000 micro-radians. In the following, a satisfactory total accuracy is understood as a total accuracy from tens to several hundreds of micro-radians, for example 200 or 500 micro-radians. The proposed method and system notably allow reaching such a satisfactory accuracy without requiring a gear.
It is further noted that, although satellite positioning accuracy such as GPS (especially when using single-frequency receivers, as suggested below) is in the range of meters, a relative measurement between two close points is accurate to the centimeter level. This is especially true when analyzing the carrier phase of the GPS signal. Indeed, most of the errors in GPS positioning are common errors (including ionospheric errors) which cause a bias but keep the relative direction accurate. Furthermore, the relative accuracy does not depend on the actual distance between the GPS receivers, especially when the distance is in the range of hundreds of meters. Expressed as a direction accuracy, one centimeter of accuracy at a distance of one hundred meters corresponds to 100 micro-radians. Therefore, it is expected that determining a reference attitude of an instrument based on GPS receiver(s) positioned hundreds of meters from the instrument can be done with an accuracy in the order of 50 to 500 micro-radians.
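The conversion from relative position accuracy to direction accuracy quoted above can be checked with a short computation (an illustrative sketch, not part of the original disclosure):

```python
import math

# 1 cm of relative GPS accuracy observed over a 100 m baseline,
# expressed as an angular accuracy in micro-radians.
relative_error_m = 0.01
baseline_m = 100.0

angle_urad = math.atan2(relative_error_m, baseline_m) * 1e6
```

This evaluates to approximately 100 micro-radians, matching the figure stated above.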
Furthermore, a calibrated camera may generally provide angle measurements with an internal accuracy (that is, the measurement of an angle between two pixels) at a single-digit magnitude of micro-radians; when enlarging the field of view into a panorama using mosaicing techniques, an accuracy of tens of micro-radians may be expected.
Therefore, the present disclosure provides a method of determining a direction of a target in a ground referential, the method comprising: acquiring an image of a scene including the target and a control object using a camera; obtaining position data of the camera and control object using a geo-spatial positioning system; determining a control direction from the camera to the control object in the ground referential using said position data; estimating a camera attitude in the ground referential using the control direction; determining the target direction from the camera to the target using the estimated camera attitude and a pixel position of the target in the image.
In some embodiments, the method further comprises leveling of the camera using a level instrument, and estimating the camera attitude is performed using the leveling of the camera.
In some embodiments, the image further includes at least one additional control object, and the method further comprises: obtaining additional position data regarding the at least one additional control object using the geo-spatial positioning system; determining at least one additional control direction from the camera to the at least one additional control object in the ground referential using said additional position data. The step of estimating the camera attitude is performed using the at least one additional control direction.
In some embodiments, estimating the camera attitude using the at least one additional control direction comprises refining the estimated camera attitude using the at least one additional control direction by: estimating the at least one additional control direction based on the estimated camera attitude and on a pixel position of the additional control object on the image; comparing the at least one estimated additional control direction to the at least one additional control direction determined based on the additional position data to obtain a direction disagreement vector; computing a correction rotation matrix to minimize the direction disagreement vector.
In some embodiments, estimating the camera attitude using the at least one additional control direction comprises solving the equations provided by the camera model at the two control objects.
In some embodiments, the step of acquiring the image comprises: capturing a plurality of overlapping elementary images by modifying an orientation of the camera so as to scan a predetermined area of the scene, and forming a panoramic image by mosaicing of the elementary images.
In some embodiments, the control object and the target belong to different elementary images.
In some embodiments, the at least one additional control object and the control object belong to different elementary images.
In some embodiments, the ground referential is an East, North, Up referential centered on the camera position.
In some embodiments, the geo-spatial positioning system is a Global Navigation Satellite System (GNSS).
In some embodiments, the position data are provided without using differential correction and/or without using real time kinematic procedures.
In another aspect, the present disclosure provides a surveying module for determining a direction of a target in a ground referential. The surveying module comprises an image input unit configured for receiving an image of a scene including the target and a control object, the image being acquired using a camera; a position data input unit configured for receiving position data of the camera and of the control object from a geo-spatial positioning system; and a target direction processing unit configured for: determining a control direction from the camera to the control object in the ground referential using said position data; estimating a camera attitude in the ground referential based on the control direction; and determining the target direction from the camera to the target using the estimated camera attitude and a pixel position of the target in the image.
In some embodiments, the target direction processing unit is further configured for, when the image received by the image input unit further includes at least one additional control object and the position data received by the position data input unit further includes additional position data of the at least one additional control object: determining at least one additional control direction from the camera to the at least one additional control object in the ground referential using said additional position data; and wherein estimating the camera attitude is performed using the at least one additional control direction.
According to another aspect, the present disclosure provides a surveying system comprising a camera including a geo-spatial position receiver configured for receiving position signals from a geo-spatial positioning system and a geo-spatial position processor for determining camera position data from said position signals; and a surveying module as previously described.
In some embodiments, the surveying system further comprises a control object including a geo-spatial position receiver configured for receiving position signals of the control object from the geo-spatial positioning system and a communication unit configured for transferring said position signals to the geo-spatial position processor, the geo-spatial position processor being further configured for determining position data of the control object from said control object position signals.
In order to better understand the subject matter that is disclosed herein and to exemplify how it may be carried out in practice, embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
Described herein are some examples of systems and methods useful for determining a direction of a target in a ground referential.
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the subject matter. However, it will be understood by those skilled in the art that some examples of the subject matter may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the description.
As used herein, the phrase “for example,” “such as”, “for instance” and variants thereof describe non-limiting examples of the subject matter.
Reference in the specification to “one example”, “some examples”, “another example”, “other examples”, “one instance”, “some instances”, “another instance”, “other instances”, “one case”, “some cases”, “another case”, “other cases” or variants thereof means that a particular described feature, structure or characteristic is included in at least one example of the subject matter, but the appearance of the same term does not necessarily refer to the same example.
It should be appreciated that certain features, structures and/or characteristics disclosed herein, which are, for clarity, described in the context of separate examples, may also be provided in combination in a single example. Conversely, various features, structures and/or characteristics disclosed herein, which are, for brevity, described in the context of a single example, may also be provided separately or in any suitable sub-combination.
Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as “generating”, “determining”, “providing”, “receiving”, “using”, “transmitting”, “performing”, “forming”, “analyzing”, or the like, may refer to the action(s) and/or process(es) of any combination of software, hardware and/or firmware. For example, these terms may refer in some cases to the action(s) and/or process(es) of a programmable machine, that manipulates and/or transforms data represented as physical, such as electronic quantities, within the programmable machine's registers and/or memories into other data similarly represented as physical quantities within the programmable machine's memories, registers and/or other such information storage, transmission and/or display element(s).
Further, it should be understood that the term “position data” may encompass positioning information provided by any geo-spatial positioning system using, for example, radio frequency signals transmitted from a network of transmitters (i.e. GNSS or other systems which allow the user to determine a location such that the error between measurements at two close points is mainly a bias). In the description below, the term satellite position data is used without prejudice to alternative embodiments such as beacon signals or the like.
Furthermore, it is noted that the term camera is used herein to refer generally to an imaging device comprising a pixel matrix sensor.
Further, with reference to
Parameters of the camera model may be preliminarily known (or solved during the process). In the following, details are provided for a given camera model. It is understood that the present disclosure can be extended to other types of camera models.
The considered camera model provides a relation between an object X and an image of the object X as follows:
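The relation itself is given in a figure that is not reproduced here; a standard pinhole formulation consistent with the symbols defined below would read (a reconstruction under the assumption of a unit direction vector in East-North-Up order, not the original equation):

```latex
\lambda \begin{pmatrix} C_X \\ R_X \\ 1 \end{pmatrix}
= K \cdot R(\mathrm{roll},\mathrm{pitch},\mathrm{yaw}) \cdot
\begin{pmatrix} \cos(El_X)\,\sin(Az_X) \\ \cos(El_X)\,\cos(Az_X) \\ \sin(El_X) \end{pmatrix}
```

where λ denotes an arbitrary positive scale factor.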
Wherein:
CX is a pixel column location of an image of the object X;
RX is a pixel row location of the image of the object X;
K is a camera calibration matrix, as detailed hereinbelow:
R (roll, pitch, yaw) is the rotation matrix between the reference coordinate system and the camera coordinate system as defined with reference to
ElX, AzX are the Elevation and Azimuth of object X as previously defined with reference to
The camera calibration matrix K may be expressed as follows:
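The matrix itself is given in a figure that is not reproduced here; the standard upper-triangular form consistent with the parameters listed below is (a reconstruction):

```latex
K = \begin{pmatrix} f_c & s & c_0 \\ 0 & f_r & r_0 \\ 0 & 0 & 1 \end{pmatrix}
```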
Wherein:
fc is a focal of the camera along the column axis;
fr is a focal of the camera along the row axis;
s is a skewness of the camera;
c0 is a column coordinate of the focal center in the image coordinate system;
r0 is a row coordinate of the focal center in the image coordinate system.
K is also referred to as the internal parameters or internal orientation of an image.
Referring now to
The satellite receivers 22, 32, 42 of the camera 2, control object 3 and additional control object 4 may be respectively configured for receiving, from a satellite positioning system such as a GNSS, positioning signals enabling the determination of position data of the camera 2, the control object 3 and the additional control object 4 in a ground referential such as WGS84. The camera 2, the control object 3 and the additional control object 4 may transfer to the surveying module 1 the positioning signals enabling the determination of their respective position data, or may transfer the respective position data directly. It is appreciated that the position data may be determined without using long-baseline differential correction techniques, without using real time kinematic procedures and without using double-frequency receivers.
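The determination of a control direction from two sets of position data may be sketched as follows (an illustrative sketch, not the patent's implementation; the function names are hypothetical):

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0
F = 1 / 298.257223563
E2 = F * (2 - F)

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert a geodetic (lat, lon, height) position to ECEF coordinates."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

def control_direction(cam, obj):
    """Azimuth/elevation (radians) from the camera to the control object,
    expressed in the local East-North-Up frame centered on the camera."""
    cx, cy, cz = geodetic_to_ecef(*cam)
    ox, oy, oz = geodetic_to_ecef(*obj)
    dx, dy, dz = ox - cx, oy - cy, oz - cz
    lat, lon = math.radians(cam[0]), math.radians(cam[1])
    # Rotate the ECEF difference vector into the ENU frame at the camera.
    east = -math.sin(lon) * dx + math.cos(lon) * dy
    north = (-math.sin(lat) * math.cos(lon) * dx
             - math.sin(lat) * math.sin(lon) * dy + math.cos(lat) * dz)
    up = (math.cos(lat) * math.cos(lon) * dx
          + math.cos(lat) * math.sin(lon) * dy + math.sin(lat) * dz)
    az = math.atan2(east, north) % (2 * math.pi)
    el = math.atan2(up, math.hypot(east, north))
    return az, el
```

For a control object roughly 100 m due north of the camera at the same ellipsoidal height, the computed azimuth is close to 0 and the elevation is close to 0 (very slightly negative, owing to Earth curvature).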
The satellite receiver 22 of the camera 2 may be mounted on the camera 2 and calibrated so that an antenna center of the satellite receiver 22 is aligned with an optical center of the camera 2. The camera 2 may be mounted on a tripod unit configured for positioning the camera at a predetermined position. The control object 3 and/or the additional control object 4 may also be mounted on tripod units. The tripod unit may include a level instrument, such as a spirit level or a bubble level, configured for leveling the camera, i.e. orienting the camera in a plane parallel to the XCY plane in the (X, Y, Z) referential defined on
The surveying module 1 may comprise an image input unit 10, a position data input unit 12 and a target direction processing unit 14. The image input unit 10 may be configured for receiving an image from the camera 2. The image may be a digital image and may include an image of a target, the control object 3 and optionally the additional control object 4. The position data input unit 12 may be configured for receiving position data from the camera 2, the control object 3 and the additional control object 4. In some embodiments, the position data is not computed internally by the camera 2, the control object 3 and the additional control object 4, respectively. In these embodiments, the satellite positioning signals from the camera 2, the control object 3 and the additional control object 4 may be transferred to the surveying module 1, and the position data input unit 12 may be configured for computing the position data from the positioning signals. The target direction processing unit 14 may be configured to determine a direction (azimuth and elevation) of a target appearing in the image according to the method described in more detail with reference to
These equations enable the computation of pitch and roll values and therefore provide an estimation of the camera attitude. It is noted that when the image is a panorama, a global rotation matrix may be obtained by multiplying the above rotation matrix by a rotation matrix representing the controlled modification of the camera orientation that created the panorama. In some embodiments, the rotation matrix from the camera attitude while acquiring one image to the camera attitude while acquiring another image can be determined using image processing, for example by determining tie points common to two adjacent images to obtain the transformation between the two images.
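The composition of rotation matrices described above can be illustrated as follows (a sketch with hypothetical angles; the multiplication order is an assumption):

```python
import numpy as np

def rot_z(angle_rad):
    """Rotation matrix about the vertical (Up) axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Attitude estimated for the reference elementary image of the panorama.
R_ref = rot_z(np.radians(10.0))
# Controlled pan applied between the reference image and another elementary image.
R_pan = rot_z(np.radians(30.0))
# Global rotation matrix for the other elementary image: compose the two rotations.
R_global = R_ref @ R_pan
```

For two pans about the same (vertical) axis, the composed rotation is simply the rotation by the summed angle; in general the two factors are full three-axis rotations.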
In a fifth step S140, the target direction from the camera to the target T may be computed using the camera model, the camera attitude and a pixel position of the target in the image by inverting the system provided by the camera model:
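The inverted system itself is given in a figure not reproduced here; under a standard pinhole model, the inversion may look as follows (an illustrative sketch; the axis conventions and the sign of the scale factor are assumptions):

```python
import numpy as np

def pixel_to_direction(col, row, K, R):
    """Invert the camera model: recover the azimuth/elevation (radians) of
    the ray through a pixel, given the calibration matrix K and the camera
    attitude R (rotation from the ground ENU frame to the camera frame)."""
    ray = np.linalg.inv(K) @ np.array([col, row, 1.0])  # ray in camera frame
    d = R.T @ ray                                       # back to ground frame
    d = d / np.linalg.norm(d)
    east, north, up = d
    az = np.arctan2(east, north) % (2 * np.pi)
    el = np.arctan2(up, np.hypot(east, north))
    return az, el
```

With R the attitude estimated at the previous step, applying this function to the target's pixel position yields the target azimuth and elevation in the ground referential.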
In some embodiments, the image may also include an additional control object and the method may further comprise the steps of receiving additional position data regarding the at least one additional control object using the satellite positioning system; determining at least one additional control direction from the camera to the at least one additional control object in the ground referential using said additional position data; refining the camera attitude using the at least one additional control direction; and refining the target direction from the camera to the target using the refined camera attitude. The refining of the camera attitude may comprise the steps of: estimating the at least one additional control direction based on the estimated camera attitude and on a pixel position of the additional control object on the image; comparing the estimated at least one additional control direction to the at least one additional control direction determined based on the additional position data to obtain a direction disagreement vector; and computing a correction rotation matrix to minimize the direction disagreement vector.
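The patent does not specify how the correction rotation matrix is computed; one standard choice, shown here as a hypothetical sketch, is the SVD-based (Kabsch) least-squares alignment of the two sets of directions:

```python
import numpy as np

def correction_rotation(predicted_dirs, measured_dirs):
    """Least-squares rotation aligning the directions predicted from the
    estimated attitude with the directions measured from position data
    (Kabsch/SVD solution; an illustrative stand-in for the patent's
    'correction rotation matrix')."""
    P = np.asarray(predicted_dirs, dtype=float)  # shape (n, 3), unit vectors
    M = np.asarray(measured_dirs, dtype=float)   # shape (n, 3), unit vectors
    H = P.T @ M                                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against an improper rotation (reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T                        # R such that R @ p_i ~ m_i
```

Applying the returned matrix to the estimated attitude minimizes (in the least-squares sense) the direction disagreement between the two sets.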
With an additional control object B it may also be possible to determine the camera attitude directly from the equations provided by the camera model at the two control objects without speculating on the value of one of the Euler angles:
Indeed, the camera model thereby provides four equations for solving three parameters, which can therefore be determined. It is noted that more control objects may be useful to improve the accuracy, for example by correcting certain deficiencies in the camera model parameters. For example, they may enable solving optical aberrations (s, c0, r0) of the camera model.
In the case where the camera is provided with a level instrument, the yaw and pitch of the camera can be determined without calculation.
Advantageously, it is noted that no GPS base stations are required for implementing the described method. Therefore, assuming that GPS (more generally, GNSS) coverage is available, the described method is autonomous. The method provides an absolute line-of-sight determination of targets and has an accuracy in the range from tens to 500 micro-radians (depending on the equipment), using simple tools (tripod, camera, single-frequency GPS receiver) with no sophisticated mechanical components. Even better accuracies are possible, for example when the distance between the GNSS receivers is increased and/or when the camera has a large number of pixels. The set-up of the camera can be done using measurement(s) within a hundred meters from the camera.
Advantageously, after set-up, the camera can be handheld if dealing with remote targets because the angular change would be negligible.
While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
It will be appreciated that the embodiments described above are cited by way of example, and various features thereof and combinations of these features can be varied and modified.
While various embodiments have been shown and described, it will be understood that there is no intent to limit the invention by such disclosure, but rather, it is intended to cover all modifications and alternate constructions falling within the scope of the invention, as defined in the appended claims.
It will also be understood that the system according to the presently disclosed subject matter can be implemented, at least partly, as a suitably programmed computer. Likewise, the presently disclosed subject matter contemplates a computer program being readable by a computer for executing the disclosed method. The presently disclosed subject matter further contemplates a machine-readable memory tangibly embodying a program of instructions executable by the machine for executing the disclosed method.
Zalmanson, Garry Haim, Bar Hillel, Gil
Patent | Priority | Assignee | Title |
4866626, | Sep 18 1987 | HONEYWELL INC , HONEYWELL PLAZA, MINNEAPOLIS 55408 A CORP OF DE | Navigation by a video-camera sensed ground array |
5471218, | Jul 01 1993 | Trimble Navigation Limited | Integrated terrestrial survey and satellite positioning system |
6035254, | Oct 14 1997 | Trimble Navigation Limited | GPS-aided autolock in a robotic total station system |
7623224, | Dec 16 2003 | Trimble Jena GmbH | Calibration of a surveying instrument |
9564175, | Apr 02 2013 | International Business Machines Corporation | Clustering crowdsourced videos by line-of-sight |
9679382, | Jul 24 2013 | ISRAEL AEROSPACE INDUSTRIES LTD | Georeferencing method and system |
20010010542, | |||
20030179361, | |||
20040246468, | |||
20090125223, | |||
20090222237, | |||
20110007154, | |||
20120063668, | |||
20120173053, | |||
20150126223, | |||
EP1503176, |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Sep 14 2014 | ISRAEL AEROSPACE INDUSTRIES LTD. | (assignment on the face of the patent) | / | |||
Dec 24 2014 | ZALMANSON, GARRY HAIM | ISRAEL AEROSPACE INDUSTRIES LTD | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 038056 | /0905 | |
Dec 24 2014 | ZALMANSON, GARRY HAIM | ISRAEL AEROSPACE INDUSTRIES LTD | CORRECTIVE ASSIGNMENT TO CORRECT THE NAME OF THE FIRST INVENTOR PREVIOUSLY RECORDED ON REEL 038056 FRAME 0905 ASSIGNOR S HEREBY CONFIRMS THE NAME OF THE FIRST INVENTOR | 039849 | /0132 | |
Dec 25 2014 | HILLEL, GIL BAR | ISRAEL AEROSPACE INDUSTRIES LTD | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 038056 | /0905 | |
Dec 25 2014 | BAR HILLEL, GIL | ISRAEL AEROSPACE INDUSTRIES LTD | CORRECTIVE ASSIGNMENT TO CORRECT THE NAME OF THE FIRST INVENTOR PREVIOUSLY RECORDED ON REEL 038056 FRAME 0905 ASSIGNOR S HEREBY CONFIRMS THE NAME OF THE FIRST INVENTOR | 039849 | /0132 |
Date | Maintenance Fee Events |
Aug 20 2021 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity. |
Date | Maintenance Schedule |
Feb 20 2021 | 4 years fee payment window open |
Aug 20 2021 | 6 months grace period start (w surcharge) |
Feb 20 2022 | patent expiry (for year 4) |
Feb 20 2024 | 2 years to revive unintentionally abandoned end. (for year 4) |
Feb 20 2025 | 8 years fee payment window open |
Aug 20 2025 | 6 months grace period start (w surcharge) |
Feb 20 2026 | patent expiry (for year 8) |
Feb 20 2028 | 2 years to revive unintentionally abandoned end. (for year 8) |
Feb 20 2029 | 12 years fee payment window open |
Aug 20 2029 | 6 months grace period start (w surcharge) |
Feb 20 2030 | patent expiry (for year 12) |
Feb 20 2032 | 2 years to revive unintentionally abandoned end. (for year 12) |