A method of determining a direction of a target in a ground referential, the method including: acquiring an image of a scene including the target and a control object using a camera; obtaining position data of the camera and control object using a geo-spatial positioning system; determining a control direction from the camera to the control object in the ground referential using the position data; estimating a camera attitude in the ground referential using the control direction; determining the target direction from the camera to the target using the estimated camera attitude and a pixel position of the target in the image.

Patent: 9897445
Priority: Oct 06 2013
Filed: Sep 14 2014
Issued: Feb 20 2018
Expiry: Feb 23 2035
Extension: 162 days
Entity: Large
Status: currently ok
1. A method of determining a direction of a target in a ground referential, the method comprising:
acquiring an image of a scene including the target and a control object using a camera;
obtaining position data of the camera and control object using a geo-spatial positioning system;
determining a control direction from the camera to the control object in the ground referential using said position data;
estimating a camera attitude in the ground referential using the control direction;
determining the target direction from the camera to the target using the estimated camera attitude and a pixel position of the target in the image.
12. A surveying module for determining a direction of a target in a ground referential, the surveying module comprising:
an image input unit configured for receiving an image of a scene including the target and a control object, the image being acquired using a camera;
a position data input unit configured for receiving position data of the camera and of the control object from a geo-spatial positioning system; and
a target direction processing unit configured for:
determining a control direction from the camera to the control object in the ground referential using said position data;
estimating a camera attitude in the ground referential using the control direction; and
determining the target direction from the camera to the target using the estimated camera attitude and a pixel position of the target in the image.
2. The method according to claim 1, further comprising leveling of the camera using a level instrument and wherein estimating the camera attitude is performed using the leveling of the camera.
3. The method according to claim 1, wherein the image further includes at least one additional control object, the method further comprises:
obtaining additional position data regarding the at least one additional control object using the geo-spatial positioning system;
determining at least one additional control direction from the camera to the at least one additional control object in the ground referential using said additional position data;
and wherein estimating the camera attitude is performed using the at least one additional control direction.
4. The method according to claim 3, wherein estimating the camera attitude using the at least one additional control direction comprises refining the estimated camera attitude using the at least one additional control direction by:
estimating the at least one additional control direction based on the estimated camera attitude and on a pixel position of the additional control object on the image;
comparing the at least one estimated additional control direction to the at least one additional control direction determined based on the additional position data to obtain a direction disagreement vector;
computing a correction rotation matrix to minimize the direction disagreement vector.
5. The method according to claim 3, wherein estimating the camera attitude using the at least one additional control direction comprises solving the equations provided by the camera model applied at the two control objects.
6. The method according to claim 1, wherein acquiring the image comprises:
capturing a plurality of overlapping elementary images by modifying an orientation of the camera so as to scan a predetermined area of the scene, and
forming a panoramic image by mosaicing of the elementary images.
7. The method according to claim 6, wherein the control object and the target belong to different elementary images.
8. The method according to claim 6, wherein the image further includes at least one additional control object and the method further comprises:
obtaining additional position data regarding the at least one additional control object using the geo-spatial positioning system;
determining at least one additional control direction from the camera to the at least one additional control object in the ground referential using said additional position data;
and wherein:
estimating the camera attitude is performed using the at least one additional control direction, and
the at least one additional control object and the control object belong to different elementary images.
9. The method according to claim 1, wherein the ground referential is an East, North, Up referential centered on the camera position.
10. The method according to claim 1, wherein the geo-spatial positioning system is a Global Navigation Satellite System (GNSS).
11. The method according to claim 10, wherein the position data are provided without using differential correction and/or without using real time kinematic procedures.
13. The surveying module according to claim 12, wherein the target direction processing unit is further configured for, when the image received by the image input unit further includes at least one additional control object and the position data received by the position data input unit further includes additional position data of the at least one additional control object:
determining at least one additional control direction from the camera to the at least one additional control object in the ground referential using said additional position data;
and wherein estimating the camera attitude is performed using the at least one additional control direction.
14. A surveying system comprising:
a camera including a geo-spatial position receiver configured for receiving position signals from a geo-spatial positioning system and a geo-spatial position processor for determining camera position data from said position signals; and
a surveying module according to claim 12.
15. The surveying system of claim 14, further comprising a control object including a geo-spatial position receiver configured for receiving position signals of the control object from the geo-spatial positioning system and a communication unit configured for transferring said position signals to the geo-spatial position processor, the geo-spatial position processor being further configured for determining position data of the control object from said control object position signals.

The present disclosure relates in general to surveying methods. More particularly, the present disclosure relates to a system and method for determining a direction from a camera to a target in a ground referential.

Target geolocation refers generally to the problem of determining the coordinates of a target in a predefined referential (reference frame) such as the World Geodetic System (WGS84). Surveyors generally use theodolites for target geolocation. In operation, the surveyor places the theodolite at a reference position and points the theodolite at the target by visually locating the target through an optical system. Thereafter, the surveyor measures an angle between the target and a reference direction as well as a distance between the target and the theodolite. Typically, the reference direction and position of the theodolite are preliminarily determined by the theodolite observing a set of points, the locations of which are accurately known within the predefined referential. Using the coordinates of these locations, the reference direction of the theodolite in the predefined referential can be determined with a predetermined level of accuracy. The general process for target geolocation therefore comprises two main stages: a set-up stage in which the reference position and direction of the theodolite are determined, and a measurement stage in which the relative angle between the reference direction and the target direction and the distance between the reference position and the target position are measured.

Theodolites need to incorporate sophisticated mechanical components to allow accurate angular measurements and further require a set-up stage, which makes them difficult to use in a stand-alone environment where a set of accurately known points is not available. In certain situations, it is preferable to have an autonomous instrument that does not rely on mechanical components while maintaining a satisfactory accuracy.

The Applicant has found that the use of a camera, combined with satellite positioning technologies for georeferencing the camera, provides a simple system that maintains a satisfactory accuracy and improves possibilities of use in stand-alone environments, without the need for sophisticated mechanical components.

It is noted that target location accuracy depends on several factors, among which are the self-location accuracy of the instrument, the accuracy of the angular measurement from a reference direction to the target direction, and the accuracy of the range measurement from the instrument to the target. Of these, angular accuracy is the most significant, because the position error it induces grows with the range of the target, whereas the other factors mentioned above generally cause only a bias deviation.

Further, angular accuracy generally comprises both the accuracy of the referencing of the instrument, for example the determination of a Line Of Sight (LOS) to the north and leveling, and the accuracy of the target direction, for example the angle measurement to the target relative to the LOS to the north.

The accuracy of measuring relative angles using mechanical theodolites (employing high quality gears) is generally of about a few tens of micro radians (around 25 micro-radians). The total accuracy is also dependent on the accuracy of the absolute direction finding (“north-finding”). Standard (non-theodolite) high-end mechanical pan tilt units (PTU) have an accuracy of a few hundred micro-radians for measuring relative angles, but still require a gear. When adding the accuracy of the north finding using “simple” systems (e.g., compass), typical total accuracies of such PTU units are in the area of 500-5000 micro-radians. In the following, a satisfactory total accuracy is understood as a total accuracy from tens to several hundreds of micro-radians, for example 200 or 500 micro-radians. The proposed method and system notably allow reaching such a satisfactory accuracy without requiring a gear.

It is further noted that, although satellite positioning accuracy such as GPS (especially when using single frequency receivers as suggested below) is in the range of meters, a relative measurement between two close points is accurate to the centimeter level. This statement is especially valid when analyzing the carrier phase of the GPS signal. Indeed, most of the errors in GPS positioning are common-mode errors (including ionospheric delay) which cause a bias but keep the relative direction accurate. Furthermore, this relative accuracy does not depend strongly on the actual distance between the GPS receivers, especially when the distance is in the range of hundreds of meters. Translated into direction accuracy, a relative position accuracy of one centimeter at a distance of one hundred meters corresponds to an angular accuracy of 100 micro-radians. Therefore, it is expected that determining a reference attitude of an instrument based on GPS receiver(s) positioned hundreds of meters from the instrument can be done with an accuracy in the order of 50 to 500 micro-radians.
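
To make the arithmetic behind this figure explicit (a worked illustration using the numbers quoted above): a relative position error $\delta$ observed over a baseline $d$ subtends an angle of approximately $\delta/d$ radians, so

$$\theta \approx \frac{\delta}{d} = \frac{0.01\ \text{m}}{100\ \text{m}} = 10^{-4}\ \text{rad} = 100\ \mu\text{rad}.$$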

Furthermore, a calibrated camera may generally provide angle measurements with an internal accuracy (meaning the measurement of an angle between two pixels) of single-digit micro-radians; furthermore, when enlarging the field of view into a panorama by mosaicing techniques, an accuracy of tens of micro-radians may be expected.
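
As a rough, hedged illustration of the orders of magnitude involved (the sensor and field-of-view figures here are assumed, not taken from the disclosure): a sensor spanning 4000 pixel columns over a 40 degree horizontal field of view resolves approximately

$$\frac{40 \cdot \pi/180}{4000} \approx 175\ \mu\text{rad per pixel},$$

so the single-digit micro-radian internal accuracy quoted above corresponds to locating image features to a small fraction of a pixel, which calibration and sub-pixel matching techniques can plausibly achieve.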

Therefore, the present disclosure provides a method of determining a direction of a target in a ground referential, the method comprising: acquiring an image of a scene including the target and a control object using a camera; obtaining position data of the camera and control object using a geo-spatial positioning system; determining a control direction from the camera to the control object in the ground referential using said position data; estimating a camera attitude in the ground referential using the control direction; determining the target direction from the camera to the target using the estimated camera attitude and a pixel position of the target in the image.

In some embodiments, the method further comprises leveling of the camera using a level instrument and wherein estimating the camera attitude is performed using the leveling of the camera.

In some embodiments, the image further includes at least one additional control object, and the method further comprises: obtaining additional position data regarding the at least one additional control object using the geo-spatial positioning system; determining at least one additional control direction from the camera to the at least one additional control object in the ground referential using said additional position data. The step of estimating the camera attitude is performed using the at least one additional control direction.

In some embodiments, estimating the camera attitude using the at least one additional control direction comprises refining the estimated camera attitude using the at least one additional control direction by: estimating the at least one additional control direction based on the estimated camera attitude and on a pixel position of the additional control object on the image; comparing the at least one estimated additional control direction to the at least one additional control direction determined based on the additional position data to obtain a direction disagreement vector; computing a correction rotation matrix to minimize the direction disagreement vector.

In some embodiments, estimating the camera attitude using the at least one additional control direction comprises solving the equations provided by the camera model at the two control objects.

In some embodiments, the step of acquiring the image comprises: capturing a plurality of overlapping elementary images by modifying an orientation of the camera so as to scan a predetermined area of the scene, and forming a panoramic image by mosaicing of the elementary images.

In some embodiments, the control object and the target belong to different elementary images.

In some embodiments, the at least one additional control object and the control object belong to different elementary images.

In some embodiments, the ground referential is an East, North, Up referential centered on the camera position.

In some embodiments, the geo-spatial positioning system is a Global Navigation Satellite System (GNSS).

In some embodiments, the position data are provided without using differential correction and/or without using real time kinematic procedures.

In another aspect, the present disclosure provides a surveying module for determining a direction of a target in a ground referential. The surveying module comprises an image input unit configured for receiving an image of a scene including the target and a control object, the image being acquired using a camera; a position data input unit configured for receiving position data of the camera and of the control object from a geo-spatial positioning system; and a target direction processing unit configured for: determining a control direction from the camera to the control object in the ground referential using said position data; estimating a camera attitude in the ground referential based on the control direction; and determining the target direction from the camera to the target using the estimated camera attitude and a pixel position of the target in the image.

In some embodiments, the target direction processing unit is further configured for, when the image received by the image input unit further includes at least one additional control object and the position data received by the position data input unit further includes additional position data of the at least one additional control object: determining at least one additional control direction from the camera to the at least one additional control object in the ground referential using said additional position data; and wherein estimating the camera attitude is performed using the at least one additional control direction.

According to another aspect, the present disclosure provides a surveying system comprising a camera including a geo-spatial position receiver configured for receiving position signals from a geo-spatial positioning system and a geo-spatial position processor for determining camera position data from said position signals; and a surveying module as previously described.

In some embodiments, the surveying system further comprises a control object including a geo-spatial position receiver configured for receiving position signals of the control object from the geo-spatial positioning system and a communication unit configured for transferring said position signals to the geo-spatial position processor, the geo-spatial position processor being further configured for determining position data of the control object from said control object position signals.

In order to better understand the subject matter that is disclosed herein and to exemplify how it may be carried out in practice, embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:

FIGS. 1A-1C illustrate certain notions useful in the present disclosure.

FIG. 2 illustrates a surveying system according to embodiments of the present disclosure.

FIG. 3 illustrates a method of determining a target direction according to embodiments of the present disclosure.

FIG. 4 illustrates a camera and a panoramic image of a scene including a control object and an additional control object as well as several targets according to some embodiments of the present disclosure.

Described herein are some examples of systems and methods useful for determining a direction of a target in a ground referential.

In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the subject matter. However, it will be understood by those skilled in the art that some examples of the subject matter may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the description.

As used herein, the phrases “for example”, “such as”, “for instance” and variants thereof describe non-limiting examples of the subject matter.

Reference in the specification to “one example”, “some examples”, “another example”, “other examples”, “one instance”, “some instances”, “another instance”, “other instances”, “one case”, “some cases”, “another case”, “other cases” or variants thereof means that a particular described feature, structure or characteristic is included in at least one example of the subject matter, but the appearance of the same term does not necessarily refer to the same example.

It should be appreciated that certain features, structures and/or characteristics disclosed herein, which are, for clarity, described in the context of separate examples, may also be provided in combination in a single example. Conversely, various features, structures and/or characteristics disclosed herein, which are, for brevity, described in the context of a single example, may also be provided separately or in any suitable sub-combination.

Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as “generating”, “determining”, “providing”, “receiving”, “using”, “transmitting”, “performing”, “forming”, “analyzing”, or the like, may refer to the action(s) and/or process(es) of any combination of software, hardware and/or firmware. For example, these terms may refer in some cases to the action(s) and/or process(es) of a programmable machine, that manipulates and/or transforms data represented as physical, such as electronic quantities, within the programmable machine's registers and/or memories into other data similarly represented as physical quantities within the programmable machine's memories, registers and/or other such information storage, transmission and/or display element(s).

Further, it should be understood that the term “position data” may encompass positioning information provided by any geo-spatial positioning system, for example using radio frequency signals transmitted from a network of transmitters (e.g. GNSS or other systems which allow the user to determine its location such that the accuracy error between measurements of two close points is mainly a bias). In the below description, the term satellite position data is used without prejudice to alternative embodiments such as beacon signals or the like.

Furthermore, it is noted that the term camera is used herein to refer generally to an imaging device comprising a pixel matrix sensor.

FIGS. 1A-1C generally illustrate notations useful for the description of the present disclosure. Generally, the present disclosure relates to a method of determining a direction of a target in a ground referential based on an image of the target acquired by a camera and on position data of the camera and of at least one control object present in the image. It is noted that the position data may be provided in a geodesic referential (such as WGS84 or NAD83) by a satellite positioning system such as a Global Navigation Satellite System (GNSS). With reference to FIG. 1A, such a geodesic referential is denoted (X0, Y0, Z0). The ground referential in which the direction of the target is determined may be an East/North/Up (ENU) referential centered on the camera C position and denoted (X, Y, Z) in FIG. 1A. It is noted that the conversion from (X0, Y0, Z0) coordinates to (X, Y, Z) coordinates is well known to the person skilled in the art and is not described herein for the sake of conciseness. Further, as represented in FIG. 1B, a direction towards an object A in (X, Y, Z) may be defined by an Azimuth and Elevation of the object A, also denoted in the following (Az, Elv)A. According to embodiments of the present disclosure, GNSS data (which may include code and phase observations of at least one RF frequency channel) of the camera C and of a control object A may be obtained and used to determine a relative vector CA=(Dx, Dy, Dz) in the (X, Y, Z) referential and thereby provide the Azimuth and Elevation of A (also referred to as the control object direction) as follows:

$$\mathrm{Azimuth} = \tan^{-1}\left(\frac{D_x}{D_y}\right) \qquad \mathrm{Elevation} = \tan^{-1}\left(\frac{D_z}{\sqrt{D_x^2 + D_y^2}}\right)$$
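
By way of illustration only (not forming part of the disclosure), the computation above may be sketched in code. The sketch assumes the GNSS positions are available in Earth-Centered Earth-Fixed (ECEF) coordinates and that the camera's geodetic latitude and longitude are known; the function names are hypothetical:

```python
import numpy as np

def ecef_to_enu_matrix(lat_rad, lon_rad):
    """Rotation taking ECEF vectors into a local East/North/Up frame at (lat, lon)."""
    sl, cl = np.sin(lat_rad), np.cos(lat_rad)
    so, co = np.sin(lon_rad), np.cos(lon_rad)
    return np.array([
        [-so,       co,       0.0],  # East axis
        [-sl * co, -sl * so,  cl ],  # North axis
        [ cl * co,  cl * so,  sl ],  # Up axis
    ])

def control_direction(cam_ecef, obj_ecef, cam_lat_rad, cam_lon_rad):
    """Azimuth/elevation (radians) of the control object as seen from the camera."""
    d = ecef_to_enu_matrix(cam_lat_rad, cam_lon_rad) @ (np.asarray(obj_ecef) - np.asarray(cam_ecef))
    dx, dy, dz = d                                # (Dx, Dy, Dz) = East, North, Up
    azimuth = np.arctan2(dx, dy)                  # tan^-1(Dx / Dy), measured from North
    elevation = np.arctan2(dz, np.hypot(dx, dy))
    return azimuth, elevation
```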

Further, with reference to FIG. 1C, a digital camera 2 may acquire an image 100 on which two control objects A and B appear. The camera 2 may include a pixel matrix sensor, and pixel positions in the image 100 may be referred to using two coordinates indicating the column and row of a given pixel. The pixel positions of the control objects A, B may be referred to in the following as (CA, RA) and (CB, RB). The camera 2 may be provided with a camera referential (Xc, Yc, Zc) (also referenced 200) centered on the optical center of the camera 2. The origin of the camera referential may be defined at the camera perspective center located at a focal distance before the focal plane; the Zc axis may be aligned with the optical axis and pointing outward; the plane formed by the axes Xc, Yc may be parallel to the focal plane, with the Xc axis pointing along image columns and the Yc axis completing a right-hand system (for rectangular arrays, thereby pointing along image rows). An attitude of the camera 2 may be defined with reference to the (X, Y, Z) referential previously described to represent an orientation of the camera 2. As illustrated in FIG. 1C, the attitude can be defined as the rotation which brings the camera referential 200 (coordinate system) into conformity with the (X, Y, Z) ground referential. For example, the attitude may be represented by Euler angles, i.e. yaw, pitch and roll around the axes of the camera coordinate system. As will be explained in more detail below, the Euler angles can be computed from the position data of two control objects A, B by determining the directions (Az, Elv)A and (Az, Elv)B from the camera to the control objects in the ground referential and using a camera model for referencing the orientation of the camera. The camera principal point illustrated in FIG. 1C may refer to the point on the focal plane formed by an orthogonal projection of the camera center onto the focal plane.

Parameters of the camera model may be preliminarily known (or solved during the process). In the following, details are provided for a given camera model. It is understood that the present disclosure can be extended to other types of camera models.

The considered camera model provides a relation between an object X and an image of the object X as follows:

$$\begin{bmatrix} \cos(Elv_X)\sin(Az_X) \\ \cos(Elv_X)\cos(Az_X) \\ \sin(Elv_X) \end{bmatrix} \propto R(roll, pitch, yaw) \cdot K^{-1} \begin{bmatrix} C_X \\ R_X \\ 1 \end{bmatrix}$$

Wherein:

CX is a pixel column location of an image of the object X;

RX is a pixel row location of the image of the object X;

K is a camera calibration matrix, as detailed hereinbelow;

R (roll, pitch, yaw) is the rotation matrix between the reference coordinate system and the camera coordinate system as defined with reference to FIG. 1C;

ElvX, AzX are the Elevation and Azimuth of the object X as previously defined with reference to FIG. 1B.

The camera calibration matrix K may be expressed as follows:

$$K = \begin{bmatrix} f_c & s & c_0 \\ 0 & f_r & r_0 \\ 0 & 0 & 1 \end{bmatrix}$$

Wherein:

fc is the focal length of the camera along the column axis;

fr is the focal length of the camera along the row axis;

s is the skew of the camera;

c0 is the column coordinate of the focal center in the image coordinate system;

r0 is the row coordinate of the focal center in the image coordinate system.

K is also referred to as the internal parameters or internal orientation of an image.
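
Expressed in code, the camera model maps a pixel to a direction in the ground referential by applying K^-1 and then the attitude rotation; since the relation only holds up to scale, the result is normalized. The following is a minimal sketch under the conventions stated above, not a definitive implementation:

```python
import numpy as np

def pixel_to_direction(col, row, K, R):
    """Azimuth/elevation of the ray through pixel (col, row).

    K is the calibration matrix above; R = R(roll, pitch, yaw) rotates
    camera coordinates into the ground (ENU) referential.
    """
    v = R @ np.linalg.inv(K) @ np.array([col, row, 1.0])
    v /= np.linalg.norm(v)  # the camera model relation only holds up to scale
    # v = [cos(Elv) sin(Az), cos(Elv) cos(Az), sin(Elv)]
    azimuth = np.arctan2(v[0], v[1])
    elevation = np.arcsin(np.clip(v[2], -1.0, 1.0))
    return azimuth, elevation
```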

Referring now to FIG. 2, in some embodiments of the present disclosure, a surveying system may comprise a surveying module 1, a camera 2, a control object 3 and optionally an additional control object 4. The camera 2, the control object 3 and the additional control object 4 may respectively comprise a camera satellite receiver 22, a control object satellite receiver 32 and an additional control object satellite receiver 42.

The satellite receivers 22, 32, 42 of the camera 2, control object 3 and additional control object 4 may be respectively configured for receiving, from a satellite positioning system such as a GNSS, positioning signals enabling determination of position data of the camera 2, the control object 3 and the additional control object 4 in a ground referential such as WGS84. The camera 2, the control object 3 and the additional control object 4 may transfer to the surveying module 1 the positioning signals enabling determination of their respective position data, or may transfer the respective position data directly. It is appreciated that the position data may be determined without using long-baseline differential correction techniques, without using real time kinematic procedures and without using double frequency receivers.

The satellite receiver 22 of the camera 2 may be mounted on the camera 2 and calibrated so that the antenna center of the satellite receiver 22 is calibrated with the optical center of the camera 2. The camera 2 may be mounted on a tripod unit configured for positioning the camera at a predetermined position. The control object 3 and/or the additional control object 4 may also be mounted on tripod units. The tripod unit may include a level instrument, such as a spirit level or a bubble level, configured for leveling the camera, i.e. orienting the camera in a plane parallel to the X-Y plane of the (X, Y, Z) referential defined in FIG. 1A. It is noted that when the camera 2 is provided with an accurate leveling instrument, the additional control object 4 may not be required. Indeed, as will be explained hereinafter, a control object with a known Elevation and Azimuth provides two equations from the camera model and therefore enables determination of two Euler angles of the camera attitude. Therefore, if the camera is oriented in a predefined plane, the determination of two Euler angles provides sufficient data to completely determine the attitude of the camera. Indeed, when the camera is leveled to the ground, there is no need to solve the yaw angle (i.e. an angle about the optical axis causing the column vector Xc not to be leveled). The tripod unit may enable rotation around three perpendicular axes intersecting at the optical center of the camera (alternatively, the rotation axes may not pass through the optical center, in which case the position of the rotation axes may be preliminarily determined/calibrated). The camera may comprise a pixel matrix sensor and an optical system configured for forming images of objects on the pixel matrix sensor.

The surveying module 1 may comprise an image input unit 10, a position data input unit 12 and a target direction processing unit 14. The image input unit 10 may be configured for receiving an image from the camera 2. The image may be a digital image and may include an image of a target, the control object 3 and optionally the additional control object 4. The position data input unit 12 may be configured for receiving position data from the camera 2, the control object 3 and the additional control object 4. In some embodiments, the position data is not computed internally by the camera 2, the control object 3 and the additional control object 4. In these embodiments, the satellite positioning signals from the camera 2, the control object 3 and the additional control object 4 may be transferred to the surveying module 1, and the position data input unit 12 may be configured for computing the position data from the positioning signals. The target direction processing unit 14 may be configured to determine a direction (azimuth and elevation) of a target appearing in the image according to the method described in more detail with reference to FIG. 3. An output device (not shown) may be configured for outputting the target direction as computed by the target direction processing unit 14. For example, the output device may comprise a display screen. In some embodiments, the output device may be the camera 2 and the computed target direction may be displayed on a display screen of the camera 2.

FIG. 3 illustrates steps of a method of determining a target direction in a ground referential according to embodiments of the present disclosure.

In a first step S100, an image of a scene including a target T and a control object A is acquired using a camera. Optionally, an additional control object (or several additional control objects) may be in the acquired scene. The step S100 of acquiring an image may optionally comprise the steps of capturing a plurality of overlapping elementary images by controllably modifying an orientation (attitude) of the camera so as to scan a predetermined area of the scene, and of forming a panoramic image by mosaicing of the elementary images. The modification of the orientation of the camera may be performed by moving the camera using a tripod. In some embodiments, the control object, the target and the additional control object may belong to different elementary images.

In a second step S110, position data of the camera and of the control object (and optionally of the additional control object) may be retrieved from a satellite positioning system. In some embodiments, the camera and control object may be configured to receive positioning signals from the satellite positioning system and may either extract the position data from the positioning signals and transmit the position data to a surveying module implementing the method, or transmit the positioning signals directly to the surveying module, which may therefore be configured to extract position data from the positioning signals. It is appreciated that the architecture may also comprise other intermediate elements. It is also noted that the position data may be determined without using differential correction techniques, without using real time kinematic procedures and without using double frequency receivers. It is noted that the position data may be originally obtained in a first geodesic referential such as WGS84.

In a third step S120, a control direction (also referred to as Line Of Sight or LOS) from the camera to the control object may be computed in the ground referential (X, Y, Z) as defined with reference to FIG. 1A. The control direction may be computed by determining the coordinates of a relative vector in the first geodesic referential and by converting said coordinates into coordinates in the ENU referential. The relative vector is illustrated in FIG. 1A by CA=(Dx, Dy, Dz). FIG. 1B illustrates how the control direction (Az, Elv)A can be obtained from the relative vector CA. It is noted that even though the position data from the satellite positioning system may have an accuracy in the range of a few meters, the coordinates of the relative vector CA have an accuracy in the range of a few millimeters to a few centimeters. It is to be noted that the accuracy can be improved even further, for example by using two GNSS frequencies.

In a fourth step S130, an estimation of the camera attitude in the ground referential may be performed based on the control direction, on the camera model and on the pixel position of the control object in the image. In the fourth step S130, two Euler angles of the camera attitude can be estimated by speculating about the value of the third Euler angle. For example, the third Euler angle (yaw) may be considered equal to zero. The equations provided by the camera model expressed on the control object are:

$$\begin{bmatrix} \cos(Elv_A)\sin(Az_A) \\ \cos(Elv_A)\cos(Az_A) \\ \sin(Elv_A) \end{bmatrix} \propto R(roll, pitch, 0) \cdot K^{-1} \begin{bmatrix} C_A \\ R_A \\ 1 \end{bmatrix}$$

These equations enable pitch and roll values to be computed and therefore provide an estimation of the camera attitude. It is noted that when the image is a panorama, a global rotation matrix may be obtained by multiplying the above rotation matrix by a rotation matrix representing the controlled modification of the camera orientation that created the panorama. In some embodiments, the rotation matrix from the camera attitude when acquiring one image to the camera attitude when acquiring another image can be determined using image processing, for example by determining tie points common to two adjacent images to obtain the transformation between the two images.
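
A minimal numerical sketch of step S130 follows, offered for illustration only. It assumes yaw = 0 and a particular Euler-angle convention (the real convention must match the camera model in use), and all numbers below are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def direction_vector(az, el):
    """[cos(Elv)sin(Az), cos(Elv)cos(Az), sin(Elv)], as in the camera model."""
    return np.array([np.cos(el) * np.sin(az),
                     np.cos(el) * np.cos(az),
                     np.sin(el)])

def residual(x, pixel, az, el, K):
    # Assumed convention: R(roll, pitch, 0) = Rx(pitch) @ Ry(roll), yaw fixed to 0.
    pitch, roll = x
    v = rot_x(pitch) @ rot_y(roll) @ np.linalg.inv(K) @ np.array([*pixel, 1.0])
    return v / np.linalg.norm(v) - direction_vector(az, el)

# Hypothetical calibration matrix, pixel position of control object A, and the
# GNSS-derived control direction (Az, Elv)A.
K = np.array([[3000.0,    0.0, 2000.0],
              [   0.0, 3000.0, 1500.0],
              [   0.0,    0.0,    1.0]])
sol = least_squares(residual, x0=[0.0, 0.0],
                    args=((1980.0, 1620.0), np.radians(41.2), np.radians(1.3), K))
pitch, roll = sol.x
```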

In a fifth step S140, the target direction from the camera to the target T may be computed using the camera model, the camera attitude and a pixel position of the target in the image by inverting the system provided by the camera model:

$$\begin{bmatrix} \cos(Elv_T)\sin(Az_T) \\ \cos(Elv_T)\cos(Az_T) \\ \sin(Elv_T) \end{bmatrix} \propto R(roll, pitch, 0) \cdot K^{-1} \begin{bmatrix} C_T \\ R_T \\ 1 \end{bmatrix}$$

In some embodiments, the image may also include an additional control object and the method may further comprise the steps of receiving additional position data regarding the at least one additional control object using the satellite positioning system; determining at least one additional control direction from the camera to the at least one additional control object in the ground referential using said additional position data; refining the camera attitude using the at least one additional control direction; and refining the target direction from the camera to the target using the refined camera attitude. The refining of the camera attitude may comprise the steps of: estimating the at least one additional control direction based on the estimated camera attitude and on a pixel position of the additional control object on the image; comparing the estimated at least one additional control direction to the at least one additional control direction determined based on the additional position data to obtain a direction disagreement vector; and computing a correction rotation matrix to minimize the direction disagreement vector.
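
One standard way to compute a correction rotation that minimizes the disagreement between paired direction vectors is the SVD-based (Kabsch/orthogonal Procrustes) solution. The following sketch illustrates that classical technique; it is not necessarily the exact minimization contemplated by the disclosure:

```python
import numpy as np

def correction_rotation(estimated_dirs, measured_dirs):
    """Least-squares rotation taking estimated directions onto measured ones.

    Both arguments are (n, 3) arrays of unit vectors: directions predicted
    from the estimated attitude and pixel positions, and directions
    determined from the GNSS position data.
    """
    B = np.asarray(measured_dirs).T @ np.asarray(estimated_dirs)
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against a reflection solution
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# The refined attitude would then be obtained as:
# R_refined = correction_rotation(estimated_dirs, measured_dirs) @ R_estimated
```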

With an additional control object B it may also be possible to determine the camera attitude directly from the equations provided by the camera model at the two control objects without speculating on the value of one of the Euler angles:

$$\begin{bmatrix} \cos(Elv_A)\sin(Az_A) \\ \cos(Elv_A)\cos(Az_A) \\ \sin(Elv_A) \end{bmatrix} \propto R(roll, pitch, yaw) \cdot K^{-1} \begin{bmatrix} C_A \\ R_A \\ 1 \end{bmatrix} \qquad \begin{bmatrix} \cos(Elv_B)\sin(Az_B) \\ \cos(Elv_B)\cos(Az_B) \\ \sin(Elv_B) \end{bmatrix} \propto R(roll, pitch, yaw) \cdot K^{-1} \begin{bmatrix} C_B \\ R_B \\ 1 \end{bmatrix}$$

Indeed, the camera model thereby provides four equations for solving three parameters, which can therefore be determined. It is noted that more control objects may be useful to improve the accuracy, for example by trying to correct certain deficiencies in the camera model parameters. For example, they may enable solving for the parameters (s, c0, r0) of the camera model.
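
For illustration, one classical construction that recovers the full rotation from two direction pairs is the TRIAD method; it is offered here only as a sketch of how two control objects suffice to fix the three attitude parameters, not as the algebraic solution the disclosure itself uses:

```python
import numpy as np

def triad_attitude(v_a_cam, v_b_cam, v_a_gnd, v_b_gnd):
    """Rotation R with R @ v_cam ~ v_gnd, from two non-parallel direction pairs.

    The v_cam vectors are normalized K^-1 [C, R, 1] rays in camera coordinates;
    the v_gnd vectors are the control directions built from (Az, Elv).
    """
    def frame(a, b):
        t1 = a / np.linalg.norm(a)
        t2 = np.cross(a, b)
        t2 /= np.linalg.norm(t2)
        return np.column_stack([t1, t2, np.cross(t1, t2)])
    return frame(v_a_gnd, v_b_gnd) @ frame(v_a_cam, v_b_cam).T
```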

In the case where the camera is provided with a level instrument, the yaw and pitch of the camera can be determined without calculation.

FIG. 4 illustrates an exemplary embodiment in which the previously described method may be used to acquire a panorama 100 including two control objects A, B and three targets T1, T2, T3. As illustrated, an advantage of the method may be that the targets can be at a greater distance than the control objects. For example, the control objects can be at a hundred meters from the camera while the targets may be at several kilometers. Furthermore, as illustrated, the targets and the control objects may be spread widely across the field of view of the camera (not within the FOV of a single image), because the panorama increases the area seen by the camera. The method is restated hereinbelow in different words in order to clarify certain aspects. The method may comprise installing a GPS receiver on the camera and calibrating between the antenna center of the receiver and the optical center of the camera. The method may further comprise installing the camera on a tripod, setting up a second receiver nearby (for example at tens of meters) and acquiring a first picture of the second receiver. The method may comprise setting up a third receiver and acquiring a second picture of the third receiver. The first and second pictures may overlap. A rotation matrix between the camera attitude when acquiring the first picture and the camera attitude when acquiring the second picture may be determined by image processing. Thereafter, using the GPS measurements from the three receivers, a line of sight (direction) from the camera to the second receiver and a line of sight from the camera to the third receiver may be determined, and an absolute line of sight of the panorama can be computed. The panorama may further be enlarged to include targets which would not already be present in the image. The targets may be kilometers away from the camera. The pixel location of the target in the panorama may be determined and, using the line of sight of the panorama and the pixel location of the target, the line of sight of the target may be computed.

Advantageously, it is noted that no GPS base stations are required for implementing the described method. Therefore, assuming that GPS (more generally GNSS) coverage is available, the described method is autonomous. The method provides an absolute line of sight determination of targets and has an accuracy in the range of tens to 500 micro-radians (depending on the equipment) using simple tools (tripod, camera, single frequency GPS receiver) with no sophisticated mechanical components. It is possible to obtain even better accuracies, for example when the distance between the GNSS receivers is increased and/or when the camera has a large number of pixels. The set-up of the camera can be done using measurement(s) within a hundred meters from the camera.

Advantageously, after set-up, the camera can be handheld if dealing with remote targets because the angular change would be negligible.

While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.

It will be appreciated that the embodiments described above are cited by way of example, and various features thereof and combinations of these features can be varied and modified.

While various embodiments have been shown and described, it will be understood that there is no intent to limit the invention by such disclosure, but rather, it is intended to cover all modifications and alternate constructions falling within the scope of the invention, as defined in the appended claims.

It will also be understood that the system according to the presently disclosed subject matter can be implemented, at least partly, as a suitably programmed computer. Likewise, the presently disclosed subject matter contemplates a computer program being readable by a computer for executing the disclosed method. The presently disclosed subject matter further contemplates a machine-readable memory tangibly embodying a program of instructions executable by the machine for executing the disclosed method.

Inventors: Zalmanson, Garry Haim; Bar Hillel, Gil

Patent Priority Assignee Title
4866626 | Sep 18 1987 | Honeywell Inc. | Navigation by a video-camera sensed ground array
5471218 | Jul 01 1993 | Trimble Navigation Limited | Integrated terrestrial survey and satellite positioning system
6035254 | Oct 14 1997 | Trimble Navigation Limited | GPS-aided autolock in a robotic total station system
7623224 | Dec 16 2003 | Trimble Jena GmbH | Calibration of a surveying instrument
9564175 | Apr 02 2013 | International Business Machines Corporation | Clustering crowdsourced videos by line-of-sight
9679382 | Jul 24 2013 | ISRAEL AEROSPACE INDUSTRIES LTD | Georeferencing method and system
20010010542
20030179361
20040246468
20090125223
20090222237
20110007154
20120063668
20120173053
20150126223
EP1503176
Executed on | Assignor | Assignee | Conveyance | Frame/Reel/Doc
Sep 14 2014 | | ISRAEL AEROSPACE INDUSTRIES LTD. | (assignment on the face of the patent) |
Dec 24 2014 | ZALMANSON, GARRY HAIM | ISRAEL AEROSPACE INDUSTRIES LTD | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 038056/0905
Dec 24 2014 | ZALMANSON, GARRY HAIM | ISRAEL AEROSPACE INDUSTRIES LTD | CORRECTIVE ASSIGNMENT TO CORRECT THE NAME OF THE FIRST INVENTOR PREVIOUSLY RECORDED ON REEL 038056 FRAME 0905. ASSIGNOR(S) HEREBY CONFIRMS THE NAME OF THE FIRST INVENTOR | 039849/0132
Dec 25 2014 | HILLEL, GIL BAR | ISRAEL AEROSPACE INDUSTRIES LTD | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 038056/0905
Dec 25 2014 | BAR HILLEL, GIL | ISRAEL AEROSPACE INDUSTRIES LTD | CORRECTIVE ASSIGNMENT TO CORRECT THE NAME OF THE FIRST INVENTOR PREVIOUSLY RECORDED ON REEL 038056 FRAME 0905. ASSIGNOR(S) HEREBY CONFIRMS THE NAME OF THE FIRST INVENTOR | 039849/0132
Date Maintenance Fee Events
Aug 20 2021 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity.


Date Maintenance Schedule
Feb 20 2021 | 4 years fee payment window open
Aug 20 2021 | 6 months grace period start (w/ surcharge)
Feb 20 2022 | patent expiry (for year 4)
Feb 20 2024 | 2 years to revive unintentionally abandoned end (for year 4)
Feb 20 2025 | 8 years fee payment window open
Aug 20 2025 | 6 months grace period start (w/ surcharge)
Feb 20 2026 | patent expiry (for year 8)
Feb 20 2028 | 2 years to revive unintentionally abandoned end (for year 8)
Feb 20 2029 | 12 years fee payment window open
Aug 20 2029 | 6 months grace period start (w/ surcharge)
Feb 20 2030 | patent expiry (for year 12)
Feb 20 2032 | 2 years to revive unintentionally abandoned end (for year 12)