A scanning apparatus and method for generating computer models of three-dimensional objects, comprising means for scanning the object to capture data from a plurality of points on its surface, the scanning means being able to capture data from two or more points simultaneously; means for sensing the position of the scanning means; means for generating intermediate data structures from the data; means for combining intermediate data structures to provide the model; display means; and means for manually operating the scanning apparatus. The signal generated is structured light in the form of a stripe or an area from illumination sources, such as a laser diode or bulbs, which enables data for the position and color of the surface to be determined. The object may be placed on a turntable and may be viewed in real time as rendered polygons on a monitor as it is scanned.
0. 56. A method for generating three-dimensional data of an object, comprising:
irradiating light onto the object from a scanner;
detecting light reflected from a surface of the object by the scanner;
outputting information indicated by the detected light from the scanner;
detecting relative position of the object to the scanner at a timing related to a timing at which the reflected light is detected; and
generating three-dimensional data of the object using the detected relative position and the outputted information.
14. A scanner mountable on a multiply-jointed arm for movement therewith to capture data from a plurality of points on a surface of an object, the scanner having a housing enclosing:
a light source operable to emit light onto the object surface;
a light detector operable to detect light reflected from the object surface and to generate electrical image data signals in dependence upon the detected light; and
a data processor operable to process the electrical image data signals to generate digital image data,
the data processor being connectable to a bus to transmit the digital image data.
30. A laser scanner mountable on a multiply-jointed arm for movement therewith to capture data from a plurality of points on a surface of an object, the laser scanner having a housing enclosing:
a laser to emit a laser stripe onto the object surface;
a camera operable to generate images of laser light reflected from the object surface; and
a data processor operable to process the images generated by the camera to generate processed data defining a position of the laser stripe in the images, the data processor being connectable to a data communication link to transmit the processed data therealong.
22. A scanner mountable on a multiply-jointed arm for movement therewith to capture data from a plurality of points on a surface of an object, the scanner having a housing enclosing:
a light source operable to emit light onto the object surface;
a light detector operable to detect light reflected from the object surface and to generate electrical image data signals in dependence upon the detected light; and
a data processor operable to process the electrical image data signals to generate data defining coordinate measurements of the surface of the object, and to transmit the generated data on a physical data path.
0. 48. An apparatus comprising:
a scanner including a light source which emits light to an object so that the light reflects off the object, and a light detector which detects the reflected light, wherein the scanner outputs first information indicated by the detected light; and
a processor unit which communicates with the scanner and which generates three-dimensional data of the object using the first information outputted by the scanner and second information relative to a position of at least one of the object and the scanner at a timing corresponding to a timing at which the light detector detects the reflected light.
35. A laser scanner mountable on a multiply-jointed arm for movement therewith to capture data from a plurality of points on a surface of an object, the laser scanner having a housing enclosing:
a laser to emit at least one laser stripe onto the object surface;
a camera operable to generate images of laser light reflected from the object surface, each image comprising a plurality of pixels; and
a data processor operable to process the images generated by the camera to perform measurements to sub-pixel accuracy, the data processor being connectable to a data communication link to transmit results of the measurements therealong.
0. 59. A method for generating three-dimensional data of an object, comprising:
irradiating light onto the object from a scanner;
detecting light reflected from a surface of the object by the scanner;
outputting information indicated by the detected light from the scanner;
generating a trigger pulse synchronized with a timing at which the scanner detects light;
processing position information relative to the position of the scanner;
outputting the position information in response to the trigger pulse; and
generating three-dimensional data of the object using the position information and the information output from the scanner.
7. A scanner mountable on a multiply-jointed arm for movement therewith to capture data from a plurality of points on a surface of an object, the scanner having a housing enclosing:
a light source operable to emit light onto the object surface;
a light detector operable to detect light reflected from the object surface and to generate electrical image data signals in dependence upon the detected light; and
a data processor operable to process the electrical image data signals to generate processed data of reduced quantity, the data processor being connectable to a data communication link to transmit the processed data therealong.
0. 43. An apparatus comprising:
a scanner including a light source which emits light to an object so that the light reflects off the object, and a light detector which detects the reflected light, wherein the scanner outputs information indicated by the detected light;
a position detector detecting position of at least one of the object and the scanner at a timing corresponding to a timing at which the light detector detects the reflected light; and
a processor determining a relative position of the object to the scanner using the information output by the scanner, and generating three-dimensional data of the object using the determined relative position and the information output by the position detector.
0. 44. An apparatus comprising:
a scanner including a light source which emits light to an object so that the light reflects off the object, and a light detector which detects the reflected light, wherein the scanner outputs first information indicated by the detected light;
a position detector which processes second information related to position of at least one of the object and the scanner at a timing related to a timing at which the light detector detects the reflected light; and
a processor which communicates with the position detector and the scanner, and which generates three-dimensional data of the object using the first information outputted by the scanner and the second information outputted by the position detector.
11. A scanning apparatus, comprising:
a multiply-jointed arm having a plurality of arm segments;
a scanner mounted on an arm segment of the multiply-jointed arm for movement therewith to capture data from a plurality of points on a surface of an object, the scanner having a housing enclosing:
(a) a light source operable to emit light onto the object surface;
(b) a light detector operable to detect light reflected from the object surface and to generate electrical image data signals in dependence upon the detected light; and
(c) a data processor operable to process the electrical image data signals to generate digital image data; and
a bus connected to the data processor of the scanner to transmit the digital image data.
0. 36. An apparatus comprising:
a scanner including a light source which emits light to an object so that the light reflects off the object, and a light detector which detects the reflected light, wherein the scanner outputs information indicated by the detected light;
a position detector detecting position of at least one of the object and the scanner at a timing corresponding to a timing at which the light detector detects the reflected light;
a processor determining a relative position of the object to the scanner using the information output by the scanner; and
a three-dimensional data generator generating three-dimensional data of the object using the determined relative position and the information output by the position detector.
0. 42. An apparatus comprising:
a scanner including a light source which emits light to an object so that the light reflects off the object, and a light detector which detects the reflected light, so that the scanner thereby scans the object, wherein the scanner outputs information indicated by the detected light;
a processor which, as the object is being scanned by the scanner, determines a changing relative positional relationship between the object and the scanner at a timing corresponding to a timing at which the light detector detects the reflected light; and
a three-dimensional data generator generating three-dimensional data of the object using the information output by the scanner and the determined change in positional relationship.
26. A laser scanning apparatus, comprising:
a multiply-jointed arm having a plurality of arm segments and a data communication link to transmit data; and
a laser scanner mounted on an arm segment of the multiply-jointed arm for movement therewith to capture data from a plurality of points on a surface of an object, the laser scanner having a housing enclosing:
(a) a laser to emit a laser stripe onto the object surface;
(b) a camera operable to generate images of laser light reflected from the object surface; and
(c) a data processor operable to process the images generated by the camera to generate processed data defining a position of the laser stripe in the images, the data processor being connected to the data communication link to transmit the processed data therealong.
16. A coordinate measuring machine, comprising:
a multiply-jointed arm having a plurality of arm segments and a physical data path to transmit data; and
a scanner mounted on an arm segment of the multiply-jointed arm for movement therewith to capture data from a plurality of points on a surface of an object, the scanner having a housing enclosing:
a light source operable to emit light onto the object surface;
a light detector operable to detect light reflected from the object surface and to generate electrical image data signals in dependence upon the detected light; and
a data processor operable to process the electrical image data signals to generate data defining coordinate measurements of the surface of the object, and to transmit the generated data on the physical data path.
1. A scanning apparatus, comprising:
a multiply-jointed arm having a plurality of arm segments and a data communication link to transmit data; and
a scanner mounted on an arm segment of the multiply-jointed arm for movement therewith to capture data from a plurality of points on a surface of an object, the scanner having a housing enclosing:
(a) a light source operable to emit light onto the object surface;
(b) a light detector operable to detect light reflected from the object surface and to generate electrical image data signals in dependence upon the detected light; and
(c) a data processor operable to process the electrical image data signals to generate processed data of reduced quantity, the data processor being connected to the data communication link to transmit the processed data therealong.
32. A laser scanning apparatus, comprising:
a multiply-jointed arm having a plurality of arm segments and a data communication link to transmit data; and
a laser scanner mounted on an arm segment of the multiply-jointed arm for movement therewith to capture data from a plurality of points on a surface of an object, the laser scanner having a housing enclosing:
(a) a laser to emit at least one laser stripe onto the object surface;
(b) a camera operable to generate images of laser light reflected from the object surface, each image comprising a plurality of pixels; and
(c) a data processor operable to process the images generated by the camera to perform measurements to sub-pixel accuracy, the data processor being connected to the data communication link to transmit results of the measurements therealong.
2. A scanning apparatus according to
3. A scanning apparatus according to
4. A scanning apparatus according to
6. A scanning apparatus according to
8. A scanner according to
9. A scanner according to
10. A scanner according to
12. A scanning apparatus according to
13. A scanning apparatus according to
17. A coordinate measuring machine according to
18. A coordinate measuring machine according to
19. A coordinate measuring machine according to
20. A coordinate measuring machine according to
21. A coordinate measuring machine according to
23. A scanner according to
24. A scanner according to
25. A scanner according to
27. A laser scanning apparatus according to
the camera is arranged to generate images comprising a plurality of pixels; and
the data processor is arranged to process the images generated by the camera to generate processed data defining a position of the laser stripe in the images to sub-pixel accuracy.
28. A laser scanning apparatus according to
29. A laser scanning apparatus according to
31. A laser scanner according to
the camera is arranged to generate images comprising a plurality of pixels; and
the data processor is arranged to process the images generated by the camera to generate processed data defining a position of the laser stripe in the images to sub-pixel accuracy.
33. A laser scanning apparatus according to
34. A laser scanning apparatus according to
0. 37. An apparatus according to claim 36, wherein
the timing at which the light detector detects the reflected light is at least one of different timings defined by a synchronization signal; and
the apparatus further comprising a trigger pulse generator receiving the synchronization signal and, in response thereto, outputting trigger pulses to the position detector, which indicate the timing at which the position detector is to detect the position.
0. 38. An apparatus according to claim 37, wherein the timing at which the trigger pulse generator outputs trigger pulses is a predetermined time behind the timing at which the trigger pulse generator receives the synchronization signal.
0. 39. An apparatus according to claim 36, wherein the position detector is a remote position sensor.
0. 40. An apparatus according to claim 36, wherein the position detector calculates the position to thereby detect the position.
0. 41. An apparatus according to claim 36, further comprising:
a probe which captures data from individual points on the object touched by the probe.
0. 45. An apparatus according to claim 44, wherein
the timing at which the light detector detects the reflected light is at least one of different timings defined by a synchronization signal; and
the apparatus further comprising a trigger pulse generator receiving the synchronization signal and, in response thereto, outputting to the position detector trigger pulses, each of which indicates a timing at which the position detector is to detect the position.
0. 46. An apparatus according to claim 45, wherein the timing at which the trigger pulse generator outputs trigger pulses is a predetermined time behind a timing at which the trigger pulse generator receives the synchronization signal.
0. 47. An apparatus according to claim 46, wherein the position detector calculates the position to thereby detect the position.
0. 49. An apparatus according to claim 48 further comprising:
a position detector which detects relative position between the scanner and the object at the timing corresponding to the timing at which the light detector detects the reflected light.
0. 50. An apparatus according to claim 48, wherein
the timing at which the light detector detects the reflected light is at least one of different timings defined by a synchronization signal; and
the apparatus further comprising a trigger pulse generator receiving the synchronization signal and, in response thereto, outputting to the position detector trigger pulses, each of which indicates a timing at which the position detector is to detect the position.
0. 51. An apparatus according to claim 49, wherein
the timing at which the light detector detects the reflected light is at least one of different timings defined by a synchronization signal; and
the apparatus further comprising a trigger pulse generator receiving the synchronization signal and, in response thereto, outputting to the position detector trigger pulses, each of which indicates a timing at which the position detector is to detect the position.
0. 52. An apparatus according to claim 50, wherein a timing at which the trigger pulse generator outputs trigger pulses is a predetermined time behind a timing at which the trigger pulse generator receives the synchronization signal.
0. 53. An apparatus according to claim 51, wherein a timing at which the trigger pulse generator outputs trigger pulses is a predetermined time behind a timing at which the trigger pulse generator receives the synchronization signal.
0. 54. An apparatus according to claim 52, wherein the position detector calculates the position to thereby detect the position.
0. 55. An apparatus according to claim 53, wherein the position detector calculates the position to thereby detect the position.
0. 57. A method according to claim 56, further comprising:
generating a synchronization signal from the scanner; and
generating a trigger pulse, in response to receiving the synchronization signal at a trigger pulse generator,
wherein the relative position of the object to the scanner is detected at a timing at which the trigger pulse is detected by the position sensor.
0. 58. A method according to claim 57, wherein the trigger pulse is output at a timing which is a predetermined time behind a timing at which the trigger pulse generator receives the synchronization signal.
This application
The transformation matrix Tam can be found in several ways. Now referring to
The above method can be encapsulated in the main scanning software provided with the scanning system or in a separate program. This saves considerable time compared with the alternative of the user calculating Tam manually from arm positions output by the arm manufacturer's software and manually entering the resulting Tam into the main scanning system software.
The probe side mount 214 is integral to the probe and does not move relative to the probe coordinate system. The transformation matrix Tmp is provided by the probe supplier with the calibration data for the probe.
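By way of a minimal sketch (assuming 4x4 homogeneous matrices, with Tam mapping mount coordinates into arm coordinates and Tmp mapping probe coordinates into mount coordinates), the chain through the intermediate mount composes as Tap = Tam Tmp; the function and variable names here are illustrative, not taken from the disclosure:

```python
import numpy as np

def compose_alignment(T_am: np.ndarray, T_mp: np.ndarray) -> np.ndarray:
    """Chain the arm-to-mount and mount-to-probe transforms into the
    arm-to-probe transform T_ap (all 4x4 homogeneous matrices)."""
    assert T_am.shape == (4, 4) and T_mp.shape == (4, 4)
    return T_am @ T_mp

def probe_point_in_arm_frame(T_ap: np.ndarray, p_probe: np.ndarray) -> np.ndarray:
    """Map a 3D point measured in the probe coordinate system into the
    arm (world) coordinate system."""
    p_h = np.append(p_probe, 1.0)   # homogeneous coordinates
    return (T_ap @ p_h)[:3]
```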
The direct calculation of Tap using the arm and probe coordinate systems, but without involving an intermediate mount, can be carried out in many ways. Most of the ways involve using the probe mounted on the arm to capture data from one or more geometrical objects. The problem has proven to be very difficult, since many of the standard methods produce inaccurate results in either the orientation or position components, often due to inherent instabilities triggered by relatively small errors. One way is disclosed, by way of example, for a stripe probe:
The handedness of the coordinate systems of the arm 1 and the probe 3 would be known. The relationship between the normals of the surfaces on the alignment calibration object 230 could be specified. One way of doing this is by labeling the three faces 231, 232, 233 and specifying the order in which the three faces must be scanned.
The main advantages of the above apparatus and its method of aligning the probe are (1) that it involves a single alignment calibration object that is cheap to manufacture to the required geometrical tolerance and is relatively light and compact; (2) that the method is robust, simple to carry out from written instructions and quick; (3) that the processing can be encapsulated in the main scanning software provided with the scanning system or in a separate program; (4) that there is no need to have any preliminary geometric information about the orientation and position of the probe relative to the tip of the arm at the start of this method—for example, the probe could be slung on the underside of the arm pointing backwards and the method would work; and (5) that if the probe is knocked or damaged such that Tmp changes but the calibration is still valid, then this method of alignment will still work.
In using scanning systems to provide data for 3D applications software, the need for specific 3D reference points in addition to 3D surfaces became apparent. Some applications requiring 3D surfaces also require 3D reference points, such as animations involving joint movements where a joint is to be specified in the context of the 3D model. In this case, the joint can be quickly defined from one or more 3D reference points. A new method of using the scanning system is to use the probe 3 to scan the surface and to use the tip reference point 51 to capture individual 3D points by contact. An alternative method is to project a calibrated crosshair onto the object and use an optical method of picking up individual points. This can be used in both stripe and area systems. The calibrated crosshair is usually switched on only during the period in which individual points are captured. There could be two modes: in the first mode, individual points are captured each time a button is clicked; in the second mode, a stream of individual points is captured from when a button is first pressed until it is pressed again. The second mode is commonly used for tracing out important feature lines, such as style lines or patch boundaries. In the case of a stripe sensor, instead of projecting a crosshair, it may only be necessary to project a second stripe at the same time as the main stripe. The crosshairs may be calibrated by the probe supplier using a three-axis computer controlled machine, a known calibration object, and standard image processing techniques.
The scanning apparatus 100 is operable to scan an object and thereby generate a computer model of the object's surface using an intermediate data structure for efficiently storing points on the surface of the object during scanning, creating an instance of the intermediate data structure for the particular object; and controlling the storage of the scanned points in the intermediate data structures during scanning with an operator control system.
Three examples of these intermediate data structures are points, encoded stripes, and range images.
Points have the disadvantage of being unorganized; much of the information obtained from the structure of the probe and the method of its use is lost if the 3D data is reduced to points.
In the case of stripe probes, much information may be retained to improve the speed and quality of construction of a model from intermediate data if an encoded stripe intermediate data structure is used. Such a structure stores data from one stripe at a time. The stripes are stored in the order of capture. The time of capture of each stripe is recorded. The orientation of the probe is recorded for each stripe. The raw data points from the stripe may be processed before storing in the data structure to determine jump and break flags and to sample or chordally tolerance the raw data points to reduce the size of the intermediate data structure without losing any significant information.
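A minimal sketch of such an encoded stripe record, assuming Python dataclasses; the field names are illustrative, not the patent's actual layout:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EncodedStripe:
    """One captured stripe, stored in order of capture."""
    capture_time: float                                   # time of capture of the stripe
    probe_orientation: Tuple[float, float, float]         # probe orientation for this stripe
    points: List[Tuple[float, float, float]] = field(default_factory=list)  # toleranced 3D points
    jump_flags: List[int] = field(default_factory=list)   # point indices where the stripe jumps between surfaces
    break_flags: List[int] = field(default_factory=list)  # point indices where the stripe is interrupted
```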
In the case of area probes, the advantages of a range image as an intermediate data structure are well known. These advantages include a data structure that relates well to the area based data capture method and the efficiency of storage in an image in which only Z values are stored.
An intermediate data structure can be used in which the surface of an object is described by means of a finite number of linear and cylindrical range images that are, to some extent, characterized by the shape of the object.
A linear range image 70 is illustrated with reference to
Cylindrical range images 71, 72 are described in
Referring now to
The range image-placing algorithm is simple and quick, but it is indiscriminate, often placing points incorrectly in range images and relying upon them being overwritten by a nearer point. If the range image is very dense, but populated with few values, then up to half the points populated could be incorrect because the surface normal of the point is incorrect. This can restrict successful scanning to coarse range images.
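A minimal sketch of the basic placing step for a linear range image (orthonormal axes u and v spanning the image, viewing direction w, grid spacing cell); this is an illustrative reading of the algorithm described above, not the disclosed implementation:

```python
import numpy as np

def place_point(depth, point, origin, u, v, w, cell):
    """Place one 3D point into a linear range image; depth is a 2D array
    of Z values initialised to +inf (unpopulated)."""
    d = np.asarray(point) - origin
    z = float(np.dot(d, w))
    if z < 0.0:
        return                       # only points in front of the zero position are stored
    i = int(round(np.dot(d, u) / cell))
    j = int(round(np.dot(d, v) / cell))
    if 0 <= i < depth.shape[0] and 0 <= j < depth.shape[1]:
        if z < depth[i, j]:          # the surface nearest the zero position wins;
            depth[i, j] = z          # nearer points overwrite incorrectly placed ones
```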
The range image-placing algorithm is improved upon with the surface normal extension. The range image-placing algorithm does not have an estimate of the surface normal of the point to be placed. Also, it does not take into account the orientation of the probe when the stripe is captured. To improve the range image placing, the fact that most stripes are scanned in sequence and have near predecessor and near successor stripes is used. For example, as illustrated in
A number of range images that are positioned in the object coordinate system must be defined. The range images have specific mathematical definitions. Two basic types of range image are used—linear and cylindrical—as discussed above. A range image has direction and a zero position. The range image can only store points that are in front of its zero position. If there are two or more surfaces of the object in line with a point in the range image, then the surface that is nearest to the range image's zero position is represented in the range image. A range image can be constrained in size or unconstrained in size. The range image can be one image of fixed density or comprise a patchwork of a number of adjoining images of different densities. Each grid position in the range image is single-valued. The range image will typically use 4 bytes to store a depth value Z, from 1 to 4 bytes to store the gray scale or color value 1, and from 1 to 3 bytes to store the orientation N. This is illustrated with reference to
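The per-cell budget described above (4 bytes for the depth Z, 1 to 4 bytes for the grey scale or color value I, 1 to 3 bytes for the orientation N) maps naturally onto a structured array. A sketch with one possible choice of sizes (4-byte depth, 3-byte RGB, normal quantised to 2 bytes); this layout is an assumption for illustration:

```python
import numpy as np

# One grid cell: 4-byte depth Z, 3-byte RGB colour I, and the surface
# normal N quantised to two bytes (azimuth and elevation).
cell_dtype = np.dtype([
    ("z", np.float32),        # depth value Z (4 bytes)
    ("i", np.uint8, (3,)),    # colour value I (here 3 bytes)
    ("n", np.uint8, (2,)),    # orientation N (here 2 bytes)
])

range_image = np.zeros((512, 512), dtype=cell_dtype)
range_image["z"].fill(np.inf)  # +inf marks an unpopulated grid position
```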
Now referring to
With objects with deep external features, such as the inside of an ear, it may not be possible or practical to scan all parts of the external surface, but it is possible to represent them theoretically.
The number and position of the range images used in the process are such that they are sufficient to be able to store enough of the surface of the object to enable a computer model of the desired accuracy and detail to be generated.
In a manual process, the number and position of all the range images may be defined by the operator before scanning. Alternatively, just one may be defined by the operator before scanning begins, followed by the definition of others at any point during scanning. The operator has a choice of several strategies. He can define range images and scan them one at a time. He can define a number of range images and scan them simultaneously. He can define some range images and scan, then define more range images and scan again. If a point is scanned that does not fit onto any defined range image, then it is rejected. Alternatively, such rejected points could be automatically saved for placing into any new range images that the operator may subsequently define.
A typical number of range images varies from 1 to 20. Some range images need only be very small in size—small enough to cover a part of the object that is otherwise hidden from recording on other range images. The density of each range image can vary. For instance, a large, smooth part of the object does not need a high point density; but a small, finely detailed ornament may require a high point density. Each range image has a direction.
The operator may select the most suitable set of predefined range images from a library of range image sets. He can then edit the set to suit his object. Each new set is then stored in the library. A set can be thought of as a set of templates. As an example, for a human form there could be a range image set consisting of five cylindrical range images for the limbs and the trunk, together with five linear range images for the top of the head/shoulders, hands, and feet. For a car, one cylindrical range image for the car's body and two linear range images at each end of the car could be enough. It is important to note that the axis of a cylindrical range image must lie within the object or part of the object being scanned.
A range image is manually defined by the operator by first selecting the appropriate range image type—cylindrical or linear—and second, placing the probe to give the desired position and orientation of the range image and selecting it using the operator control system. For a cylindrical range image, the probe could be positioned to first give the position and direction of the axis and then to give the maximum radius.
Now referring to
The inference method is particularly used when an additional range image is added at a late stage in the scanning process or if range images are defined/scanned one at a time. The method enables surface areas that are nearly orthogonal to the range image, i.e., are almost vertical walls, to be well defined from data stored in the other range images. This provides a better set of points for carrying out the polygonization of one range image resulting in a more accurate polygonal network and simplifying the polygonization process.
The probe 3 provides data that is displayed on the display monitor 7 as a rendered polygonal surface 13 in real-time or with an acceptable delay such that the user can watch the display monitor 7 and use the feedback of the rendered surface to guide his movement of the probe 3. Real-time is defined in the context of visualization as an operation reacting with a delay small enough to be acceptable to an operator in normal use. The probe 3 could be a stripe probe or an area probe. Where the probe captures 3D and color information, then the color information can be mapped onto the 3D model to texture it, as discussed below.
The surface to be displayed is calculated for stripe probes one additional stripe at a time. Referring now to
If color has been recorded for a polygon, then the color information can be mapped onto the polygon. The precise mapping algorithm depends on the format of the raw color information, which depends on the design of the probe. The raw color information may comprise point, line, or area samples. The raw color information may be adjusted before mapping using colorimetric calibration and intensity calibration data. During the mapping process, the color information may be adjusted for the probe-to-polygon distance at the point of color capture and for the polygon's orientation relative to the probe at the point of capture. The basis for the adjustments is a set of calibration procedures carried out for each individual probe.
The viewpoint for the surface displayed can have a constant position, zoom, and orientation in the world coordinate system of the object, such that, as the probe is moved, the surface displayed grows where data is captured. The viewpoint is set before scanning starts, either with an input device (such as buttons) on the arm, foot pedals, a mouse, and a keyboard, or by using the probe itself to specify the viewpoint. Alternatively, the viewpoint can have a constant position, zoom, and orientation in the probe coordinate system, such that, as the probe moves, the surface is completely re-rendered at regular intervals, each time with the new surface displayed where data has been captured, the regular intervals being at an acceptable real-time rate, such as 25 displays per second or less often. Alternatively, the viewpoint can have a constant position, zoom, and orientation in the world coordinate system, with the surface displayed growing where data is captured, and be completely updated to the viewpoint of the probe coordinate system on operator demand, such as by depressing a button or foot pedal, or at regular time intervals, such as every 10 seconds. The different methods of updating the viewpoint provide different advantages, depending on the size and type of the object being scanned and the speed of the computer in recalculating the surface display from a different viewpoint.
Referring again to
Referring again to
As computing power becomes faster and more compact, it will be possible to encapsulate the computer 4 in the probe 3 as well as to mount the display 7 on the probe. The probe might have memory 262, which could be both dynamic memory and magnetic memory, such as a CDROM or digital video disk (DVD). The probe might have a local power source 260, such as batteries. This would be the case with one or more remote position sensors 261 mounted inside the probe. Although one remote position sensor is sufficient, more accuracy is obtained by averaging the positions coming from three or more remote position sensors. Another benefit of three or more sensors is that when a spurious position is output by one or more sensors, this can be detected and the data ignored. Detection of incorrect positions is by means of comparing the positions output by the three sensors with their physical locations within the probe to see whether the variation is larger than the combined, acceptable error of the sensors. Since remote position sensor technology is likely to remain much less accurate than multiply-jointed arm technology, it is preferable that probes with remote sensors use array scanning means rather than stripe scanning means. With a single array scan, all the data in the array (i.e., a range image) is accurately registered together, but with stripes there are position errors between any two sequential stripes. It is possible to use an iterative closest point (ICP) algorithm on overlapping range images to substantially reduce the errors caused by the remote position sensors; this is not possible with stripes.
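A minimal sketch of this averaging and spurious-position check, assuming each sensor's reading has already been rotated into a common frame (handling of the probe's orientation is omitted for brevity); the names and the rejection rule are illustrative:

```python
import numpy as np
from typing import Optional

def fuse_sensor_positions(readings: np.ndarray, offsets: np.ndarray,
                          tolerance: float) -> Optional[np.ndarray]:
    """Average the probe position reported by n >= 3 remote sensors.

    readings  -- (n, 3) positions reported by the sensors
    offsets   -- (n, 3) known physical locations of the sensors in the probe
    tolerance -- combined acceptable error of the sensors
    """
    estimates = readings - offsets          # per-sensor estimates of the probe origin
    mean = estimates.mean(axis=0)
    spread = np.linalg.norm(estimates - mean, axis=1)
    if spread.max() > tolerance:
        return None                         # spurious position detected: ignore the data
    return mean                             # averaged position (more accurate than any one sensor)
```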
A number of different technologies exist for area probes including binary stereo, photometric stereo, texture gradients, range from focus, range from motion, time of flight, Moire interferometric, and patterned structured light systems. The most common systems in use in industrial applications are time of flight, Moire, and patterned structured light. Different area probe technologies have different advantages and disadvantages for manual scanning.
Time of flight systems use a modulated laser spot to measure a scene by the phase shift between outgoing and reflected beams, which is proportional to the range of the object point. A complete range image is captured by scanning the whole region of interest. For a small area, this technique is advantageous since it is line of sight, although the accuracy is generally of the order of 1-2 mm unless multiple measurements are taken at each point, thus reducing scanning speed significantly. It is thus too slow.
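The phase-to-range relation underlying such systems is standard: with modulation frequency f and out-and-back travel, a phase shift of 2*pi corresponds to half a modulation wavelength of range. A one-function sketch, illustrative rather than any cited system's implementation:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_phase(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Range implied by the phase shift of a modulated beam.
    Ranges beyond half a modulation wavelength are ambiguous unless
    several modulation frequencies are combined."""
    round_trip = (phase_shift_rad / (2.0 * math.pi)) * (C / mod_freq_hz)
    return round_trip / 2.0
```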
Moire systems use gratings in front of projection and viewing optics to produce an interference pattern that varies according to local changes in height on the object. Absolute measurements and measurements across discontinuities are only possible by taking several measurements with different grating configurations or from different projection angles. For relative height measurement, these systems offer high accuracy. It is thus too problematic to obtain absolute measurements.
A depth from focus range area sensor has recently been demonstrated that allows the real-time determination of range from pairs of single images from synchronized cameras, albeit with the use of relatively complex hardware. It is thus too complex to use at this point in the development of the technology.
Referring now to
The simultaneous projection of color-coded light stripes overcomes the disadvantages of the previously described systems and is the preferred area embodiment of this invention. Each stripe is one color. Each color may be a discrete wavelength, such as provided by a number of different laser diodes or a subset of a spectrum range of color generated from a white light source. Either all of the colors may be unique or a small number of colors may repeat. The repetition of a small number of colors can lead to ambiguity if stripes of the same colors are not sufficiently separated.
The probe encapsulation would have advantages in terms of cost reduction and complete flexibility in freedom of use because even cables may not be required and the only limits would be the range and accuracy of the remote position sensor.
If an arm is being used as the position sensor, the probe with a display mounted on it might receive its power along a cable that may follow the path of the arm, and the computer may be situated in the base of the arm, which would reduce the weight of the probe and reduce operator fatigue.
Referring again to
There are several ways of automatically polygonizing intermediate data to form a 3D polygonal model. Two ways are described—strip polygonization and range image polygonization.
The strip polygonization of intermediate data to automatically create a polygonal model is described for a stripe scanner. The following description is by means of an example and comprises the following steps:
1. Take the intermediate data in the order in which it is scanned, including the probe orientation for each stripe. For a stripe probe, this will typically consist of a number of neighboring stripes with occasional discontinuities, such as when the scanning process is paused or a turntable is turned or the direction of scanning is reversed. The intermediate data is preferably in an encoded stripe form as described above.
2. Group the data into stripes of similar probe orientations and no discontinuities. An acceptable variation of the probe orientation in a group of data may be ten degrees. The average normal for each set of stripes is specified. A new group is started each time a discontinuity appears or when the probe orientation varies unacceptably.
3. If not already done in the intermediate data, filter the stripes in each group using a chordal tolerancing routine (a sketch of such a routine follows this list) to reduce the quantity of points and maintain the positions of the break and jump flags.
4. Use a 2.5D polygonization method to polygonize each group. This will result in a number of 2.5D polygonal meshes. There may be holes in any of the meshes. The method eliminates occluded surfaces behind surfaces resulting from variations in the probe orientation within the group.
5. Use a polygon mesh integration method such as an implicit surface method to integrate the 2.5D polygonal meshes into a computer model comprising one or more 3D polygonal meshes.
6. If required, use the known base plane of the object specified during the scanning setup to automatically close the bottom of the model where the object could not be scanned because it was resting on a table or turntable.
7. If required, use a general closing function to automatically close all holes in the model.
8. If required, use a smoothing function set such that features created by known levels of inaccuracy in the 3D scanning process are smoothed out and features greater in size than the inaccuracy of the system are maintained.
9. Convert the internal polygon format into an output file of a commonly used polygon file format, such as DXF.
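For step 3, a minimal sketch of one plausible chordal tolerancing routine (a recursive, Douglas-Peucker-style reduction); in practice the jump and break flag positions would be kept as forced split points, which is omitted here:

```python
import numpy as np

def chordal_tolerance(points: np.ndarray, tol: float) -> np.ndarray:
    """Reduce a stripe polyline, keeping every point that deviates from
    the chord between the current end points by more than tol."""
    if len(points) <= 2:
        return points
    chord = points[-1] - points[0]
    chord_len = np.linalg.norm(chord)
    if chord_len == 0.0:
        dists = np.linalg.norm(points - points[0], axis=1)
    else:
        # perpendicular distance of every point from the chord
        dists = np.linalg.norm(np.cross(points - points[0], chord), axis=1) / chord_len
    k = int(np.argmax(dists))
    if dists[k] <= tol:
        return np.array([points[0], points[-1]])   # the chord represents this span
    left = chordal_tolerance(points[:k + 1], tol)  # split at the worst point
    right = chordal_tolerance(points[k:], tol)
    return np.vstack([left[:-1], right])           # drop the duplicated split point
```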
The range image polygonization of intermediate data to automatically create a polygonal model is similar to strip polygonization. Each range image is effectively a group of stripes with the same surface normal. Steps 1 and 2 above are, therefore, not needed. There are two ways of carrying out the equivalent of step 3 above. Range image data may be chordal toleranced as a series of stripes as described in step 3, and the polygonization process continued with steps 4 to 9, as required. In the second way, given the greater structure of a range image over a group of stripes, steps 3 and 4 may be combined and a range image tolerancing algorithm combined with a 2.5D polygonization algorithm and the polygonization process continued with steps 5 to 9, as required.
Area scanners usually output range images. In general, range image polygonization is better suited to area scanners and strip polygonization is better suited to stripe scanners. If the intermediate data structure is range images then the range image polygonization will work whether each range image relates to a particular data capture instant or is part of a defined range image structure that is characterized by the shape of the object.
The combining of color data onto the 3D model is known as texture mapping.
Before raw color data in the form of color images can be texture mapped onto the 3D model, it must first be corrected by means of various calibrations.
An important calibration is the geometric calibration of the color camera and finding the alignment transform of the color camera to the calibrated 3D measurement system in the probe. Without these calibrations/alignments, neighboring color samples when mapped together will produce visible errors. The objective of these calibrations is to get the geometric errors much smaller than those of the arm accuracy. The first geometric calibration is to take out lens distortion. Standard means are used for this based on imaging geometric objects of known size and extracting pixel coordinates using standard image processing techniques. The second is to create the camera model. A simple pinhole model can be used or a more complex model. Standard means are used for this based on imaging geometric objects of known size from different distances and extracting pixel coordinates using standard image processing techniques. The third is generating the alignment transform. A method has been developed based on 3D and color imaging geometric objects of known size using the probe. For all three methods, a three-axis computer controlled machine is used to ensure precise distances. The probe engineering must be geometrically stable enough such that this transform will only be recalculated rarely such as after the probe has been dropped or damaged.
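A minimal sketch of the simple pinhole model mentioned above, mapping a 3D point in camera coordinates to pixel coordinates once lens distortion has been removed; the focal lengths fx, fy and principal point cx, cy come from the camera-model calibration, and the names are illustrative:

```python
def project_pinhole(x: float, y: float, z: float,
                    fx: float, fy: float, cx: float, cy: float):
    """Project a 3D point (camera coordinates, z > 0) to pixel coordinates
    under the pinhole model; distortion is assumed already corrected."""
    return (fx * x / z + cx, fy * y / z + cy)
```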
Much of the effect of distance from the probe to the object on recorded light intensity can be calibrated out. A diffuse, flat, white surface is imaged normal to the camera axis at a number of different distances from the probe to the surface. The distances are chosen to cover the whole scanning range from closest point to furthest point. The variations in mean intensity recorded in the camera are used to calibrate the probe with distance. This calibration data is used to correct the color data recorded when scanning an object such that all color data is corrected to a known distance equivalent.
Much of the effect of tilt of the surface from the camera axis on the color quality can be removed, but the effectiveness of this depends on at least the surface reflectance for each color. A diffuse, flat, white surface is imaged at various angles to the camera axis at a fixed distance from the probe to the surface. The angles are chosen up to the point at which there is significant deviation from the Lambertian model. The variations in mean intensity recorded in the camera are used to calibrate the probe intensity with relative surface angle to the probe. This calibration data is used to correct the color data recorded when scanning an object such that all color data is corrected to a normal-incidence equivalent.
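A minimal sketch of applying both corrections to a raw colour sample, assuming the calibrations above have been reduced to a distance-response table and a Lambertian cosine model; the numbers are placeholders, not measured calibration data:

```python
import numpy as np

# Placeholder distance-response table from imaging a diffuse, flat,
# white surface at several probe-to-surface distances (mm).
cal_distances = np.array([80.0, 100.0, 120.0, 140.0])
cal_intensity = np.array([1.30, 1.00, 0.78, 0.62])
REFERENCE_DISTANCE = 100.0   # the known-distance equivalent

def correct_colour(sample: np.ndarray, distance_mm: float, cos_tilt: float) -> np.ndarray:
    """Correct a raw colour sample to the known-distance, normal-incidence
    equivalent using the distance and tilt calibrations."""
    gain_d = np.interp(REFERENCE_DISTANCE, cal_distances, cal_intensity) \
           / np.interp(distance_mm, cal_distances, cal_intensity)
    gain_t = 1.0 / max(cos_tilt, 0.2)   # Lambertian model, clamped where it breaks down
    return np.clip(sample * gain_d * gain_t, 0.0, 255.0)
```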
A standard colorimetric calibration is carried out using reference colors, such as Macbeth charts, that are mounted normal to the color camera axis at a known distance from the probe. Corrections are made to a commonly used color standard, such as the CIE standard. Individual pixels in the camera may be color- and intensity-corrected.
Some of the above calibrations vary little among probes manufactured to the same design. This is probably due to tight manufacturing tolerances. The calibration information can be incorporated into the software as, for example, constants, tables, or equations for the probe design. Other calibrations are carried out once on the setup of each probe after manufacture. Still other calibrations could be carried out each time the scanning system is used; for example, the scanning of a white surface at a known distance will set the lamp intensity relative to the intensity when the bulbs were new.
Referring now to
1. Each color image 320 is corrected using calibration and geometric data.
2. For each surface element 321, the color image whose normal 323 is closest in orientation to the normal 322 of the surface element 321 is selected as the master image, and the texture map coordinates for that surface element are given by the mapping of that surface element onto that master image. In this case, the closest image normal is that of image 320a.
3. The other color images that map onto the surface element are then processed. If the difference between the surface normal of the surface element and the normal of a color image is above a certain tolerance, then that image is ignored, because the color quality obtained in the image degrades significantly as the surface orientation of the object relative to the image becomes very steep. The part of the master image onto which the surface element maps is then improved by a weighted average of all the mapped color image parts, the weighting being the cosine of the difference in surface normal between the surface element and the color image; a sketch of this blending follows this list.
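A minimal sketch of the cosine-weighted blending in step 3, with the too-steep images discarded first; the angle tolerance is an assumed value:

```python
import numpy as np

def blend_texel(samples, surf_normal, max_angle_deg=60.0):
    """Blend one surface element's colour from the images that map onto it.

    samples -- list of (colour, image_normal) pairs, master image first.
    Images viewing the element too obliquely are ignored; the rest are
    averaged with the cosine of the normal difference as the weight.
    """
    cos_limit = np.cos(np.radians(max_angle_deg))
    total, weight = np.zeros(3), 0.0
    for colour, img_normal in samples:
        c = float(np.dot(surf_normal, img_normal))  # cosine of the normal difference
        if c < cos_limit:
            continue                                 # too steep: colour quality degraded
        total += c * np.asarray(colour, float)
        weight += c
    return total / weight if weight > 0.0 else total
```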
The apparatus and methods disclosed above each singly produce an improved color “copy” of the 3D model and a significant commercial advantage.
Ways of improving the scanning timing and consequently reducing geometrical errors are disclosed.
Where no electrical triggering is possible, to reduce the inaccuracy caused by the time difference between the recording of the arm position and the capturing of the frame, the following method is employed:
1. With reference now to
2. A frame is requested.
3. When the frame has been captured C, the time t2 is recorded. There is a known delay of T/2, with little variability, from the middle of the frame capture to this time t2; this delay is largely dependent on the shutter open time T.
4. The arm position A after the frame capture is recorded, and the time t3 of this is recorded.
5. The arm position in the middle of the frame is estimated by interpolating in six degrees of freedom between the two arm positions B,A using the time (t2-T/2) at the middle of the frame capture as the interpolation weighting between t1 and t3.
6. In the case of a long interrupt, if the difference between t1 and t3 is significantly large, then the data is deleted.
This interpolation method can increase the accuracy of a non-triggered system by a large amount and is extremely significant in the quest to obtain geometrically accurate data.
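A minimal sketch of step 5's six-degree-of-freedom interpolation, assuming each recorded arm pose is a position vector plus a unit quaternion (linear interpolation of position, spherical linear interpolation of orientation); the pose representation is an assumption, since the arm's native format is not specified here:

```python
import numpy as np

def interpolate_pose(p1, q1, t1, p2, q2, t3, t2, T):
    """Estimate the arm pose at the middle of the frame capture, time t2 - T/2.
    (p1, q1) was recorded at t1 (before) and (p2, q2) at t3 (after)."""
    w = ((t2 - T / 2.0) - t1) / (t3 - t1)    # interpolation weight between t1 and t3
    p = (1.0 - w) * np.asarray(p1, float) + w * np.asarray(p2, float)

    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    d = float(np.dot(q1, q2))
    if d < 0.0:                               # take the shorter rotation arc
        q2, d = -q2, -d
    if d > 0.9995:                            # nearly parallel: plain lerp is stable
        q = (1.0 - w) * q1 + w * q2
    else:                                     # spherical linear interpolation
        th = np.arccos(d)
        q = (np.sin((1.0 - w) * th) * q1 + np.sin(w * th) * q2) / np.sin(th)
    return p, q / np.linalg.norm(q)
```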
In addition, the operating system under which the interpolation software runs may be set to prioritize the interpolation software as high priority so that the introduction of delays due to other software being executed is minimized. Even if another software function interrupts this process, the validity of the process is not impaired unless the interrupting process is of extraordinarily long duration. Prioritization is not essential, but will contribute to reduced timing error where prioritizing is available in the operating system.
In the case where triggering is possible, there are many methods of carrying it out. One method is, with reference now to
The operator interface means alone—not including the standard computer means such as mouse and keyboard—can be used to control the scanning and computer model generation process and the functionality of the options that can be actuated. The operator interface means include means for navigating menus such as buttons, foot pedals, joysticks, trackballs, and the position-sensing means—arm or remote position sensor.
Using any of the above means, the operator can simply select the required operations and operating parameters, which could include, for example, being able to:
Complex surfaces can be created from marked surface patch boundaries. Referring now to
Referring now to
This invention is a general 3D model-making device and has wide-ranging applicability. The application industries for this invention include design stylists who need to turn clay objects into computer models quickly and accurately; games developers and animators who need to convert new characters into 3D data sets for animation; shoe manufacturers who need to make custom shoes; automotive manufacturers who need to model the actual cable and pipe runs in confined spaces; and medical applications that include radiotherapy and wound treatment. Altogether, some 200 applications have been identified for this invention.
Referring now to
There is a need by automobile manufacturers to identify the actual route of pipes and cables in confined areas, such as an engine compartment. Automobile manufacturers are trying to model in 3D CAD all aspects of a car. They need some way of scanning pipes and cables in the car reference system so that high level 3D models of the pipes and cables are output that can be introduced into the CAD system for identifying actual routing and potential interferences. In the scanning of pipes and cables, for instance, in confined spaces, if there is a problem with black or shiny items not being scannable, these can be first dusted with a white powder that is easily removed after scanning.
Referring now to
The intermediate data structure in which the stripe sections are collated could be the standard stripe section structure 303, but includes the changes in mode and the orientation of the probe for each section. In scanning pipes and cables, panel sections along which the pipes and cables run are also captured 342a, 342d. Where there is no contact between the pipe and the panel, there is a jump or break in the stripe section. These can be flagged in the data structure with jump flags 305 and break flags 304.
To be useful to an automobile manufacturer, a high level model should be created and output from this data. A polygonization or surfacing method joins the sections together and can handle the joining of pipes, panels, etc. The result is high level models 350 to 352. If more information is known about the pipe or cable, such as its section if it is constant or its form even if the form's dimensions change, e.g., circular but varying diameter, the model 351 can be automatically expanded to 353. Alternatively, two scanned sides of the same pipe can be automatically joined. This gives the automobile manufacturer the high level model that he needs.
As will be understood by persons skilled in the art, there are various modifications within the scope of the present invention. For example, the color camera does not need to be included. A single camera could be utilized for both color and position sensing. The filter in the probe could be a narrow band-pass filter or a red high-pass filter, as required. The system is adaptable to many types of model generation, not just those discussed herein. The data collected by the probe could be used for other applications and could be stored for dissemination elsewhere, for example by electronic mail. The probe can be a stripe or an area probe. The display can be mounted anywhere, depending upon the application requirements.
While the preferred embodiment of the invention has been illustrated and described, it will be appreciated that various changes can be made therein without departing from the spirit and scope of the invention.