A method and system of merging a pair of images to form a seamless panoramic image including the following steps. A set of feature points are extracted along the edges of the images, each feature point defining an edge orientation. A set of registration parameters is obtained by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images. A seamless panoramic image is rendered using the first and second images with the set of registration parameters.
1. A method of merging a different pair of images to form a seamless panoramic image comprising:
extracting a set of feature points along the edges of the images, each feature point defining an edge orientation, wherein extracting the set of feature points along the edges of the images includes applying wavelet transforms to the images;
obtaining a set of registration parameters by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images; and
rendering a seamless panoramic image using the first and second images with the set of registration parameters.
8. An apparatus for merging a different pair of images to form a seamless panoramic image comprising:
means for extracting a set of feature points along the edges of the images, one or more feature points defining an edge orientation, wherein said means for extracting the set of feature points along the edges of the images includes means for applying wavelet transforms to the images;
means for obtaining a set of registration parameters by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images; and
means for rendering a seamless panoramic image using the first and second images with the set of registration parameters.
18. A system for merging a pair of images to form a panoramic image comprising:
a memory capable of storing one or more images acquired from an image device; and
at least one processor coupled to said memory, said processor being capable of:
extracting a set of feature points along the edges of the images by applying wavelet transforms to the images, one or more feature points defining an edge orientation;
obtaining a set of registration parameters by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images; and
rendering a seamless panoramic image using the first and second images with the set of registration parameters.
6. A system for merging a pair of images to form a panoramic image comprising:
an image device which, in operation, acquires a series of the images;
a storage for storing the series of images;
a memory which stores computer code; and
at least one processor which executes the computer code to:
extract different sets of feature points along the edges of each input image, each feature point defining an edge orientation;
extract a set of feature points along the edges of the images by applying wavelet transforms to the images, each feature point defining an edge orientation;
obtain a set of registration parameters by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images; and
render a seamless panoramic image using the first and second images with the set of registration parameters.
13. An article of manufacture comprising a storage medium having instructions stored thereon that, if executed, result in merging of a different pair of images to form a seamless panoramic image by:
extracting a set of feature points along the edges of the images, each feature point defining an edge orientation, wherein extracting the set of feature points along the edges of the images includes applying wavelet transforms to the images;
obtaining a set of registration parameters by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images; and
rendering a seamless panoramic image using the first and second images with the set of registration parameters.
2. The method of claim 1, wherein obtaining the set of registration parameters comprises:
calculating the edge orientation of each feature point;
comparing an orientation difference between the matching pair;
calculating a value of correlation of the matching pair; and
comparing the value of correlation with a predefined threshold.
3. The method of claim 1, wherein obtaining the set of registration parameters comprises:
calculating an edge correlation for each image;
locating a feature point whose edge response is a maxima within a window; and
comparing the maxima with a predefined threshold.
4. The method of claim 1, wherein obtaining the set of registration parameters comprises:
determining an initial set of matching pairs for registration;
calculating a quality value for the initial set of matching pairs;
updating the matching result according to the quality value of the match;
imposing an angle consistency constraint to filter out impossible matches; and
using a voting technique to obtain the registration parameters.
5. The method of claim 1, wherein rendering the seamless panoramic image comprises:
dynamically adjusting the intensity differences between adjacent images; and
properly blending an intensity difference between consecutive images.
7. The system of claim 6, wherein the at least one processor is further capable of:
calculating an edge correlation for each image;
locating a feature point whose edge response is a maxima within a window; and
comparing the maxima with a predefined threshold.
9. An apparatus as claimed in claim 8, wherein said means for obtaining a set of registration parameters comprises:
means for calculating the edge orientation of one or more feature points;
means for comparing the orientation difference between the matching pair;
means for calculating the value of correlation of the matching pair; and
means for comparing the value of correlation with a predefined threshold.
10. An apparatus as claimed in claim 8, wherein said obtaining means comprises:
means for calculating an edge correlation for one or more images;
means for locating the feature point whose edge response is a maxima within a window; and
means for comparing the maxima with a predefined threshold.
11. An apparatus as claimed in claim 8, wherein said obtaining means comprises:
means for determining an initial set of matching pairs for registration;
means for calculating a quality value for the initial set of matching pairs;
means for updating the matching result according to the quality value of the match;
means for imposing an angle consistency constraint to filter out impossible matches; and
means for using a voting technique to obtain the registration parameters.
12. An apparatus as claimed in claim 8, wherein said means for rendering comprises:
means for dynamically adjusting the intensity differences between adjacent images; and
means for properly blending the intensity difference between consecutive images.
14. An article of manufacture as claimed in claim 13, wherein the instructions, if executed, further result in merging of a different pair of images to form a seamless panoramic image by:
calculating the edge orientation of one or more feature points;
comparing the orientation difference between the matching pair;
calculating the value of correlation of the matching pair; and
comparing the value of correlation with a predefined threshold.
15. An article of manufacture as claimed in claim 13, wherein the instructions, if executed, further result in merging of a different pair of images to form a seamless panoramic image by:
calculating an edge correlation for each image;
locating the feature point whose edge response is a maxima within a window; and
comparing the maxima with a predefined threshold.
16. An article of manufacture as claimed in claim 13, wherein the instructions, if executed, further result in merging of a different pair of images to form a seamless panoramic image by:
determining an initial set of matching pairs for registration;
calculating a quality value for the initial set of matching pairs;
updating the matching result according to the quality value of the match;
imposing an angle consistency constraint to filter out impossible matches; and
using a voting technique to obtain the registration parameters.
17. An article of manufacture as claimed in claim 13, wherein the instructions, if executed, further result in merging of a different pair of images to form a seamless panoramic image by:
dynamically adjusting the intensity differences between adjacent images; and
properly blending the intensity difference between consecutive images.
19. The system as claimed in claim 18, wherein said processor is further capable of:
calculating an edge correlation for each image;
locating the feature point whose edge response is a maxima within a window; and
comparing the maxima with a predefined threshold.
20. The system as claimed in claim 18, wherein said processor is further capable of:
determining an initial set of matching pairs for registration;
calculating a quality value for the initial set of matching pairs;
updating the matching result according to the quality value of the match;
imposing an angle consistency constraint to filter out impossible matches; and
using a voting technique to obtain the registration parameters.
The present invention relates to an apparatus, algorithm, and method for stitching different pieces of images of a scene into a panoramic environment map.
The most common way to electronically represent the real world is with image data. Unlike traditional graph-based systems, there are systems which use panoramic images to construct a virtual world. The major advantage of a system which uses panoramic images is that very vivid and photo-realistic rendering results can be obtained even when using PCs. In addition, the cost of constructing the virtual world is independent of scene complexity. In such systems, panoramic images are stitched together into a panoramic map from several individual images which are acquired by rotating a camera horizontally or vertically. This panoramic map can be used in different applications such as movie special effects, the creation of virtual reality, or games. A typical problem is how to stitch the different pieces of a scene into a larger picture or map. One approach to address this problem is to manually establish correspondences between images to solve unknown parameters of their relative transformation. Because manual methods are tedious for large applications, automatic schemes are preferably used for generating a seamless panoramic image from different pieces of images.
One proposed approach uses a nonlinear minimization algorithm for automatically stitching panoramic images by minimizing the discrepancy in intensities between images. This approach has the advantage of not requiring easily identifiable features. However, this technique does not guarantee finding the global minimum if the starting points are not properly selected. Further, because the optimization process is time-consuming, the approach is inefficient. In this invention, the domain of images under consideration is panoramic images.
The invention allows users to generate panoramic images from a sequence of images acquired by a camera rotated about its optical center. In general, the invention combines feature extraction, correlation, and relaxation techniques to get a number of reliable and robust matching pairs used to derive registration parameters. Based on the obtained registration parameters, different pieces of consecutive images can be stitched together to obtain a seamless panoramic image.
In a first aspect, a method of merging a pair of images to form a seamless panoramic image includes the following steps. A set of feature points along the edges of the images is extracted, each feature point defining an edge orientation. A set of registration parameters is obtained by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images. A seamless panoramic image is rendered using the first and second images with the set of registration parameters.
The invention provides a feature-based approach for automatically stitching panoramic images acquired by a rotated camera and obtaining a set of matching pairs from a set of feature points for registration. Since the feature points are extracted along the edges, each feature point specifies an edge orientation. Because the orientation difference between two panoramic images is relatively small, the difference between edge orientations of two feature points is also small if they are good matches. Based on this assumption, edge information of feature points can be used to eliminate in advance many false matches by checking their orientation difference. Moreover, many unnecessary calculations involving cross-correlation can be screened in advance, thereby significantly reducing the search time needed for obtaining correct matching pairs. After checking, by calculating the value of correlation of the remaining matching pairs, a set of possible matches can be selected with a predefined threshold. The set of possible matches are further verified through a relaxation scheme by calculating the quality of their matches. Once all of the correct matching pairs are found, they are then used to derive registration parameters. In this invention, an iterative scheme is applied to increase the reliability in providing matching results. Since only three iterations or fewer are needed and only a few feature points are involved in the matching pairs, the whole procedure can be accomplished very efficiently. Also, as discussed above, because the orientation difference of two feature points is checked in advance (before matching), many calculations involving cross-correlation are not required and the efficiency of stitching is significantly improved. Compared with conventional algorithms, the proposed scheme offers improved efficiency and reliability for stitching images.
Embodiments of this aspect of the invention may include one or more of the following features. In one embodiment, a set of feature points is first extracted through wavelet transforms. Among other advantages, the invention uses wavelet transforms to obtain a number of feature points with edge orientations. Such edge information can speed up the entire registration process by eliminating many impossible matches in advance and avoiding many unnecessary calculations of correlation. The method determines a number of reliable and robust matching pairs through relaxation. The method also measures the quality of a matching pair, imposes an angle consistency constraint for improving the robustness of registration, and uses a voting concept to get the desired solution from the set of final matching results.
In other embodiments, the method forms the final panoramic image with the help of the registration results. In particular, the method adjusts the intensity differences between consecutive input images and blends the intensities of adjacent images to obtain a seamless panoramic image. The final panoramic images can then be used to build a virtual world.
Still other embodiments may include one or more of the following features:
For example, the method selects a number of feature points through wavelet transforms. Each feature point is associated with an edge orientation so that the speed of the registration process is increased.
The method uses an angle constraint to construct a set of matching pairs, which are used to obtain reliable matching results through a relaxation and a voting technique. The set of matching results are then used to form the final seamless panoramic image.
Constructing an initial set of matching pairs for registration includes comparing the edge orientation differences of feature points in one image and its corresponding feature points in another, calculating the values of correlation of each possible matching pair, and thresholding them with a predefined threshold.
Getting reliable matching results through relaxation and a voting technique includes calculating the quality of a matching pair, imposing angle consistency constraint to filter out impossible matching pairs, updating matching results through relaxation, and using the voting technique to obtain the reliable registration parameters. In addition, it refines the final registration results by using the correlation technique with a proper starting point.
Forming the final panoramic images includes dynamically adjusting and properly blending the intensity differences between adjacent images.
In another aspect, the invention features a system for merging pairs of images to form a panoramic image. The system includes an imaging device which, in operation, acquires a series of images, a storage for storing a series of images, a memory which stores computer code, and at least one processor which executes computer code to extract a set of feature points along the edges of the images, each feature point defining an edge orientation and to obtain a set of registration parameters by determining an initial set of feature points from a first one of the images which matches a set of feature points of a second one of the images, and to render a seamless panoramic image using the first and second images with the set of registration parameters.
Other advantages and features of the invention will become apparent from the following description, including the claims and the drawings.
Referring first to the flow diagram of
In general, it is difficult to seamlessly stitch two adjacent images together to form a panoramic image due to perspective distortion introduced by a camera. To remove the effects of this distortion, these images are preferably reprojected onto a simple geometry, e.g., a cube, a cylinder, or a sphere. In many applications, a cylindrical geometry is preferable since its associated geometrical transformation is simple. In this example, the cylindrical geometry is used.
and
Moreover, since the radius r is equal to f, Equations (1) and (2) can be rewritten as follows:
u = f tan−1(x/f)
and
Based on Eqs. (3) and (4), input images are provided and then (step 10) warped into a cylindrical map for further registration to construct a complete panoramic scene (step 11).
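As a concrete sketch of this warping step, the mapping below implements Eq. (3); the formula for the vertical coordinate v is not legible in this text, so the standard cylindrical form used here is an assumption:

```python
import numpy as np

def to_cylindrical(x, y, f):
    """Project image-plane coordinates (x, y) onto a cylinder of radius f
    (the focal length). u follows Eq. (3); the v formula is the standard
    cylindrical companion and is assumed, not taken from the text."""
    u = f * np.arctan(x / f)             # u = f * tan^-1(x / f), Eq. (3)
    v = f * y / np.sqrt(x * x + f * f)   # assumed vertical mapping
    return u, v
```

Points on the optical axis (x = 0) map to themselves, which is one quick sanity check on the geometry.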
Referring again to
and
Let
and
At each scale 2j, the 2-D wavelet transform of a function I(x,y) in L2(R2) can be decomposed into two independent directions as follows:
W2
Basically, these two components are equivalent to the gradients of I(x,y) smoothed by S(x,y) at scale 2j in the x and y directions. At a specific scale s=2j, the modulus of the gradient vector of I(x,y) can be calculated as follows:
M2
If the local maximum of M2
where n is a positive integer indicating the number of scales involved in the multiplication, and j represents the initial scale for edge correlation. Rn reveals a peak whenever a true edge exists and is suppressed by the multiplication process if a point at location (x,y) is not a true edge. Thus, using the relationship for Rn(j, x, y) above, the noise in an image can be suppressed while the true edges can be retained. In one embodiment of the invention, the number of scales for multiplication is chosen to be 2. In order to conserve the energy level, Rn(j, x, y) should be normalized as follows:
where
and
During the feature point selection process, an edge point is recognized as a candidate if its corresponding normalized edge correlation R2(j, x, y) is larger than its corresponding modulus value. Basically, the above mentioned process is equivalent to detecting an edge point with the strongest edge response in a local area. The three conditions which will be used to judge whether a point P(x,y) is a feature point or not are as follows:
where Np is the neighborhood of P(x,y) (step 34).
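The multiscale edge-correlation idea above can be illustrated with the sketch below. The choice of n = 2 scales follows the text, while the repeated box filter standing in for the wavelet smoothing function S(x, y), and all names, are illustrative simplifications:

```python
import numpy as np

def edge_correlation(img, n=2):
    """Sketch of the multiscale edge correlation R_n: multiply the gradient
    moduli at n successive smoothing scales, so that true edges (which
    persist across scales) are reinforced while noise is suppressed by the
    multiplication. A 3x3 box filter stands in for the smoothing function."""
    def box_smooth(a):
        # 3x3 mean filter with edge padding, applied via shifted sums.
        p = np.pad(a, 1, mode='edge')
        h, w = a.shape
        return sum(p[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0

    R = np.ones_like(img, dtype=float)
    smoothed = img.astype(float)
    for _ in range(n):
        smoothed = box_smooth(smoothed)
        gy, gx = np.gradient(smoothed)
        R *= np.hypot(gx, gy)   # modulus of the gradient at this scale
    return R
```

On a step-edge image, R peaks along the edge and vanishes in flat regions, which is the behavior the normalized R2(j, x, y) test exploits when selecting feature points.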
Referring to
Arg(W2
However, the above representation can be very sensitive to noise. Therefore, an edge tracking technique plus a line-fitting model is used to solve the noise problem (step 41).
Let P be a feature point and Ne be its neighborhood. Since P is an edge point, there should exist an edge line passing through it. By considering P as a bridge point, an edge line passing through P can be determined by searching in all the directions from P. All the edge points on the edge are then used as candidates for determining the orientation of the edge line. During the searching process, the edge connection constraint and the direction consistency constraint are used to restrict the searching domain. The edge connection constraint means that if Ne contains another edge line l but l does not pass through P, all edge points in l will not be included in estimating the orientation of P. In certain cases, however, there will exist more than one edge line passing through P. In these cases, the first line detected is adopted to estimate the orientation. Let l1 denote this line. The direction consistency constraint means all the edge points along other edge lines whose orientations are inconsistent with l1 are not included to estimate the orientation of P. In this way, a set of edge points can be selected and then used to estimate the orientation of P using a line-fitting model. In other embodiments, other edge tracking techniques can also be applied to provide a better estimation.
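A minimal sketch of the line-fitting step, using a total-least-squares (principal-axis) fit as an assumed stand-in for the line-fitting model named in the text:

```python
import numpy as np

def fit_edge_orientation(points):
    """Estimate the orientation (degrees, modulo 180) of an edge line from
    a set of selected edge-point coordinates: the principal eigenvector of
    the centered scatter matrix gives the best-fit line direction."""
    pts = np.asarray(points, dtype=float)
    pts -= pts.mean(axis=0)
    # Eigenvector with the largest eigenvalue of the 2x2 scatter matrix.
    _, vecs = np.linalg.eigh(pts.T @ pts)
    direction = vecs[:, -1]
    angle = np.degrees(np.arctan2(direction[1], direction[0]))
    return angle % 180.0   # an edge orientation is defined modulo 180 degrees
```

Unlike a y-on-x least-squares fit, the principal-axis fit handles near-vertical edges without blowing up, which matters since edges of all orientations occur.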
After the orientation is estimated, all feature points u will associate with an edge orientation A(u). For a feature point pi in the set of points FPI
θij=A(qj)−A(pi).
In fact, if pi and qj provide a good match, the value of θij will be small since the orientation of image Ia is similar to that of Ib. Assuming this is the case, step 14 (
For a feature point pi in FPI
|A(pi)−A(qi)|<10°. Condition 1:
Adding this criterion will significantly speed up the search time. On the other hand, if pi and qj form a good match, the similarity degree between pi and qj should be larger. A cross-correlation, which can be used to measure the similarity degree between pi and qj (step 46), is defined as follows (step 32):
where ui and uj are the local means of pi and qj, respectively; σi and σj are the local variances of pi and qj, respectively; and (2M+1)2 represents the area of the matching window. Based on this correlation measure, a pair {pi, qj} is qualified as a possible matching pair if the following conditions are satisfied (step 33):
and
Ci
Condition 2 means that, given a feature point pi, it is desired to find a point qj∈FPI
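The two conditions can be sketched together as follows. The window comparison is a plain normalized cross-correlation; the 10° bound is the text's Condition 1, while the 0.8 correlation threshold and the feature representation (an orientation plus a local window per feature) are assumptions for illustration:

```python
import numpy as np

def ncc(wa, wb):
    """Normalized cross-correlation between two equally sized windows."""
    a = wa - wa.mean()
    b = wb - wb.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def candidate_matches(feats_a, feats_b, angle_tol=10.0, ncc_thresh=0.8):
    """Build the initial matching set: a pair is even considered only if the
    edge-orientation difference is below angle_tol (Condition 1), and kept
    if its correlation exceeds ncc_thresh (an assumed Condition 2 value).
    Each feature is an (orientation_degrees, window) tuple (illustrative)."""
    pairs = []
    for i, (ang_a, win_a) in enumerate(feats_a):
        for j, (ang_b, win_b) in enumerate(feats_b):
            if abs(ang_a - ang_b) >= angle_tol:
                continue                     # orientation pre-filter: skip NCC
            if ncc(win_a, win_b) > ncc_thresh:
                pairs.append((i, j))
    return pairs
```

The orientation test costs one subtraction per pair, so every pair it rejects saves a full window correlation, which is the efficiency gain the text describes.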
Referring to
where
with the two predefined thresholds
The contribution of a pair {nk1nk2} in NPp
Referring to
then the relaxation procedure can be formulated as follows:
Iterate {
Compute the quality for each candidate match;
Choose the best possible candidates for minimizing F according to the quality value GI
} until F converges.
There are several strategies for updating the matching candidates. In one application an update strategy, referred to here as “some-looser-take-nothing” is used to update the matching candidates (step 52). First, according to the quality value of GI
On the other hand, in order to make the matching results more reliable, the method includes an “angle consistency” constraint (step 53) within the first iteration of the relaxation process to further eliminate impossible matching pairs. That is, if {pi, qi} and {pj, qj} are well matched, the angle between the vectors pipj and qiqj must be close to zero. In this case, during this first iteration, for each element {pi, qi} in MPI
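The angle-consistency test can be sketched as below; the 5° tolerance is an illustrative choice, not a value given in the text:

```python
import numpy as np

def angle_consistent(p_i, q_i, p_j, q_j, tol_deg=5.0):
    """Angle-consistency constraint: if {p_i, q_i} and {p_j, q_j} are both
    good matches, the vectors p_i->p_j and q_i->q_j should be nearly
    parallel (angle close to zero). tol_deg is an assumed threshold."""
    v1 = np.subtract(p_j, p_i)
    v2 = np.subtract(q_j, q_i)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return angle < tol_deg
```

Because a rotated camera with a fixed optical center induces (after cylindrical warping) nearly pure translation between adjacent images, any pair of true matches must displace in the same direction, which is what this test enforces.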
After applying the relaxation process, a set of reliable matches is obtained. Referring to
vi = ui + T, for i = 1, 2, 3, . . . , Ne,
where T is the desired solution. However, in real cases, different pairs {uivi} will lead to different offsets Ti. Therefore, a voting technique is used to measure the quality of different solutions Ti. Let S(i) denote a counter which records the number of solutions Tk consistent with Ti. Two solutions Ti and Tk are said to be consistent if the distance between Ti and Tk is less than a predefined threshold. Since there are Ne elements in CPI
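The voting step can be sketched as follows, with each matching pair proposing an offset and the most widely supported offset winning; the 2-pixel consistency distance is an assumed threshold:

```python
import numpy as np

def vote_offset(pairs, dist_thresh=2.0):
    """Voting sketch: each matching pair (u, v) proposes an offset
    T_i = v - u; S(i) counts how many proposals T_k lie within dist_thresh
    of T_i, and the offset with the largest count wins."""
    offsets = [np.subtract(v, u) for u, v in pairs]
    scores = [sum(np.linalg.norm(t - s) < dist_thresh for s in offsets)
              for t in offsets]
    return offsets[int(np.argmax(scores))]
```

A single bad pair proposes an isolated offset and collects almost no votes, so the consensus offset is robust to residual false matches that survived relaxation.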
Note that due to noise and image quality, the positions of feature points will not be precisely located and the accuracy of T is affected. In practical implementation, the y-component
be the horizontal gradient of the point (ux, k) in Ib, where ux is the x-coordinate of u. Instead of using the starting point u directly, we use another starting point ū for refining the desired offset
In general, when stitching two adjacent images, discontinuities of intensity exist between their common areas. Therefore, step 17 of
of Ia and Ib, that is,
where A is the overlapping area of Ia and Ib, |A| is the number of pixels in A, p(i) is a pixel in Ia, and q(i) is its corresponding pixel in Ib.
In particular, referring to
and
After this adjusting step, the intensities of Ia and Ib in
The second stage uses a ray-casting method to blend different pixel intensities together. Referring to
where da is the distance between pi and Ia, db is the distance between qi and Ib, and t is an adjustable parameter. Using Equation (10), the intensities in Ia are gradually changed to approach the intensities of pixels in Ib such that the final composite image I looks very smooth. In fact, if the blending area is chosen too large, a “ghostlike” effect will occur, particularly when moving objects exist in the common overlapping area between Ia and Ib. However, since the intensities of Ia and Ib have been adjusted, the blending width can be chosen small such that the so-called ghostlike effect is significantly reduced. In one preferred embodiment, the blending width is chosen as one-third of the original width of the overlapping area.
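Since the exact form of Equation (10) is not reproduced in this text, the following distance-weighted blend is only an assumed sketch of the described behavior: the weight shifts from Ia to Ib across the overlap, modulated by the parameter t:

```python
def blend_pixel(pa, pb, da, db, t=1.0):
    """Assumed sketch of Eq. (10): blend corresponding pixel intensities
    pa (from Ia) and pb (from Ib) with a weight driven by the distances
    da, db of the pixel to each image, shaped by the parameter t."""
    w = (db ** t) / (da ** t + db ** t)   # w -> 1 near Ia, w -> 0 near Ib
    return w * pa + (1.0 - w) * pb
```

At the Ia edge of the overlap (da = 0) the result equals pa, at the Ib edge it equals pb, and raising t sharpens the transition between the two.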
Referring to
Referring to
Referring to
Inventors: Jun-Wei Hsieh, Cheng-Chin Chiang, Der-Lor Way
Assignments: Sep 28 2006, TransPacific IP Ltd. (assignment on the face of the patent); Nov 24 2006, Industrial Technology Research Institute to Transpacific IP Ltd (assignment of assignors interest), Reel 025542, Frame 0601.