An omnidirectional video camera captures images of the environment while moving along several intersecting paths forming an irregular grid. These paths define the boundaries of a set of image loops within the environment. For arbitrary viewpoints within each image loop, a 4D plenoptic function may be reconstructed from the group of images captured at the loop boundary. For an observer viewpoint, a strip of pixels is extracted from an image in the loop in front of the observer and paired with a strip of pixels extracted from another image on the opposite side of the image loop. A new image is generated for an observer viewpoint by warping pairs of such strips of pixels according to the 4D plenoptic function, blending each pair, and then stitching the resulting strips of pixels together.
1. A method of gathering image data of an environment, comprising:
recording images of the environment at each of various points along intersecting paths within the environment with a camera being conveyed at a fixed height to each of the various points along the intersecting paths, the intersecting paths forming an irregular grid in only two dimensions within the environment, at least a portion of the intersecting paths defining boundaries of an image loop; and
determining a three-dimensional position and orientation relative to the environment corresponding to each image.
2. The method of claim 1, wherein the camera comprises:
a hemispherical field-of-view omnidirectional camera with an effective center-of-projection.
3. The method of
This application is a divisional of application Ser. No. 10/122,337, filed on Apr. 16, 2002, now U.S. Pat. No. 6,831,643, the entire contents of which are hereby incorporated by reference and for which priority is claimed under 35 U.S.C. §120; and this application claims the benefit of U.S. Provisional Application No. 60/283,998 filed Apr. 16, 2001 and U.S. Provisional Application No. 60/294,061 filed May 29, 2001, the contents of which are hereby incorporated by reference in their entireties.
The present invention relates to the generation of virtual environments for interactive walkthrough applications, and more particularly, to a system and process for capturing a complex real-world environment and reconstructing a 4D plenoptic function supporting interactive walkthrough applications.
Interactive walkthroughs are a type of computer graphics application in which an observer moves within a virtual environment. Interactive walkthroughs require detailed three-dimensional (3D) models of the environment. Traditionally, computer-aided design (CAD) systems are used to create these models by specifying the geometry and material properties. Using a lighting model, the walkthrough application can then render the environment from any vantage point. However, such conventional modeling techniques are very time consuming. Further, these techniques fail to adequately recreate the detailed geometry and lighting effects found in most real-world scenes.
Computer vision techniques attempt to create real-world models by automatically deriving the geometric and photometric properties from photographic images of the real-world objects. These techniques are often based on the process of establishing correspondences (e.g., stereo matching), a process that is inherently error prone. Further, stereo matching is not always reliable for matching a sufficient number of features from many images, in order to create detailed models of complex scenes.
Image-based rendering (IBR) has been used to create novel views of an environment directly from a set of existing images. For each new view, IBR reconstructs a continuous representation of a plenoptic function from a set of discrete image samples, thus avoiding the need to create an explicit geometric model.
A seven-dimensional (7D) plenoptic function was introduced by E. H. Adelson and J. Bergen in “The Plenoptic Function and the Elements of Early Vision,” Computational Models of Visual Processing, MIT Press, Cambridge, Mass., 3–20, 1991, which is hereby incorporated by reference in its entirety. The 7D plenoptic function describes the light intensity as viewed according to the following dimensions: viewpoint (quantified in three dimensions); direction (two dimensions); time; and wavelength. By restricting the problem to static scenes captured at one point in time, at fixed wavelengths (e.g., red, green, and blue), Adelson and Bergen's 7D plenoptic function can be reduced to five dimensions. In practice, all IBR techniques generate a plenoptic function, which is a subset of the complete 7D plenoptic function.
A 2D plenoptic function can be obtained by fixing the viewpoint and allowing only the viewing direction and zoom factor to change. Further, many examples exist of 2D plenoptic IBR methods that stitch both cylindrical and spherical panoramas together from multiple images. Two such examples are described in the following articles: S. E. Chen, "QuickTime VR—An Image-Based Approach to Virtual Environment Navigation", Computer Graphics (SIGGRAPH '95), pp. 29–38, 1995; and R. Szeliski and H. Shum, "Creating full view panoramic image mosaics and texture-mapped models", Computer Graphics (SIGGRAPH '97), pp. 251–258, 1997.
A method using concentric mosaics, which is disclosed in "Rendering with Concentric Mosaics" by H. Shum and L. He, Computer Graphics (SIGGRAPH '99), pp. 299–306, 1999, captures an inside-looking-out 3D plenoptic function by mechanically constraining camera motion to planar concentric circles. Novel images are then reconstructed for viewpoints restricted to the interior of the circle, with horizontal-only parallax. As with the above-mentioned 2D plenoptic modeling, the viewpoint is severely restricted in this method.
The Lumigraph and Lightfield techniques reconstruct a 4D plenoptic function for unobstructed spaces, where either the scene or the viewpoint is roughly constrained to a box. The Lumigraph technique is described in the paper "The Lumigraph," by S. Gortler et al., Computer Graphics (SIGGRAPH '96), pp. 43–54, 1996, and in U.S. Pat. No. 6,023,523, issued Feb. 8, 2000. The Lightfield technique is disclosed in a paper entitled "Light Field Rendering," by M. Levoy and P. Hanrahan, Computer Graphics (SIGGRAPH '96), pp. 171–80, 1996, and in U.S. Pat. No. 6,097,394, issued Aug. 1, 2000. Each of these methods captures a large number of images from known positions in the environment, and creates a 4D database of light rays. The recorded light rays are retrieved from the database when a new viewpoint is rendered. These Lumigraph and Lightfield methods of IBR allow for large models to be stitched together, but their capture is very time-consuming, and the models, so far, are restricted to small regular areas.
A technique for reconstructing a 5D plenoptic function, using images supplemented with depth values, has been described by L. McMillan and G. Bishop in "Plenoptic Modeling: An Image-Based Rendering System," Computer Graphics (SIGGRAPH '95), pp. 39–46, 1995. McMillan and Bishop's technique formulates an efficient image warping operation that uses a reference image to create images for a small nearby viewing area. For real-world environments, depth is computed by manually establishing feature correspondences between two cylindrical projections captured with a small baseline. In this method, expansion to larger environments involves sampling many images from pre-set, closely-spaced viewpoints. Other IBR techniques using 5D functions are also known in the art.
The full 5D plenoptic function, in theory, can reconstruct large, complex environments. However, such 5D representations require the difficult task of recovering depth, which is often error prone and not robust enough to reconstruct detailed complex scenes.
The present invention provides an IBR system and method for performing “Plenoptic Stitching,” which reconstructs a 4D plenoptic function suitable for walkthrough applications. The present invention reduces the 7D plenoptic function to a 4D function by fixing time, wavelength, and restricting viewpoints to lie within a common plane.
In the present invention, the set of images required for rendering a large, complex real-world environment can be captured very quickly, usually in a matter of minutes. Further, after initialization of the system, all of the processing is performed automatically, and does not require user intervention.
According to an exemplary embodiment of the present invention, images of the environment are acquired by moving a video camera along several intersecting paths. The motion of the camera is restricted to a plane. The intersecting paths form an irregular grid consisting of a group of tiled image loops.
At run-time, a virtual user moves within any of these image loops, and the Plenoptic Stitching process of the invention generates an image representing the user's view of the environment. The image is generated by “stitching” together pixel data from images captured along the boundaries of the loop.
The user is able to move freely from one image loop to the next, thereby exploring the environment. By tiling an environment with relatively constant-sized image loops, an environment of any size and shape can be reconstructed using a memory footprint whose size remains approximately constant throughout processing.
The present invention will become more fully understood from the detailed description given below and the accompanying drawings, which are given for purposes of illustration only, and thus do not limit the present invention.
Overview
The present invention is directed to a system and method for implementing an interactive walkthrough application, where the view of an observer walking through an unobstructed area of a complex environment is rendered according to a reconstructed 4D plenoptic function.
Image Loop Definition
The paths along which the images are captured in step 10 form an irregular grid over the observer plane (x, y). An example of an irregular grid is illustrated in
Plenoptic Stitching Details
An image of a novel view for an arbitrary viewpoint inside the loop 70 is generated by stitching together pixel data from the captured images of the loop according to a 4D plenoptic function constructed for the interior of the loop. The 4D plenoptic function represents light rays in the environment according to the parameters (x, y, u, v). Specifically, these parameters identify the intersection of light beams with the observer plane (x, y) and a secondary surface (u, v). The observer plane is typically, but not necessarily, parallel to the ground plane (i.e., horizontal). Further, the observer plane is set at a fixed height, which typically, but not necessarily, corresponds to the eye-height of an arbitrary observer who is “walking through” the environment 1. One possible way of defining the secondary surface is to sweep a line perpendicular to the observer plane around the perimeter 60 of the environment 1.
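By way of illustration only, the following Python sketch shows one way a light ray might be reduced to the (x, y, u, v) parameters described above, using a polygonal perimeter swept into a vertical secondary surface. The function name, the arc-length-style u coordinate, and the default eye height are illustrative assumptions rather than part of the described method.

```python
import numpy as np

def ray_to_plenoptic_coords(ray_origin, ray_dir, perimeter, eye_height=1.7):
    """Map a 3-D light ray to the 4-D parameters (x, y, u, v): its crossing of
    the horizontal observer plane z = eye_height and its crossing of a vertical
    surface swept around a polygonal perimeter.  Assumes the ray is not
    horizontal (ray_dir[2] != 0).
    """
    o = np.asarray(ray_origin, float)
    d = np.asarray(ray_dir, float)
    poly = np.asarray(perimeter, float)          # (N, 2) perimeter vertices
    # (x, y): intersection with the observer plane.
    t_plane = (eye_height - o[2]) / d[2]
    x, y = (o + t_plane * d)[:2]
    # (u, v): intersection with the swept perimeter, tested edge by edge in 2-D.
    p2, d2 = o[:2], d[:2]
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        e = b - a
        denom = d2[0] * e[1] - d2[1] * e[0]
        if abs(denom) < 1e-12:                   # ray parallel to this edge
            continue
        ap = a - p2
        t = (ap[0] * e[1] - ap[1] * e[0]) / denom   # parameter along the ray
        s = (ap[0] * d2[1] - ap[1] * d2[0]) / denom # fractional position on the edge
        if t > 0.0 and 0.0 <= s <= 1.0:
            u = i + s                            # position along the perimeter
            v = o[2] + t * d[2]                  # height on the secondary surface
            return x, y, u, v
    return None                                  # ray never reaches the perimeter
```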
The Plenoptic Stitching algorithm of the present invention reconstructs a novel image of the environment on an imaging surface, which may be planar, cylindrical, or representative of any other type of projection, which will be readily apparent to those of ordinary skill in the art. The imaging surface spans a particular field-of-view (FOV) in front of the observer and is created by stitching together strips of pixels from the images of the surrounding image loop. To do this, the algorithm defines a viewing frustum as a generalized cone radiating from the observer's viewpoint 110 to the convex hull of the imaging surface.
The algorithm orthographically projects the viewing frustum onto the observer plane, and defines a set of Stitching lines that lie in the observer plane, extending through the viewpoint 110, and spanning the extent of projection of the viewing frustum. Each line corresponds to one strip of the imaging surface. The strips, when stitched together, form the reconstructed image. The remainder of the algorithm is concerned with computing, for each Stitching line, a strip of pixels that samples the environment. As will be readily apparent to those skilled in the art, other methods exist to obtain a set of lines similar to the Stitching lines. Such variations are not to be regarded as a departure from the spirit and scope of the present invention.
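A minimal sketch of how the set of Stitching line directions might be generated for a planar imaging surface is given below; the parameter names (viewing heading in radians, horizontal field of view in degrees, number of strips) are assumptions made for the example and are not taken from the description.

```python
import numpy as np

def stitching_line_directions(heading_rad, fov_deg, num_strips):
    """Unit 2-D direction vectors, in the observer plane, of the Stitching
    lines: one per image strip, spanning the horizontal field of view
    centred on the observer's viewing direction.
    """
    half = np.radians(fov_deg) / 2.0
    angles = heading_rad + np.linspace(-half, half, num_strips)
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)

# For example, 640 strips across a 60-degree field of view, observer facing +x:
directions = stitching_line_directions(heading_rad=0.0, fov_deg=60.0, num_strips=640)
```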
It should be noted that for non-convex image loops 70, a line segment 3 might intersect the loop at more than two points. When this occurs, the two closest intersecting points from opposite sides of the loop 70 are used.
Each strip of pixels for viewpoint 110 is constructed using the images captured at each of the two intersection points of the corresponding line segment 3 with image loop 70. For example, images captured at points 300a and 300b of
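The intersection of one Stitching line with the image loop, and the selection of the nearest captured image on each side of the viewpoint, could be computed roughly as follows. This is an illustrative sketch that treats the loop as a closed polyline of camera positions; the handling of non-convex loops (keeping only the closest crossing on each side) follows the rule stated above.

```python
import numpy as np

def nearest_images_on_loop(viewpoint, direction, loop_positions):
    """For one Stitching line through 'viewpoint' with 2-D 'direction',
    intersect the full line with the closed polyline of camera positions
    'loop_positions' ((N, 2), in capture order) and return the indices of
    the captured images nearest the closest crossing on each side of the
    viewpoint, or None if the line misses the loop.
    """
    vp = np.asarray(viewpoint, float)
    d = np.asarray(direction, float)
    loop = np.asarray(loop_positions, float)
    ahead, behind = [], []                       # (distance, image index) per side
    n = len(loop)
    for i in range(n):
        a, b = loop[i], loop[(i + 1) % n]
        e = b - a
        denom = d[0] * e[1] - d[1] * e[0]
        if abs(denom) < 1e-12:                   # line parallel to this loop edge
            continue
        ap = a - vp
        t = (ap[0] * e[1] - ap[1] * e[0]) / denom   # signed distance along the line
        s = (ap[0] * d[1] - ap[1] * d[0]) / denom   # fractional position on the edge
        if 0.0 <= s <= 1.0:
            idx = i if s < 0.5 else (i + 1) % n      # nearer of the edge's two images
            (ahead if t >= 0.0 else behind).append((abs(t), idx))
    if not ahead or not behind:
        return None
    # Non-convex loops may give more than two crossings; use the closest on each side.
    return min(ahead)[1], min(behind)[1]
```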
In planar and cylindrical image projections, each Stitching line corresponds to a column of pixels in the reconstructed image. Accordingly, these types of images are generated column-by-column. However, for other types of projections (e.g., spherical), the Stitching lines do not necessarily correspond to columns.
Omnidirectional Camera
In an exemplary embodiment, the Plenoptic Stitching algorithm illustrated in
Plenoptic Stitching Using Omnidirectional Images
In an exemplary embodiment in which an omnidirectional camera is used, the Plenoptic Stitching algorithm extracts a radial line of pixels parallel to the corresponding Stitching line from each of the omnidirectional images at or close to the intersection between the Stitching line and the boundaries of the image loop 70. If no single radial line of pixels is exactly parallel to the Stitching line, the two radial lines that are nearest to parallel to the Stitching line are used. Each radial line of pixels contains a set of coefficients (pixel values) that sample a strip of the environment perpendicular to the observer plane (e.g., for a horizontal observer plane, each radial line of pixels samples a vertical strip of the environment).
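The extraction of a radial line of pixels whose direction matches a given Stitching line might look like the following sketch, which uses nearest-neighbour sampling and compensates for the camera's estimated yaw. The sampling density and the interface are assumptions made for illustration.

```python
import numpy as np

def extract_radial_line(omni_image, center, azimuth, camera_yaw, num_samples=240):
    """Sample a radial line of pixels from an omnidirectional image whose
    radial direction corresponds to a given world-space azimuth (e.g. that
    of a Stitching line), compensating for the camera's estimated yaw.
    Nearest-neighbour sampling from the image centre out to the border.
    """
    h, w = omni_image.shape[:2]
    cx, cy = center
    max_r = min(cx, cy, w - 1 - cx, h - 1 - cy)      # stay inside the image
    theta = azimuth - camera_yaw                     # world azimuth -> image angle
    radii = np.linspace(0.0, max_r, num_samples)
    xs = np.clip(np.round(cx + radii * np.cos(theta)).astype(int), 0, w - 1)
    ys = np.clip(np.round(cy + radii * np.sin(theta)).astype(int), 0, h - 1)
    return omni_image[ys, xs]                        # (num_samples, ...) strip of pixels
```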
The present invention constructs a new radial line of pixels for each line segment 3. This radial line of pixels is constructed from the pair of radial lines extracted from images at two sides of the loop 70. The new radial lines of pixels correspond to samplings of the environment that are perpendicular to the observer plane, and are viewed from the (new) observer viewpoint 110. To construct the new radial line, visible and easily discernible environment features (e.g., corners, edges, patterns) common to the extracted pair of radial lines are mapped to their location within the new radial line. This mapping is then used to warp the radial lines of pixels from the captured images to the imaging surface of the observer viewpoint. This process of constructing each radial line of pixels for the new image is described in more detail in the Image Reconstruction section below.
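One simple way to realize such a mapping and warp is a piecewise-linear warp driven by the matched feature positions along the two radial lines, as sketched below. This is an illustrative reading of the mapping described above, not the exact warping operator of the embodiment.

```python
import numpy as np

def warp_radial_line(src_pixels, src_feature_pos, dst_feature_pos, dst_length):
    """Warp one radial line of pixels toward the new viewpoint using a
    piecewise-linear mapping defined by matched feature positions (sample
    indices along the two lines).  Both position arrays must be sorted and
    include the endpoints of their lines.
    """
    src_pixels = np.asarray(src_pixels)
    dst_idx = np.arange(dst_length)
    # For each destination sample, find the corresponding source position
    # by linear interpolation between the mapped feature positions.
    src_idx = np.interp(dst_idx, dst_feature_pos, src_feature_pos)
    src_idx = np.clip(np.round(src_idx).astype(int), 0, len(src_pixels) - 1)
    return src_pixels[src_idx]
```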
In an exemplary embodiment using planar or cylindrical image projections, each image column (or strip) is generated by warping the pixels of the associated pair of radial lines using the aforementioned mapping, and then blending the warped pixels together. The image columns are then "stitched together," i.e., displayed side by side to show the overall reconstructed image. A more detailed description of this process will be discussed below.
Image Capture
In the embodiment illustrated in
Each captured omnidirectional image has a substantially single center-of-projection (COP), which yields simple transformations for obtaining several types of projections (planar, cylindrical, etc.). In a further exemplary embodiment, the camera 217 installed in front of the paraboloidal mirror 215 may comprise a 3-CCD color video camera, or any other appropriate digitizing camera, as will be contemplated by those ordinarily skilled in the art.
Omnidirectional camera 210 is conveyed along the image paths 50 by a conveyance system 240. The camera 210 is set on top of the conveyance system 240 at a fixed height. Omnidirectional camera 210 is connected to a frame-grabbing unit 220 and a storage device 230 for storing the captured images. A control device 260 is also connected to the frame-grabbing unit 220 and the storage device 230 via a network 255. The system further includes two beacons 250a, 250b for estimating the position and orientation of the omnidirectional camera 210.
In an exemplary embodiment, the conveyance system 240 is a motorized cart powered by a battery. The motorized cart may be radio-controlled, to prevent cables or a cart operator from appearing in the captured images. Preferably, the conveyance system 240 will travel at a speed predetermined according to the rate that images are captured and transferred to the data storage device 230, such that the sequence of captured images corresponding to a particular image path 50 will be recorded with small inter-image spacings. For instance, a capture rate equivalent to an observer walking at about one meter/second (m/s) capturing images at a standard video rate of approximately 30 Hz yields images that are spaced apart by a few centimeters. Other forms of conveyance, and methods for controlling the conveyance, may be used in the present invention to transport the camera 210 along the image paths 50.
In an exemplary embodiment, the frame-grabbing unit 220 may include a computer equipped with a frame-grabbing card. The frame-grabbing unit 220 is conveyed along with the omnidirectional camera 210 by conveyance system 240. In such an embodiment, the frame-grabbing unit 220 receives instructions from the control device 260 via network 255, which may comprise wireless Ethernet, or any other type of wired or wireless network. In an alternative embodiment, the frame-grabbing unit 220 is located remotely from the omnidirectional camera 210, and may be situated proximate to, or integrated with, the control device 260. In this embodiment, the frame-grabbing unit 220 captures images received from the camera 210 via the network connection.
The control device 260 preferably includes a computer executing software to control the frame-grabbing unit 220 to capture the images from the camera 210 and store them in the data storage device 230. The control device 260 may also be used to control the conveyance system 240 to travel along the determined image paths 50.
The data storage device 230 preferably allows rapid access and storage of data. Similar to the frame-grabbing unit 220, the data storage device 230 may be situated in the conveyance system 240 with the camera 210, or alternatively, it may be located near the control device, or in any other nearby area.
Processing System
As previously stated, in an exemplary embodiment of the Plenoptic Stitching algorithm, reconstructed images can be generated using either a planar or cylindrical image projection (among other types of possible image projections). In an exemplary embodiment using omnidirectional camera 210, each new column of pixels is reconstructed from a pair of radial lines of pixels extracted from the captured omnidirectional images. Therefore, the preprocessing and stitching phases are described below in terms of mapping and reconstructing image columns from radial lines of pixels.
Pose Estimation
In step 12 of
This pose estimation technique uses beacons, such as 250a and 250b, to compute camera pose according to a camera calibration scheme and beacon-based pose estimation algorithm for omnidirectional cameras. Each beacon 250a, 250b comprises a post equipped with small bright light bulbs, and these posts are placed in two corners of the trackable region 1. It should be noted that the beacons 250a, 250b are not limited to posts equipped with light bulbs, and may comprise any visibly discernible environment feature. The distance between the posts is measured, and the camera 210 is calibrated approximately parallel to the ground, at a chosen height. Before each sequence of images is recorded, an operator initializes the pose estimation procedure by identifying the projections of the light bulbs in the first captured image. Then, the pose estimation algorithm applies well-known image processing techniques to track the light bulbs from image to image and, using a triangulation technique, derives the camera position (x, y) and the orientation (ω). According to an exemplary embodiment, B-splines are fitted to the camera positions along the image paths in order to compensate for noise in the pose estimation. Then, the local average translation speeds are computed, and the camera positions are re-projected onto the spline curve.
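As a rough illustration of a beacon-based scheme of this kind, the sketch below triangulates the camera position from its distances to the two beacons (assumed here to be derived from the beacons' elevation angles and known bulb heights) and then recovers the orientation from the measured image azimuth of one beacon. It is a generic two-beacon triangulation, not the specific calibration and pose estimation algorithm referenced above.

```python
import numpy as np

def estimate_pose(beacon1, beacon2, d1, d2, azimuth_b1, side=+1):
    """Triangulate the camera's (x, y) from its distances d1, d2 to two
    beacons at known ground positions, then recover the yaw from the
    measured image azimuth of beacon 1.  'side' (+1 or -1) selects which
    side of the beacon baseline the camera is on.
    """
    b1 = np.asarray(beacon1, float)
    b2 = np.asarray(beacon2, float)
    base = b2 - b1
    D = np.linalg.norm(base)                       # measured beacon-to-beacon distance
    a = (d1**2 - d2**2 + D**2) / (2.0 * D)         # foot of the perpendicular from the camera
    h = np.sqrt(max(d1**2 - a**2, 0.0))            # offset from the baseline
    perp = np.array([-base[1], base[0]]) / D
    position = b1 + a * base / D + side * h * perp
    dx, dy = b1 - position
    yaw = np.arctan2(dy, dx) - azimuth_b1          # world bearing minus image bearing
    return position, yaw
```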
While the above description provides an exemplary method of computing the camera position and orientation, it should be noted that the present invention is not limited to this particular method. Any method for calculating or estimating camera position and orientation, as contemplated by those ordinarily skilled in the art, may be used in connection with the present invention.
Creating Image Loops and Tracking Features
In step 14 of
As illustrated in step 14 of
Image Reconstruction
The reconstruction-processing device 270 orthographically projects the viewing frustum of the observer onto the observer plane, and a set of Stitching lines are defined through the viewpoint and spanning the extent of the viewing frustum projection, as indicated in step 404. The reconstruction-processing device 270 then selects the next unprocessed Stitching line in step 406, and extracts the two radial lines of pixels associated with the Stitching line in step 408.
In step 410, an optimization algorithm is performed on the pairing of extracted radial lines. In a practical system, both the camera pose estimation algorithm and the feature tracking algorithm introduce errors that cause a non-optimal pairing of radial lines. The optimization uses epipolar geometry to generate an array of rotation corrections to find an optimal set of pairings. The optimization algorithm of step 410 is described in more detail in the Rotation Optimization section below.
In step 412, the reconstruction-processing device 270 generates a mapping between the extracted radial lines of pixels and warps the pixels to the imaging surface of the observer viewpoint (e.g., columns).
In step 414, another optimization is performed on each pair of warped columns. Misalignment between the two warped columns of pixels may be caused by various factors, including feature drifting, lack of sufficient features, and inaccurate distance estimation between omnidirectional images. In the absence of such errors, the pixel samples of the two warped columns computed from the mappings should be aligned. This optimization step compensates for inaccuracies by scaling one column of pixels relative to the other. A more detailed description of this step is provided in the section Column Correlation below.
The warped and scaled pixel columns are then blended in step 416 creating a column of pixels of the new reconstructed image. The blending step is well understood to those of ordinary skill in the art. Afterwards, decision block 418 determines whether there are more Stitching lines to be processed. If so, stitching returns to step 406, and if not, the reconstructed image is displayed in step 420 by “stitching together” these newly generated image columns on the imaging surface, i.e., by displaying these image columns side-by-side.
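The per-view loop of steps 404 through 420 can be summarized by the following skeleton, in which each stage is supplied as a callable. It is hypothetical glue code intended only to show the order of operations, not the implementation of any stage.

```python
def reconstruct_view(stitching_lines, extract_pair, correct_rotation,
                     warp_pair, correlate_scale, blend):
    """Skeleton of the per-view loop (steps 404-420); each stage is passed
    in as a callable so the control flow is visible without committing to
    any particular implementation of the stages.
    """
    columns = []
    for line in stitching_lines:                       # step 406: next Stitching line
        radial_a, radial_b = extract_pair(line)        # step 408: two radial lines
        radial_b = correct_rotation(radial_a, radial_b, line)   # step 410
        col_a, col_b = warp_pair(radial_a, radial_b, line)      # step 412: warp to columns
        col_b = correlate_scale(col_a, col_b)          # step 414: scale one column
        columns.append(blend(col_a, col_b))            # step 416: blend the pair
    return columns                                     # steps 418-420: stitch and display
```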
Reconstruction Details for One Strip of Pixels
Mapping a New Image Column from Tracked Features
Generally, a Stitching line segment 3 will not intersect the image loop 70 exactly at the COP of an omnidirectional image. When this occurs, two radial lines may be extracted for one intersection. For example, if none of the captured images has a COP at point A′ of
In addition, there is generally not a radial line in every captured image exactly parallel to line segment 3. Hence, in an exemplary embodiment, when no radial line is exactly parallel, the two radial lines that are nearest to being parallel to line segment 3 may be blended to form the extracted radial line for each captured image.
The algorithm uses the set of tracked image features to create a mapping between segments of each radial line pair. In general, tracked features, such as F, are not necessarily present in all radial lines of pixels. For the omnidirectional camera 210, the algorithm triangulates the tracked features in the omnidirectional image. The features used to map segments of one radial line to the other radial line of the pair are the intersections of the triangulation edges of the tracked features with the radial line of pixels.
For a paraboloidal mirror 215, as contemplated in an exemplary embodiment of the invention (shown in
The point of intersection (t·r_x, t·r_y), in the canonical space, of a radial line from the image origin to a point (r_x, r_y) on the image border with an arc-edge of center (c_x, c_y) and radius r can be computed by solving for t in the following quadratic equation:

(r_x^2 + r_y^2)t^2 − 2(c_x·r_x + c_y·r_y)t + (c_x^2 + c_y^2 − r^2) = 0    (Eq. 3)
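A direct transcription of Eq. 3 into code, returning the intersection point(s) that fall within the image (0 ≤ t ≤ 1), might look as follows; the function name and interface are illustrative.

```python
import numpy as np

def arc_edge_intersection(r, c, radius):
    """Solve Eq. 3 for t and return the intersection point(s) (t*r_x, t*r_y)
    of the radial line toward border point r = (r_x, r_y) with the arc-edge
    of centre c = (c_x, c_y) and radius 'radius', keeping only 0 <= t <= 1.
    """
    rx, ry = r
    cx, cy = c
    a = rx**2 + ry**2
    b = -2.0 * (cx * rx + cy * ry)
    k = cx**2 + cy**2 - radius**2
    disc = b**2 - 4.0 * a * k
    if disc < 0.0:
        return []                                   # the radial line misses the arc
    roots = [(-b - np.sqrt(disc)) / (2.0 * a), (-b + np.sqrt(disc)) / (2.0 * a)]
    return [(t * rx, t * ry) for t in roots if 0.0 <= t <= 1.0]
```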
It should be noted that for each pair of radial lines, mappings may be generated for both directions of the corresponding line segment 3 illustrated in
Rotation Optimization
According to the epipolar geometry of omnidirectional camera 210 and assuming no rotation errors, each feature moves along the arc of a circle that intersects the positive epipole, the negative epipole, and the feature itself. Equation 2 gives the trajectory for a feature in IMAGE A using each feature and an epipole to define the arc-edge. If the two omnidirectional images IMAGE A and IMAGE B were rotationally aligned, features C, D, and E would move along such arcs, and appear in the reconstructed image on the corresponding arcs. These expected trajectories of features C, D, and E are labeled tr_C, tr_D, and tr_E, respectively.
However, in practice the two images are not exactly rotationally aligned, and the tracked features deviate from these expected trajectories.
The reconstruction-processing device 270 performs an optimization to minimize the total rotation error by calculating a rotation correction f_i, one for every Stitching line. Specifically, for each Stitching line, the reconstruction-processing device 270 retrieves the corresponding omnidirectional images from the data storage device 230, calculates a rotation correction f_i for IMAGE B, and stores it with IMAGE A. This is done for all Stitching lines through IMAGE A. Next, a smoothing operation may be performed over IMAGE A's correction values, for example by convolving the array of f_i values of the entire image with a low-pass convolution filter. Each time a Stitching line intersects IMAGE A, the rotational correction of the corresponding radial line is added to IMAGE B's orientation prior to extraction of IMAGE B's radial line.
The reconstruction-processing device 270 calculates the array of rotation corrections f_i using the method of least squares. Assume that each omnidirectional image includes n = 1 . . . N tracked features F_1 . . . F_N. For each pair of retrieved omnidirectional images, one of the images is assumed to be aligned correctly (e.g., IMAGE A in
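A simplified sketch of this correction is shown below: if each tracked feature yields an angular residual between its observed position and the position predicted by its epipolar arc, the least-squares estimate of a single rotation offset is the mean residual, and the resulting per-Stitching-line corrections can then be low-pass filtered. The reduction to a mean of residuals is our simplification, not the exact formulation used in the embodiment.

```python
import numpy as np

def rotation_correction(observed_angles, predicted_angles):
    """Least-squares correction f_i for one Stitching line: the single angular
    offset that best explains the gap between where the features are observed
    in IMAGE B and where the epipolar arcs predict them (for a pure offset the
    least-squares solution is simply the mean residual).  Residuals are
    assumed small enough that angle wrap-around can be ignored.
    """
    residuals = np.asarray(observed_angles, float) - np.asarray(predicted_angles, float)
    return residuals.mean()

def smooth_corrections(corrections, kernel_size=9):
    """Low-pass smoothing of the per-Stitching-line corrections f_i by
    convolution with a box kernel (any low-pass kernel would serve)."""
    kernel = np.ones(kernel_size) / kernel_size
    return np.convolve(corrections, kernel, mode="same")
```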
Column Correlation
To improve alignment for each pair of warped columns, one of the warped columns is scaled, while the other is left unchanged. One method for determining the scale factor is to iterate over various scale values and to correlate the resulting two columns using low-pass filtered and downsampled versions of the columns, as described in Three-Dimensional Computer Vision: A Geometric Viewpoint by O. D. Faugeras, MIT Press, Cambridge, Mass., 1993, which is hereby incorporated by reference in its entirety. The scale factor that generates the column pair with the highest correlation value is chosen. Subsequently, a smoothing operation may be performed on the scale values. This smoothing operation may be performed by convolving the array of scale values, one for each column, for the entire reconstructed image with a low-pass filter convolution kernel.
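A sketch of this scale search for one-channel (intensity) columns is given below; the candidate scale range, the downsampling factor, and the use of a box filter as the low-pass stage are illustrative choices.

```python
import numpy as np

def _downsample(column, factor):
    n = len(column) - len(column) % factor          # trim so the length divides evenly
    return column[:n].reshape(-1, factor).mean(axis=1)

def best_scale(ref_column, column, scales=np.linspace(0.8, 1.25, 19), factor=4):
    """Pick the scale factor for one warped column by exhaustive search:
    resample 'column' at each candidate scale onto the reference grid,
    crudely low-pass/downsample both columns, and keep the scale giving the
    highest normalized correlation with 'ref_column'.
    """
    ref_column = np.asarray(ref_column, float)
    column = np.asarray(column, float)
    grid = np.arange(len(ref_column), dtype=float)
    ref_small = _downsample(ref_column, factor)
    best, best_corr = 1.0, -np.inf
    for s in scales:
        idx = np.clip(grid / s, 0.0, len(column) - 1.0)
        scaled = np.interp(idx, np.arange(len(column)), column)
        corr = np.corrcoef(ref_small, _downsample(scaled, factor))[0, 1]
        if corr > best_corr:
            best, best_corr = s, corr
    return best
```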
Blending
Once column correlation is performed for each pair of warped columns, the pixels are blended together to form a column of the new image, using processing techniques well known to those of ordinary skill in the art.
Display
The reconstruction-processing device 270 renders each view on a display device located at the user terminal 280. Such a display device may comprise a computer monitor, television display, projection display, virtual reality headgear, or any other display device suitable for walkthrough applications. The image is displayed by "stitching together" these newly generated image columns, i.e., by displaying these image columns side-by-side.
Preprocessing and Run-Time Processing
In order to obtain real-time performance in an exemplary embodiment, the stitching process may be partitioned into a preprocessing phase and a run-time phase.
During the preprocessing phase, some of the steps of
During the run-time phase, the reconstruction-processing device may load from storage device 230 the data structure storing the aforementioned mapping functions and perform the remaining reconstruction steps of
The preprocessing and run-time phase described above is by no means the only way the steps of
The reconstruction-processing device 270 may contain multiple processors 273. In that case, the preprocessing and run-time work can be distributed across the processors.
The captured image data and generated data structures may be compressed and stored in data storage device 230. In such an embodiment, the images may be compressed using JPEG compression, while Lempel-Ziv compression may be performed on the data structures. Both of these compression techniques are well known to those of ordinary skill in the art. For instance, in an exemplary embodiment of the invention, JPEG compression is efficiently used by re-projecting the captured images to cylindrical projections.
The reconstruction-processing device 270 may also include multiple caches 272 for dynamically loading the images and the reconstruction data structures. In an exemplary embodiment, two caches 272 are used for image data: one stores the compressed images already loaded from the data storage device 230, and the other stores decompressed image subsets. A third cache is used to dynamically load, decompress, and store the remaining data structures. All three caches may be set to use a least recently used (LRU) replacement policy.
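A minimal LRU cache of the kind described, keyed for example by image index, could be implemented as follows; the entry-count capacity (rather than a byte budget) is an assumption of the sketch.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache, e.g. keyed by image index and
    holding compressed or decompressed image data."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)            # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)     # evict the least recently used entry
```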
Several suitable reconstruction-processing devices may be used, such as a Silicon Graphics (SGI) Onyx2 system or a personal computer (PC) equipped with the necessary hardware.
The reconstruction processing device 270 is in no way limited to this configuration, and may include any configuration of hardware, or combination of hardware and software, as will be contemplated by one of ordinary skill in the art for reconstructing 4D plenoptic functions and producing interactive walkthroughs according to the present invention.
In a further exemplary embodiment, view reconstruction may be accelerated during the execution of the interactive walkthrough by using fewer mappings than columns in the reconstructed image. In this embodiment, the pixels may be warped using lower-resolution information.
The invention being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be readily apparent to one skilled in the art are intended to be included within the scope of the following claims.
Inventors: Daniel G. Aliaga, Ingrid B. Carlbom
Patent Citations
U.S. Pat. No. 5,710,875 (priority Sep. 9, 1994), Fujitsu Limited, "Method and apparatus for processing 3-D multiple view images formed of a group of images obtained by viewing a 3-D object from a plurality of positions."
U.S. Pat. No. 5,920,376 (priority Aug. 30, 1996), The Chase Manhattan Bank, as Collateral Agent, "Method and system for panoramic viewing with curved surface mirrors."
U.S. Pat. No. 6,023,523 (priority Feb. 16, 1996), Uber Technologies, Inc., "Method and system for digital plenoptic imaging."
U.S. Pat. No. 6,097,394 (priority Apr. 30, 1996), Interflo Technologies, Inc., "Method and system for light field rendering."
U.S. Pat. No. 6,118,474 (priority May 10, 1996), The Trustees of Columbia University in the City of New York, "Omnidirectional imaging apparatus."