A system for providing interactive views of three-dimensional (3D) models with surface properties is disclosed. The system provides a compact representation of a 3D model and its surface features and allows the model to be viewed and manipulated efficiently using dynamically switched texture maps. The compact representation benefits both transmission of the model across a network and local storage of the model in computer memory. The dynamically switched texture maps provide more accurate surface detail on the 3D model, as well as speedy interaction between a user and the 3D model.
1. A computer imaging system having at least one central processing unit (CPU), at least one memory, and at least one network interface to at least one network, the computer imaging system comprising:
a) at least one model stored in at least one of the memories, the model having geometric features and surface features; b) a means for creating a number of camera views around the model; c) a means for creating a 2d rendered image of the model for each camera view; and d) a means for storing the geometric features, the 2d rendered images, and the respective camera views in at least one of the memories.
19. A computer imaging system for producing an interactive image of a three-dimensional (3d) model, said computer imaging system comprising:
a means for applying a visibility algorithm to determine which portions of a 3d model's geometric features are fully visible, partially occluded, or backfacing in a rendering of the 3d model; a means for assigning at least one pixel of an empty region in the rendering to a color to represent the lack of texture image data for the viewpoint corresponding to said rendering; a means for mapping the fully visible portions of the 3d model into texture coordinates corresponding to a current user viewpoint; and a means for mapping the remaining geometry to the at least one assigned pixel.
3. A computer imaging system having at least one central processing unit (CPU) and at least one memory, the computer imaging system comprising:
a) a means for receiving a geometric representation of a 3d model, at least one camera orientation, and a 2d rendered image of the 3d model for each camera orientation; b) a means for determining which portions of the 3d model's geometric representation, in each of the 2d rendered images, are fully visible, not visible, and partially visible from one or more of the respective camera orientations; c) a means for creating a "not visible" location indicator that will indicate "not fully visible"; d) a means for calculating, for each 2d rendered image, one or more texture coordinates of each corner of each portion of the 3d model's geometric representation and substituting the "not visible" location indicator for the portions of the 3d model's geometric representation that are not visible or partially visible, to create a texture map of a 3d textured object; and e) a display that displays the 3d textured object.
13. A computer-based interactive three-dimensional (3d) imaging method comprising the steps of:
a) storing one or more 3d models, the 3d models having geometric features and surface features; b) selecting a plurality of viewpoints on a 3d model taken from storage; c) producing a 2d rendering of the 3d model for each of said plurality of viewpoints; d) transmitting the geometry, the renderings, and respective viewpoints to an imaging means; e) producing an image of the 3d model for each respective viewpoint based on the geometric features and rendering on the imaging means; f) selecting an initial viewpoint and displaying its corresponding image on a display means; g) interacting with at least one user through an input means, said input means allowing the at least one user to change the at least one user's viewpoint on the 3d model; h) selecting which of said plurality of viewpoints approximates a user's current viewpoint; i) disabling shading and illumination at the imaging means when the surface features include shading; and j) displaying the image corresponding to the selected viewpoint on the display means.
20. A computer-based interactive three-dimensional (3d) imaging method, said method being tangibly embodied in a program of instructions readable and executable by a machine, comprising the steps of:
a) storing one or more 3d models, the 3d models having geometric features and surface features including shading; b) selecting a plurality of viewpoints on a 3d model taken from storage; c) producing a 2d rendering of the 3d model for each of said plurality of viewpoints; d) transmitting the geometry, the renderings, and respective viewpoints to an imaging means; e) producing an image of the 3d model for each respective viewpoint based on the geometric features and rendering on the imaging means; f) selecting an initial viewpoint and displaying its corresponding image on a display means; g) interacting with at least one user through an input means, said input means allowing the at least one user to change the at least one user's viewpoint on the 3d model; h) selecting which of said plurality of viewpoints approximates a user's current viewpoint; i) disabling shading and illumination at the imaging means when the surface features include shading; and j) displaying the image corresponding to the selected viewpoint on the display means.
18. A computer imaging system for producing an interactive image of a three-dimensional (3d) model, said computer imaging system comprising:
a processing means for pre-processing at least one 2d rendering of a 3d model, a viewpoint corresponding to each of said at least one 2d rendering, and geometric features of the 3d model, said processing means including: a means for applying a visibility algorithm to determine, for each viewpoint, which portions of the 3d model's geometric features are fully visible, partially occluded, or backfacing; a means for mapping, for each viewpoint, the fully visible portions of the 3d model into texture coordinates corresponding to the viewpoint; and a means for mapping, for each viewpoint, the remaining geometry to an assigned pixel to represent the lack of texture image data for the viewpoint in the corresponding rendering; a display means for displaying the 3d model; an input means for at least one user to interact with the 3d model; a means for determining which viewpoint of said at least one viewpoint approximates a viewpoint of the at least one user; and a means for creating an image to display on the display means, said image created from the geometric features of a 3d model and the at least one 2d rendering and the mapping of the fully visible portions and remaining geometry corresponding to the viewpoint which approximates the viewpoint of the at least one user.
5. A computer imaging system for producing an interactive image of a three-dimensional (3d) model, comprising:
a storage means for storing 3d models, said 3d models having geometric features; a means for determining at least one viewpoint of one of the 3d models; a rendering means for rendering images of the 3d model from each of said viewpoints; a processing means for pre-processing the at least one rendering, the at least one viewpoint corresponding to said at least one rendering, and the geometric features of the 3d model, wherein the processing means includes: a means for applying a visibility algorithm to determine, for each viewpoint, which portions of the 3d model's geometric features are fully visible, partially occluded, or backfacing; a means for mapping, for each viewpoint, the fully visible portions of the 3d model into texture coordinates corresponding to the viewpoint; and a means for mapping, for each viewpoint, the remaining geometry to an assigned pixel to represent the lack of texture image data for the viewpoint in the corresponding rendering; a display means for displaying the 3d model; an input means for at least one user to interact with the 3d model for allowing the user to select a viewpoint of the 3d model; a means for determining which viewpoint approximates a viewpoint of the at least one user; and a means for creating an image to display on the display means, said image created from the geometric features of the 3d model, the rendering and the mapping of the fully visible portions and the remaining geometry corresponding to the viewpoint which approximates the viewpoint of the at least one user.
2. A computer imaging system as in
4. A computer imaging system as in
6. The computer imaging system of
a compression and simplification means for compressing and simplifying said geometric features before the processing means pre-processes said geometric features.
7. The computer imaging system of
8. The computer imaging system of
a means to request from said rendering means additional renderings from additional camera viewpoints.
9. The computer imaging system of
10. The computer imaging system of
at least one server for preparing 3d model data to be sent to the client, comprising said means for determining at least one viewpoint and said rendering means; and at least one client for displaying the 3d model and interacting with at least one user, comprising said processing means, said displaying means, said input means, said means for determining which said at least one viewpoint corresponding to said at least one rendering approximates a viewpoint of the at least one user, and said means for creating an image to display on the display means.
11. The computer imaging system of
12. The computer imaging system of
14. The computer-based interactive three-dimensional (3d) imaging method of
compressing and simplifying said geometric features.
15. The computer-based interactive three-dimensional (3d) imaging method of
applying a visibility algorithm to determine which portions of the 3d model's geometric features are fully visible, partially occluded, or backfacing; assigning at least one pixel of an empty region in the rendering to a color to represent the lack of texture image data for the viewpoint corresponding to said rendering; mapping the fully visible portions of the 3d model into texture coordinates corresponding to the current at least one user viewpoint; and mapping the remaining geometry to the at least one assigned pixel.
16. The computer-based interactive three-dimensional (3d) imaging method of
performing step (c) when additional renderings are required.
17. The computer-based interactive three-dimensional (3d) imaging method of
1. Field of the Invention
This invention relates to computer-generated graphics and interactive display systems in general, and, in particular, to the generation of three-dimensional imagery using an efficient and compact form.
2. Description of the Related Art
The use of 3D imagery, whether displayed on a screen or in virtual reality goggles, is becoming increasingly important to the next generation of technology. Besides CAD/CAM (Computer-Aided Design/Computer-Aided Manufacturing), 3D imagery is used in video games, simulations, and architectural walkthroughs. In the future, it may be used to represent filing systems, computer flowcharts, and other complex material. In many of these uses, it is not only the image of a 3D model that is required, but also the ability to interact with that image.
The process of creating and displaying three-dimensional (3D) objects in an interactive computer environment is a complicated matter. In a non-interactive environment, 3D objects require memory storage for the shape, or geometry, of the object depicted, as well as details concerning the surface, such as color and texture. In an interactive environment, however, a user has the ability to view the 3D object from many angles. This means that the system must be able to generate images of the geometry and surface details of the 3D object from any possible viewpoint. Moreover, the system needs to generate images from these new viewpoints quickly, in order to maintain the sensation that the user is actually interacting with the 3D object.
As an example, consider viewing a 3D model of a house on a computer system with a monitor and a joystick. Although you may be viewing it on the flat computer screen, you will see the shape of the house, the black tiles on the roof, the glass windows, the brick porch, etc. But if you indicate "walk to the side" by manipulating the joystick, the viewpoint on the screen shifts, and you begin to see details around the side corner of the house, such as the aluminum siding. The system needs to keep up with your manipulations of the joystick, your "virtual movement," in order to provide a seamless, but continuously changing image of the 3D object, the house. So the system needs to redraw the same object from a different perspective, which requires a large amount of memory and a great deal of processing.
The problem of memory usage becomes even more acute in a networked environment, where the image is shown on a client system while the original 3D object information is stored on a server. The client display system must receive the 3D object information over a communication link, thereby slowing down the process. In most cases, a compromise must be made between interactive speed (how quickly the display redraws based on user input) and the accuracy of the 3D object (how many details can be recalculated with each change in perspective). Although the following discussion assumes a networked environment, the problems addressed are also applicable to stand-alone computer systems, since an analogous situation exists between the hard drive (server) and the local memory (client) in a single computer.
Before addressing the memory and processing problems, a brief discussion of 3D imaging is in order. As indicated above, the features of 3D objects can be separated into geometry, the shape and volume of the object, and surface details, such as texture, color, and shading. The first issue is how these two attributes, geometry and surface details, are stored and transmitted. The storage of geometry involves the description of the various edges and vertices of the object in 3D space. One way to reduce the time needed to transmit and display a 3D object over a network is to simplify the geometry, but this makes it hard to represent the original surface information on the reduced geometry, since the vast majority of simplification algorithms use only a geometric approximation and ignore the importance of surface details. A technique such as "texture mapping" [Blinn] maps an image of the rendered object onto the simplified geometry, but it is difficult to create and map images onto a surface of arbitrary topology. Furthermore, the corresponding texture coordinates would have to be sent along with the model, which would increase the transmission load.
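By way of illustration only, geometry of this kind is commonly stored as an indexed face set: a shared vertex list plus polygon connectivity. The following is a minimal Python sketch; the names Mesh, vertices, and faces are illustrative assumptions, not part of this disclosure.

```python
from dataclasses import dataclass

@dataclass
class Mesh:
    """Minimal indexed face set: shared vertex list plus polygon connectivity."""
    vertices: list  # (x, y, z) positions in model space
    faces: list     # each face is a tuple of vertex indices, e.g. (0, 1, 2)

# A single triangle: three shared vertices referenced by one face.
tri = Mesh(
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    faces=[(0, 1, 2)],
)
```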
Texture mapping is one of a class of techniques called "image-based rendering," in which an image of the surface details is "rendered" for each new perspective on the 3D object. Every time the user moves around the 3D object, a new surface image must be "rendered" from the new perspective. The image of surface details is captured and stored in various ways. One approach, which is used in texture mapping, is to break the surface of the 3D object into polygons, render every surface polygon as a small image, and then assemble all the images into a large montage that acts as a single texture image for the entire surface [Cignoni98]. Each polygon then appears fully rendered and without any occlusion somewhere in the montage image, and the corresponding texture coordinates of the polygon corners are well-defined. (Occlusion occurs when some surface details are blocked from the user's perspective.) This approach also includes a method that packs the images into the montage efficiently to minimize wasted space in the texture image.
There are other approaches to taking a surface detail image. A single texture image of an object can be created by doing a "flyover" of the 3D object. This is equivalent to passing a camera over every surface of the object and taking pictures either continuously, creating one large seamless image, or discretely, creating multiple images corresponding to coverage of the entire 3D object.
Some image-based rendering techniques use these surface detail images directly as hardware texture maps [Cignoni98, Rademacher, Foran98, Erdahl97, Jackson96]; in other words, they use one or more images to completely describe the texture of the 3D object from any perspective the user may take. Others rely on data structures and algorithms that are implemented in software [Harashima98, Oliveira98, Oliveira99], meaning that new perspectives are created or interpolated from image data by computer processing. Furthermore, some approaches are optimized for creating as realistic a view as possible from a finite set of images, while others seek a compromise between accuracy and interaction speed.
The above techniques raise many problems of memory usage, data transmission, and excessive processing. For instance, the image-based rendering techniques that rely extensively on software algorithms to perform elaborate operations during every change of viewpoint are inapplicable to CAD/CAM and related applications that demand both high performance and high accuracy. This is even more of a problem in client-server applications in which the client may be much less powerful than the server machine. For the rest of this application, we will focus on texture mapping as the best technique for a client-server application.
When using texture mapping over a client-server/networked system, there is the need to send additional information such as texture coordinates, details, and color along with the geometry. To reduce bandwidth requirements it is important to send as little data as possible while still allowing the client to reconstruct the scene. However, this is a problem with complex rendering processes that do not reduce to simple 2D renderings of a 3D object from a given viewpoint. Complex rendering requirements more tightly couple the process that creates the viewable geometry with the process that renders the images, which is inappropriate, for example, when the creation process is performed on the server and the rendering process is performed on the client. Thus, in a client-server system, multiple 2D renderings or multiple texture maps of the 3D object are preferable.
When using the polygon method of texture mapping, multiple images are used to represent the surface of an object at the same time. In this situation, any difference in rendering parameters between images may result in a discontinuity between regions that can stand out as a disturbing artifact. Surfaces that do not appear seamless may be hard to interpret because the eye is drawn to the abrupt changes in surface characteristics rather than the actual features on the surface.
If, in a client-server system, the server performs the original rendering, and transmits it to the client, the client is stuck with the conditions of the original rendering. For instance, if a server renders the 3D object with a light source, resulting in shading, the client is unable to change the shading on the renderings it receives from the server. The inability of the client to view the object under different lighting and shading conditions may limit the applicability of the technique, particularly when multiple objects rendered under different conditions are collected in a single scene that has a different type of lighting.
When it is important to view the facets of a model, and yet have the client display simplified geometry, there is a fundamental problem in showing individual surface details of the original geometry using only simplified geometry. This is particularly a problem when showing shading on the facets, since if the lighting is done on the client side there is no way to subdivide a polygon into the polygons in the original model.
An object of this invention is to provide an improved system and method for representing a 3D model with surface features in a compact form.
Another object of this invention is to provide a system and a method for representing a 3D model with surface features in a compact form for rapid transmission over a network and/or for storage in computer memory.
Another object of this invention is to provide a system and a method for representing a 3D model with one surface texture at a time, thus eliminating discontinuities in surface features between regions of the 3D model.
Another object of the invention is to provide a system for interactively viewing a 3D model in an efficient manner using simplified geometry while displaying the appearance of the original model using the original surface features, including original connections and shading.
Another object of the invention is to enable a client to recreate a 3D model with a minimum of transmitted information across a network, particularly without the need to receive texture coordinates to map the images onto the geometry of the model.
To accomplish the above and other objects, a server renders a 3D model, described using geometry and surface features, from a number of viewpoints. These image renderings, or "camera views," contain the original detailed facets of the original model on the server. The data concerning the "camera position" of each viewpoint is also saved with each "camera view." The geometry of the original model is simplified to reduce the amount of information that needs to be sent across the network.
The simplified model, the rendered images, and the camera orientations are sent to a remote client that dynamically texture maps the best image based upon the current view direction. A pre-processing stage determines which portions of the model's geometry are occluded from a given view direction and colors them accordingly, to indicate that the texture information for these portions is invalid. Because there are multiple texture maps for the same model, a given spot on the model is typically depicted in more than one texture map. As the object is rotated to different view directions, the texture map changes, giving the impression that the entire object has been texture mapped.
By separating the pure geometrical component of the model from all the data and surface features that contribute to its rendered appearance, the client need only be able to drape a texture onto a geometric mesh to recreate the rendered appearance on the client. The server need only create a few images of the model from a view camera direction, and the client need only receive the (possibly simplified) geometry of the model without any additional surface data or annotations that are represented in the texture images. The client is both thin and generic, because all the specifics of rendering a model remain the job of the server application.
The foregoing and other objects, aspects and advantages will be better understood from the following detailed description of preferred embodiments of the invention with reference to the following drawings:
In the following description, we assume a client-server system, although, as stated above, the ideas are applicable to a stand-alone computer system.
In a preferred embodiment, a server stores the original 3D object data, including geometry and surface features. The server also has a rendering function, which can be applied to the stored 3D object data to create texture maps from particular viewpoints. A network connects the server to the client. The client has a display and the ability to take input from a user in order to alter the display. The server sends data, including texture maps and geometry, about the 3D object over the network to the client, and the client stores and processes that data in order to provide interactivity with the user.
In the preferred embodiment, the steps involved are as follows:
First, on the server side, a number, N, of cameras is generated based on the 3D object's shape and relevant features. For example, for a cube, six views, one from each face direction, may be generated, as shown in FIG. 2. Images of the model using each of the cameras are created. For the rest of this application, the term "camera" or "view camera" in relation to data will signify the information identifying each of these N cameras: location, direction of viewpoint, view angle, up direction, etc. If desired, the geometry of the model is simplified. The simplified geometry, the view cameras for the series of images, and the images themselves are sent to the client. Note that the view cameras contain a small amount of data and do not contribute significantly to the data that are sent.
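As a concrete illustration of the six-view cube example, the following Python sketch generates six axis-aligned view cameras; the dictionary fields mirror the camera data listed above, and all names are illustrative assumptions rather than the disclosed implementation.

```python
# Six axis-aligned cameras around a model centered at the origin, one per
# cube face, matching the FIG. 2 example.
AXES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def make_cube_cameras(distance=3.0):
    cameras = []
    for axis in AXES:
        position = tuple(distance * a for a in axis)  # camera location
        direction = tuple(float(-a) for a in axis)    # look toward the origin
        # Choose an up vector that is not parallel to the view direction.
        up = (0.0, 0.0, 1.0) if axis[2] == 0 else (0.0, 1.0, 0.0)
        cameras.append({"position": position, "direction": direction, "up": up})
    return cameras
```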
Second, the client receives the simplified geometry, cameras, and images and performs a pre-processing step, once per camera. Based on the geometry and a given camera, a visibility algorithm decides which portions of the model's geometry are fully visible in the image and which are either backfacing or partially occluded by other geometry. The corners of the fully visible portions of geometry are mapped into texture coordinates corresponding to the current image and camera, while the corners of the remaining geometry are assigned a unique value corresponding to an empty region of the image. One or more of the pixels in the empty region of the image are assigned to a color to convey the lack of texture image data for this particular view direction. The pre-processing step runs once per camera and results in N sets of texture coordinates for the simplified geometry.
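A minimal sketch of this per-camera pre-processing step follows. The helpers `project_uv` (the projection into the image) and `classify` (the visibility algorithm) are assumptions standing in for components the disclosure leaves unspecified.

```python
# Per-camera pre-processing: classify each polygon and assign texture
# coordinates, mapping non-visible geometry to the reserved empty pixel.
EMPTY_UV = (0.0, 0.0)  # coordinate of the reserved "no texture data" pixel

def texcoords_for_camera(mesh, camera, project_uv, classify):
    uv = {}
    for f, face in enumerate(mesh.faces):
        if classify(camera, mesh, face) == "fully_visible":
            # Visible corners map to their projected location in this image.
            for corner in face:
                uv[(f, corner)] = project_uv(camera, mesh.vertices[corner])
        else:
            # Backfacing or partially occluded corners map to the empty pixel.
            for corner in face:
                uv[(f, corner)] = EMPTY_UV
    return uv
```

Running this once per camera yields the N sets of texture coordinates described above.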
Third, in order to show the model in an interactive manner, the client display system chooses an initial view direction and determines the camera whose orientation most closely matches it. The system displays the simplified geometry using the texture image and coordinates fitting that camera. As the user rotates and interacts with the object, the system calculates which camera is the best match to the current view orientation. When the best matching camera is different from that of the current texture map, the new texture image and coordinates are swapped in.
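The camera-matching step admits a simple sketch, assuming each camera's view direction is stored as a unit vector: the best match is the camera whose direction has the largest dot product with the current view direction (i.e., the smallest angle).

```python
# Per-frame camera selection: pick the stored camera whose view direction
# best matches the user's current view direction (unit vectors assumed).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def best_camera(cameras, view_direction):
    return max(cameras, key=lambda cam: dot(cam["direction"], view_direction))
```

When the returned camera differs from the one currently texture mapped, the client swaps in that camera's image and texture coordinates.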
From this overview, additional advantages of the techniques of the present invention are clear. When creating the original images of the 3D model on the server, rendering the surface details of the model's original geometry allows the client to view the original surface details on the surface of simplified geometry. This results in the client having a more accurate view of the model, even though only simplified geometry is being used on the client's computer.
In the above description, it is assumed the original rendering by the server was done with no shading, i.e., all surfaces were lit equally. However, to provide the client with additional visual cues, the server can render the 3D model with shading turned on. As the client is interactively viewing the textured model, direct illumination can be turned off, thereby allowing the client to see the shading of the original model on the simplified geometry. This additional information may have a slight drawback in that the light on the client will move with the object as the object rotates, and will jump to a new position when the texture map changes. However, the result is a rotatable model with highly accurate shading that rotates quickly on the client side because it has simplified geometry. Such quick rotation was not possible with the prior art techniques that map the entire object to a single, complicated texture image, rather than the various camera views of the present invention.
When a 3D geometric model is texture mapped with a 2D image, the corners of each polygon in the model are assigned a 2D coordinate, often denoted (u,v), where 0≦u≦1, 0≦v≦1. The (u,v) coordinate represents the (x,y) location of the polygon corner relative to the lower left corner of the texture image, where u is the x location normalized to the image width and v is the y location normalized to the image height. The process of texture mapping an object involves creating images of the object and then calculating (u,v) for each of the polygon corners.
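The normalization just described reduces to a one-line computation; the sketch below is illustrative only.

```python
# A pixel location (x, y), measured from the lower left corner of a
# width-by-height image, maps to normalized texture coordinates (u, v).
def pixel_to_uv(x, y, width, height):
    return (x / width, y / height)  # 0 <= u <= 1, 0 <= v <= 1
```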
In order to texture map the image 502 onto the geometry 501 and accommodate occlusion, an unused region of 502 must be identified and a pixel assigned to represent the color of occluded polygons. In the rendered image 502, the lower left corner pixel 510 is designated as the pixel to be used for coloring occluded regions and will be used as the texture coordinate of polygons with occluded corners. The lower left corner pixel is black so the occluded regions will also be black. Alternatively, if no unused pixel is available or if the identification of an unused pixel is problematic, an additional row of pixels may be added to the image to create an unused region of the image for texture mapping the occluded and backfacing parts of the model.
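A sketch of this reservation step follows, under the assumption that the image is represented as a list of pixel rows; the function name and representation are illustrative, not the disclosed implementation.

```python
# Reserve a pixel for occluded and backfacing geometry. If no unused pixel
# is available, pad the image with one extra row, as described above.
def reserve_occlusion_pixel(image, unused_pixel=None, color=(0, 0, 0)):
    if unused_pixel is None:
        image.append([color] * len(image[0]))  # added row of "no data" pixels
        unused_pixel = (0, len(image) - 1)     # first pixel of the new row
    x, y = unused_pixel
    image[y][x] = color  # e.g. black, like the lower left corner pixel 510
    return unused_pixel  # occluded polygon corners texture map here
```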
With continued reference to
The preferred embodiment texture maps the entire polygon to a special pixel in the image that has been colored to indicate "no texture information available in this view," which produces a natural appearance of the model 505 similar to having a shadow cast on the object where no features are visible 504. The fact that one texture map is coloring the entire surface including the occluded and backfacing regions, and that the geometry on which the texture map is draped has been simplified, guarantees a highly interactive view of the model while maintaining surface detail comparable to the original model.
To reduce the size of the shadows caused by occlusion, the client can subdivide the partially occluded polygon into fully visible and fully occluded polygons, as depicted in
If the rotation is continued further from that of
In the embodiment based on subdividing the regions occluded by edges of the model, the geometry must be processed for each view camera to create a new geometry containing all the subdivided polygons resulting from all the view occlusions. Then the new geometry is texture mapped as before, but this time no polygons are partially occluded; each polygon is either fully visible, fully occluded, or backfacing.
A rendering system then creates an image of the model for each of the N cameras at step 1206. Note that the rendering system need not be tightly integrated into the server process; it can be a standalone legacy application with the one requirement that it be able to render the model with a specified camera. The rendering system has its own customized or proprietary visualization techniques based on the data in the model, and the other components of the embodiment, in particular the client that displays the results, are able to present those visualizations without any knowledge of them. In other words, for purposes of this invention, the rendering system can be viewed as a "black box," which receives a 3D model and a camera view as input and outputs a 2D image. However, the rendering process occurs completely on the server side and is performed only for the selected camera views. The decoupling of the server rendering application from the client display process allows the client to be "thinner" and contain no code customized specifically for a particular server rendering application. The server rendering system is preferably a large monolithic legacy system, and, as long as it can create images with a specified camera, the client can drape the results onto the mesh without knowing exactly how the server created the images.
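The "black box" contract can be expressed as a minimal interface; the sketch below uses Python's typing.Protocol, and the name Renderer is an illustrative assumption.

```python
from typing import Protocol

class Renderer(Protocol):
    """A 3D model and a camera view in, a 2D image out.

    The client never depends on what happens inside render(), so any legacy
    or proprietary rendering application satisfying this interface will do.
    """
    def render(self, model, camera): ...
```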
The simplified and compressed geometry, along with the texture images and cameras, are delivered to the client, when they become available, at step 1207. In the preferred embodiment the images are compressed at step 1208 individually and as a collection, to take advantage of the similarity between adjacent views that results in good interframe compression. Sending geometry concurrently with the images allows the client to display something useful to the viewer as soon as it is available. If the geometry arrives first, before any texture images, the geometry can be displayed as a shaded shape with a default color. As the textures arrive, they are draped on the surface based on the current view camera. On the other hand, if the texture image arrives before the geometry, the 2D texture image can be shown to give an immediate sense of the model's appearance prior to the availability of the shape itself.
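The arrival-order behavior described above amounts to a small decision rule; the following self-contained sketch is illustrative only.

```python
# Display whatever asset is available, upgrading as more of the stream
# arrives: geometry first => shaded shape; texture first => 2D preview.
def choose_display(geometry, textures):
    if geometry and textures:
        return "textured model"               # drape best-matching texture
    if geometry:
        return "shaded shape, default color"  # geometry arrived first
    if textures:
        return "2D texture preview"           # image arrived first
    return "nothing yet"
```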
A flowchart showing the client pre-processing algorithm of the preferred embodiment is depicted in FIG. 13. The process starts at step 1301, when results from the server process arrive at step 1302 and the streaming information is decompressed in order to load the geometry, N cameras and N images at step 1303. As described above, for each image, an unused pixel or region must be identified or created for coloring the occluded and backfacing polygons at step 1305. Also, for each camera the polygons that are occluded or backfacing in that view must be identified at step 1304. Optionally, the polygons can be subdivided further based on the occluding edge of the model at step 1306. At step 1307, the results from the last two steps 1304 and 1305 are used to create texture coordinates for each view camera, giving special treatment for polygons that are found occluded or backfacing at step 1304. The occluded and backfacing polygons should texture map to the region in the image assigned in step 1305 and colored to distinguish the polygons that lack texture information from those that are fully textured. Now that the texture coordinates are created, they are delivered to the display process at step 1308.
When the client displays the model locally, there is an option of using the normals for smooth shading, faceted shading, or no shading at all. The best choice may depend on the model and the application area, but if the goal is to display the model with as few polygons as possible, yet convey the faceted shading of the original model, there is a fundamental problem: the simplified model by definition does not contain the polygons corresponding to all the "facets" of the original model. If the model is intended to be displayed with smooth shading, the problem is reduced, because the graduation of shading resulting from interpolating adjacent normals tends to smooth over the rougher edges created by simplification. But if the model pertains to CAD/CAM, for example, and the shading of the original facets is desired, this invention provides a unique solution to the problem, as described below.
When a model is rendered by the server and displayed on the client, shading is done only once on the combined system. In most cases the server renders the model without shading and the client provides appropriate shading and lighting based on the normals on the model and user preference. This has the advantage that the shading will smoothly change as the 3D model is viewed from different angles. However, the shading may appear coarse, because the client is providing the shading based on simplified geometry. On the other hand, if the original, faceted shading of the original 3D model is desired, the server renders each image with shading enabled. When the client displays the 3D model, the client disables shading and illumination, thus conveying the original, faceted shading in the texture image itself. As the object rotates, the shading remains constant until a new texture camera is closer, at which point a different image is used for the texture map and there is an abrupt change in the shading. Although the abrupt changes in shading may appear jarring at first, the fact that they are linked to changes in view orientation makes it an acceptable sacrifice of quality in order to retain a view of the original model structure.
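Assuming a client built on legacy OpenGL (via PyOpenGL), the "shade exactly once" rule reduces to toggling fixed-function lighting; this sketch is an assumption about the client implementation, not part of the disclosure.

```python
from OpenGL.GL import GL_LIGHTING, glDisable, glEnable

def configure_client_shading(shading_baked_into_textures):
    if shading_baked_into_textures:
        glDisable(GL_LIGHTING)  # let the server's pre-rendered shading show through
    else:
        glEnable(GL_LIGHTING)   # client shades from the model's normals
```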
In many situations the model will have a well-defined shape, and the number and location of cameras that provide optimal coverage will be obvious. Furthermore, if the model is being modified in small steps as its design evolves, the ideal set of cameras may remain fixed while the design changes. Note that the cameras need not be evenly spaced; they can be a mixture of uniformly distributed cameras and special cameras located to provide optimal views of known regions of particular interest. In most cases, a non-model-specific algorithm for locating cameras will provide a good and interactive view of the geometry; one such placement is sketched below. It is also possible to allow the client to request additional renderings on demand from the server.
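One possible non-model-specific placement, offered here as an assumption rather than the disclosed method, distributes N cameras roughly evenly on a sphere using the Fibonacci-spiral construction.

```python
import math

def fibonacci_sphere_cameras(n, radius=3.0):
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    cameras = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n          # evenly spaced heights in [-1, 1]
        r = math.sqrt(1.0 - z * z)             # ring radius at that height
        theta = golden * i
        position = (radius * r * math.cos(theta),
                    radius * r * math.sin(theta),
                    radius * z)
        direction = tuple(-c / radius for c in position)  # look at the origin
        cameras.append({"position": position, "direction": direction})
    return cameras
```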
One way to guarantee that important views of a model have a good texture image associated with them is to allow the client to request additional texture images on demand. In the embodiment described above, most of the server process can run once per model, pre-rendering the images and compressing the geometry for later streaming to the client. But in addition the client could request new renderings on demand if a particular orientation is important and lacks good texture information on important parts of the model.
Finally, the client and server pieces can work to advantage on a single computer without a network, since the server process creates components that allow the client to view a 3D model with much higher performance while maintaining the appearance of the original model. The reverse is also true: the functions of server and client could be broken up onto different computers in a network. For instance, parallel processing could be used to make renderings of the original 3D model, which would mean different computers or processors would simultaneously perform the same server function. Or the 3D models could be stored on a distributed network and various servers and clients could be working to create images on various user displays.
While the present invention has been described with respect to certain preferred embodiments, it should be understood that the invention is not limited to these particular embodiments, but, on the contrary, the invention is intended to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
Horn, William P., Suits, Frank, Klosowski, James T.
Patent | Priority | Assignee | Title |
10055880, | Dec 06 2016 | ACTIVISION PUBLISHING, INC | Methods and systems to modify a two dimensional facial image to increase dimensional depth and generate a facial image that appears three dimensional |
10055893, | May 08 2017 | MAGNOLIA LICENSING LLC | Method and device for rendering an image of a scene comprising a real object and a virtual replica of the real object |
10097596, | Nov 11 2013 | Amazon Technologies, Inc. | Multiple stream content presentation |
10099140, | Oct 08 2015 | ACTIVISION PUBLISHING, INC ; Activision Publishing, Inc. | System and method for generating personalized messaging campaigns for video game players |
10118099, | Dec 16 2014 | ACTIVISION PUBLISHING, INC | System and method for transparently styling non-player characters in a multiplayer video game |
10121266, | Nov 25 2014 | AFFINE TECHNOLOGIES LLC | Mitigation of disocclusion artifacts |
10127722, | Jun 30 2015 | Matterport, Inc.; MATTERPORT, INC | Mobile capture visualization incorporating three-dimensional and two-dimensional imagery |
10137376, | Dec 31 2012 | ACTIVISION PUBLISHING, INC | System and method for creating and streaming augmented game sessions |
10139985, | Jun 22 2012 | MATTERPORT, INC | Defining, displaying and interacting with tags in a three-dimensional model |
10155152, | Jun 03 2008 | TweedleTech, LLC | Intelligent game system including intelligent foldable three-dimensional terrain |
10155156, | Jun 03 2008 | TweedleTech, LLC | Multi-dimensional game comprising interactive physical and virtual components |
10163261, | Mar 19 2014 | MATTERPORT, INC | Selecting two-dimensional imagery data for display within a three-dimensional model |
10179289, | Jun 21 2016 | Activision Publishing, Inc.; ACTIVISION PUBLISHING, INC | System and method for reading graphically-encoded identifiers from physical trading cards through image-based template matching |
10183212, | Sep 15 2011 | Tweedetech, LLC | Furniture and building structures comprising sensors for determining the position of one or more objects |
10185463, | Feb 13 2015 | Nokia Technologies Oy | Method and apparatus for providing model-centered rotation in a three-dimensional user interface |
10192362, | Oct 27 2016 | GoPro, Inc.; GOPRO, INC | Generating virtual reality and augmented reality content for a live event |
10213682, | Jun 15 2015 | ACTIVISION PUBLISHING, INC | System and method for uniquely identifying physical trading cards and incorporating trading card game items in a video game |
10226701, | Apr 29 2016 | Activision Publishing, Inc. | System and method for identifying spawn locations in a video game |
10226703, | Apr 01 2016 | Activision Publishing, Inc.; ACTIVISION PUBLISHING, INC | System and method of generating and providing interactive annotation items based on triggering events in a video game |
10232272, | Oct 21 2015 | Activision Publishing, Inc. | System and method for replaying video game streams |
10245509, | Oct 21 2015 | Activision Publishing, Inc. | System and method of inferring user interest in different aspects of video game streams |
10257266, | Nov 11 2013 | Amazon Technologies, Inc. | Location of actor resources |
10265609, | Jun 03 2008 | TweedleTech, LLC | Intelligent game system for putting intelligence into board and tabletop games including miniatures |
10275939, | Nov 05 2015 | GOOGLE LLC | Determining two-dimensional images using three-dimensional models |
10284454, | Nov 30 2007 | Activision Publishing, Inc. | Automatic increasing of capacity of a virtual space in a virtual world |
10286314, | May 14 2015 | Activision Publishing, Inc. | System and method for providing continuous gameplay in a multiplayer video game through an unbounded gameplay session |
10286326, | Jul 03 2014 | Activision Publishing, Inc. | Soft reservation system and method for multiplayer video games |
10300390, | Apr 01 2016 | Activision Publishing, Inc.; ACTIVISION PUBLISHING, INC | System and method of automatically annotating gameplay of a video game based on triggering events |
10304240, | Jun 22 2012 | Matterport, Inc. | Multi-modal method for interacting with 3D models |
10311627, | Dec 16 2016 | Samsung Electronics Co., Ltd. | Graphics processing apparatus and method of processing graphics pipeline thereof |
10315110, | Nov 11 2013 | Amazon Technologies, Inc. | Service for generating graphics object data |
10315113, | May 14 2015 | Activision Publishing, Inc. | System and method for simulating gameplay of nonplayer characters distributed across networked end user devices |
10322351, | Jul 03 2014 | Activision Publishing, Inc. | Matchmaking system and method for multiplayer video games |
10347013, | Nov 11 2013 | Amazon Technologies, Inc. | Session idle optimization for streaming server |
10374928, | Nov 11 2013 | Amazon Technologies, Inc. | Efficient bandwidth estimation |
10376781, | Oct 21 2015 | Activision Publishing, Inc. | System and method of generating and distributing video game streams |
10376792, | Jul 03 2014 | Activision Publishing, Inc. | Group composition matchmaking system and method for multiplayer video games |
10376793, | Feb 18 2010 | Activision Publishing, Inc. | Videogame system and method that enables characters to earn virtual fans by completing secondary objectives |
10421019, | May 12 2010 | Activision Publishing, Inc. | System and method for enabling players to participate in asynchronous, competitive challenges |
10456660, | Jun 03 2008 | TweedleTech, LLC | Board game with dynamic characteristic tracking |
10456675, | Jun 03 2008 | TweedleTech, LLC | Intelligent board game system with visual marker based game object tracking and identification |
10463964, | Nov 17 2016 | ACTIVISION PUBLISHING, INC | Systems and methods for the real-time generation of in-game, locally accessible heatmaps |
10463971, | Dec 06 2017 | ACTIVISION PUBLISHING, INC | System and method for validating video gaming data |
10471348, | Jul 24 2015 | Activision Publishing, Inc.; ACTIVISION PUBLISHING, INC | System and method for creating and sharing customized video game weapon configurations in multiplayer video games via one or more social networks |
10486068, | May 14 2015 | Activision Publishing, Inc. | System and method for providing dynamically variable maps in a video game |
10500498, | Nov 29 2016 | Activision Publishing, Inc. | System and method for optimizing virtual games |
10537809, | Dec 06 2017 | ACTIVISION PUBLISHING, INC | System and method for validating video gaming data |
10561945, | Sep 27 2017 | Activision Publishing, Inc. | Methods and systems for incentivizing team cooperation in multiplayer gaming environments |
10573065, | Jul 29 2016 | ACTIVISION PUBLISHING, INC | Systems and methods for automating the personalization of blendshape rigs based on performance capture data |
10586380, | Jul 29 2016 | ACTIVISION PUBLISHING, INC | Systems and methods for automating the animation of blendshape rigs |
10596471, | Dec 22 2017 | Activision Publishing, Inc. | Systems and methods for enabling audience participation in multi-player video game play sessions |
10601885, | Nov 11 2013 | Amazon Technologies, Inc. | Adaptive scene complexity based on service quality |
10627983, | Dec 24 2007 | Activision Publishing, Inc. | Generating data for managing encounters in a virtual world environment |
10650539, | Dec 06 2016 | Activision Publishing, Inc. | Methods and systems to modify a two dimensional facial image to increase dimensional depth and generate a facial image that appears three dimensional |
10668367, | Jun 15 2015 | ACTIVISION PUBLISHING, INC | System and method for uniquely identifying physical trading cards and incorporating trading card game items in a video game |
10668381, | Dec 16 2014 | Activision Publishing, Inc. | System and method for transparently styling non-player characters in a multiplayer video game |
10694352, | Oct 28 2015 | ACTIVISION PUBLISHING, INC | System and method of using physical objects to control software access |
10701332, | Sep 22 2017 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, image processing system, and storage medium |
10702779, | Nov 17 2016 | Activision Publishing, Inc. | Bandwidth and processing efficient heatmaps |
10709981, | Nov 17 2016 | ACTIVISION PUBLISHING, INC | Systems and methods for the real-time generation of in-game, locally accessible barrier-aware heatmaps |
10740974, | Sep 15 2017 | Snap Inc.; SNAP INC | Augmented reality system |
10765948, | Dec 22 2017 | ACTIVISION PUBLISHING, INC | Video game content aggregation, normalization, and publication systems and methods |
10775959, | Jun 22 2012 | Matterport, Inc. | Defining, displaying and interacting with tags in a three-dimensional model |
10778756, | Nov 11 2013 | Amazon Technologies, Inc. | Location of actor resources |
10807003, | Apr 29 2016 | Activision Publishing, Inc. | Systems and methods for determining distances required to achieve a line of site between nodes |
10818028, | Dec 17 2018 | Microsoft Technology Licensing, LLC | Detecting objects in crowds using geometric context |
10818060, | Sep 05 2017 | Activision Publishing, Inc. | Systems and methods for guiding motion capture actors using a motion reference system |
10818264, | Oct 27 2016 | GoPro, Inc. | Generating virtual reality and augmented reality content for a live event |
10835818, | Jul 24 2015 | Activision Publishing, Inc. | Systems and methods for customizing weapons and sharing customized weapons via social networks |
10857468, | Jul 03 2014 | Activision Publishing, Inc. | Systems and methods for dynamically weighing match variables to better tune player matches |
10861079, | Feb 23 2017 | ACTIVISION PUBLISHING, INC | Flexible online pre-ordering system for media |
10864443, | Dec 22 2017 | ACTIVISION PUBLISHING, INC | Video game content aggregation, normalization, and publication systems and methods |
10898813, | Oct 21 2015 | Activision Publishing, Inc. | Methods and systems for generating and providing virtual objects and/or playable recreations of gameplay |
10905963, | Dec 31 2012 | Activision Publishing, Inc. | System and method for creating and streaming augmented game sessions |
10909758, | Mar 19 2014 | Matterport, Inc. | Selecting two-dimensional imagery data for display within a three-dimensional model |
10953314, | Jun 03 2008 | TweedleTech, LLC | Intelligent game system for putting intelligence into board and tabletop games including miniatures |
10974150, | Sep 27 2017 | Activision Publishing, Inc. | Methods and systems for improved content customization in multiplayer gaming environments |
10981051, | Dec 19 2017 | ACTIVISION PUBLISHING, INC | Synchronized, fully programmable game controllers |
10981069, | Mar 07 2008 | Activision Publishing, Inc. | Methods and systems for determining the authenticity of copied objects in a virtual environment |
10987588, | Nov 29 2016 | Activision Publishing, Inc. | System and method for optimizing virtual games |
10991110, | Dec 06 2016 | Activision Publishing, Inc. | Methods and systems to modify a two dimensional facial image to increase dimensional depth and generate a facial image that appears three dimensional |
10997760, | Aug 31 2018 | SNAP INC | Augmented reality anthropomorphization system |
11040286, | Sep 27 2017 | Activision Publishing, Inc. | Methods and systems for improved content generation in multiplayer gaming environments |
11062509, | Jun 22 2012 | Matterport, Inc. | Multi-modal method for interacting with 3D models |
11076140, | Feb 05 2018 | Canon Kabushiki Kaisha | Information processing apparatus and method of controlling the same |
11097193, | Sep 11 2019 | ACTIVISION PUBLISHING, INC | Methods and systems for increasing player engagement in multiplayer gaming environments |
11115712, | Dec 15 2018 | ACTIVISION PUBLISHING, INC | Systems and methods for indexing, searching for, and retrieving digital media |
11117055, | Dec 06 2017 | ACTIVISION PUBLISHING, INC | Systems and methods for validating leaderboard gaming data |
11148063, | Dec 22 2017 | Activision Publishing, Inc. | Systems and methods for providing a crowd advantage to one or more players in the course of a multi-player video game play session |
11185784, | Oct 08 2015 | Activision Publishing, Inc. | System and method for generating personalized messaging campaigns for video game players |
11189084, | Jul 29 2016 | Activision Publishing, Inc. | Systems and methods for executing improved iterative optimization processes to personify blendshape rigs |
11192028, | Nov 19 2018 | ACTIVISION PUBLISHING, INC | Systems and methods for the real-time customization of video game content based on player data |
11195018, | Apr 20 2017 | Snap Inc. | Augmented reality typography personalization system |
11207596, | Nov 17 2016 | Activision Publishing, Inc. | Systems and methods for the real-time generation of in-game, locally accessible barrier-aware heatmaps |
11213753, | Nov 17 2016 | Activision Publishing, Inc. | Systems and methods for the generation of heatmaps |
11224807, | May 14 2015 | Activision Publishing, Inc. | System and method for providing dynamically variable maps in a video game |
11263670, | Nov 19 2018 | ACTIVISION PUBLISHING, INC | Systems and methods for dynamically modifying video game content based on non-video gaming content being concurrently experienced by a user |
11263797, | Nov 02 2017 | Max-Planck-Gesellschaft Zur Forderung Der Wissenschaften | Real-time potentially visible set for streaming rendering |
11278813, | Dec 22 2017 | ACTIVISION PUBLISHING, INC | Systems and methods for enabling audience participation in bonus game play sessions |
11305191, | Dec 20 2018 | ACTIVISION PUBLISHING, INC | Systems and methods for controlling camera perspectives, movements, and displays of video game gameplay |
11310346, | Oct 21 2015 | Activision Publishing, Inc. | System and method of generating and distributing video game streams |
11315331, | Oct 30 2015 | Snap Inc. | Image based tracking in augmented reality systems |
11335067, | Sep 15 2017 | Snap Inc. | Augmented reality system |
11344808, | Jun 28 2019 | ACTIVISION PUBLISHING, INC | Systems and methods for dynamically generating and modulating music based on gaming events, player profiles and/or player reactions |
11351459, | Aug 18 2020 | Activision Publishing, Inc. | Multiplayer video games with virtual characters having dynamically generated attribute profiles unconstrained by predefined discrete values |
11351466, | Dec 05 2014 | ACTIVISION PUBLISHING, ING.; ACTIVISION PUBLISHING, INC | System and method for customizing a replay of one or more game events in a video game |
11380051, | Nov 30 2015 | Snap Inc. | Image and point cloud based tracking and in augmented reality systems |
11405643, | Aug 15 2017 | Nokia Technologies Oy | Sequential encoding and decoding of volumetric video |
11413536, | Dec 22 2017 | Activision Publishing, Inc. | Systems and methods for managing virtual items across multiple video game environments |
11420119, | May 14 2015 | Activision Publishing, Inc. | Systems and methods for initiating conversion between bounded gameplay sessions and unbounded gameplay sessions |
11420122, | Dec 23 2019 | Activision Publishing, Inc.; ACTIVISION PUBLISHING, INC | Systems and methods for controlling camera perspectives, movements, and displays of video game gameplay |
11422671, | Jun 22 2012 | Matterport, Inc. | Defining, displaying and interacting with tags in a three-dimensional model |
11423556, | Dec 06 2016 | Activision Publishing, Inc. | Methods and systems to modify two dimensional facial images in a video to generate, in real-time, facial images that appear three dimensional |
11423605, | Nov 01 2019 | ACTIVISION PUBLISHING, INC | Systems and methods for remastering a game space while maintaining the underlying game simulation |
11439904, | Nov 11 2020 | ACTIVISION PUBLISHING, INC | Systems and methods for imparting dynamic and realistic movement to player-controlled avatars in video games |
11439909, | Apr 01 2016 | Activision Publishing, Inc. | Systems and methods of generating and sharing social messages based on triggering events in a video game |
11446582, | Dec 31 2012 | Activision Publishing, Inc. | System and method for streaming game sessions to third party gaming consoles |
11450019, | Dec 17 2018 | Microsoft Technology Licensing, LLC | Detecting objects in crowds using geometric context |
11450050, | Aug 31 2018 | Snap Inc. | Augmented reality anthropomorphization system |
11524234, | Aug 18 2020 | Activision Publishing, Inc. | Multiplayer video games with virtual characters having dynamically modified fields of view |
11524237, | May 14 2015 | Activision Publishing, Inc. | Systems and methods for distributing the generation of nonplayer characters across networked end user devices for use in simulated NPC gameplay sessions |
11537209, | Dec 17 2019 | ACTIVISION PUBLISHING, INC | Systems and methods for guiding actors using a motion capture reference system |
11551408, | Dec 28 2016 | Panasonic Intellectual Property Corporation of America | Three-dimensional model distribution method, three-dimensional model receiving method, three-dimensional model distribution device, and three-dimensional model receiving device |
11551410, | Jun 22 2012 | Matterport, Inc. | Multi-modal method for interacting with 3D models |
11563774, | Dec 27 2019 | Activision Publishing, Inc.; ACTIVISION PUBLISHING, INC | Systems and methods for tracking and identifying phishing website authors |
11600046, | Mar 19 2014 | Matterport, Inc. | Selecting two-dimensional imagery data for display within a three-dimensional model |
11666831, | Dec 22 2017 | Activision Publishing, Inc. | Systems and methods for determining game events based on a crowd advantage of one or more players in the course of a multi-player video game play session |
11676319, | Aug 31 2018 | Snap Inc. | Augmented reality anthropomorphtzation system |
11679330, | Dec 18 2018 | ACTIVISION PUBLISHING, INC | Systems and methods for generating improved non-player characters |
11679333, | Oct 21 2015 | Activision Publishing, Inc. | Methods and systems for generating a video game stream based on an obtained game log |
11704703, | Nov 19 2018 | Activision Publishing, Inc. | Systems and methods for dynamically modifying video game content based on non-video gaming content being concurrently experienced by a user |
11709551, | Dec 17 2019 | Activision Publishing, Inc. | Systems and methods for guiding actors using a motion capture reference system |
11712627, | Nov 08 2019 | Activision Publishing, Inc. | System and method for providing conditional access to virtual gaming items |
11717753, | Sep 29 2020 | Activision Publishing, Inc. | Methods and systems for generating modified level of detail visual assets in a video game |
11721080, | Sep 15 2017 | Snap Inc. | Augmented reality system |
11724188, | Sep 29 2020 | Activision Publishing, Inc. | Methods and systems for selecting a level of detail visual asset during the execution of a video game |
11741530, | Feb 23 2017 | Activision Publishing, Inc. | Flexible online pre-ordering system for media |
11748579, | Feb 20 2017 | Snap Inc. | Augmented reality speech balloon system |
11769307, | Oct 30 2015 | Snap Inc. | Image based tracking in augmented reality systems |
11794104, | Nov 11 2020 | Activision Publishing, Inc. | Systems and methods for pivoting player-controlled avatars in video games |
11794107, | Dec 30 2020 | Activision Publishing, Inc. | Systems and methods for improved collision detection in video games |
11806626, | Dec 22 2017 | Activision Publishing, Inc. | Systems and methods for incentivizing player participation in bonus game play sessions |
11833423, | Sep 29 2020 | Activision Publishing, Inc. | Methods and systems for generating level of detail visual assets in a video game |
11839814, | Dec 23 2019 | Activision Publishing, Inc. | Systems and methods for controlling camera perspectives, movements, and displays of video game gameplay |
11853439, | Dec 30 2020 | Activision Publishing, Inc. | Distributed data storage system providing enhanced security |
11857876, | May 14 2015 | Activision Publishing, Inc. | System and method for providing dynamically variable maps in a video game |
11861795, | Feb 17 2017 | Snap Inc. | Augmented reality anamorphosis system |
11883745, | Nov 19 2018 | Activision Publishing, Inc. | Systems and methods for providing a tailored video game based on a player defined time period |
11896905, | May 14 2015 | Activision Publishing, Inc. | Methods and systems for continuing to execute a simulation after processing resources go offline |
11911689, | Dec 19 2017 | Activision Publishing, Inc. | Synchronized, fully programmable game controllers |
6753875, | Aug 03 2001 | Hewlett-Packard Development Company, L.P. | System and method for rendering a texture map utilizing an illumination modulation value |
6978230, | Oct 10 2000 | International Business Machines Corporation; DASSAULT | Apparatus, system, and method for draping annotations on to a geometric surface |
6980690, | Jan 20 2000 | Canon Kabushiki Kaisha | Image processing apparatus |
7034832, | Apr 05 2001 | Konami Digital Entertainment Co., Ltd. | Computer readable medium storing 3-D image processing program, 3-D image processing method and device, 3-D image processing program, and video game device |
7062527, | Apr 19 2000 | RPX Corporation | Management and scheduling of a distributed rendering method and system |
7092983, | Apr 19 2000 | RPX Corporation | Method and system for secure remote distributed rendering |
7148861, | Mar 01 2003 | The Boeing Company | Systems and methods for providing enhanced vision imaging with decreased latency |
7348989, | Mar 07 2003 | Arch Vision, Inc. | Preparing digital images for display utilizing view-dependent texturing |
7362335, | Jul 19 2002 | RPX Corporation | System and method for image-based rendering with object proxies |
7439976, | May 29 2001 | Koninklijke Philips Electronics N.V. | Visual communication signal |
7508977, | Jan 20 2000 | Canon Kabushiki Kaisha | Image processing apparatus |
7528831, | Sep 18 2003 | Canon Europa N.V. | Generation of texture maps for use in 3D computer graphics |
7567246, | Jan 30 2003 | The University of Tokyo | Image processing apparatus, image processing method, and image processing program |
7619626, | Mar 01 2003 | The Boeing Company | Mapping images from one or more sources into an image for display |
7783695, | Apr 19 2000 | RPX Corporation | Method and system for distributed rendering |
7903121, | Mar 17 2008 | RPX Corporation | System and method for image-based rendering with object proxies |
7925391, | Jun 02 2005 | The Boeing Company | Systems and methods for remote display of an enhanced image |
8072456, | Jul 19 2002 | RPX Corporation | System and method for image-based rendering with object proxies |
8357040, | Jul 31 2007 | SG Gaming, Inc. | Templated three-dimensional wagering game features |
8583724, | Apr 28 2000 | Nvidia Corporation | Scalable, multi-user server and methods for rendering images from interactively customizable scene information |
8666657, | Feb 21 2008 | Visual Real Estate, Inc. | Methods for and apparatus for generating a continuum of three-dimensional image data |
8874284, | Jun 02 2005 | The Boeing Company | Methods for remote display of an enhanced image |
8884984, | Oct 15 2010 | Microsoft Technology Licensing, LLC | Fusing virtual content into real content |
8890866, | Aug 31 2004 | Visual Real Estate, Inc. | Method and apparatus of providing street view data of a comparable real estate property |
8902226, | Aug 31 2004 | Visual Real Estate, Inc. | Method for using drive-by image data to generate a valuation report of a selected real estate property |
8930844, | Aug 22 2000 | | Network repository of digitalized 3D object models, and networked generation of photorealistic images based upon these models |
9098870, | Aug 31 2004 | Visual Real Estate, Inc. | Internet-accessible real estate marketing street view system and method |
9122053, | Oct 15 2010 | Microsoft Technology Licensing, LLC | Realistic occlusion for a head mounted augmented reality display |
9165397, | Jun 19 2013 | GOOGLE LLC | Texture blending between view-dependent texture and base texture in a geographic information system |
9171402, | Jun 19 2013 | GOOGLE LLC | View-dependent textures for interactive geographic information system |
9264765, | Aug 10 2012 | Sovereign Peak Ventures, LLC | Method for providing a video, transmitting device, and receiving device |
9311396, | Aug 31 2004 | Visual Real Estate, Inc. | Method of providing street view data of a real estate property |
9311397, | Aug 31 2004 | Visual Real Estate, Inc. | Method and apparatus of providing street view data of a real estate property |
9348141, | Oct 27 2010 | Microsoft Technology Licensing, LLC | Low-latency fusing of virtual and real content |
9374552, | Nov 11 2013 | Amazon Technologies, Inc | Streaming game server video recorder |
9384277, | Aug 31 2004 | Visual Real Estate, Inc. | Three dimensional image data models |
9413830, | Nov 11 2013 | Amazon Technologies, Inc | Application streaming service |
9437034, | Dec 15 2014 | GOOGLE LLC | Multiview texturing for three-dimensional models |
9471997, | Nov 11 2013 | Amazon Technologies, Inc. | Image composition based on remote object data |
9514546, | Nov 11 2013 | Amazon Technologies, Inc. | Image composition based on remote object data |
9547920, | Nov 11 2013 | Amazon Technologies, Inc. | Image composition based on remote object data |
9549008, | Nov 11 2013 | Amazon Technologies, Inc. | Adaptive content transmission |
9578074, | Nov 11 2013 | Amazon Technologies, Inc | Adaptive content transmission |
9582904, | Nov 11 2013 | Amazon Technologies, Inc | Image composition based on remote object data |
9596280, | Nov 11 2013 | Amazon Technologies, Inc | Multiple stream content presentation |
9604139, | Nov 11 2013 | Amazon Technologies, Inc | Service for generating graphics object data |
9608934, | Nov 11 2013 | Amazon Technologies, Inc | Efficient bandwidth estimation |
9626790, | Jun 19 2013 | GOOGLE LLC | View-dependent textures for interactive geographic information system |
9634942, | Nov 11 2013 | Amazon Technologies, Inc | Adaptive scene complexity based on service quality |
9641592, | Nov 11 2013 | Amazon Technologies, Inc | Location of actor resources |
9649551, | Jun 03 2008 | TweedleTech, LLC | Furniture and building structures comprising sensors for determining the position of one or more objects |
9672066, | Mar 12 2014 | LIVE PLANET LLC | Systems and methods for mass distribution of 3-dimensional reconstruction over network |
9704282, | Jun 19 2013 | GOOGLE LLC | Texture blending between view-dependent texture and base texture in a geographic information system |
9710973, | Oct 27 2010 | Microsoft Technology Licensing, LLC | Low-latency fusing of virtual and real content |
9805479, | Nov 11 2013 | Amazon Technologies, Inc | Session idle optimization for streaming server |
9808706, | Jun 03 2008 | TweedleTech, LLC | Multi-dimensional game comprising interactive physical and virtual components |
9849369, | Jun 03 2008 | TweedleTech, LLC | Board game with dynamic characteristic tracking |
RE45264, | Aug 31 2004 | Visual Real Estate, Inc. | Methods and apparatus for generating three-dimensional image data models |
Patent | Priority | Assignee | Title |
5561745, | Oct 16 1992 | Rockwell Collins Simulation And Training Solutions LLC | Computer graphics for animation by time-sequenced textures |
5699497, | Feb 17 1994 | Rockwell Collins Simulation And Training Solutions LLC | Rendering global macro texture, for producing a dynamic image, as on computer generated terrain, seen from a moving viewpoint |
5710875, | Sep 09 1994 | Fujitsu Limited | Method and apparatus for processing 3-D multiple view images formed of a group of images obtained by viewing a 3-D object from a plurality of positions |
5805782, | Jul 09 1993 | Microsoft Technology Licensing, LLC | Method and apparatus for projective texture mapping rendered from arbitrarily positioned and oriented light source |
6072496, | Jun 08 1998 | Microsoft Technology Licensing, LLC | Method and system for capturing and representing 3D geometry, color and shading of facial expressions and other animated objects |
6289380, | Jul 18 1996 | CA, Inc. | Network management system using virtual reality techniques to display and simulate navigation to network components |
6307567, | Dec 29 1996 | Hanger Solutions, LLC | Model-based view extrapolation for interactive virtual reality systems |
6320596, | Feb 24 1999 | Intel Corporation | Processing polygon strips |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Nov 09 1999 | IBM Corporation | (assignment on the face of the patent) | | | |
Nov 09 1999 | KLOSOWSKI, JAMES T | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 010381 | /0589 |
Nov 09 1999 | SUITS, FRANK | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 010381 | /0589 |
Nov 09 1999 | HORN, WILLIAM P | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 010381 | /0589 |
Dec 31 2012 | International Business Machines Corporation | ACTIVISION PUBLISHING, INC | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 029900 | /0285 |
Jan 31 2014 | ACTIVISION PUBLISHING, INC | BANK OF AMERICA, N A | SECURITY AGREEMENT | 032240 | /0257 | |
Oct 14 2016 | BANK OF AMERICA, N.A. | ACTIVISION ENTERTAINMENT HOLDINGS, INC | RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS) | 040381 | /0487 |
Oct 14 2016 | BANK OF AMERICA, N.A. | ACTIVISION PUBLISHING, INC | RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS) | 040381 | /0487 |
Oct 14 2016 | BANK OF AMERICA, N.A. | ACTIVISION BLIZZARD INC | RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS) | 040381 | /0487 |
Oct 14 2016 | BANK OF AMERICA, N.A. | BLIZZARD ENTERTAINMENT, INC | RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS) | 040381 | /0487 |
Date | Maintenance Fee Events |
Jun 30 2006 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity. |
Jul 16 2010 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity. |
Aug 25 2014 | M1553: Payment of Maintenance Fee, 12th Year, Large Entity. |
Date | Maintenance Schedule |
Feb 25 2006 | 4 years fee payment window open |
Aug 25 2006 | 6 months grace period start (w surcharge) |
Feb 25 2007 | patent expiry (for year 4) |
Feb 25 2009 | 2 years to revive unintentionally abandoned end. (for year 4) |
Feb 25 2010 | 8 years fee payment window open |
Aug 25 2010 | 6 months grace period start (w surcharge) |
Feb 25 2011 | patent expiry (for year 8) |
Feb 25 2013 | 2 years to revive unintentionally abandoned end. (for year 8) |
Feb 25 2014 | 12 years fee payment window open |
Aug 25 2014 | 6 months grace period start (w surcharge) |
Feb 25 2015 | patent expiry (for year 12) |
Feb 25 2017 | 2 years to revive unintentionally abandoned end. (for year 12) |