A method for managing a display space for a 3D environment is provided. A 3D scene having at least one scene object is displayed, and the visible surfaces of the scene objects are represented as visible space in a 2D view-plane representation. Controllable objects that are to be placed in the scene are defined by parameters such as size, placement priority, proximity relationships, and the like. The available space for placing controllable objects, which can include empty space and low-priority background and foreground regions, is determined for each controllable object. The placement of controllable objects in the 3D space is then determined in accordance with at least one placement parameter and one of the visible space and available space of the view-plane representation, such that view-management objectives, such as not occluding important scene objects, are accomplished.
12. A method of annotating scene objects in a frame of a 3D environment comprising:
determining visible surfaces of a plurality of non-controllable scene objects in the 3D environment and representing the visible surfaces of the plurality of non-controllable scene objects as visible space in a view-plane representation, using a computer processor, wherein at least one of the plurality of non-controllable scene objects overlaps with another of the plurality of non-controllable scene objects in the view-plane representation;
defining at least one annotation object associated with one of the plurality of non-controllable scene objects;
for each of the at least one annotation object, if the annotation object fits within the visible space of the associated non-controllable scene object, then placing the annotation object within the extents of the visible surfaces of the associated non-controllable scene object in the 3D environment, otherwise, if the annotation object does not fit within the visible space of the associated non-controllable scene object, determining an available space for the annotation object in the view-plane representation and placing the annotation object in the 3D environment in accordance with at least one placement parameter and the available space for the annotation object in the view-plane representation; and
displaying the annotation objects in the 3D environment in accordance with the placement,
wherein the annotation objects are displayed using a head-tracked display device.
17. A method for managing a display space for a 3D environment comprising:
determining visible surfaces of a plurality of non-controllable scene objects in the 3D environment and representing the visible surfaces of the plurality of non-controllable scene objects as visible space in a view-plane representation, using a computer processor, wherein at least one of the plurality of non-controllable scene objects overlaps with another of the plurality of non-controllable scene objects in the view-plane representation;
defining at least one controllable object to be placed in the 3D environment;
for each controllable object, determining an available space in the view-plane representation; and
determining a placement of each controllable object in the 3D environment in accordance with at least one placement parameter and the available space for the controllable object in the view-plane representation,
wherein a particular controllable object is associated with one of the plurality of non-controllable scene objects and wherein, if the particular controllable object fits within the visible space of the associated non-controllable scene object, then the placement of the particular controllable object is determined to be within the extents of the visible surfaces of the associated non-controllable scene object, otherwise, if the particular controllable object does not fit within the visible space of the associated non-controllable scene object, then the placement of the particular controllable object in the 3D environment is determined in accordance with at least one placement parameter and the available space for the controllable object in the view-plane representation.
1. A method for managing a display space for a 3D environment comprising:
determining visible surfaces of a plurality of non-controllable scene objects in the 3D environment and representing the visible surfaces of the plurality of non-controllable scene objects as visible space in a view-plane representation, using a computer processor, wherein at least one of the plurality of non-controllable scene objects overlaps with another of the plurality of non-controllable scene objects in the view-plane representation;
defining at least one controllable object to be placed in the 3D environment;
for each controllable object, determining an available space in the view-plane representation; and
determining a placement of each controllable object in the 3D environment in accordance with at least one placement parameter and the available space for the controllable object in the view-plane representation,
wherein the controllable objects are displayed using a head-tracked display device,
wherein a particular controllable object is associated with one of the plurality of non-controllable objects and wherein, if the particular controllable object fits within the visible space of the associated non-controllable object, then the placement of the particular controllable object is determined to be within the extents of the visible surfaces of the associated non-controllable object, otherwise, if the particular controllable object does not fit within the visible space of the associated non-controllable object, then the placement of the particular controllable object in the 3D environment is determined in accordance with at least one placement parameter and the available space for the controllable object in the view-plane representation.
2. The method for managing a display space of
3. The method for managing a display space of
4. The method for managing a display space of
5. The method for managing a display space of
6. The method for managing a display space of
7. The method for managing a display space of
8. The method for managing a display space of
9. The method for managing a display space of
10. The method for managing a display space of
11. The method for managing a display space of
13. The method for annotating scene objects of
14. The method for annotating scene objects of
15. The method for annotating scene objects of
16. The method for annotating scene objects of
18. The method for managing a display space of
19. The method for managing a display space of
20. The method for managing a display space of
21. The method for managing a display space of
22. The method for managing a display space of
23. The method for managing a display space of
24. The method for managing a display space of
25. The method for managing a display space of
26. The method for managing a display space of
27. The method for managing a display space of
28. The method for managing a display space of
29. The method for managing a display space of
30. The method for managing a display space of
determining the placement of a first controllable object in accordance with at least one placement parameter and the available space for the controllable object in the view-plane representation; and
determining the placement of a second controllable object in accordance with at least one placement parameter, the available space for the controllable object in the view-plane representation, and the placement of the first controllable object.
31. The method for managing a display space of
determining the placement of a first controllable object in accordance with at least one placement parameter and the available space for the controllable object in the view-plane representation; and
determining the placement of a second controllable object in accordance with at least one placement parameter, the available space for the controllable object in the view-plane representation, and the placement of the first controllable object.
This application is a continuation of U.S. patent application Ser. No. 10/477,872, filed Jun. 14, 2004, now U.S. Pat. No. 7,643,024, which is a national phase of International Application PCT/US02/015576, filed May 16, 2002, which claims the benefit of U.S. Provisional Patent Application Ser. No. 60/291,798, filed on May 17, 2001, entitled View Management For Virtual And Augmented Reality, the contents of which are hereby incorporated by reference in their entirety.
The present invention was made in part with support from the National Library of Medicine, Grant No. 5-R01 LM06593-02, and the Office of Naval Research, Contract Nos. N00014-99-1-0683, N00014-99-1-0249 and N00014-99-1-0394. Accordingly, the United States government may have certain rights in this invention.
The present invention relates generally to three-dimensional imaging and more particularly to a system and method for managing the placement of controllable objects in a three-dimensional projection.
Computer graphics systems which are commonly used today generally provide a representation of the workspace, or display screen, occupied by the various elements of the scene. Designing a graphical user interface (GUI) for viewing and manipulating a virtual three-dimensional (3D) space requires creating a set of objects and their properties, arranging them in a scene, setting a viewing specification, determining lighting and rendering parameters, and deciding how to update these decisions for each frame. Some of these decisions may be fully constrained; for example, a simulation may determine the position and shape of certain objects, or the viewing specification may be explicitly controlled by the user. In contrast, other decisions must be resolved by the GUI designer. Of particular interest are those decisions that determine the spatial layout of the projections of objects on the view plane. These decisions can be referred to collectively as view management. For example, some objects may be sufficiently important to the user's task that they should not be occluded. In addition, the members of a group of related objects may need to be placed together to emphasize their relationship.
In a static scene, observed from a fixed viewing specification, view-management decisions might be made in advance, by hand, and remain valid throughout the life of an application. It is also common in both 2D and 3D interactive GUIs to avoid automating view management when possible. For example, a fixed area of the screen may be dedicated to a menu, or the user may explicitly control the positions of permanent menus or temporary pop-up menus, or the positions and sizes of windows. However, hard-wired or direct-manipulation control becomes problematic when applied to dynamic scenes that include autonomous objects and to head-tracked displays. In this type of situation, continual and unpredictable changes in object geometry or viewing specification result in continual changes in the spatial and visibility relationships among the projections on the view plane. In these cases, view-management decisions must be made on the fly if they are to take dynamic changes into account.
Augmented reality applications are especially challenging in this regard. Virtual and physical objects reside in the same 3D space and there may be no way to control the behavior of many of the physical objects. For example, the view through an optical see-through head-worn display includes all the physical objects that occupy the user's field of view in addition to the virtual objects being displayed. In this case, the portion of the field of view that can be augmented may be relatively small.
It would be desirable to manage the virtual space such that objects could be added or moved in a controlled manner. For example, it would be desirable if virtual annotations could be added to the virtual space and interspersed among the objects they describe and reconfigured automatically and understandably to take into account changes in the objects themselves and how they are viewed.
A method for managing a display space for a 3D environment includes the steps of determining the visible surfaces of at least one first object in a 3D environment and representing the visible surfaces of the at least one first object as visible space in a view-plane representation. A plurality of controllable objects to be placed in the scene are defined and for each controllable object, the available space in the view-plane representation is determined. The placement of each controllable object in the 3D scene is then determined in accordance with at least one placement parameter and the available space for the controllable object in the view-plane representation.
Also in accordance with the present invention is a method of annotating scene objects in a frame of a 3D environment. The method includes determining the visible surfaces of at least one scene object in a 3D environment and representing the visible surfaces as visible space in a view-plane representation. At least one annotation object associated with at least one scene object to be placed in the scene is defined. If the annotation object fits within the visible surfaces of the associated scene object, then the annotation object is placed within the extents of the visible surfaces of the associated scene object. The placement of annotation objects which cannot be placed within the extents of the visible surfaces of the associated scene object is determined in accordance with at least one placement parameter and the available space of the view-plane representation. The annotation objects can then be displayed in the 3D environment in accordance with the determined placement.
Further objects, features and advantages of the invention will become apparent from the following detailed description taken in conjunction with the accompanying figures showing illustrative embodiments of the invention, in which:
Throughout the figures, the same reference numerals and characters, unless otherwise stated, are used to denote like features, elements, components or portions of the illustrated embodiments. Moreover, while the subject invention will now be described in detail with reference to the figures, it is done so in connection with the illustrative embodiments. It is intended that changes and modifications can be made to the described embodiments without departing from the true scope and spirit of the subject invention as defined by the appended claims.
Controllable objects are associated with a variety of properties and placement constraints that determine if and where the objects will be placed in the 3D space. The object properties can include minimum and maximum object size, minimum font size, aspect ratio and other constraints on how the object can appear in the space. The placement constraints generally relate to where the object can be placed. Placement constraints can refer to the image space in general, to non-controllable scene objects, and to other controllable objects. The placement constraints can be a single constraint, such as "always place A centered within B," or a more flexible set of hierarchical rules which determine placement based on a prioritized rule set. For example, a rule set for placing controllable object A with respect to object B could be, "if B is large enough to accept A with minimum font=8 point, place A within B, else place A above B without overlapping any other scene object." These two examples are not exhaustive and merely serve to illustrate the concept of placement parameters for controllable objects.
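By way of illustration, such a prioritized rule set can be expressed in code as an ordered list of rules that are tried in turn. The following Java sketch is hypothetical; the interface, class, and rule names are assumptions for this example and are not the described system's API, and the overlap test named in the second rule above is omitted for brevity.

```java
import java.awt.Rectangle;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of a prioritized placement rule set: rules are
// tried in priority order and the first applicable rule decides placement.
interface PlacementRule {
    // Returns a placement for a label of the given size relative to
    // 'anchor', or null if this rule's precondition does not hold.
    Rectangle tryPlace(Rectangle anchor, int labelW, int labelH);
}

public class RuleSetDemo {
    // Rule 1: if the anchor is large enough, center the label inside it.
    static final PlacementRule INSIDE = (a, w, h) ->
        (a.width >= w && a.height >= h)
            ? new Rectangle(a.x + (a.width - w) / 2, a.y + (a.height - h) / 2, w, h)
            : null;

    // Rule 2: otherwise place the label directly above the anchor.
    // (A fuller rule would also reject placements overlapping other objects.)
    static final PlacementRule ABOVE = (a, w, h) -> new Rectangle(a.x, a.y - h, w, h);

    static Rectangle place(List<PlacementRule> rules, Rectangle anchor, int w, int h) {
        for (PlacementRule r : rules) {
            Rectangle p = r.tryPlace(anchor, w, h);
            if (p != null) return p;   // first applicable rule wins
        }
        return null;                   // no rule applied: object is not placed
    }

    public static void main(String[] args) {
        Rectangle building = new Rectangle(100, 100, 80, 40);
        System.out.println(place(Arrays.asList(INSIDE, ABOVE), building, 60, 20));
    }
}
```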
Based on the particular application, the object properties and placement constraints (placement parameters) will be defined for each controllable object (step 105). In certain applications, controllable objects can be grouped by object type, such as labels, which may have a common set, or subset, of placement constraints.
For a current frame of the 3D image space, the set of visible surfaces of objects placed in the view space from a selected view point will be determined (step 110). The visible surfaces of the non-controllable scene objects are projected into a 2D representation of the scene, referred to as a view-plane representation (step 115). Based on the view-plane representation and the properties and placement constraints of the controllable objects, such as labels, annotations, user-controlled icons, and the like, the available space for each controllable object is determined (step 120). As used herein, the term available space refers to space in the view-plane which satisfies the properties and placement constraints of the particular controllable object. Once the available space for the controllable objects is determined, the placement of the controllable objects is determined (step 125). From the determined placement in the view-plane, the controllable objects can be placed in the 3D space (step 130).
The controllable object properties can include a placement priority which determines the order in which the controllable objects will be placed. The placement constraints can relate to both non-controllable scene objects and other controllable objects. As a result, it can be desirable for the available space for the controllable objects to be determined in priority order, determining the placement of higher-priority objects first and determining the available space for subsequent controllable objects in consideration of such placement. For example, if lower-priority objects are not allowed to overlap higher-priority controllable objects, previously non-occupied space now occupied by a previously placed higher-priority object would no longer be available space for the lower-priority object. Such a "greedy algorithm" can be performed by repeating steps 115, 120 and 125 for each controllable object, in the order of priority. Preferably, only those portions of the view-plane representation affected by the placement of an object are recalculated when steps 115, 120 and 125 are repeated.
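A minimal sketch of this greedy, priority-ordered process follows. It is an illustrative assumption rather than the described implementation: for simplicity it decomposes free space into disjoint rectangles instead of maintaining the largest (possibly overlapping) empty-space rectangles discussed later, and all names and sizes are invented for the example.

```java
import java.awt.Rectangle;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Greedy placement sketch: higher-priority objects are placed first, and
// the space they occupy is subtracted from the free space seen by later ones.
public class GreedyPlacement {

    // Subtract 'used' from 'free', returning the up-to-four rectangular
    // pieces of 'free' that remain uncovered.
    static List<Rectangle> subtract(Rectangle free, Rectangle used) {
        List<Rectangle> out = new ArrayList<>();
        Rectangle ov = free.intersection(used);
        if (ov.isEmpty()) { out.add(free); return out; }
        int fy2 = free.y + free.height, fx2 = free.x + free.width;
        int oy2 = ov.y + ov.height, ox2 = ov.x + ov.width;
        if (ov.y > free.y) out.add(new Rectangle(free.x, free.y, free.width, ov.y - free.y));
        if (oy2 < fy2)     out.add(new Rectangle(free.x, oy2, free.width, fy2 - oy2));
        if (ov.x > free.x) out.add(new Rectangle(free.x, ov.y, ov.x - free.x, ov.height));
        if (ox2 < fx2)     out.add(new Rectangle(ox2, ov.y, fx2 - ox2, ov.height));
        return out;
    }

    public static void main(String[] args) {
        List<Rectangle> freeSpace = new ArrayList<>();
        freeSpace.add(new Rectangle(0, 0, 800, 600));   // whole view plane free

        // Objects to place, already sorted by descending placement priority.
        int[][] sizes = { {200, 50}, {120, 40}, {300, 300} };
        for (int[] s : sizes) {
            // One possible rule: pick the smallest free rectangle that fits.
            Rectangle best = freeSpace.stream()
                .filter(r -> r.width >= s[0] && r.height >= s[1])
                .min(Comparator.comparingInt(r -> r.width * r.height))
                .orElse(null);
            if (best == null) continue;                 // object is not placed
            Rectangle placed = new Rectangle(best.x, best.y, s[0], s[1]);
            System.out.println("placed " + placed);
            // Recompute only the affected free space (greedy update).
            List<Rectangle> next = new ArrayList<>();
            for (Rectangle f : freeSpace) next.addAll(subtract(f, placed));
            freeSpace = next;
        }
    }
}
```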
The available space for a controllable object is determined by that object's properties and placement constraints. However, in some cases, objects can be grouped by object type, such as labels, which share common properties and placement constraints. In certain cases, controllable objects will include placement constraints that prevent overlap with the visible surfaces of the scene objects. In such cases, in addition to having a data structure which defines the visible space in the view-plane, it can be efficient to maintain a second data structure describing the space in the view-plane which is not occupied by objects of interest. Although such areas in the 3D scene are not generally truly empty, these regions can collectively be referred to as non-occupied space and can be considered available for certain types of controllable objects. For example, it may be determined that any type of controllable object can be placed in regions of the 3D image space which are occupied by grass. In this case, any visible regions in the 3D image space identified as grass regions can be considered non-occupied space.
The present invention can be further described by way of example.
The use of the BSP tree will now be described in greater detail. Visible-surface determination can be performed by sorting the upright extents of the scene objects' projections in visibility order, such as front to back with respect to the view point. In the present case, visible-surface determination for interactive rendering is accomplished independently of the process for view-plane space determination, preferably by the use of specialized graphics hardware. A Binary Space Partitioning (BSP) Tree algorithm such as is described in the article entitled "Set Operations on Polyhedra Using Binary Space Partitioning Trees," by W. Thibault et al., Computer Graphics, 21(4), pp. 153-162, July 1987 (Proc. SIGGRAPH '87), which is hereby incorporated by reference in its entirety, can be used to efficiently produce the visibility order for an arbitrary projection of a scene. A BSP tree is a binary tree whose nodes typically represent actual polygons (or polygon fragments) in the 3D scene. Because in the present case it is desirable to determine the visibility order for objects, rather than for the polygons of which they are composed, BSP tree nodes in the present invention are generally defined by planes that separate objects, rather than planes that embed objects' polygons. The partitioning planes can be chosen by using the heuristics described by Thibault et al. in the article referenced above. Although BSP trees are often used for purely static scenes, dynamic objects can be handled efficiently by adding these objects to the tree last and removing and adding them each time the objects move.
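The following sketch illustrates the front-to-back traversal that produces a visibility order from such a tree. It is a simplified assumption for illustration: each node stores a single partitioning plane and opaque contents, and the plane-selection heuristics of Thibault et al. are omitted.

```java
import java.util.ArrayList;
import java.util.List;

// Front-to-back BSP-tree traversal sketch for visibility ordering.
public class BspTraversal {
    static class Node {
        double a, b, c, d;        // partitioning plane ax + by + cz + d = 0
        Object contents;          // object (or fragment) stored at this node
        Node front, back;         // subtrees on either side of the plane
        Node(double a, double b, double c, double d, Object o) {
            this.a = a; this.b = b; this.c = c; this.d = d; this.contents = o;
        }
    }

    // Appends node contents to 'order' from nearest to farthest as seen
    // from the eye point (ex, ey, ez).
    static void frontToBack(Node n, double ex, double ey, double ez, List<Object> order) {
        if (n == null) return;
        double side = n.a * ex + n.b * ey + n.c * ez + n.d;
        Node near = side >= 0 ? n.front : n.back;
        Node far  = side >= 0 ? n.back  : n.front;
        frontToBack(near, ex, ey, ez, order);   // everything on the eye's side first
        order.add(n.contents);
        frontToBack(far, ex, ey, ez, order);    // then everything beyond the plane
    }

    public static void main(String[] args) {
        Node root = new Node(1, 0, 0, 0, "object B");    // plane x = 0
        root.front = new Node(1, 0, 0, -5, "object A");  // plane x = 5
        List<Object> order = new ArrayList<>();
        frontToBack(root, 10, 0, 0, order);              // eye at x = 10
        System.out.println(order);                       // [object A, object B]
    }
}
```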
When visible surface determination is performed using a BSP tree algorithm, the process of generating the view-plane representation can be performed by traversing the BSP tree in front-to-back order relative to the view point to find the visible portions of the scene's objects. The 2D space representation is then used to determine approximations of the visible space and non-occupied space portions of the view plane. For each node obtained from the BSP tree in front-to-back order, the new node's upright extent is intersected with the members of the current list of largest non-occupied space rectangles. This can be performed efficiently by maintaining the set of largest non-occupied space rectangles in a 2D interval tree, such as that described by H. Samet in the text "The Design and Analysis of Spatial Data Structures," Addison-Wesley, Reading, Mass., 1990, or other suitable data structure, to allow an efficient window query to determine the members that actually intersect the extent. The intersection yields a set of rectangles, some of which may be wholly contained within others. Those rectangles which are subsumed in other rectangles are eliminated, resulting in a set of largest rectangles whose union is the visible portion of the new node.
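The intersection-and-pruning step can be sketched as follows. This is an illustrative assumption rather than the actual implementation: it scans the rectangle list linearly instead of querying a 2D interval tree, and the names are invented.

```java
import java.awt.Rectangle;
import java.util.ArrayList;
import java.util.List;

// Sketch of the per-node visibility step: intersect the node's upright
// extent with the current largest empty-space rectangles, then discard
// any resulting rectangle wholly contained in another.
public class VisibleExtent {
    static List<Rectangle> visiblePart(Rectangle nodeExtent, List<Rectangle> emptyRects) {
        List<Rectangle> pieces = new ArrayList<>();
        for (Rectangle e : emptyRects) {
            Rectangle r = e.intersection(nodeExtent);
            if (!r.isEmpty()) pieces.add(r);
        }
        // Eliminate rectangles subsumed by another piece.
        List<Rectangle> largest = new ArrayList<>();
        for (Rectangle r : pieces) {
            boolean subsumed = false;
            for (Rectangle other : pieces) {
                if (other != r && other.contains(r)) { subsumed = true; break; }
            }
            if (!subsumed) largest.add(r);
        }
        return largest;   // union of these is the node's visible portion
    }

    public static void main(String[] args) {
        List<Rectangle> empty = new ArrayList<>();
        empty.add(new Rectangle(0, 0, 100, 60));    // two overlapping largest
        empty.add(new Rectangle(40, 0, 100, 100));  // empty-space rectangles
        System.out.println(visiblePart(new Rectangle(20, 10, 60, 30), empty));
    }
}
```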
During the process of determining the visible surfaces in the scene, scene objects may be intersected by partitioning planes of the binary space partitioning tree and consequently represented on the view plane by two or more adjacent visible surface regions. As a result, some objects are represented by a single BSP tree node, while others are split across nodes. (Objects may be split during BSP tree construction, or split prior to BSP tree construction to better approximate a large object by the upright extents of a set of smaller objects.) Therefore, it is preferable for a node's visible rectangles to be coalesced with those of all previously processed nodes from the same object to create the list of largest rectangles in the union of the nodes.
For example, it is possible for a partitioning plane of the binary space partitioning tree to be placed adjacent to the right surface 410 of scene object 1 205. In this case, instead of the visible surfaces of scene object 2 being represented as two overlapping rectangles 610, 615, scene object 2 would be represented in separate nodes of the BSP tree as rectangle 815 and adjacent rectangle 610.
As discussed above, the visible surface of objects can be estimated by largest rectangles which encompass the extents of the object's visible surfaces. In certain cases, this may result in an inefficient allocation of the view plane. In order to more efficiently allocate the view plane, the extents of the visible surfaces of an object can be represented by a set of multiple, smaller rectangles which more closely follow the contours of the object. For example, consider an object whose visible extent is shaped like the letter T. This object, when estimated by a single rectangle, will be allocated far more space on the view plane than necessary. However, if the extent of the object's vertical portion is represented by a first rectangle and the extent of the horizontal portion by a second rectangle, it will be appreciated that the view plane now more efficiently represents the visible space of the object.
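This saving can be made concrete with a small worked example; the dimensions below are arbitrary assumptions chosen for illustration.

```java
import java.awt.Rectangle;

// Illustrative only: representing a T-shaped visible region by two
// tighter rectangles instead of one loose bounding rectangle.
public class TShape {
    public static void main(String[] args) {
        Rectangle bound = new Rectangle(0, 0, 90, 90);   // single bounding box
        Rectangle bar   = new Rectangle(0, 0, 90, 30);   // horizontal bar of the T
        Rectangle stem  = new Rectangle(30, 30, 30, 60); // vertical stem of the T
        int tight = bar.width * bar.height + stem.width * stem.height;
        // The bounding box reserves 8100 units; the two rectangles only 4500.
        System.out.println(bound.width * bound.height + " vs " + tight);
    }
}
```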
As noted above, empty space is generally not truly empty, but can include regions of the scene which have been deemed available for the placement of controllable objects.
A common form of controllable object is an annotation which provides information about an object in the 3D scene, such as a label which identifies a scene object. In order to be meaningful, such annotations should be uniquely associated with the object. One preferred way of associating an annotation with an object is to place the annotation entirely within the visible space of the object or, alternatively, to place it in closely located available space with a leader line connecting the annotation to the object. In addition, in many applications, such as labeling, it is undesirable for the annotations, once placed, to overlap each other.
If, in step 920, it is determined that the controllable object does not fit within the extents of a visible rectangle of the associated scene object, the controllable object will be placed in a suitable available space rectangle, generally near the associated scene object, in accordance with the placement constraints for the object. The view-plane representation is queried to determine the set of available space rectangles that are large enough to receive the controllable object (step 950). In the event that there is more than one acceptable available space rectangle, one or more placement parameters associated with the object can be used to select one available space rectangle from the set. For example, rules can include relationships such as: closest to, smallest, adjacent, above, below, left of, right of, and the like. Such rules can be used alone or in combination in selecting the appropriate available space rectangle. Once an available space rectangle is selected, the controllable object is placed within the selected available space rectangle at a size and position consistent with the placement constraints for the object (step 960).
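Such selection rules map naturally onto filters and comparators over the candidate rectangles, as in the hypothetical sketch below; the particular rule combination ("above" then "closest to") and all names are assumptions for illustration.

```java
import java.awt.Rectangle;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Sketch of selecting one available-space rectangle with rules of the
// kind named above, modeled as a filter ("above") plus a comparator
// ("closest to"). Screen coordinates: y increases downward.
public class RectangleSelection {
    static double dist(Rectangle r, Rectangle target) {
        double dx = r.getCenterX() - target.getCenterX();
        double dy = r.getCenterY() - target.getCenterY();
        return Math.hypot(dx, dy);
    }

    public static void main(String[] args) {
        Rectangle scene = new Rectangle(200, 200, 100, 100);   // associated scene object
        List<Rectangle> candidates = Arrays.asList(
            new Rectangle(0, 0, 150, 60),
            new Rectangle(210, 120, 120, 60),
            new Rectangle(600, 400, 300, 200));

        // Rule: among rectangles above the scene object, take the closest.
        Rectangle chosen = candidates.stream()
            .filter(r -> r.y + r.height <= scene.y)                // "above"
            .min(Comparator.comparingDouble(r -> dist(r, scene)))  // "closest to"
            .orElse(null);
        System.out.println(chosen);   // (210,120,120,60): above and nearest
    }
}
```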
In order to avoid overlap or occlusion of the newly placed controllable object, the space occupied by the controllable object is designated as visible space and the available space in the view plane is recalculated (step 970). The next object is selected (step 940) and the process is repeated for each controllable object to be placed. If the controllable object will not fit within any visible space rectangle or suitable available space rectangle, the controllable object will not be placed unless a rule giving this object priority is provided.
If, in step 910, it is determined that a controllable object to be placed is not associated with a scene object, placement for the object will be made in accordance with placement rules associated with the object and the scene. For example, a pop-up text box for user notes may have an associated rule that places this object in the uppermost available space rectangle large enough to receive it. Scene-based rules may also be used, such as maintaining a central region of the scene free of controllable objects, as may be desirable when using a see-through head-mounted display unit to view the 3D environment simultaneously with the surrounding physical environment and other users in a collaborative setting. In such a case, the controllable object is placed in accordance with the placement parameters (step 985) and the non-occupied space in the 2D view-plane is recalculated (step 990). The next object is selected (step 940) and the process repeats for the next controllable object with flow returning to step 910.
To avoid having a controllable object repeatedly jump between two positions as the scene undergoes minor changes in view point from one frame to the next, it is desirable to include state hysteresis in the positioning of the controllable objects. There are several situations in which objects may change state, resulting in a discrete visual change, such as from an internal label to an external one, or from being displayed to not being displayed. Some GUIs use three constraints on the size of an object being displayed: minimum size, maximum size, and preferred size (e.g., Java 2D).
In the present state hysteresis analysis, the position of an object in the previous frame is compared to an ideal position for the object in the current frame. If these two positions are not the same, a timer is initiated. If at the end of the timer interval, the current position of the object is not the ideal position, the object is then moved to the new ideal position. This may result in a momentary positioning which is not ideal, but avoids the undesirable jumping or flickering of the controllable objects within the image.
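A minimal sketch of this timer-based hysteresis appears below, assuming a fixed hold-off interval; the interval length and all names are assumptions, as the text does not specify them.

```java
// Positional hysteresis sketch: when the ideal position changes, a timer
// starts, and the object moves only if the disagreement persists for the
// full interval. Brief flicker in the ideal position is thereby ignored.
public class Hysteresis {
    static final long DELAY_MS = 500;   // assumed hold-off interval

    long disagreeSince = -1;            // -1: current and ideal agree
    int x, y;                           // current placed position

    // Called once per frame with the ideal position for that frame.
    void update(int idealX, int idealY, long nowMs) {
        if (x == idealX && y == idealY) {
            disagreeSince = -1;                       // agreement: reset timer
        } else if (disagreeSince < 0) {
            disagreeSince = nowMs;                    // start the hold-off timer
        } else if (nowMs - disagreeSince >= DELAY_MS) {
            x = idealX; y = idealY;                   // persisted: move the object
            disagreeSince = -1;
        }                                             // else: keep the old position
    }

    public static void main(String[] args) {
        Hysteresis h = new Hysteresis();
        h.update(10, 0, 0);      // ideal moved: timer starts, object stays at (0,0)
        h.update(10, 0, 600);    // still different after 500 ms: object moves
        System.out.println(h.x + "," + h.y);   // 10,0
    }
}
```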
In addition to state hysteresis, it is desirable to place an object being laid out in the 3D scene in roughly the same position as in the previous frame, relative to an associated object or, if the object is screen-stabilized, to a screen position. This is referred to as positional stability. For an object L being placed relative to an object A, two possible layouts can be computed: the best possible layout independent of the previous layout, and the closest possible layout to the previous layout. For example, when L is an internal label for A, the best possible layout may be to use the visible space in A that can contain the largest allowable version of L. To determine the closest possible layout, the position of L's centroid in the previous frame is computed relative to A's unclipped width and height in the previous frame. These proportions can then be used to compute, from A's unclipped width and height in the current frame, a predicted position LC for L's centroid. Next, the best and closest possible layouts are compared. If they are the same, then this layout is used. If the layouts are different, a timer is initiated, and if the best and closest layouts fail to coincide after a set amount of time, the best position is selected and the timer is reset.
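The centroid prediction can be sketched as follows; the method and field names are assumptions, but the arithmetic follows the proportional scheme described above.

```java
import java.awt.Rectangle;
import java.awt.geom.Point2D;

// Positional-stability sketch: the label centroid's position, expressed
// as fractions of the associated object A's unclipped width and height in
// the previous frame, is reapplied to A's bounds in the current frame to
// predict the centroid position LC.
public class PositionalStability {
    static Point2D predictCentroid(Rectangle prevA, Point2D prevLCentroid, Rectangle curA) {
        double fx = (prevLCentroid.getX() - prevA.x) / prevA.width;    // fraction across A
        double fy = (prevLCentroid.getY() - prevA.y) / prevA.height;   // fraction down A
        return new Point2D.Double(curA.x + fx * curA.width,
                                  curA.y + fy * curA.height);
    }

    public static void main(String[] args) {
        Rectangle prevA = new Rectangle(100, 100, 200, 100);
        Point2D prevL = new Point2D.Double(150, 125);       // label centroid last frame
        Rectangle curA = new Rectangle(300, 250, 100, 50);  // A moved and shrank
        System.out.println(predictCentroid(prevA, prevL, curA));  // LC = (325.0, 262.5)
    }
}
```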
A third method of improving temporal continuity is to interpolate between certain kinds of discrete changes. For example, to minimize the effect of discontinuous jumps during the state changes discussed above, L is interpolated from its previous position and scale to its new ones. In changing from internal to external annotations, the object or leader line can also grow or shrink.
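A minimal sketch of such interpolation over position and scale, with invented values for illustration:

```java
// Interpolation sketch: the label moves and rescales smoothly from its
// previous layout to its new one over a short transition.
public class LayoutLerp {
    static double lerp(double a, double b, double t) { return a + (b - a) * t; }

    public static void main(String[] args) {
        double[] prev = {100, 100, 1.0};   // x, y, scale in the previous layout
        double[] next = {300, 150, 0.5};   // x, y, scale in the new layout
        for (double t = 0; t <= 1.0001; t += 0.25) {
            System.out.printf("t=%.2f -> (%.0f, %.0f) x%.2f%n",
                t, lerp(prev[0], next[0], t), lerp(prev[1], next[1], t), lerp(prev[2], next[2], t));
        }
    }
}
```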
In placing the controllable object into the 3D scene, it is preferable to place the controllable object at a depth, e.g., the z-dimension in the 3D scene, which corresponds to the associated scene object. In the case of stereo display devices, the present invention can be implemented by determining the view plane for a single eye, for an average position between the two eyes, or individually for each eye.
The present system and methods have been implemented in Java 1.3 with Java 3D 1.2.1.01. The software was operated on a 1.4 GHz Intel Pentium 4 processor with 512 MB RAM and a SONICBlue FireGL 2 graphics board, running Windows 2000. The visible-surface processing performed by the methods described herein is only used for view-management operations. Rendering is accomplished through a separate processing engine, such as Java3D. While performance depends on the complexity of the scene, the present system runs at about 10-25 frames per second in stereo for an 800×600 resolution Sony LD1-D100B head-worn display (with the view-plane representation computed for a single eye).
The present invention can be implemented on various computer platforms and use various input devices and display devices. Examples include a conventional CRT display of a 3D space, such as a college campus, wherein the scene objects represent buildings on the campus and the controllable objects include labels identifying the building names and selectable annotations, including text or multimedia supplements associated with a particular building. Thus, an interactive tour can be provided in a 3D environment to one or more users in either a stationary or mobile embodiment. More sophisticated examples of the present invention include stationary or mobile augmented reality environments where multiple users engage in a collaboration with respect to a 3D environment which is presented to each user through see-through head-mounted, handheld, or stationary display units, such as the Sony LD1-D100B head-worn display. In a collaborative setting, controllable objects can be further defined as "private" to a particular user and "public" to all or a group of collaborators.
Although the present invention has been described in connection with specific exemplary embodiments, it should be understood that various changes, substitutions and alterations can be made to the disclosed embodiments without departing from the spirit and scope of the invention as set forth in the appended claims.
Feiner, Steven K., Bell, Blaine A., Hoellerer, Tobias H.
| Patent | Priority | Assignee | Title |
| 4642790 | Mar 31, 1983 | International Business Machines Corporation | Presentation space management and viewporting on a multifunction virtual terminal |
| 4819189 | May 26, 1986 | Kabushiki Kaisha Toshiba | Computer system with multiwindow presentation manager |
| 5430831 | Mar 19, 1991 | Koninklijke KPN N.V. | Method of packing rectangular objects in a rectangular area or space by determination of free subareas or subspaces |
| 5515494 | Dec 17, 1992 | Samsung Electronics Co., Ltd. | Graphics control planes for windowing and other display operations |
| 5574836 | Jan 22, 1996 | Pixel Kiretix, Inc. | Interactive display apparatus and method with viewer position compensation |
| 5657463 | Jan 19, 1994 | Apple Computer, Inc. | Method and apparatus for positioning a new window on a display screen based on an arrangement of previously-created windows |
| 5825363 | May 24, 1996 | Microsoft Technology Licensing, LLC | Method and apparatus for determining visible surfaces |
| 5835692 | Nov 21, 1994 | Activision Publishing, Inc. | System and method for providing mapping notation in interactive video displays |
| 5982389 | Jun 17, 1996 | Microsoft Technology Licensing, LLC | Generating optimized motion transitions for computer animated objects |
| 6008809 | Sep 22, 1997 | International Business Machines Corporation | Apparatus and method for viewing multiple windows within a dynamic window |
| 6023275 | Apr 30, 1996 | Microsoft Technology Licensing, LLC | System and method for resizing an input position indicator for a user interface of a computer system |
| 6115052 | Feb 12, 1998 | Mitsubishi Electric Research Laboratories, Inc. | System for reconstructing the 3-dimensional motions of a human figure from a monocularly-viewed image sequence |
| 6215496 | Jul 23, 1998 | Microsoft Technology Licensing, LLC | Sprites with depth |
| 6266064 | May 29, 1998 | Microsoft Technology Licensing, LLC | Coherent visibility sorting and occlusion cycle detection for dynamic aggregate geometry |
| 6344863 | Nov 24, 1999 | International Business Machines Corporation | Three-dimensional GUI windows with variable-speed perspective movement |
| 6359603 | Nov 18, 1995 | Meta Platforms, Inc. | Portable display and methods of controlling same |
| 6654036 | Jun 5, 2000 | Google LLC | Method, article of manufacture and apparatus for controlling relative positioning of objects in a windows environment |
| 6690393 | Dec 24, 1999 | Koninklijke Philips Electronics N.V. | 3D environment labelling |
| 6928621 | Jun 11, 1993 | Apple Inc. | System with graphical user interface including automatic enclosures |
| 7404147 | Apr 24, 2000 | The Trustees of Columbia University in the City of New York | System and method for dynamic space management of a display space |
| 7643024 | May 17, 2001 | The Trustees of Columbia University in the City of New York | System and method for view management in three dimensional space |
| 8234580 | Apr 24, 2000 | The Trustees of Columbia University in the City of New York | System and method for dynamic space management of a display space |
| 20090037841 | | | |
| WO 01/82279 | | | |
| WO 95/12194 | | | |