Computer-implemented systems and methods are provided for displaying an object having three-dimensional characteristics on a user interface. Metadata and shape data related to a three-dimensional object are received. The shape data defines the three-dimensional object in a three-dimensional coordinate format. Computer instructions are generated that define region(s) of the object in a two-dimensional format. The instructions include the metadata and allow a user to access metadata associated with the image being displayed to a user.
1. A computer-implemented method for use in displaying an image representative of a three-dimensional object, comprising the steps of:
receiving metadata and shape data related to a three-dimensional object;
wherein the shape data defines the three-dimensional object in a three-dimensional coordinate format;
wherein the metadata is associated with the three-dimensional object;
defining a region of the object using a two-dimensional format;
wherein the defined region is associated with at least a portion of the metadata;
wherein the defined region allows a user to access metadata associated with an image being displayed to a user;
defining additional regions of the object using a two-dimensional coordinate format;
displaying the defined region and the additional regions to the user;
wherein collectively the displayed defined region and the additional regions are substantially a visual duplicate of the object.
25. A computer-readable storage medium encoded with instructions that cause a computer to perform a method for use in displaying an image representative of a three-dimensional object, said method comprising the steps of:
receiving metadata and shape data related to a three-dimensional object;
wherein the shape data defines the three-dimensional object in a three-dimensional coordinate format;
wherein the metadata is associated with the three-dimensional object;
defining a region of the object using a two-dimensional format;
wherein the defined region is associated with at least a portion of the metadata;
wherein the defined region allows a user to access metadata associated with an image being displayed to a user;
defining additional regions of the object using a two-dimensional coordinate format;
displaying the defined region and the additional regions to the user;
wherein collectively the displayed defined region and the additional regions are substantially a visual duplicate of the object.
18. A computer-implemented apparatus for providing a three-dimensional object for display on a user interface, comprising:
means for receiving metadata and shape data related to a three-dimensional object;
wherein the shape data defines the three-dimensional object in a three-dimensional coordinate format;
wherein the metadata is associated with the three-dimensional object;
means for generating markup language instructions that define a region of the object using a two-dimensional coordinate format;
wherein the defined region is associated with at least a portion of the metadata;
means for generating markup language instructions that define additional regions of the object using a two-dimensional coordinate format;
means for displaying the defined region and the additional regions to the user;
wherein the generated markup language instructions allow a user to access metadata associated with an image being displayed to a user;
wherein collectively the displayed defined region and the additional regions are substantially a visual duplicate of the object.
17. A computer-implemented apparatus for use in displaying on a user interface an image that is representative of a three-dimensional object, comprising:
a data structure that stores metadata and shape data associated with the three-dimensional object, wherein the shape data defines the three-dimensional object in a three-dimensional coordinate format;
wherein the metadata is associated with the three-dimensional object;
an image map generator that defines a region of the object using a two-dimensional format;
wherein the defined region has an association with at least a portion of the metadata;
wherein the defined region allows a user to access the metadata associated with an image displayed on the user interface;
wherein the image map generator defines additional regions of the object using a two-dimensional coordinate format;
wherein the defined region and the additional regions are displayed to the user;
wherein collectively the displayed defined regions and the additional regions are substantially a visual duplicate of the object when displayed on the user interface.
2. The method of
5. The method of
6. The method of
wherein the generated markup language instructions include Hypertext Markup Language (HTML) instructions.
7. The method of
wherein the generated markup language instructions include instructions on how to display the image;
wherein the generated markup language instructions define a plurality of regions for the object by specifying two-dimensional coordinates that indicate boundaries of the regions.
8. The method of
wherein the displayed image is operable, because of the generated markup language instructions, to allow a user to utilize a hyperlink associated with the displayed image.
9. The method of
wherein the displayed image is operable, because of the generated markup language instructions, to allow a user to utilize hyperlinks associated with different portions of the displayed image.
10. The method of
wherein the displayed image is operable, because of the generated markup language instructions, to allow a user to view mouse over text associated with the displayed image.
11. The method of
wherein the displayed image is operable, because of the generated markup language instructions, to allow a user to view mouse over text associated with different portions of the displayed image.
12. The method of
using a bitmap image of the three-dimensional object to detect boundaries associated with regions within the bitmap image;
wherein the detected boundaries identify regions that are associated with the metadata;
wherein an array of points define the boundaries;
creating an image map from the array of points;
generating the markup language instructions based upon the image map.
13. The method of
reducing the number of points that define the boundaries before creating the image map.
14. The method of
creating a non-shaded, color-coded bitmap image of elements that appear within the three-dimensional object for use in detecting the boundaries.
15. The method of
wherein a user is able to access metadata for the image being displayed on a user interface because of the image map.
16. The method of
19. The apparatus of
22. The apparatus of
23. The apparatus of
wherein the generated markup language instructions include Hypertext Markup Language (HTML) instructions.
24. The method of
The present invention relates generally to displaying images and more particularly to generating images for display on computer-human interfaces.
Web browsers handle many different types of images, and may allow information associated with the images to be displayed to the user. As an illustration, a user can use a pointing device to mouse over a portion of an image. While mousing over the image, tip information or other type of information that may be associated with the image is displayed to the user. However, during the generation of an image for use with a web browser, such useful information as well as other information associated with an image may be lost. In accordance with the teachings provided herein, systems and methods are disclosed herein to address this and other issues related to handling images.
As an example of a system and method disclosed herein, metadata and shape data related to a three-dimensional object are received. The shape data defines the three-dimensional object in a three-dimensional coordinate format. Computer instructions are generated that define region(s) of the object in a two-dimensional format. The instructions include the metadata and allow a user to access metadata associated with the image being displayed to a user.
The system 30 allows for content to contain image maps that have been generated from three-dimensional object(s) 36. A three-dimensional object 36 may represent many different types of items, such as a three-dimensional version of a pie chart, a geographical map, or a house.
Through use of information about a three-dimensional object 36, an image map 38 is generated and provided to a user computer 32 via a server 42 having a data connection to the network 34. The image map 38 provides a list of coordinates relating to an image of the three-dimensional object 36 and defines one or more regions within the image.
The image map 38 contains information associated with a defined region. The information can be a hyperlink to another destination, mouse-over tip information, or another type of information associated with the three-dimensional object 36. The created image map 38 allows a user to interact with a static image in, among other things, a web browser.
An example is shown in
The image map generator finds the two-dimensional projections of the object, and determines which object is closest to the viewer at points in the projection so as to determine the parts of the 3D objects that are visible in the intermediate and final images. The image map generator 100 produces a set of polygons which enclose the visible parts of each element or region in the object and associates the object metadata with the proper parts of the object. In this way, three-dimensional data stored in a data structure is translated into a two-dimensional image map.
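The projection step described above can be sketched as follows. This is a minimal illustration, not the patent's method: the orthographic projection, image size, and scale factor are assumptions chosen for simplicity, and visibility determination (which object is closest to the viewer) is omitted.

```python
# Hypothetical sketch: projecting 3D object vertices onto a 2D image
# plane so that image-map regions can later be defined in pixel
# coordinates. Orthographic projection is an assumption; the text does
# not specify a projection method, and hidden-surface removal is omitted.

def project_vertices(vertices, width=640, height=480, scale=100.0):
    """Project (x, y, z) vertices to 2D pixel coordinates using a
    simple orthographic projection centered in the image."""
    points_2d = []
    for x, y, z in vertices:
        px = int(width / 2 + x * scale)
        py = int(height / 2 - y * scale)  # flip y: screen y grows downward
        points_2d.append((px, py))
    return points_2d

triangle = [(-1.0, 0.0, 2.0), (1.0, 0.0, 2.0), (0.0, 1.0, 2.0)]
print(project_vertices(triangle))  # three 2D points enclosing the visible face
```

The resulting 2D polygons would then be associated with the metadata of the element they enclose.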
The image map 102 and an image 106 representative of the three-dimensional object 104 are provided to a web browser 110 for display to the user 108. Because of the generated image map 102, the user 108 can interact with the displayed image 112, so as to be able to access hyperlinks for drilling down to view additional information and/or to be able to mouse over a region of the image 112 to view tips and other data associated with that region of the image 112.
With reference to
Step 210 outputs the image map information containing the determined points and metadata information. Step 212 generates the image to be used with the image map and incorporates the image map into a Hypertext Markup Language (HTML) document for use by a user's web browser.
It should be understood that similar to the other processing flows described herein, the steps and the order of the steps may be altered, modified and/or augmented and still achieve the desired outcome. For example, step 202 may be modified so as to not use a bitmap image representation of the three-dimensional object. Instead other image representations may be used, such as a PNG format, etc. Moreover, it should be understood that a bitmap image does not have to be displayed in order for the image to be processed.
In this example, each pie piece is associated with a metadata index (e.g., a unique number that relates the data represented in the pie chart graph to its metadata as shown in
As discussed above, an arbitrary unique number can be used as a metadata index number.
After the bitmap is rendered, an edge detection routine is conducted. For example, a scan-line routine collects the outline of each shape by isolating regions that contain only a single metadata index value. The scan begins in the upper left-hand corner of the image and proceeds to the right across each row. If a color-coded metadata index approach is used, then whenever a pixel color matches neither the background color nor the color just collected, the code checks for that color in an array of collected colors. If the color is found in the array, a check is done to see whether the point has already been recorded; if it has, the scan continues to the right across the image until a new color (other than the background color) is found. If the point is not found in a previously recorded shape, or the color is not in the array, the color is added to the array of collected colors, the shape is traced, and the outline of the color is collected. The results of this step can be seen in
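The scan-line collection step above can be sketched as follows. This is a simplified illustration: the bitmap is assumed to hold one metadata index per pixel (0 = background), and for brevity the full contour trace is replaced by a bounding box per region, whereas the routine described in the text traces each shape's actual outline.

```python
# Simplified sketch of the scan-line region collection step, assuming a
# color-coded bitmap in which each pixel holds a metadata index value
# (0 = background). The actual routine traces each shape's outline; here
# a bounding box stands in for the traced outline.

def collect_regions(bitmap, background=0):
    """Scan left to right, top to bottom; on the first pixel of a color
    not yet collected, start a region, then grow its bounding box."""
    regions = {}  # metadata index -> (min_x, min_y, max_x, max_y)
    for y, row in enumerate(bitmap):
        for x, color in enumerate(row):
            if color == background:
                continue
            if color not in regions:
                regions[color] = (x, y, x, y)  # new color: start a region
            else:
                x0, y0, x1, y1 = regions[color]
                regions[color] = (min(x0, x), min(y0, y),
                                  max(x1, x), max(y1, y))
    return regions

bitmap = [
    [0, 1, 1, 0],
    [0, 1, 1, 2],
    [0, 0, 2, 2],
]
print(collect_regions(bitmap))  # one region per metadata index
```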
After the shapes are collected, one or more algorithms can be used to remove excess points from the image map to bring the total number per shape to below a particular number of points (e.g., one hundred points per polygon shape) for performance or implementation reasons. Simple point dropping can be applied first. Then horizontal/vertical and collinear checks can be applied to reduce the number of points per shape further. The results of this step can be seen at 450 in
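The point-reduction pass might look like the following sketch. The cap of one hundred points comes from the example in the text; the exact point-dropping and collinearity algorithms are not specified, so both functions here are illustrative assumptions.

```python
# Illustrative sketch of the point-reduction pass: remove collinear
# points, then thin the remainder until the shape is under a target
# count. The 100-point cap comes from the example in the text; the
# specific algorithms are assumptions.

def drop_collinear(points):
    """Remove any interior point lying on the straight line between
    its original neighbors."""
    if len(points) < 3:
        return points
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        # cross product is zero when the three points are collinear
        cross = (cur[0] - prev[0]) * (nxt[1] - prev[1]) - \
                (cur[1] - prev[1]) * (nxt[0] - prev[0])
        if cross != 0:
            kept.append(cur)
    kept.append(points[-1])
    return kept

def thin_points(points, max_points=100):
    """Simple point dropping: keep every other point until under the cap."""
    while len(points) > max_points:
        points = points[::2]
    return points

outline = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (0, 2)]
print(drop_collinear(outline))  # redundant edge points removed
```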
Once the shapes are culled down to one hundred or fewer points, metadata is assigned to each shape through the shape's metadata index in the bitmap as shown in the table of
Based upon the metadata index and the determined boundaries, an image map is created with the appropriate HTML code to represent the shapes of the image in question. Sample portions of the HTML produced for the current pie chart example, including the data tips, are shown below:
<area shape=poly alt="Name: Thomas
Age: 19 (22.35%)" title="Name: Thomas
Age: 19 (22.35%)" href="http://www.enf.org/"
coords="319,219,307,211,295,202,283,193,271,185,259,176,247,168,235,159,223,150,211,142,
199,133,187,125,175,116,186,109,198,104,210,99,222,95,234,91,246,88,258,85,270,83,282,82,294,
81,342,80,348,81,354,82,360,82,372,84,384,86,396,89,408,92,420,95,432,100,444,105,454,111,
442,121,436,126,430,131,418,140,406,150,394,160,382,169,370,179,358,189,346,198,334,208,
322,218,">
<area shape=poly alt="Name: Timothy
Age: 9 (10.59%)" title="Name: Timothy
Age: 9 (10.59%)" href="http://www.internaldrive.com/"
coords="310,219,298,217,286,216,274,214,262,213,250,212,238,210,226,209,214,208,202,206,
190,205,178,204,166,202,154,201,142,200,130,198,118,197,106,195,109,183,115,171,123,159,133,
147,145,136,157,127,169,119,180,121,192,129,204,138,216,146,228,155,240,164,252,172,264,
181,276,189,282,194,288,198,300,207,312,215,312,219,">
<area shape=poly alt="Name: TBA
Age: 7 (8.24%)" title="Name: TBA
Age: 7 (8.24%)" href="http://www.guardup.com/camp.htm"
coords="362,240,350,234,338,228,326,222,328,214,340,204,346,199,352,194,364,185,376,175,
388,165,400,156,412,146,424,136,436,127,448,117,460,113,472,120,484,128,496,138,507,149,517,
161,522,173,519,182,513,184,507,185,495,189,483,192,471,195,459,198,447,201,435,204,423,
207,411,210,399,213,387,216,375,219,363,222,362,234,">
Etc.
Any attribute of the HTML map element could be used to describe in some way the shapes that have been collected (e.g., ‘alt’, ‘title’, and ‘href’). The image map in this example links various parts of an image without resorting to dividing the image itself into separate parts.
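Emitting such an `<area>` element from a collected polygon and its metadata could be sketched as follows. The helper and its parameter names (`label`, `href`) are assumptions for illustration, not part of the described system.

```python
# Hypothetical helper that emits an HTML <area> element from a collected
# polygon and its associated metadata, mirroring the sample map above.
# The function name and parameters (label, href) are illustrative
# assumptions.

def area_element(points, label, href):
    """Build a poly-shaped <area> tag whose coords list the polygon's
    pixel coordinates and whose alt/title carry the data-tip text."""
    coords = ",".join(f"{x},{y}" for x, y in points)
    return (f'<area shape="poly" alt="{label}" title="{label}" '
            f'href="{href}" coords="{coords}">')

print(area_element([(0, 0), (10, 0), (10, 10)],
                   "Name: Thomas", "http://www.enf.org/"))
```

One such element is written per collected shape, and the elements are wrapped in a single `<map>` element referenced by the image.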
When the image map file is written out and the full image with all lighting and shading is produced, a post-processing step incorporates the full image and image map HTML code into an HTML document.
As shown in
While examples have been used to disclose the invention, including the best mode, and also to enable a person skilled in the art to make and use the invention, the patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. For example, it should be understood that many different three-dimensional objects can be used with the disclosed systems and methods. As an illustration,
<map name="MW4CEB2O">
<area shape=poly alt="Camp: TechCamp
Number of Participants: 507
Season: summer" title="Camp: TechCamp
Number of Participants: 507
Season: summer" href="http://www.internaldrive.com/"
coords="470,387,470,201,465,199,393,199,390,196,390,124,402,112,414,105,498,105,502,107,
502,365,499,371,487,383,481,387,475,387,">
<area shape=poly alt="Camp: GuardUp
Number of Participants: 474
Season: summer" title="Camp: GuardUp
Number of Participants: 474
Season: summer" href="http://www.guardup.com/camp.htm"
coords="347,387,347,207,344,203,272,203,268,201,268,141,280,129,292,122,376,122,379,125,
379,197,369,206,357,218,352,387,">
<area shape=poly alt="Camp: EaglesNest
Number of Participants: 463
Season: summer" title="Camp: EaglesNest
Number of Participants: 463
Season: summer" href="http://www.enf.org/"
coords="224,387,224,201,146,200,145,195,145,147,157,135,169,127,253,127,256,130,256,202,
245,212,234,224,234,386,229,387,">
<area shape=poly alt="Camp: TechCamp
Number of Participants: 387
Season: spring" title="Camp: TechCamp
Number of Participants: 387
Season: spring" href="http://www.internaldrive.com/"
coords="358,420,358,222,360,216,372,204,378,200,468,200,469,205,469,397,467,403,455,415,
359,420,">
Etc.
In addition to the wide variety of objects that can be handled by the systems and methods disclosed herein, it is further noted that the systems and methods may be implemented on various types of computer architectures, such as for example on a networked system, or in a client-server configuration, or in an application service provider configuration. In multiple computer systems, data signals may be conveyed via networks (e.g., local area network, wide area network, internet, etc.), fiber optic medium, carrier waves, wireless networks, etc. for communication among multiple computers or computing devices.
The systems' and methods' data may be stored as one or more data structures in computer memory and/or storage depending upon the application at hand. The data structures disclosed herein describe formats for use in storing data on computer-readable media or for use by a computer program.
The systems and methods may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer's hard drive, etc.) that contain instructions for use in execution by a processor to perform the methods' operations and implement the systems described herein.
The computer components, software modules, functions and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. It is also noted that a module or processor includes but is not limited to a unit of code that performs a software operation, and can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code. The software components and/or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand.
It should be understood that as used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise. Finally, as used in the description herein and throughout the claims that follow, the meanings of “and” and “or” include both the conjunctive and disjunctive and may be used interchangeably unless the context clearly dictates otherwise; the phrase “exclusive or” may be used to indicate situations where only the disjunctive meaning may apply.
Layne, Paul W., Frazelle, R. Allen, Hennes, Scott C.