A multimedia message system automatically generates visual representations (thumbnails) of message or media objects and references (links) between media objects; nests messages within themselves; and automatically updates generated thumbnails. An object, including a thumbnail image of the object's contents and a link to the original content, is created in response to simple user inputs or commands. An image of the object is generated whether the object is a web page, a multimedia message, a hypertext link, a video clip, or a document. A link or reference to the original object from which the image was formed is also generated. The system retrieves and displays information referenced by an object and shown by the thumbnail image corresponding to the object. The system also automatically updates the thumbnail image(s) representing an object any time the underlying object or information from which the image was generated has been modified.
1. A method for creating a representation, the method comprising:
capturing an image of a first object, the first object associated with a first software application;
determining a reference to the first object;
creating a second object associated with a second software application and an image of the second object, the second software application being distinct from the first software application;
creating a reference marker, the reference marker graphically connecting a location in the image of the second object with a location in the image of the first object;
creating the representation, the representation comprising the captured image, the determined reference, the image of the second object, and the reference marker; and
adding the representation to a message.
20. An apparatus for creating a representation stored on a computer-readable storage medium, the apparatus comprising:
an image generation module configured to capture an image of a first object, the first object associated with a first software application and an image of a second object;
a link generation module configured to determine a reference to the first object;
an object creation module coupled for communication with the image generation module and the link generation module, the object creation module configured to create the second object, the second object associated with a second software application, the second software application being distinct from the first software application, a reference marker, the reference marker graphically connecting a location in the image of the second object with a location in the image of the first object, and the representation, the representation comprising the captured image, the determined reference, the image of the second object, and the reference marker; and
an automatic message creation module coupled for communication with the image generation module and the link generation module, the automatic message creation module configured to add the representation to a message.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
9. The method of
10. The method of
11. The method of
12. The method of
13. The method of
14. The method of
determining whether a web page corresponding to the hypertext link is accessible;
responsive to having determined that the web page is accessible, presenting the web page; and
responsive to having determined that the web page is not accessible, presenting a web page that corresponds to the hypertext link and is stored in memory.
15. The method of
receiving an input from a user, the input selecting the second object; and
responsive to having received the input, displaying the first object.
17. The method of
capturing a new image of the first object; and
replacing, within the second object, the image with the new image.
18. The method of
determining a new reference to the first object; and
replacing, within the second object, the reference with the new reference.
19. The method of
determining whether the first object has changed; and
responsive to having determined that the first object has changed, updating the second object.
21. The apparatus of
22. The apparatus of
23. The apparatus of
24. The apparatus of
25. The apparatus of
26. The apparatus of
27. The apparatus of
28. The apparatus of
29. The apparatus of
30. The apparatus of
31. The apparatus of
The present application is a continuation of U.S. patent application Ser. No. 09/671,505, entitled “System and Method for Automatic Generation of Visual Representations and Links in a Hierarchical Messaging System,” filed on Sep. 26, 2000, now U.S. Pat. No. 6,693,652, which application is a continuation-in-part of U.S. patent application Ser. No. 09/407,010, entitled “Method and Apparatus for Generating Visual Representations for Audio Documents,” filed on Sep. 28, 1999, now U.S. Pat. No. 6,624,826, both of which are incorporated herein by reference. The present invention also relates to U.S. patent application Ser. No. 09/540,042, entitled “Systems And Methods For Providing Rich Multimedia Messages To Remote Users Using Telephones And Facsimile Machines,” filed on Mar. 31, 2000, which is incorporated herein by reference. The present invention also relates to U.S. patent application Ser. No. 09/587,591, entitled “Method and System for Electronic Message Composition with Relevant Documents,” filed on May 31, 2000, which is incorporated herein by reference. The present invention also relates to U.S. patent application Ser. No. 10/043,443, entitled “System and Method for Audio Creation and Editing in a Multimedia Messaging Environment.”
1. Field of the Invention
The present invention relates to systems and methods for authoring or generating, storing and retrieving multimedia messages that may include audio, text documents, images, web pages (URLs) and video. In particular, the present invention relates to a system and method for automatically generating images and links in a multimedia message system. The present invention also relates to a hierarchical message system providing nesting of messages within each other.
2. Description of the Background Art
A large percentage of a typical person's day is spent communicating with others through various mechanisms including oral and written media. Further, there is often a tradeoff between rich, oral communication media and less rich, written communication media. While oral media enable negotiation, clarification, explanation and exchange of subjective views, written media enable the exchange of large amounts of accurate, objective or numeric data.
This dichotomous relationship between oral and written communication similarly exists within the electronic realm. Simple textual email messages, although easy to author, typically do not allow rich, expressive communication as may sometimes be required. On the other hand, tools for creating richer, more expressive messages, such as multimedia presentation software, are too complex and time-consuming for casual or day-to-day use. Furthermore, multimedia presentation software typically is not designed for use as a bi-directional communication or conversation tool. Multimedia “documents” produced using this software tend to present information to an audience, rather than allow user interaction and self-guided learning.
Existing messaging systems employ a single primary medium. E-mail uses text, while voicemail uses recorded audio for conveying the message. Some systems allow other media objects to be “attached” to a message, but do not support explicit references from the message content to the attached objects or to particular pieces of the attached objects. Such references are needed in order to allow the sender of the message to, for example, refer to a particular paragraph in a printed document or a face in a photographic image.
A mechanism for specifying these references and a visual representation of the references and indicated media objects is required. Furthermore, a user may wish to refer to/include one or more previous messages in a new message. An efficient means for creating and viewing such hierarchical messages is therefore needed.
Therefore, what is needed is a simple and effective multimedia-authoring tool that overcomes the limitations found within the prior art.
The present invention overcomes the deficiencies and limitations of the prior art by providing a system and method for creating, sending and receiving multimedia messages. The multimedia message system includes modules for the automatic generation of visual representations of the media objects and references to them, such as thumbnail images and links. The multimedia message system also includes modules for the hierarchical nesting of messages within themselves and the automatic updating of generated thumbnails. In one embodiment, the system includes an automatic message creation module, an image generation module, a link generation module, a hierarchical object display module and a dynamic updating module. These modules are coupled by a bus for communication with a processor of the multimedia message system and integrated as a part of the multimedia message system. The automatic message creation module controls other modules such that an object including a thumbnail image of the object's contents and a link to the original content is created in response to simple user inputs or commands. The image generation module and the link generation module operate in response to the automatic message creation module to generate an image of the object, whether it is a web page, a multimedia message, a hypertext link, a video clip, or a document, and to generate a link or reference to the original object from which the image was formed, respectively. The hierarchical object display module allows the system to retrieve and display information referenced by an object and shown by the thumbnail image corresponding to the object. The system also includes the dynamic updating module, which automatically updates the thumbnail image(s) representing an object any time the underlying object or information from which the image was generated has been modified.
The present invention also includes a number of novel methods including: a method for automatically creating thumbnail images of objects; a method for specifying a reference from recorded audio to a media object or component; a method for automatically creating an object including an image of a web page; a method for automatically creating an object including an image of an existing multimedia message; a method for automatically creating an object including an image from a hypertext link; a method for viewing information for an object; and a method for automatically updating images of an object after a change to an existing multimedia message.
The invention is illustrated by way of example, and not by way of limitation in the figures of the accompanying drawings in which like reference numerals refer to similar elements.
A method and apparatus for generating visual representations for multimedia documents is described. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the invention.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
Some portions of the detailed descriptions that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The present invention also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
Moreover, the present invention is claimed below as operating on or working in conjunction with an information system. Such an information system as claimed may be the entire messaging system as detailed below in the preferred embodiment or only portions of such a system. For example, the present invention can operate with an information system that need only be a browser in the simplest sense to present and display media objects. The information system might alternately be the messaging system described below with reference to
Control unit 150 may comprise an arithmetic logic unit, a microprocessor, a general purpose computer, a personal digital assistant or some other information appliance equipped to provide electronic display signals to display device 100. In one embodiment, control unit 150 comprises a general purpose computer having a graphical user interface, which may be generated by, for example, a program written in Java running on top of an operating system like WINDOWS® or UNIX® based operating systems. In one embodiment, electronic documents 110, 120, 130, and 140 are generated by one or more application programs executed by control unit 150 including, without limitation, word processing applications, electronic mail applications, spreadsheet applications, and web browser applications. In one embodiment, the operating system and/or one or more application programs executed by control unit 150 provide “drag-and-drop” functionality where each electronic document, such as electronic documents 110, 120, 130, and 140, may be encapsulated as a separate data object.
Referring still to
Processor 102 processes data signals and may comprise various computing architectures including a complex instruction set computer (CISC) architecture, a reduced instruction set computer (RISC) architecture, or an architecture implementing a combination of instruction sets. Although only a single processor is shown in
Main memory 104 may store instructions and/or data that may be executed by processor 102. The instructions and/or data may comprise code for performing any and/or all of the techniques described herein. Main memory 104 may be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, or some other memory device known in the art. The memory 104 is described in more detail below with reference to
Data storage device 107 stores data and instructions for processor 102 and may comprise one or more devices including a hard disk drive, a floppy disk drive, a CD-ROM device, a DVD-ROM device, a DVD-RAM device, a DVD-RW device, a flash memory device, or some other mass storage device known in the art.
System bus 101 represents a shared bus for communicating information and data throughout control unit 150. System bus 101 may represent one or more buses including an industry standard architecture (ISA) bus, a peripheral component interconnect (PCI) bus, a universal serial bus (USB), or some other bus known in the art to provide similar functionality.
Additional components coupled to control unit 150 through system bus 101 include display device 100, keyboard 122, cursor control device 123, network controller 124 and audio device 125. Display device 100 represents any device equipped to display electronic images and data as described herein. Display device 100 may be a cathode ray tube (CRT), liquid crystal display (LCD), or any other similarly equipped display device, screen, or monitor. Keyboard 122 represents an alphanumeric input device coupled to control unit 150 to communicate information and command selections to processor 102. Cursor control 123 represents a user input device equipped to communicate positional data as well as command selections to processor 102. Cursor control 123 may include a mouse, a trackball, a stylus, a pen, a touch screen, cursor direction keys, or other mechanisms to cause movement of a cursor. Network controller 124 links control unit 150 to a network that may include multiple processing systems. The network of processing systems may comprise a local area network (LAN), a wide area network (WAN) (e.g., the Internet), and/or any other interconnected data path across which multiple devices may communicate.
One or more I/O devices 125 are coupled to the system bus 101. For example, the I/O device 125 may be an audio device 125 equipped to receive audio input and transmit audio output. Audio input may be received through various devices including a microphone within audio device 125 and network controller 124. Similarly, audio output may originate from various devices including processor 102 and network controller 124. In one embodiment, audio device 125 is a general purpose audio add-in/expansion card designed for use within a general purpose computer system. Optionally, audio device 125 may contain one or more analog-to-digital or digital-to-analog converters, and/or one or more digital signal processors to facilitate audio processing.
It should be apparent to one skilled in the art that control unit 150 may include more or fewer components than those shown in
In accordance with one embodiment, one can record a variable-length audio narration that may optionally describe one or more electronic documents or images displayed upon a display device. In one embodiment, by indicating a position on a display screen through clicking, pointing, or touching the display screen, audio recording is initiated and a dynamically adjustable audio gauge is displayed. In one embodiment, the audio gauge increases in size in proportion to the amount of audio recorded while the audio gauge is active. Audio recording may cease when the audio level drops below a predetermined threshold or may cease in response to specific user input. In one embodiment, for each additional positional stimulus received, a new audio gauge is generated and the previous audio gauge ceases to be adjusted, thereby becoming inactive.
The term “positional stimulus,” as referred to herein, represents an input that can simultaneously indicate an electronic location on the display screen with an instant in time tracked by the control unit. Various input sources may generate a positional stimulus including, without limitation, a computer mouse, a trackball, a stylus or pen, and cursor control keys. Similarly, a touch screen is capable of both generating and detecting a positional stimulus. In one embodiment, positional stimuli are detected by control unit 150, whereas in another embodiment, positional stimuli are detected by display device 100.
In an exemplary embodiment, once a positional stimulus occurs, such as a “click” of a mouse or a “touch” on a touch screen, an audio gauge is generated on display device 100 at the location indicated by the positional stimulus. At substantially the same time as the audio gauge is generated, control unit 150, or a similarly equipped device coupled to control unit 150, begins to record audio input. In one embodiment, the size of the audio gauge displayed is dynamically adjusted to proportionally indicate the amount of audio recorded by control unit 150, or the similarly equipped device coupled to control unit 150. Audio may be recorded by control unit 150 through audio device 125 or similar audio hardware (or software), and the audio may be stored within data storage device 107 or a similarly equipped audio storage device. In one embodiment, control unit 150 initiates audio recording in response to detecting a positional stimulus, whereas in an alternative embodiment, control unit 150 automatically initiates audio recording upon detecting audio input above a predetermined threshold level. In another embodiment, a set of on-screen or physical buttons are used to control recording. Buttons for audio control are well-known and include “Record”, “Play”, “Stop”, “Pause”, “Fast Forward”, and “Rewind”. Similarly, audio recording may automatically be terminated upon the audio level dropping below a predetermined threshold or upon control unit 150 detecting a predetermined duration of silence where there is no audio input.
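By way of illustration only, the gauge behavior just described might be sketched as follows; the class name, member names, and the threshold and scale constants are hypothetical assumptions, not part of the described system:

```java
// Sketch only: the gauge-growing behavior described above. Class and member
// names, the pixels-per-sample scale, and the silence threshold are
// illustrative assumptions, not part of the described system.
class AudioGauge {
    static final double SILENCE_THRESHOLD = 0.02; // assumed RMS cutoff
    static final double PX_PER_SAMPLE = 0.001;    // assumed growth scale

    final int startX, startY;  // screen location of the positional stimulus
    double lengthPx = 0;       // grows in proportion to the audio recorded
    boolean active = true;

    AudioGauge(int x, int y) { startX = x; startY = y; }

    // Called for each captured audio buffer while this gauge is active.
    void onAudioBuffer(double rmsLevel, int samplesInBuffer) {
        if (!active) return;
        if (rmsLevel < SILENCE_THRESHOLD) { active = false; return; } // stop on silence
        lengthPx += samplesInBuffer * PX_PER_SAMPLE; // gauge size tracks recorded audio
    }
}
```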
In one embodiment, audio gauge 232 is displayed overlaid upon electronic document 130 and includes start indicator 234 and stop indicator 236. Start indicator 234 marks the location at which an initial positional stimulus for audio gauge 232 was detected and stop indicator 236 marks the location at which audio gauge 232 ceased while being dynamically adjusted. In one embodiment, audio gauges cease being dynamically adjusted as a result of audio input ceasing or falling below a minimum threshold level. Since, in
Audio recorded according to the methods described herein may be played back or replayed in any of a number of ways. In one embodiment, recorded audio is replayed when control unit 150 detects a positional stimulus indicating a location on, or substantially close to, the start indicator, of the associated audio gauge. In another embodiment, recorded audio is replayed when control unit 150 detects a positional stimulus indicating a location on, or substantially close to, any part of the associated audio gauge or electronic document or when the user presses a button as described above.
Audio gauges may also include a replay progress indicator such as progress puck 233. In one embodiment, as recorded audio is replayed, progress puck 233 moves along audio gauge 232 so as to indicate both the amount of recorded audio replayed as well as the amount of recorded audio remaining to be replayed. In
Reference markers may also be utilized to enhance understanding of recorded audio content.
In an exemplary embodiment, reference markers 232 and 234 are generated on display device 100 while audio is being recorded by control unit 150. Recall that according to one embodiment, audio is recorded and an audio gauge 242 generated in response to the system (either control unit 150 or display device 100) detecting a positional stimulus. As audio continues to be recorded, the size of the corresponding audio gauge 242 is proportionally adjusted so as to reflect the amount of audio recorded. In one embodiment, if the system detects an additional positional stimulus indicating a location on or substantially close to an electronic document while audio gauge 242 is being adjusted (i.e., audio is being recorded), the system generates a reference marker connecting the end-point of the audio gauge 242 to that location indicated on the electronic document. In the case of audio gauge 242, reference marker 232 is initiated by a positional stimulus detected at time T1, whereas reference marker 234 is initiated by a positional stimulus detected at a later time T2. In one embodiment, during replay of the recorded audio, reference marker 232 is displayed upon display device 100 when the recorded audio reaches time T1 and reference marker 234 is displayed upon display device 100 when the recorded audio reaches time T2.
The location on an electronic document to which a reference marker is graphically connected may be represented by (x, y) coordinates in the case where an electronic document represents an image, or the location may be represented by a single coordinate in the case where an electronic document represents a linear document. Examples of linear documents may include a plain text document, a hypertext markup language (HTML) document, or some other markup language-based document including extensible markup language (XML) documents.
In one embodiment, if during audio recording the system detects an additional positional stimulus that is not located on or substantially close to an electronic document, control unit 150 generates an additional audio gauge rather than a reference marker. The additional audio gauge may be generated in a manner similar to the first audio gauge described above. In one embodiment, control unit 150 graphically connects multiple audio gauges in the order in which they were generated. Upon audio replay, control unit 150 may sequentially replay the recorded audio in the chronological order that the audio was recorded. In one embodiment, one or more progress indicators may be utilized to display the amount of audio played with respect to each audio gauge. In another embodiment, a single progress indicator that sequentially travels from one audio gauge to another corresponding to the order of audio replay may be used.
In one embodiment, objects such as audio gauges, reference markers, electronic document thumbnails and icons may be repositioned individually or as a group, anywhere on display device 100 using conventional “drag” operations.
In another embodiment, neither the audio gauges nor the reference markers are displayed as recording occurs. However, a data file is appended that includes locations of the referenced documents and timestamps for when the reference occurred. Details on such a data file are described in more detail below.
In one embodiment, the user's voice is recorded along with his or her “deictic” gestures (e.g., references to objects). In one embodiment, an interface includes a number of objects that are displayed on the screen. In such a case, recording begins either when the user presses a ‘“record” button or when the system detects the start of speech through its microphone. Whenever a user touches an object's graphical representation on a touch screen, a time-stamped event is recorded. Recording ends either when the user presses a “stop” button or when the system detects end of speech. When playing back this message, the system plays the audio, and at the appropriate times displays the referred-to objects.
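A minimal sketch of such time-stamped event recording follows; DeicticEvent and GestureRecorder are illustrative names that do not appear in the description:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: recording time-stamped "deictic" events alongside the audio.
// DeicticEvent and GestureRecorder are illustrative names.
class DeicticEvent {
    final String objectId; // which displayed object was touched
    final long offsetMs;   // time into the recording when the touch occurred
    DeicticEvent(String objectId, long offsetMs) {
        this.objectId = objectId;
        this.offsetMs = offsetMs;
    }
}

class GestureRecorder {
    private final List<DeicticEvent> events = new ArrayList<>();
    private long recordingStartMs;

    void startRecording() { recordingStartMs = System.currentTimeMillis(); }

    // Invoked whenever the user touches an object's graphical representation.
    void onObjectTouched(String objectId) {
        events.add(new DeicticEvent(objectId, System.currentTimeMillis() - recordingStartMs));
    }

    // On playback, each event tells the player when to display its object.
    List<DeicticEvent> events() { return events; }
}
```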
In one embodiment, the system allows the user to record an audio narrative and refer to digital photographs uploaded from a camera simply by touching them on a touch screen. The resulting presentation is stored using the multimedia description languages Synchronized Multimedia Integration Language (SMIL) and RealPix, allowing for playback using the widely distributed RealPlayer. A simple extension allows the user to refer to points or regions within objects, by monitoring the locations “touched” more precisely. On playback, such gestures can become highlighting strokes overlaid on images or documents.
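The description does not reproduce the SMIL/RealPix markup the system emits, but a simplified generator might look like the following sketch, reusing the DeicticEvent type from the previous example; the markup produced here is a bare-bones approximation of a SMIL presentation, not the system's actual output:

```java
import java.util.List;

// Sketch only: emits a bare-bones SMIL document that plays the narration and
// reveals each referred-to object at its recorded offset. The real system's
// SMIL/RealPix output is not reproduced in the text.
class SmilWriter {
    static String toSmil(String audioUrl, List<DeicticEvent> events) {
        StringBuilder sb = new StringBuilder("<smil>\n  <body>\n    <par>\n");
        sb.append("      <audio src=\"").append(audioUrl).append("\"/>\n");
        for (DeicticEvent e : events) {
            // begin= delays each image until the moment it was referred to
            sb.append("      <img src=\"").append(e.objectId)
              .append("\" begin=\"").append(e.offsetMs / 1000.0).append("s\"/>\n");
        }
        return sb.append("    </par>\n  </body>\n</smil>\n").toString();
    }
}
```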
A multimedia message or chronicle is a particular type of audio narrative that includes one or more narration threads and one or more references to various types of electronic documents. Multiple sub-messages, each containing a singular narration thread, may be combined to form a larger multimedia message. Within a multimedia message, it is possible for one or more persons to reference various types of electronic documents including, for example, but not limited to, a Web page with hyperlinks, a slide show containing audio narration, a text document containing text annotations, a scanned document image, a word processor document, a presentation, a digital photograph, etc. The references may refer to the contents of the entire electronic document or to a specific area within the electronic document. A linear ordering of sub-messages may also be specified allowing them to be played back in a default order. As will be described in more detail below with reference to
Each narration thread may contain one or more references to various electronic documents. For example, narration thread 321 contains one reference to each of electronic documents 324, 328 and 332, for a total of three references. Narration thread 342, however, contains only a single reference to single electronic document 346. Each audio clip within a narration thread may contain any number of references to any number of electronic documents, or no references at all. For example, audio clip 322 contains a single reference to electronic document 324, audio clip 326 contains one reference to electronic document 328 and one reference to electronic document 332, and audio clip 330 does not contain any references.
Each reference may either indicate an entire electronic document, as shown by reference point 323, or optionally indicate a specific area within an electronic document, as shown by reference points 327. The coordinates representing such reference points may have different interpretations depending upon the type of electronic document they are referencing. For example, if the electronic document is an image, coordinates of the reference point may be absolute pixel coordinates. If the document is a web page, however, coordinates of the reference point may be a character position within an HTML file. In the case of a document stored as a series of page images, for instance, a scanned document, the reference point may be a page number plus (x, y) coordinates. Alternatively, if a document is represented by a layout language such as Postscript or PDF (Portable Document Format), the coordinate can be a character position within the file. Then, upon rendering (during playback), this can be translated to a position on the screen.
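These type-dependent interpretations might be modeled as follows; the interface and class names are illustrative assumptions:

```java
// Sketch only: the type-dependent reference coordinates described above.
// Interface and class names are illustrative assumptions.
interface ReferencePoint {}

// Image: absolute pixel coordinates
class ImagePoint implements ReferencePoint {
    final int x, y;
    ImagePoint(int x, int y) { this.x = x; this.y = y; }
}

// Web page, Postscript/PDF, or other markup document: character position in the file
class MarkupPoint implements ReferencePoint {
    final int charOffset;
    MarkupPoint(int charOffset) { this.charOffset = charOffset; }
}

// Scanned document: page number plus (x, y) coordinates on that page
class ScannedPagePoint implements ReferencePoint {
    final int page, x, y;
    ScannedPagePoint(int page, int x, int y) { this.page = page; this.x = x; this.y = y; }
}
```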
The multimedia message described above with respect to
Electronic mail (email) software usually provides an option by which an original email message may be included in a reply. Typically, an email reply can be interspersed among the lines of the original message, or it can be appended or prepended to the original message as a single block. Multimedia messages may similarly be transferred over a network using a variety of readily available email applications known in the art or other means for distribution of files/electronic data.
Additionally, audio gauges 410 and 420 are shown connected together by connector 415, which indicates that the two audio clips represented by audio gauges 410 and 420 are chronologically adjacent (420 was recorded subsequent to 410). The connection may only indicate chronological ordering. However, a user can place gauges anywhere on the display. In one embodiment, audio clips recorded by the same user are considered to be related. In one embodiment, the face image of the user who recorded the audio is displayed beside the corresponding audio gauge(s). In
Once the multimedia message illustrated by
Upon receipt of the email containing the XML representation's URL, user “B” may have several viewing options. In one embodiment, upon accessing the URL containing the XML representation, the XML representation is parsed to create and play the message if user “B” has an appropriate application to view the XML representation. In another embodiment, where user “B” does not have an appropriate application to view the XML representation, the message may alternatively be displayed as a standard HTML-based web page. That is, the XML representation containing individual URLs pointing to one or more electronic documents and audio clips is displayed as a list of individual hyperlinks rather than a message. In yet another embodiment, the message may be translated into a Synchronized Multimedia Integration Language (SMIL) formatted file as specified by the World Wide Web Consortium (W3C). Using a “viewer” such as RealPlayer G2 from RealNetworks, Inc., user “B” may view the SMIL message as a slideshow in which the audio clips and corresponding electronic documents and references are simultaneously presented in an order, such as the order in which they were recorded.
Access to the received multimedia message may optionally be limited by access control functions. In one embodiment, a user may retrieve the message only if he or she is the sender or named recipient. In another embodiment, users may be required to authenticate themselves with, for example, a user name and/or password prior to accessing the message.
Once user “B” receives the message, user “B” may reply by adding additional electronic documents and audio clips (represented by audio gauges).
Once user “B” enters a response to the message or multimedia message received from user “A”, user “B” may send the reply back to user “A” or to some other user or group of users. Assuming the reply is sent back to user “A”, in one embodiment, user “A” first hears the additions made to user “A's” message by user “B”. That is, upon receipt, user “A” hears the recorded audio represented by audio gauge 430.
In one embodiment, a multimedia message may be displayed with separate users' additions or replies being selectively viewable.
A method and apparatus for recording and playback of multidimensional walkthrough narratives is disclosed. A three-dimensional modeling language is used to automatically create a three-dimensional environment using pre-existing electronic documents. The objects are three-dimensional; in one embodiment, they are shown on a 2D display such as display device 100. In another embodiment, a stereo 3D display (e.g., head-mounted glasses) can be used.
A first user, or author, may navigate throughout the three-dimensional environment while simultaneously recording the path taken and any accompanying audio input. In one of two playback modes, a second user can be shown a “walkthrough” of the three-dimensional scene corresponding to the path taken by the author. In the other playback mode, a second user is free to navigate the three-dimensional world while the author's path is displayed.
Generation
Cubes 510, 515 and 520 are depicted in
In one embodiment, a two-dimensional reduced-size “thumbnail” image is created and superimposed upon a three-dimensionally rendered figure such as cubes 510, 515 and 520. In such a manner, a two-dimensional image can be converted into a three-dimensional representation of that image. In one embodiment, cubes 510, 515 and 520 are defined through extended markup language (XML). In another embodiment, a three-dimensional modeling language such as VRML, 3DML, or X3D may be used.
As each three-dimensional figure is generated, it is displayed within the three-dimensional environment. In one embodiment, each three-dimensional figure is randomly placed or displayed within the three-dimensional environment as it is generated. In another embodiment, each three-dimensional figure is displayed with respect to other preexisting three-dimensional figures according to a placement scheme. In one embodiment, placement schemes are based upon characteristics of the electronic documents contained within the three-dimensional figures. Examples of placement schemes include, without limitation, time of creation, content, and media type. In yet another embodiment, the three-dimensional figures are displayed at a pre-determined fixed distance from one another. By displaying the three-dimensional figures according to various placement schemes, it is possible for an author to group or cluster certain types of information together to help guide user understanding. In one embodiment, the user, or one who navigates the three-dimensional environment after creation, is able to rearrange the three-dimensional figures according to his own organizational preference.
Recording
Once the three-dimensional environment is created, a user may navigate it. In one embodiment, navigation is possible through the use of a readily available “player” application including a virtual-reality modeling language (VRML) viewer such as Cosmo Player available from Silicon Graphics, Inc., of Mountain View, Calif., or a three dimensional modeling language (3DML) viewer such as Flatland Rover available from Flatland Online Inc., of San Francisco, Calif. In one embodiment, a special class of user, called an author, is able to navigate through the three-dimensional environment while the author's virtual movements are recorded. The term “recording” as used herein is meant to describe the process of retaining navigational and audio input as generated by a user with respect to the three-dimensional environment.
In an exemplary embodiment, an author navigates through a three-dimensional environment while a processing device, such as processor 102, causes the author's movements to be recorded. Any audio narrated by the author while navigating is also recorded, thus creating a walkthrough. In one embodiment, as the audio is recorded, it is segmented so as to divide the audio into multiple audio clips of varying duration according to a segmenting scheme. The audio may be recorded as described above. Thus, in one embodiment, a 2D multimedia message is created and viewed as a 3D walkthrough, and vice versa. Similarly, in another embodiment, video content may be recorded and segmented in lieu of audio. As the author navigates toward a three-dimensional figure, the electronic document superimposed upon the figure appears larger to the author. By approaching the figure, the author may take a closer look at the figure or electronic document contained thereon. If so equipped, the player application may also provide the author the opportunity to view the electronic document in a separate, full-screen display, in another part of the display, or in a dedicated portion of the display overlaying the walkthrough.
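One plausible segmenting scheme, silence-based splitting, is sketched below; the threshold and minimum-gap parameters are illustrative assumptions, as the description does not specify the scheme's details:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: one plausible segmenting scheme, splitting narration into
// clips at sustained runs of silence. Threshold and minimum-gap values are
// illustrative assumptions; the text does not specify the scheme.
class AudioSegmenter {
    // Returns [startFrame, endFrame) pairs for each detected clip.
    static List<int[]> segmentBySilence(double[] rmsPerFrame, double threshold, int minSilentFrames) {
        List<int[]> segments = new ArrayList<>();
        int start = -1, silentRun = 0;
        for (int i = 0; i < rmsPerFrame.length; i++) {
            if (rmsPerFrame[i] >= threshold) {
                if (start < 0) start = i; // a clip begins at the first loud frame
                silentRun = 0;
            } else if (start >= 0 && ++silentRun >= minSilentFrames) {
                segments.add(new int[] { start, i - silentRun + 1 }); // close the clip
                start = -1;
                silentRun = 0;
            }
        }
        if (start >= 0) segments.add(new int[] { start, rmsPerFrame.length });
        return segments;
    }
}
```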
In one embodiment, each proximity indicator is associated with an audio or a video segment that presumably relates to the three-dimensional figure bounded by the proximity indicator. In one embodiment, multiple three-dimensional figures may exist within a single proximity indicator, and in another embodiment, multiple proximity indicators may bound a single three-dimensional figure.
Playback
A user of the three-dimensional multimedia narrative described herein can choose whether to pursue playback of the recorded three-dimensional walkthrough in passive or active modes.
According to one embodiment, in a passive mode, the playback is movie-like in that the user is shown a three-dimensional walkthrough corresponding to the path taken by the author when the walkthrough was recorded. In one embodiment, audio narration that was recorded by the author is also played while in a passive mode. As documents are passed in a passive mode, the viewing user can also view the source of the documents in a separate window or viewing application.
In an active playback mode, the user is free to navigate the three-dimensional environment without being limited by the author's previously taken path. According to one embodiment of the present invention, while in active mode, the author's path remains visible as the user navigates through the three-dimensional environment. In yet another embodiment, segmented audio recorded by the author is played as the user approaches a related three-dimensional figure. Referring once again to
System Overview. Referring now to
The operating system 802 is preferably one of a conventional type such as WINDOWS®, SOLARIS® or LINUX® based operating systems. Although not shown, the memory unit 104 may also include one or more application programs including, without limitation, word processing applications, electronic mail applications, spreadsheet applications, and web browser applications.
The multimedia message system 804 is preferably a message system for creating, storing, sending and retrieving rich multimedia messages. The functionality for such a system has been described above with reference to
Today, it is well understood by those skilled in the art that multiple computers can be used in the place of a single computer by applying the appropriate software, hardware, and communication protocols. For instance, data used by a computer often resides on a hard disk or other storage device that is located somewhere on the network to which the computer is connected and not within the computer enclosure itself. That data can be accessed using NFS, FTP, HTTP or one of many other remote file access protocols. Additionally, remote procedure calls (RPC) can execute software on remote processors not part of the local computer. In some cases, this remote data or remote procedure operation is transparent to the user of the computer and even to the application itself because the remote operation is executed through the underlying operating system as if it were a local operation.
It should be apparent to those skilled in the art that although the embodiment described in this invention refers to a single computer with local storage and processor, the data might be stored remotely in a manner that is transparent to the local computer user or the data might explicitly reside in a remote computer accessible over the network. In either case, the functionality of the invention is the same and both embodiments are recognized and considered as possible embodiments of this invention.
For example,
The web browser 806 is of a conventional type that provides access to the Internet and processes HTML, XML or other mark up language to generate images on the display device 100. For example, the web browser 806 could be Netscape Navigator or Microsoft Internet Explorer.
The memory 808 for users, passwords, message distribution lists and media objects is shown as being connected to the bus 101 for access by the various modules 802-818. The memory 808 is distinctive from the prior art in that the memory also stores object information as will be described in more detail below with reference to
The automatic object creation module 810 is coupled to the bus 101 for communication with the multimedia message system 804, the link generation module 812, the image generation module 814, the dynamic updating module 818 and the hierarchical display module 816. The automatic object creation module 810 interacts with these components as will be described below with reference to
The link generation module 812 is responsive to the automatic object creation module 810 and receives and sends commands, and determines and provides references or links to underlying data to the automatic object creation module 810 and the multimedia message system 804. The links and references can then be displayed or embedded into objects for future use. The operation of the link generation module 812 can be best understood with reference to
The image generation module 814 is responsive to the automatic object creation module 810 and receives and sends commands, and determines and provides thumbnail images of underlying data to the automatic object creation module 810 and the multimedia message system 804. The thumbnail images can then be displayed and/or embedded into objects for future use. The operation of the image generation module 814 can be best understood with reference to
The hierarchical display module 816 controls the display of images in conjunction with the multimedia message system 804. The hierarchical display module 816 provides for the display of the thumbnail images of an object, and responsive to user input will display the corresponding original content (message, object, web page, or link) from which the thumbnail image was generated. The hierarchical display module 816 can be best understood with reference to
The dynamic updating module 818 works in conjunction with the multimedia message system 804, automatic object creation module 810 and the image generation module 814. The dynamic updating module 818 controls the updating of any thumbnails automatically upon modification of an existing message by any user. The operation of the dynamic updating module 818 can be best understood with reference to
The media object cache 820 forms a portion of memory 104 and temporarily stores media objects used by the multimedia message system 804 for faster access. The media object cache 820 stores media objects identical to those stored on the data storage device 107 or other storage devices accessible via the network controller 124. The media objects have the same format and information as will be described below with reference to
Referring now to
If you drag an image off of your hard disk:
    Source: Original file name
    Image: New unique ID assigned to this image file, which is now stored on the MMS 804
    Cache: Original image contents (same as “Image” above)

If you drag in a web page:
    Source: URL of the web page
    Image: Captured image from the screen. This image file is assigned a new unique ID and is now stored on the MMS 804
    Cache: Original web page contents

If you drag an image off of a web page:
    Source: URL of the web page
    Image: Image downloaded from the web page. This image file is assigned a new unique ID and is now stored on the MMS 804
    Cache: Original web page contents

If you drag in a message:
    Source: Unique message ID
    Image: Image that was created to represent this message at the time the message was created
    Cache: Not necessary, because this message is already stored on the MMS 804; the cached message is the same as the original message pointed to by “Source”
There is other data 908 that might be stored with the objects, but Source, Image, and Cache are the fundamental required data.
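A minimal sketch of such an object record, with field names mirroring the table above; the actual storage layout is not specified by the text:

```java
import java.util.Map;

// Sketch only: a record holding the three required fields from the table
// above plus the open-ended "other data 908". Field names mirror the table;
// the actual storage layout is not specified.
class MediaObject {
    String source;             // original file name, URL, or unique message ID
    String imageId;            // ID of the representative image stored on the MMS 804
    byte[] cache;              // copy of the original contents, where needed
    Map<String, Object> other; // any additional data (element 908)
}
```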
To better understand the operation of the present invention, an exemplary architecture for multimedia conversation system 1000 constructed in accordance with the present invention is shown in
Overview of Automatic Thumbnail Generation Process. Referring now to
Referring now to
There are a variety of possible ways to determine which content is being dropped into a multimedia message. The following description is meant to be an example and not an exhaustive description of the process. Many other ways of implementing the drag and drop operation will be known and easily implemented by those skilled in the art.
On a Microsoft Windows platform running Internet Explorer, a drag operation from an Explorer Window is initiated when the user clicks on one of: 1) the icon next to the URL in the address window, 2) an image in the web page, or 3) highlighted text within the page itself. The user then drags the selected item into the message window of the multimedia message application. When the user releases the mouse button in the message window, the multimedia application (MMA) is informed of the contents of the dragged objects by the operating system drag and drop interface that is implemented differently depending on whether JAVA, Visual Basic, or some other programming language is used. Dropping an object into JAVA, for instance, means that the JAVA application listening for a drop operation receives an object that represents a JAVA String and contains some textual description of the dropped object. In the case of a file from the file storage of the computer, the string is the complete path and filename of the file object. In the case of an image from a web page, it is also a complete path and filename, but the file comes from a special temporary directory that indicates it was downloaded from the Internet. In the case of dragging a URL from the Internet Explorer window, the String object represents a full URL, starting with “http://”. This makes it easy to distinguish between an image and a full URL dragged from the Explorer window.
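The classification this paragraph describes might be sketched as follows; the temporary-directory test is an assumption standing in for the platform-specific check on the browser's download cache:

```java
import java.io.File;

// Sketch only: classifies the String delivered by a drop, following the rules
// described above. The temporary-directory test stands in for the
// platform-specific check on Internet Explorer's download cache.
class DropClassifier {
    enum DropKind { URL, WEB_IMAGE_FILE, LOCAL_FILE, TEXT }

    static DropKind classify(String dropped, String browserTempDir) {
        if (dropped.startsWith("http://")) return DropKind.URL; // full URL from the address bar
        File f = new File(dropped);
        if (f.isFile()) {
            // files under the browser's temporary directory came from the web
            return dropped.startsWith(browserTempDir) ? DropKind.WEB_IMAGE_FILE
                                                      : DropKind.LOCAL_FILE;
        }
        return DropKind.TEXT; // selected text rather than a path
    }
}
```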
If a full URL is received by the MMA, it takes a snapshot of the portion of the screen containing the Internet Explorer browser in order to make a visual representation of the link and display that in the MMA's message window. Scaling down the high-resolution image captured from the screen can also generate a lower-resolution thumbnail.
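A sketch of the capture-and-scale step using standard Java imaging classes; locating the browser window's bounds is assumed to be handled elsewhere:

```java
import java.awt.Graphics2D;
import java.awt.Rectangle;
import java.awt.RenderingHints;
import java.awt.Robot;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Sketch only: snapshots a known screen region (assumed to be the browser
// window) and scales it down to a thumbnail.
class ThumbnailCapture {
    static void captureAndScale(Rectangle browserBounds, File out, int thumbWidth) throws Exception {
        BufferedImage shot = new Robot().createScreenCapture(browserBounds);
        int thumbHeight = thumbWidth * shot.getHeight() / shot.getWidth(); // preserve aspect ratio
        BufferedImage thumb = new BufferedImage(thumbWidth, thumbHeight, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = thumb.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(shot, 0, 0, thumbWidth, thumbHeight, null); // scale down the capture
        g.dispose();
        ImageIO.write(thumb, "png", out);
    }
}
```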
If an image is dragged in from the web browser to the MMA, that image is used as the representation of the link and the URL of the link is requested from the Internet Explorer browser using methods widely understood by software developers. Thus, both the representational image and the link are captured and stored as part of the object 900.
If an image is dragged from the file storage of the computer or another networked computer, the image is stored in the multimedia message system 804 and is thereby made available to message recipients. This is accomplished using a network file copy over NFS, FTP, HTTP, or some other well-known network file transfer protocol. The image is assigned a unique ID number and the URL of the newly transferred image becomes the link or pointer to the dropped image. In addition, the image itself becomes the image representing the file, and a low-resolution version of the image can be created as a thumbnail for the high-resolution image.
Other documents stored on a local file system and dropped into the MMA can be automatically added to the multimedia message system 804, and the link or URL would be based on the new unique filename generated by the multimedia message system 804 for that file. If it is possible to extract or construct an image that represents that file, such an extracted or constructed image could represent the file in the object 900. However, if an image cannot be made or obtained to represent that file, a generic pictorial representation of the document type, commonly referred to as an ‘icon’, could be chosen and associated with the object 900. In the case that the file type is unrecognizable, a sufficiently generic icon could be selected to represent the file. All objects dragged into the multimedia message system 804 application window can thus have images that represent them: either very specific images that replicate the content of the object, more generic icon images that represent simply the type of content contained in the file, or, in the final case, an image that indicates the content is unknown.
If selected text is dragged into an MMA window, the string object delivered to the application contains the selected text. In this case, either the text can be converted into a visual representation like a label and that label can be used in the multimedia message, or a text icon can be the image for the object 900.
Multimedia messages stored in the multimedia message system 804 already have images associated with them. If the object dragged into an MMA window is another multimedia message, the representative image is used in the created object 900. The link is a pointer or URL to the MMS which indicates which message is being referenced.
There are many cases where information is made available on the web only after the user has entered an appropriate username and password. Web documents are also often modified, sometimes daily, which means that a URL may not actually contain the same information at a later point in time. For this reason, it is advantageous to store the information contained on a web page at the time the URL is dropped into the MMA. The web page is downloaded and stored in a file that is associated with the object 900, for example in other information 908. This information can be used to present the original information contained in the web page to the recipient of the message even if the web page has been modified between when the message was sent and when it was received.
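A sketch of caching the page contents at drop time; only the top-level HTML is fetched here, and the hash-based file name is an illustrative assumption:

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch only: stores a copy of the page's contents at drop time so the
// original information survives later edits to the page. Embedded images
// would need the same treatment.
class PageCache {
    static Path cachePage(String pageUrl, Path cacheDir) throws Exception {
        Path dest = cacheDir.resolve(Integer.toHexString(pageUrl.hashCode()) + ".html");
        try (InputStream in = new URL(pageUrl).openStream()) {
            Files.copy(in, dest); // the stored copy is associated with the object 900
        }
        return dest;
    }
}
```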
At step 1214, an image is captured from the screen to associate with a URL that has been dropped into the MMA window. There are many ways of generating a representational image of which capturing the screen is perhaps the most convenient. An image can be generated inside a memory buffer that is not displayed on the screen as well. In the case of an audio file, a waveform of the audio object could be rendered and used as the image for the audio object. Alternatively, the thumbnail could have been generated previously and retrieved.
Next in step 1210, the information in the window or object is captured. For example, the multimedia message or web page in a window is retrieved. Then in step 1212, a reference or link to the information displayed in the window is stored in the object. For example, if the information is a web page, then a hypertext link to the page may be stored. If the information is a message in the multimedia message system 804, then a reference to that message may be stored. Links to other forms of information, such as intranet web pages or other content, may similarly be stored. In an alternate embodiment, the system 804 may capture the data and store it as part of the multimedia message system. For a web page, the HTML code used to produce the page can be stored. This is particularly advantageous because it preserves the data for later use: content on the World Wide Web can change, destroying the value and meaning of links and references to a given page. Next in step 1214, an image of the information captured in step 1210 is generated. This is preferably performed by generating a screen shot of the information being displayed in the window. Alternatively, the captured data may be rendered and processed by other means. For example, the image data may be scaled, the waveform of an audio object may be compacted, or the thumbnail may have been generated previously and need only be retrieved. Then in step 1216, an object or an instance of an object as described with reference to
Referring now to
Referring now to
Referring now to
Viewing Original Content Represented by Thumbnail Images. Referring now to
The process begins by displaying 1602 a thumbnail image of a message or message object by the multimedia message system. Such a thumbnail image may merely be part of a larger message having a plurality of elements or may be displayed alone. Next, the user selects 1604 an object for display of its original content. This is preferably done with the user positioning the pointer over the thumbnail image and clicking the mouse. Once the object has been selected, the method preferably performs a series of tests 1606, 1612, 1618 to determine the type of underlying content that has been encapsulated in the object, and the manner of display best suited for the content.
In step 1606, the method first tests whether the object references a multimedia message. If so, the method retrieves 1608 the message referenced or linked to the object. Then a window of the multimedia message system is opened or created, and the retrieved multimedia object is displayed 1610 in the window. The user may then further review the original message, send responses or access other objects shown in the original message. It should be noted that the multimedia message may include a reference to another multimedia message (nested message).
If the object does not reference a multimedia message in step 1606, the method continues in step 1612. In step 1612, the method first tests whether the object references a web page. If so, the method retrieves 1614 the web page referenced or linked to the object. Then a window of the web browser is opened, and the retrieved web page is displayed 1616 in the window. Alternatively, a cached version of the web page and a screen shot could be shown instead of the retrieved web page. The user may then use the full functionality of the web browser to go to other links and retrieve other pages.
If the object does not reference a web page in step 1612, the method continues in step 1624, and the method assumes that the object references original content such as a document, sound clip or video clip. Then the method determines the reference to the original content file in step 1624. Then in step 1626, the method retrieves the information corresponding to the reference from step 1624. Finally in step 1628, the method opens or creates a window suitable for the retrieved information and displays the information using the application designated for the object media type. For example, the information could be a document, spreadsheet, or sound clip, and the appropriate application is launched so that the information can be viewed, modified, or transmitted.
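The dispatch among steps 1606, 1612, and 1624 might be sketched as follows, reusing the MediaObject record from the earlier sketch; the helper methods are abstract placeholders rather than the system's actual API:

```java
// Sketch only: the type dispatch of steps 1606, 1612, and 1624, reusing the
// MediaObject record from the earlier sketch. The helper methods are abstract
// placeholders, not the system's actual API.
abstract class ContentViewer {
    void openOriginal(MediaObject obj) {
        String src = obj.source;
        if (isMessageId(src)) {
            displayMessageWindow(src);        // steps 1608-1610: nested multimedia message
        } else if (src.startsWith("http://")) {
            openInBrowser(src);               // steps 1614-1616: web page in a browser window
        } else {
            launchAssociatedApplication(src); // steps 1624-1628: document, sound or video clip
        }
    }

    abstract boolean isMessageId(String source);
    abstract void displayMessageWindow(String messageId);
    abstract void openInBrowser(String url);
    abstract void launchAssociatedApplication(String path);
}
```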
Referring now to
While the present invention has been described with reference to certain preferred embodiments, those skilled in the art will recognize that various modifications may be provided. For example, the drag and drop functionality provided by the present invention may be used to augment the capabilities already existing in a multimedia message system. In particular, if a user is recording a voice message, and at the same time drags and drops one window into the composition window for the voice message, the system could automatically capture the image as described above and create a new visual object as well as also provide an index to a particular point in the voice message that relates to the captured image. This could all be done automatically at the same time the user is creating the voice message. Variations upon and modifications to the preferred embodiments are provided for by the present invention, which is limited only by the following claims.
Inventors: Barrus, John W.; Wolff, Gregory J.