Embodiments of the present invention provide techniques for retrieving and displaying multimedia information. According to an embodiment of the present invention, a graphical user interface (GUI) is provided that displays multimedia information that may be stored in a multimedia document. According to the teachings of the present invention, the GUI enables a user to navigate through multimedia information stored in a multimedia document. The GUI provides both a focused and a contextual view of the contents of the multimedia document.
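The "focused and contextual view" described above can be illustrated with a minimal sketch (not the patented implementation; all names here are hypothetical): a lens positioned over a coarse, contextual representation of the document timeline selects the time range rendered in a finer, focused view, and a second lens over the focused view drives a third, even finer view.

```python
# Hypothetical sketch of lens-based focus + context navigation over a
# multimedia document. Two information types (video keyframes and transcript
# text) are modeled as (timestamp, payload) pairs on one shared timeline.
from dataclasses import dataclass


@dataclass
class Lens:
    """A lens covering the time span [start, end) of the document timeline."""
    start: float
    end: float


def covered_items(items, lens):
    """Return the (timestamp, payload) items whose timestamps the lens covers."""
    return [(t, payload) for t, payload in items if lens.start <= t < lens.end]


# One multimedia document, two types of information keyed by seconds.
keyframes = [(0.0, "kf0"), (10.0, "kf1"), (20.0, "kf2"), (30.0, "kf3")]
transcript = [(0.0, "hello"), (12.0, "world"), (25.0, "goodbye")]

# First (contextual) area shows everything; the first lens covers 10-25 s,
# so the second (focused) area shows only the covered portion of both types.
lens1 = Lens(10.0, 25.0)
second_view_video = covered_items(keyframes, lens1)
second_view_text = covered_items(transcript, lens1)
print(second_view_video)  # [(10.0, 'kf1'), (20.0, 'kf2')]
print(second_view_text)   # [(12.0, 'world')]

# A second lens over the second view drives a third, finer view.
lens2 = Lens(12.0, 22.0)
third_view_video = covered_items(second_view_video, lens2)
print(third_view_video)   # [(20.0, 'kf2')]
```

Moving a lens would simply recompute the downstream view from the new covered range, which is the behavior the claims below recite as "automatically changing" the dependent representation.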
15. A method of displaying multimedia information stored in a multimedia document on a display, the method comprising:
displaying a graphical user interface (GUI) on the display;
displaying, in a first area of the GUI, a first visual representation of the multimedia information stored in the multimedia document, the first visual representation including a first representation of information of a first type stored in the multimedia document and a first representation of information of a second type stored in the multimedia document;
displaying, in the first area of the GUI, a first lens positionable over a plurality of portions of the first visual representation displayed within the first area of the GUI, the first lens covering a first portion of the first visual representation within the first area;
displaying, in a second area of the GUI, a second visual representation of the multimedia information stored in the multimedia document based on the first lens covering the first portion of the first visual representation within the first area, the second visual representation including a second representation of the information of the first type stored in the multimedia document and a second representation of the information of the second type stored in the multimedia document;
receiving information indicating a user-specified concept of interest; and
analyzing the multimedia information stored in the multimedia document to identify one or more locations in the multimedia information that are relevant to the user-specified concept of interest;
wherein displaying, in the first area of the GUI, the first visual representation of the multimedia information stored in the multimedia document comprises annotating the one or more locations in the multimedia information that are relevant to the user-specified concept of interest; and
wherein displaying, in the second area of the GUI, the second visual representation of the multimedia information stored in the multimedia document comprises annotating the one or more locations in the multimedia information that are relevant to the user-specified concept of interest and that are located in the first portion of the first visual representation covered by the first lens within the first area.
30. A computer program product stored on a computer-readable storage medium for displaying multimedia information stored in a multimedia document on a display, the computer program product comprising:
code for displaying a graphical user interface (GUI) on the display;
code for displaying, in a first area of the GUI, a first visual representation of the multimedia information stored in the multimedia document, the first visual representation including a first representation of information of a first type stored in the multimedia document and a first representation of information of a second type stored in the multimedia document;
code for displaying a first lens positionable over a plurality of portions of the first visual representation displayed within the first area of the GUI, the first lens covering a first portion of the first visual representation within the first area;
code for displaying, in a second area of the GUI, a second visual representation of the multimedia information stored in the multimedia document based on the first lens covering the first portion of the first visual representation within the first area, the second visual representation including a second representation of the information of the first type stored in the multimedia document and a second representation of the information of the second type stored in the multimedia document;
code for receiving information indicating a user-specified concept of interest; and
code for analyzing the multimedia information stored in the multimedia document to identify one or more locations in the multimedia information that are relevant to the user-specified concept of interest;
wherein the code for displaying, in the first area of the GUI, the first visual representation of the multimedia information stored in the multimedia document comprises code for annotating the one or more locations in the multimedia information that are relevant to the user-specified concept of interest; and
wherein the code for displaying, in the second area of the GUI, the second visual representation of the multimedia information stored in the multimedia document comprises code for annotating the one or more locations in the multimedia information that are relevant to the user-specified concept of interest and that are located in the first portion of the first visual representation covered by the first lens within the first area.
45. A system for displaying multimedia information stored in a multimedia document, the system comprising:
a display;
a processor; and
a memory coupled to the processor, the memory configured to store a plurality of code modules for execution by the processor, the plurality of code modules comprising:
a code module for displaying a graphical user interface (GUI) on the display;
a code module for displaying, in a first area of the GUI, a first visual representation of the multimedia information stored in the multimedia document, the first visual representation including a first representation of information of a first type stored in the multimedia document and a first representation of information of a second type stored in the multimedia document;
a code module for displaying, in the first area of the GUI, a first lens positionable over a plurality of portions of the first visual representation displayed within the first area of the GUI, the first lens covering a first portion of the first visual representation within the first area;
a code module for displaying, in a second area of the GUI, a second visual representation of the multimedia information stored in the multimedia document based on the first lens covering the first portion of the first visual representation within the first area, the second visual representation including a second representation of the information of the first type stored in the multimedia document and a second representation of the information of the second type stored in the multimedia document;
a code module for receiving information indicating a user-specified concept of interest; and
a code module for analyzing the multimedia information stored in the multimedia document to identify one or more locations in the multimedia information that are relevant to the user-specified concept of interest;
wherein the code module for displaying, in the first area of the GUI, the first visual representation of the multimedia information stored in the multimedia document comprises a code module for annotating the one or more locations in the multimedia information that are relevant to the user-specified concept of interest; and
wherein the code module for displaying, in the second area of the GUI, the second visual representation of the multimedia information stored in the multimedia document comprises a code module for annotating the one or more locations in the multimedia information that are relevant to the user-specified concept of interest and that are located in the first portion of the first visual representation covered by the first lens within the first area.
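The annotation behavior recited in the claims above, identifying locations relevant to a user-specified concept of interest and marking them in both the contextual view and the lens-covered view, can be sketched as follows (a minimal illustration with hypothetical names, not the patented implementation):

```python
# Hypothetical sketch: find timestamps relevant to a concept of interest in a
# transcript, then annotate those locations in a given view of the document.
def find_relevant_locations(transcript, concept):
    """Timestamps whose transcript text contains the concept of interest."""
    return [t for t, text in transcript if concept in text]


def annotate(view_items, relevant_times, marker="*"):
    """Mark the items in a view whose timestamps were found relevant."""
    return [(t, payload + marker) if t in relevant_times else (t, payload)
            for t, payload in view_items]


transcript = [(0.0, "intro"), (12.0, "budget review"), (25.0, "budget vote")]
hits = find_relevant_locations(transcript, "budget")
print(hits)  # [12.0, 25.0]

# First area: annotate every relevant location in the full representation.
first_view = annotate(transcript, hits)

# Second area: annotate only the relevant locations falling under the lens,
# here a lens covering 10-20 s of the timeline.
lens_start, lens_end = 10.0, 20.0
covered = [(t, p) for t, p in transcript if lens_start <= t < lens_end]
second_view = annotate(covered, hits)
print(second_view)  # [(12.0, 'budget review*')]
```

In a real system the relevance analysis could be keyword search, transcript alignment, or any retrieval model; the claims only require that the identified locations be annotated consistently in both views.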
1. A method of displaying multimedia information stored in a multimedia document on a display, the method comprising:
displaying a graphical user interface (GUI) on the display;
displaying, in a first area of the GUI, a first visual representation of the multimedia information stored in the multimedia document, the first visual representation including a first representation of information of a first type stored in the multimedia document and a first representation of information of a second type stored in the multimedia document;
displaying, in the first area of the GUI, a first lens positionable over a plurality of portions of the first visual representation displayed within the first area of the GUI, the first lens covering a first portion of the first visual representation within the first area;
displaying, in a second area of the GUI, a second visual representation of the multimedia information stored in the multimedia document based on the first lens covering the first portion of the first visual representation within the first area, the second visual representation including a second representation of the information of the first type stored in the multimedia document and a second representation of the information of the second type stored in the multimedia document;
displaying, in the second area of the GUI, a second lens positionable over a plurality of portions of the second visual representation displayed within the second area of the GUI, the second lens covering a first portion of the second visual representation within the second area; and
displaying, in a third area of the GUI, a third visual representation of the multimedia information stored in the multimedia document based on the second lens covering the first portion of the second visual representation within the second area, the third visual representation including a third representation of the information of the first type and a third representation of the information of the second type,
wherein displaying the first visual representation of the multimedia information stored in the multimedia document in the first area of the GUI comprises:
displaying a first thumbnail image in the first area of the GUI, the first thumbnail image comprising the first representation of the information of the first type; and
displaying a second thumbnail image in the first area of the GUI, the second thumbnail image comprising the first representation of the information of the second type,
wherein displaying the second visual representation of the multimedia information stored in the multimedia document in the second area of the GUI comprises:
displaying, in a first sub-area of the second area of the GUI, the portion of the first representation of the information of the first type covered by the first lens as the second representation of the information of the first type; and
displaying, in a second sub-area of the second area of the GUI, the portion of the first representation of the information of the second type covered by the first lens as the second representation of the information of the second type,
wherein displaying the third visual representation of the multimedia information stored in the multimedia document in the third area of the GUI comprises:
displaying, in a first sub-area of the third area of the GUI, the portion of the second representation of the information of the first type covered by the second lens as the third representation of the information of the first type; and
displaying, in a second sub-area of the third area of the GUI, the portion of the second representation of the information of the second type covered by the second lens as the third representation of the information of the second type.
16. A computer program product stored on a computer-readable storage medium for displaying multimedia information stored in a multimedia document on a display, the computer program product comprising:
code for displaying a graphical user interface (GUI) on the display;
code for displaying, in a first area of the GUI, a first visual representation of the multimedia information stored in the multimedia document, the first visual representation including a first representation of information of a first type stored in the multimedia document and a first representation of information of a second type stored in the multimedia document;
code for displaying a first lens positionable over a plurality of portions of the first visual representation displayed within the first area of the GUI, the first lens covering a first portion of the first visual representation within the first area;
code for displaying, in a second area of the GUI, a second visual representation of the multimedia information stored in the multimedia document based on the first lens covering the first portion of the first visual representation within the first area, the second visual representation including a second representation of the information of the first type stored in the multimedia document and a second representation of the information of the second type stored in the multimedia document;
code for displaying, in the second area of the GUI, a second lens positionable over a plurality of portions of the second visual representation displayed within the second area of the GUI, the second lens covering a first portion of the second visual representation within the second area; and
code for displaying, in a third area of the GUI, a third visual representation of the multimedia information stored in the multimedia document based on the second lens covering the first portion of the second visual representation within the second area, the third visual representation comprising a third representation of the information of the first type and a third representation of the information of the second type,
wherein the code for displaying the first visual representation of the multimedia information stored in the multimedia document in the first area of the GUI comprises:
code for displaying a first thumbnail image in the first area of the GUI, the first thumbnail image comprising the first representation of the information of the first type; and
code for displaying a second thumbnail image in the first area of the GUI, the second thumbnail image comprising the first representation of the information of the second type,
wherein the code for displaying the second visual representation of the multimedia information stored in the multimedia document in the second area of the GUI comprises:
code for displaying, in a first sub-area of the second area of the GUI, the portion of the first representation of the information of the first type covered by the first lens; and
code for displaying, in a second sub-area of the second area of the GUI, the portion of the first representation of the information of the second type covered by the first lens,
wherein the code for displaying the third visual representation of the multimedia information stored in the multimedia document in the third area of the GUI comprises:
code for displaying, in a first sub-area of the third area of the GUI, the portion of the second representation of the information of the first type covered by the second lens as the third representation of the information of the first type; and
code for displaying, in a second sub-area of the third area of the GUI, the portion of the second representation of the information of the second type covered by the second lens as the third representation of the information of the second type.
31. A system for displaying multimedia information stored in a multimedia document, the system comprising:
a display;
a processor; and
a memory coupled to the processor, the memory configured to store a plurality of code modules for execution by the processor, the plurality of code modules comprising:
a code module for displaying a graphical user interface (GUI) on the display;
a code module for displaying, in a first area of the GUI, a first visual representation of the multimedia information stored in the multimedia document, the first visual representation including a first representation of information of a first type stored in the multimedia document and a first representation of information of a second type stored in the multimedia document;
a code module for displaying, in the first area of the GUI, a first lens positionable over a plurality of portions of the first visual representation displayed within the first area of the GUI, the first lens covering a first portion of the first visual representation within the first area;
a code module for displaying, in a second area of the GUI, a second visual representation of the multimedia information stored in the multimedia document based on the first lens covering the first portion of the first visual representation within the first area, the second visual representation including a second representation of the information of the first type stored in the multimedia document and a second representation of the information of the second type stored in the multimedia document;
a code module for displaying, in the second area of the GUI, a second lens positionable over a plurality of portions of the second visual representation displayed within the second area of the GUI, the second lens covering a first portion of the second visual representation within the second area; and
a code module for displaying, in a third area of the GUI, a third visual representation of the multimedia information stored in the multimedia document based on the second lens covering the first portion of the second visual representation within the second area, the third visual representation including a third representation of the information of the first type and a third representation of the information of the second type,
wherein the code module for displaying the first visual representation of the multimedia information stored in the multimedia document in the first area of the GUI comprises:
a code module for displaying a first thumbnail image in the first area of the GUI, the first thumbnail image comprising the first representation of the information of the first type; and
a code module for displaying a second thumbnail image in the first area of the GUI, the second thumbnail image comprising the first representation of the information of the second type,
wherein the code module for displaying the second visual representation of the multimedia information stored in the multimedia document in the second area of the GUI comprises:
a code module for displaying, in a first sub-area of the second area of the GUI, the portion of the first representation of the information of the first type covered by the first lens as the second representation of the information of the first type; and
a code module for displaying, in a second sub-area of the second area of the GUI, the portion of the first representation of the information of the second type covered by the first lens as the second representation of the information of the second type,
wherein the code module for displaying the third visual representation of the multimedia information stored in the multimedia document in the third area of the GUI comprises:
a code module for displaying, in a first sub-area of the third area of the GUI, the portion of the second representation of the information of the first type covered by the second lens as the third representation of the information of the first type; and
a code module for displaying, in a second sub-area of the third area of the GUI, the portion of the second representation of the information of the second type covered by the second lens as the third representation of the information of the second type.
2. The method of
displaying a first thumbnail image in the first area of the GUI, the first thumbnail image comprising the first representation of the information of the first type; and
displaying a second thumbnail image in the first area of the GUI, the second thumbnail image comprising the first representation of the information of the second type.
3. The method of
displaying, in a first sub-area of the second area of the GUI, the second representation of the information of the first type as a portion of the first representation of the information of the first type covered by the first lens; and
displaying, in a second sub-area of the second area of the GUI, the second representation of the information of the second type as a portion of the first representation of the information of the second type covered by the first lens.
4. The method of
determining a first time and a second time associated with the first lens;
displaying, in the second area of the GUI, a representation of the information of the first type occurring between the first time and the second time associated with the first lens as the second representation of the information of the first type; and
displaying, in the second area of the GUI, a representation of the information of the second type occurring between the first time and the second time associated with the first lens as the second representation of the information of the second type.
5. The method of
receiving user input moving the first lens over the first visual representation displayed within the first area to cover a second portion of the first visual representation within the first area; and
responsive to the user input, automatically changing the second visual representation displayed in the second area of the GUI such that the second visual representation of the multimedia information stored in the multimedia document displayed in the second area of the GUI corresponds to the second portion of the first visual representation of the multimedia information stored in the multimedia document covered by the first lens.
6. The method of
determining a first time and a second time associated with the second lens;
displaying, in the third area of the GUI, a representation of the information of the first type occurring between the first time and the second time associated with the second lens as the third representation of the information of the first type; and
displaying, in the third area of the GUI, a representation of the information of the second type occurring between the first time and the second time associated with the second lens as the third representation of the information of the second type.
7. The method of
receiving user input moving the second lens over the second visual representation displayed within the second area to cover a second portion of the second visual representation within the second area; and
responsive to the user input, automatically changing the third visual representation displayed in the third area of the GUI such that the third visual representation of the multimedia information stored in the multimedia document displayed in the third area of the GUI corresponds to the second portion of the second visual representation of the multimedia information stored in the multimedia document covered by the second lens.
8. The method of
receiving user input moving the first lens over the first visual representation displayed within the first area to cover a second portion of the first visual representation within the first area; and
responsive to the user input, automatically:
changing the second visual representation displayed in the second area of the GUI such that the second visual representation of the multimedia information stored in the multimedia document displayed in the second area of the GUI corresponds to the second portion of the first visual representation of the multimedia information stored in the multimedia document covered by the first lens; and
changing the third visual representation displayed in the third area of the GUI such that the third visual representation of the multimedia information stored in the multimedia document displayed in the third area of the GUI corresponds to the second visual representation of the multimedia information stored in the multimedia document within the second area.
9. The method of
displaying a sub-lens covering a portion of the first visual representation displayed within the first area of the GUI corresponding to the first portion of the second visual representation within the second area of the GUI covered by the second lens.
10. The method of
receiving user input moving the second lens over the second visual representation displayed within the second area to cover a second portion of the second visual representation within the second area; and
responsive to the user input, automatically changing position of the sub-lens to cover a portion of the first visual representation displayed within the first area of the GUI corresponding to the second portion of the second visual representation within the second area covered by the second lens.
11. The method of
the information of the first type corresponds to video information; and
the first representation of the information of the first type comprises one or more video keyframes extracted from the video information.
12. The method of
the information of the second type corresponds to audio information; and
the first representation of the information of the second type comprises text information obtained from transcribing the audio information.
13. The method of
the information of the second type corresponds to closed-caption (CC) text information; and
the first representation of the information of the second type comprises text information included in the CC text information.
14. The method of
receiving input indicating selection of a portion of the multimedia information occurring between a first time and a second time; and
performing a first operation on the portion of the multimedia information occurring between the first time and the second time.
17. The computer program product of
code for displaying a first thumbnail image in the first area of the GUI, the first thumbnail image comprising the first representation of the information of the first type; and
code for displaying a second thumbnail image in the first area of the GUI, the second thumbnail image comprising the first representation of the information of the second type.
18. The computer program product of
code for displaying, in a first sub-area of the second area of the GUI, the second representation of the information of the first type as a portion of the first representation of the information of the first type covered by the first lens; and
code for displaying, in a second sub-area of the second area of the GUI, the second representation of the information of the second type as a portion of the first representation of the information of the second type covered by the first lens.
19. The computer program product of
code for determining a first time and a second time associated with the first lens;
code for displaying, in the second area of the GUI, a representation of information of the first type occurring between the first time and the second time associated with the first lens as the second representation of the information of the first type; and
code for displaying, in the second area of the GUI, a representation of information of the second type occurring between the first time and the second time associated with the first lens as the second representation of the information of the second type.
20. The computer program product of
code for receiving user input moving the first lens over the first visual representation within the first area to cover a second portion of the first visual representation within the first area; and
responsive to the user input, code for automatically changing the second visual representation displayed in the second area of the GUI such that the second visual representation of the multimedia information stored in the multimedia document displayed in the second area of the GUI corresponds to the second portion of the first visual representation of the multimedia information stored in the multimedia document covered by the first lens.
21. The computer program product of
code for determining a first time and a second time associated with the second lens;
code for displaying, in the third area of the GUI, a representation of the information of the first type occurring between the first time and the second time associated with the second lens as the third representation of the information of the first type; and
code for displaying, in the third area of the GUI, a representation of the information of the second type occurring between the first time and the second time associated with the second lens as the third representation of the information of the second type.
22. The computer program product of
code for receiving user input moving the second lens over the second visual representation displayed within the second area to cover a second portion of the second visual representation within the second area; and
responsive to the user input, code for automatically changing the third visual representation displayed in the third area of the GUI such that the third visual representation of the multimedia information stored in the multimedia document displayed in the third area of the GUI corresponds to the second portion of the second visual representation of the multimedia information stored in the multimedia document covered by the second lens.
23. The computer program product of
code for receiving user input moving the first lens over the first visual representation displayed within the first area to cover a second portion of the first visual representation within the first area; and
responsive to the user input, code for automatically:
changing the second visual representation displayed in the second area of the GUI such that the second visual representation of the multimedia information stored in the multimedia document displayed in the second area of the GUI corresponds to the second portion of the first visual representation of the multimedia information stored in the multimedia document covered by the first lens; and
changing the third visual representation displayed in the third area of the GUI such that the third visual representation of the multimedia information stored in the multimedia document displayed in the third area of the GUI corresponds to the second visual representation of the multimedia information stored in the multimedia document within the second area.
24. The computer program product of
code for displaying a sub-lens covering a portion of the first visual representation displayed within the first area of the GUI corresponding to the first portion of the second visual representation within the second area of the GUI covered by the second lens.
25. The computer program product of
code for receiving user input moving the second lens over the second visual representation displayed within the second area to cover a second portion of the second visual representation within the second area; and
responsive to the user input, code for automatically changing position of the sub-lens to cover a portion of the first visual representation displayed within the first area of the GUI corresponding to the second portion of the second visual representation within the second area covered by the second lens.
26. The computer program product of
the information of the first type corresponds to video information; and
the first representation of the information of the first type comprises one or more video keyframes extracted from the video information.
27. The computer program product of
the information of the second type corresponds to audio information; and
the first representation of information of the second type comprises text information obtained from transcribing the audio information.
28. The computer program product of
the information of the second type corresponds to closed-caption (CC) text information; and
the first representation of information of the second type comprises text information included in the CC text information.
29. The computer program product of
code for receiving input indicating selection of a portion of the multimedia information occurring between a first time and a second time; and
code for performing a first operation on the portion of the multimedia information occurring between the first time and the second time.
32. The system of
a code module for displaying a first thumbnail image in the first area of the gui, the first thumbnail image comprising the first representation of the information of the first type; and
a code module for displaying a second thumbnail image in the first area of the gui, the second thumbnail image comprising the first representation of the information of the second type.
33. The system of
a code module for displaying, in a first sub-area of the second area of the gui, the second representation of the information of the first type as a portion of the first representation of the information of the first type covered by the first lens; and
a code module for displaying, in a second sub-area of the second area of the gui, the second representation of the information of the second type as a portion of the first representation of the information of the second type covered by the first lens.
34. The system of
a code module for determining a first time and a second time associated with the first lens;
a code module for displaying, in the second area of the gui, a representation of the information of the first type occurring between the first time and the second time associated with the first lens as the second representation of the information of the first type; and
a code module for displaying, in the second area of the gui, a representation of the information of the second type occurring between the first time and the second time associated with the first lens as the second representation of the information of the second type.
35. The system of
a code module for receiving user input moving the first lens over the first visual representation displayed within the first area to cover a second portion of the first visual representation within the first area; and
responsive to the user input, a code module for automatically changing the second visual representation displayed in the second area of the gui such that the second visual representation of the multimedia information stored in the multimedia document displayed in the second area of the gui corresponds to the second portion of the first visual representation of the multimedia information stored in the multimedia document covered by the first lens.
36. The system of
a code module for determining a first time and a second time associated with the second lens;
a code module for displaying, in the third area of the gui, a representation of the information of the first type occurring between the first time and the second time associated with the second lens as the third representation of the information of the first type; and
a code module for displaying, in the third area of the gui, a representation of the information of the second type occurring between the first time and the second time associated with the second lens as the third representation of the information of the second type.
37. The system of
a code module for receiving user input moving the second lens over the second visual representation displayed within the second area to cover a second portion of the second visual representation within the second area; and
responsive to the user input, a code module for automatically changing the third visual representation displayed in the third area of the gui such that the third visual representation of the multimedia information stored in the multimedia document displayed in the third area of the gui corresponds to the second portion of the second visual representation of the multimedia information stored in the multimedia document covered by the second lens.
38. The system of
a code module for receiving user input moving the first lens over the first visual representation displayed within the first area to cover a second portion of the first visual representation within the first area; and
responsive to the user input, a code module for automatically:
changing the second visual representation displayed in the second area of the gui such that the second visual representation of the multimedia information stored in the multimedia document displayed in the second area of the gui corresponds to the second portion of the first visual representation of the multimedia information stored in the multimedia document covered by the first lens; and
changing the third visual representation displayed in the third area of the gui such that the third visual representation of the multimedia information stored in the multimedia document displayed in the third area of the gui corresponds to the second visual representation of the multimedia information stored by the multimedia document within the second area.
39. The system of
a code module for displaying a sub-lens covering a portion of the first visual representation displayed within the first area of the gui corresponding to the first portion of the second visual representation within the second area of the gui covered by the second lens.
40. The system of
a code module for receiving user input moving the second lens over the second visual representation displayed within the second area to cover a second portion of the second visual representation within the second area; and
responsive to the user input, a code module for automatically changing position of the sub-lens to cover a portion of the first visual representation displayed within the first area of the gui corresponding to the second portion of the second visual representation within the second area covered by the second lens.
41. The system of
the information of the first type corresponds to video information; and
the first representation of the information of the first type comprises one or more video keyframes extracted from the video information.
42. The system of
the information of the second type corresponds to audio information; and
the first representation of the information of the second type comprises text information obtained from transcribing the audio information.
43. The system of
the information of the second type corresponds to closed-caption (CC) text information; and
the first representation of the information of the second type comprises text information included in the CC text information.
44. The system of
a code module for receiving input indicating selection of a portion of the multimedia information occurring between a first time and a second time; and
a code module for performing a first operation on the portion of the multimedia information occurring between the first time and the second time.
|
The present application claims priority from and is a continuation-in-part (CIP) application of U.S. Non-Provisional patent application Ser. No. 08/995,616, entitled “AUTOMATIC ADAPTIVE DOCUMENT READING HELP SYSTEM” filed Dec. 22, 1997, the entire contents of which are herein incorporated by reference for all purposes.
The present application incorporates by reference for all purposes the entire contents of U.S. Non-Provisional application Ser. No. 10/001,895, entitled “PAPER-BASED INTERFACE FOR MULTIMEDIA INFORMATION” filed Nov. 19, 2001.
A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the xerographic reproduction by anyone of the patent document or the patent disclosure in exactly the form it appears in the U.S. Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
The present invention relates to user interfaces for displaying information and more particularly to user interfaces for retrieving and displaying multimedia information that may be stored in a multimedia document.
With rapid advances in computer technology, an increasing amount of information is being stored in the form of electronic (or digital) documents. These electronic documents include multimedia documents that store multimedia information. The term “multimedia information” is used to refer to information that comprises information of several different types in an integrated form. The different types of information included in multimedia information may include a combination of text information, graphics information, animation information, sound (audio) information, video information, slides information, whiteboard information, and other types of information. Multimedia information is also used to refer to information comprising one or more objects wherein the objects include information of different types. For example, multimedia objects included in multimedia information may comprise text information, graphics information, animation information, sound (audio) information, video information, slides information, whiteboard information, and other types of information. Multimedia documents may be considered as compound objects that comprise video, audio, closed-caption text, keyframes, presentation slides, whiteboard capture information, as well as other multimedia type objects. Examples of multimedia documents include documents storing interactive web pages, television broadcasts, videos, presentations, or the like.
Several tools and applications are conventionally available that allow users to play back, store, index, edit, or manipulate multimedia information stored in multimedia documents. Examples of such tools and/or applications include proprietary or customized multimedia players (e.g., RealPlayer™ provided by RealNetworks, Microsoft Windows Media Player provided by Microsoft Corporation, QuickTime™ Player provided by Apple Corporation, Shockwave multimedia player, and others), video players, televisions, personal digital assistants (PDAs), or the like. Several tools are also available for editing multimedia information. For example, Virage, Inc. of San Mateo, Calif. (www.virage.com) provides various tools for viewing and manipulating video content and tools for creating video databases. Virage, Inc. also provides tools for face detection and on-screen text recognition from video information.
Given the vast number of electronic documents, readers of electronic documents are increasingly being called upon to assimilate vast quantities of information in a short period of time. To meet the demands placed upon them, readers find they must read electronic documents “horizontally” rather than “vertically,” i.e., they must scan, skim, and browse sections of interest in one or more electronic documents rather than read and analyze a single document from start to end. While tools exist which enable users to “horizontally” read electronic documents containing text/image information (e.g., the reading tool described in U.S. Non-Provisional patent application Ser. No. 08/995,616), conventional tools cannot be used to “horizontally” read multimedia documents which may contain audio information, video information, and other types of information. None of the multimedia tools described above allow users to “horizontally” read a multimedia document.
In light of the above, there is a need for techniques that allow users to read a multimedia document “horizontally.” Techniques that allow users to view, analyze, and navigate multimedia information stored in multimedia documents are desirable.
The present invention provides techniques for retrieving and displaying multimedia information. According to an embodiment of the present invention, a graphical user interface (GUI) is provided that displays multimedia information that may be stored in a multimedia document. According to the teachings of the present invention, the GUI enables a user to navigate through multimedia information stored in a multimedia document. The GUI provides both a focused and a contextual view of the contents of the multimedia document. The GUI thus allows users to “horizontally” read multimedia documents.
According to an embodiment of the present invention, techniques are provided for displaying multimedia information stored in a multimedia document on a display. The multimedia information comprises information of a plurality of types including information of a first type and information of a second type. In this embodiment, a graphical user interface (GUI) is displayed on the display. A representation of the multimedia information stored by the multimedia document is displayed in a first area of the GUI. The displayed representation of the multimedia information in the first area comprises a representation of information of the first type and a representation of information of the second type. A first lens is displayed covering a first portion of the first area. A representation of multimedia information comprising a portion of the representation of information of the first type covered by the first lens and a portion of the representation of information of the second type covered by the first lens is displayed in a second area of the GUI.
According to another embodiment of the present invention, techniques are provided for displaying multimedia information stored in a multimedia document on a display. The multimedia information comprises information of a first type and information of a second type. In this embodiment, a graphical user interface (GUI) is displayed on the display. A representation of the multimedia information stored by the multimedia document occurring between a start time (ts) and an end time (te) associated with the multimedia document is displayed in a first area of the GUI. The representation of the multimedia information displayed in the first area of the GUI comprises a representation of information of the first type occurring between ts and te and a representation of information of the second type occurring between ts and te, where (te>ts). A first lens is displayed emphasizing a portion of the first area of the GUI, where the portion of the first area emphasized by the first lens comprises a representation of multimedia information occurring between a first time (t1) and a second time (t2), where (ts≦t1<t2≦te). The representation of multimedia information occurring between t1 and t2 is displayed in a second area of the GUI. The representation of multimedia information displayed in the second area comprises a representation of information of the first type occurring between t1 and t2 and a representation of information of the second type occurring between t1 and t2.
According to yet another embodiment of the present invention, techniques are provided for displaying multimedia information stored in a multimedia document on a display. The multimedia information comprises video information and information of a first type. In this embodiment, a graphical user interface (GUI) is displayed on the display. A first set of one or more video keyframes extracted from the video information occurring between a start time (ts) and an end time (te) associated with the multimedia document, where (te>ts), are displayed in a first section of a first area of the GUI. Text information corresponding to the information of the first type occurring between ts and te is displayed in a second section of the first area of the GUI. A first lens is displayed emphasizing a portion of the first section of the first area occurring between a first time (t1) and a second time (t2) and a portion of the second section of the first area occurring between t1 and t2. The emphasized portion of the first section of the first area comprises a second set of one or more video keyframes extracted from the video information occurring between t1 and t2, and the emphasized portion of the second section of the first area comprises text information corresponding to information of the first type occurring between t1 and t2, wherein the second set of one or more keyframes is a subset of the first set of one or more keyframes and (ts≦t1<t2≦te). The second set of one or more keyframes is displayed in a first section of a second area of the GUI. Text information corresponding to the information of the first type occurring between t1 and t2 is displayed in a second section of the second area of the GUI.
The foregoing, together with other features, embodiments, and advantages of the present invention, will become more apparent when referring to the following specification, claims, and accompanying drawings.
Embodiments of the present invention provide techniques for retrieving and displaying multimedia information. According to an embodiment of the present invention, a graphical user interface (GUI) is provided that displays multimedia information that may be stored in a multimedia document. According to the teachings of the present invention, the GUI enables a user to navigate through multimedia information stored in a multimedia document. The GUI provides both a focused and a contextual view of the contents of the multimedia document. The GUI thus allows a user to “horizontally” read multimedia documents.
As indicated above, the term “multimedia information” is intended to refer to information that comprises information of several different types in an integrated form. The different types of information included in multimedia information may include a combination of text information, graphics information, animation information, sound (audio) information, video information, slides information, whiteboard images information, and other types of information. For example, a video recording of a television broadcast may comprise video information and audio information. In certain instances the video recording may also comprise closed-captioned (CC) text information which comprises material related to the video information, and in many cases, is an exact representation of the speech contained in the audio portions of the video recording. Multimedia information is also used to refer to information comprising one or more objects wherein the objects include information of different types. For example, multimedia objects included in multimedia information may comprise text information, graphics information, animation information, sound (audio) information, video information, slides information, whiteboard images information, and other types of information.
The term “multimedia document” as used in this application is intended to refer to any electronic storage unit (e.g., a file) that stores multimedia information in digital format. Various different formats may be used to store the multimedia information. These formats include various MPEG formats (e.g., MPEG 1, MPEG 2, MPEG 4, MPEG 7, etc.), MP3 format, SMIL format, HTML+TIME format, WMF (Windows Media Format), RM (Real Media) format, Quicktime format, Shockwave format, various streaming media formats, formats being developed by the engineering community, proprietary and customary formats, and others. Examples of multimedia documents include video recordings, MPEG files, news broadcast recordings, presentation recordings, recorded meetings, classroom lecture recordings, broadcast television programs, or the like.
Communication network 108 provides a mechanism allowing the various computer systems depicted in
Communication links 110 used to connect the various systems depicted in
Computer systems connected to communication network 108 may be classified as “clients” or “servers” depending on the role the computer systems play with respect to requesting information and/or services or providing information and/or services. Computer systems that are used by users to request information or to request a service are classified as “client” computers (or “clients”). Computer systems that store information and provide the information in response to a user request received from a client computer, or computer systems that perform processing to provide the user-requested services are called “server” computers (or “servers”). It should however be apparent that a particular computer system may function both as a client and as a server.
Accordingly, according to an embodiment of the present invention, server system 104 is configured to perform processing to facilitate generation of a GUI that displays multimedia information according to the teachings of the present invention. The GUI generated by server system 104 may be output to the user (e.g., a reader of the multimedia document) via an output device coupled to server system 104 or via client systems 102. The GUI generated by server 104 enables the user to retrieve and browse multimedia information that may be stored in a multimedia document. The GUI provides both a focused and a contextual view of the contents of a multimedia document and thus enables the multimedia document to be read “horizontally.”
The processing performed by server system 104 to generate the GUI and to provide the various features according to the teachings of the present invention may be implemented by software modules executing on server system 104, by hardware modules coupled to server system 104, or combinations thereof. In alternative embodiments of the present invention, the processing may also be distributed between the various computer systems depicted in
The multimedia information that is displayed in the GUI may be stored in a multimedia document that is accessible to server system 104. For example, the multimedia document may be stored in a storage subsystem of server system 104. The multimedia document may also be stored by other systems such as MIS 106 that are accessible to server 104. Alternatively, the multimedia document may be stored in a memory location accessible to server system 104.
In alternative embodiments, instead of accessing a multimedia document, server system 104 may receive a stream of multimedia information (e.g., a streaming media signal, a cable signal, etc.) from a multimedia information source such as MIS 106. According to an embodiment of the present invention, server system 104 stores the multimedia information signals in a multimedia document and then generates a GUI that displays the multimedia information. Examples of MIS 106 include a television broadcast receiver, a cable receiver, a digital video recorder (e.g., a TIVO box), or the like. For example, multimedia information source 106 may be embodied as a television that is configured to receive multimedia broadcast signals and to transmit the signals to server system 104. In alternative embodiments, server system 104 may be configured to intercept multimedia information signals received by MIS 106. Server system 104 may receive the multimedia information directly from MIS 106 or may alternatively receive the information via a communication network such as communication network 108.
As described above, MIS 106 depicted in
Users may use client systems 102 to view the GUI generated by server system 104. Users may also use client systems 102 to interact with the other systems depicted in
According to an embodiment of the present invention, a single computer system may function both as server system 104 and as client system 102. Various other configurations of the server system 104, client system 102, and MIS 106 are possible.
Bus subsystem 204 provides a mechanism for letting the various components and subsystems of computer system 200 communicate with each other as intended. The various subsystems and components of computer system 200 need not be at the same physical location but may be distributed at various locations within network 100. Although bus subsystem 204 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple busses.
User interface input devices 212 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a barcode scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information using computer system 200.
User interface output devices 214 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may be a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or the like. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computer system 200. According to an embodiment of the present invention, the GUI generated according to the teachings of the present invention may be presented to the user via output devices 214.
Storage subsystem 206 may be configured to store the basic programming and data constructs that provide the functionality of the computer system and of the present invention. For example, according to an embodiment of the present invention, software modules implementing the functionality of the present invention may be stored in storage subsystem 206 of server system 104. These software modules may be executed by processor(s) 202 of server system 104. In a distributed environment, the software modules may be stored on a plurality of computer systems and executed by processors of the plurality of computer systems. Storage subsystem 206 may also provide a repository for storing various databases that may be used by the present invention. Storage subsystem 206 may comprise memory subsystem 208 and file storage subsystem 210.
Memory subsystem 208 may include a number of memories including a main random access memory (RAM) 218 for storage of instructions and data during program execution and a read only memory (ROM) 220 in which fixed instructions are stored. File storage subsystem 210 provides persistent (non-volatile) storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a Compact Disk Read Only Memory (CD-ROM) drive, an optical drive, removable media cartridges, and other like storage media. One or more of the drives may be located at remote locations on other connected computers.
Computer system 200 itself can be of varying types including a personal computer, a portable computer, a workstation, a computer terminal, a network computer, a mainframe, a kiosk, a personal digital assistant (PDA), a communication device such as a cell phone, or any other data processing system. Server computers generally have more storage and processing capacity than client systems. Due to the ever-changing nature of computers and networks, the description of computer system 200 depicted in
GUI 300 displays multimedia information stored in a multimedia document. The multimedia information stored by the multimedia document and displayed by GUI 300 may comprise information of a plurality of different types. As depicted in
The television broadcast may be stored using a variety of different techniques. According to one technique, the television broadcast is recorded and stored using a satellite receiver connected to a PC-TV video card of server system 104. Applications executing on server system 104 then process the recorded television broadcast to facilitate generation of GUI 300. For example, the video information contained in the television broadcast may be captured using an MPEG capture application that creates a separate metafile (e.g., in XML format) containing temporal information for the broadcast and closed-caption text, if provided. Information stored in the metafile may then be used to generate GUI 300 depicted in
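By way of a hypothetical illustration, a metafile of the kind described above might be parsed as follows. This is a minimal sketch only: the `<broadcast>`/`<segment>` element names and attributes are assumptions for the example, as the actual metafile schema is not specified in this description.

```python
import xml.etree.ElementTree as ET

# Hypothetical example of an XML metafile carrying temporal
# information and closed-caption text; the element and attribute
# names are assumptions, not the actual schema.
SAMPLE_METAFILE = """<broadcast>
  <segment start="0.0" end="5.2" text="Good evening."/>
  <segment start="5.2" end="9.8" text="Our top story tonight..."/>
</broadcast>"""

def load_segments(xml_string):
    """Parse the metafile into (start, end, text) tuples, one per
    time-stamped segment of the broadcast."""
    root = ET.fromstring(xml_string)
    return [(float(s.get("start")), float(s.get("end")), s.get("text"))
            for s in root.findall("segment")]
```

The resulting list of time-stamped segments could then drive the rendering of the viewing areas described below.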
As depicted in
According to an embodiment of the present invention, first viewing area 302 displays one or more commands that may be selected by a user viewing GUI 300. Various user interface features such as menu bars, drop-down menus, cascading menus, buttons, selection bars, etc. may be used to display the user-selectable commands. According to an embodiment of the present invention, the commands provided in first viewing area 302 include a command that enables the user to select a multimedia document whose multimedia information is to be displayed in the GUI. The commands may also include one or more commands that allow the user to configure and/or customize the manner in which multimedia information stored in the user-selected multimedia document is displayed in GUI 300. Various other commands may also be provided in first viewing area 302.
According to an embodiment of the present invention, second viewing area 304 displays a scaled representation of multimedia information stored by the multimedia document. The user may select the scaling factor used for displaying information in second viewing area 304. According to a particular embodiment of the present invention, a representation of the entire (i.e., multimedia information between the start time and end time associated with the multimedia document) multimedia document is displayed in second viewing area 304. In this embodiment, one end of second viewing area 304 represents the start time of the multimedia document and the opposite end of second viewing area 304 represents the end time of the multimedia document.
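The scaled representation amounts to a linear mapping from the document's timeline onto the length of the second viewing area. A minimal sketch, in which the function name, pixel length, and timestamps are all hypothetical:

```python
def time_to_offset(t, t_start, t_end, area_length_px):
    """Map a timestamp within [t_start, t_end] to a pixel offset
    along the second viewing area, so that one end of the area
    represents the start time and the opposite end the end time."""
    if not (t_start <= t <= t_end):
        raise ValueError("timestamp outside document range")
    fraction = (t - t_start) / (t_end - t_start)
    return round(fraction * area_length_px)
```

For example, under this mapping the midpoint of a 60-second document lands at the midpoint of the viewing area, whatever scaling factor the user selects.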
As shown in
Thumbnail image 312-2 displays a representation of video information included in the multimedia information displayed by GUI 300. In the embodiment depicted in
One or more thumbnail images may be displayed in second viewing area 304 based upon the different types of information included in the multimedia information being displayed. Each thumbnail image 312 displayed in second viewing area 304 displays a representation of information of a particular type included in the multimedia information stored by the multimedia document. According to an embodiment of the present invention, the number of thumbnails displayed in second viewing area 304 and the type of information displayed by each thumbnail is user-configurable.
According to an embodiment of the present invention, the various thumbnail images displayed in second viewing area 304 are temporally synchronized or aligned with each other along a timeline. This implies that the various types of information included in the multimedia information and occurring at approximately the same time are displayed next to each other. For example, thumbnail images 312-1 and 312-2 are aligned such that the text information (which may represent CC text information or a transcript of the audio information) displayed in thumbnail image 312-1 and video keyframes displayed in thumbnail 312-2 that occur in the multimedia information at a particular point in time are displayed close to each other (e.g., along the same horizontal axis). Accordingly, information that has a particular time stamp is displayed proximal to information that has approximately the same time stamp. This enables a user to determine the various types of information occurring approximately concurrently in the multimedia information being displayed by GUI 300 by simply scanning second viewing area 304 in the horizontal axis.
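The temporal alignment described above can be sketched as grouping time-stamped items of different types into common time buckets, so that items occurring at approximately the same time render side by side. The bucket width, entry dictionary keys, and type labels below are assumptions for illustration, not part of the disclosed embodiment:

```python
from collections import defaultdict

def align_by_time(entries, bucket_seconds=5):
    """Group time-stamped entries of different types (e.g. 'text'
    from CC information, 'keyframe' from video information) into
    time buckets, so each bucket holds the items that occur at
    approximately the same time in the multimedia information."""
    buckets = defaultdict(dict)
    for entry in entries:
        bucket = int(entry["time"] // bucket_seconds)
        buckets[bucket].setdefault(entry["type"], []).append(entry)
    # Return buckets in timeline order for rendering along one axis.
    return [buckets[b] for b in sorted(buckets)]
```

Each returned bucket then corresponds to one horizontal slice of the second viewing area, with one column per information type.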
According to the teachings of the present invention, a viewing lens or window 314 (hereinafter referred to as “thumbnail viewing area lens 314”) is displayed in second viewing area 304. Thumbnail viewing area lens 314 covers or emphasizes a portion of second viewing area 304. According to the teachings of the present invention, multimedia information corresponding to the area of second viewing area 304 covered by thumbnail viewing area lens 314 is displayed in third viewing area 306.
In the embodiment depicted in
In response to a change in the position of thumbnail viewing area lens 314 from a first location in second viewing area 304 to a second location along second viewing area 304, the multimedia information displayed in third viewing area 306 is automatically updated such that the multimedia information displayed in third viewing area 306 continues to correspond to the area of second viewing area 304 emphasized by thumbnail viewing area lens 314. Accordingly, a user may use thumbnail viewing area lens 314 to navigate and scroll through the contents of the multimedia document displayed by GUI 300. Thumbnail viewing area lens 314 thus provides a context and indicates a location of the multimedia information displayed in third viewing area 306 within the entire multimedia document.
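The lens behavior just described can be sketched as a small state object: moving the lens clamps its position within the viewing area and yields the new time segment whose multimedia information the third viewing area should display. The class, its pixel coordinates, and the clamping policy are hypothetical, assuming the same linear time-to-pixel mapping as the scaled representation:

```python
class ThumbnailLens:
    """Hypothetical sketch of a thumbnail viewing area lens whose
    position within the viewing area determines the time segment
    shown in the zoomed (third) viewing area."""

    def __init__(self, doc_start, doc_end, area_px, lens_px):
        self.doc_start, self.doc_end = doc_start, doc_end
        self.area_px, self.lens_px = area_px, lens_px
        self.top_px = 0

    def move_to(self, top_px):
        # Clamp so the lens always stays inside the viewing area.
        self.top_px = max(0, min(top_px, self.area_px - self.lens_px))
        return self.time_range()

    def time_range(self):
        """Return (t1, t2), the segment emphasized by the lens."""
        span = self.doc_end - self.doc_start
        t1 = self.doc_start + span * self.top_px / self.area_px
        t2 = self.doc_start + span * (self.top_px + self.lens_px) / self.area_px
        return (t1, t2)
```

On each `move_to` call, the GUI would re-render the third viewing area with the multimedia information occurring in the returned segment, keeping the zoomed view synchronized with the lens.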
As shown in
As described above, multimedia information corresponding to the portion of second viewing area 304 emphasized by thumbnail viewing area lens 314 is displayed in third viewing area 306. Accordingly, a representation of multimedia information occurring between time t1 and t2 (corresponding to a segment of time of the multimedia document emphasized by thumbnail viewing area lens 314) is displayed in third viewing area 306. Third viewing area 306 thus displays a zoomed-in representation of the multimedia information stored by the multimedia document corresponding to the portion of the multimedia document emphasized by thumbnail viewing area lens 314.
As depicted in
Like thumbnail images 312, panels 324 are also temporally aligned or synchronized with each other. Accordingly, the various types of information included in the multimedia information and occurring at approximately the same time are displayed next to each other in third viewing area 306. For example, panels 324-1 and 324-2 depicted in
Panel 324-1 depicted in GUI 300 corresponds to thumbnail image 312-1 and displays text information corresponding to the area of thumbnail image 312-1 emphasized or covered by thumbnail viewing area lens 314. The text information displayed by panel 324-1 may correspond to text extracted from CC information included in the multimedia information, or alternatively may represent a transcript of audio information included in the multimedia information. According to an embodiment of the present invention, the present invention takes advantage of the automatic story segmentation and other features that are often provided in closed-captioned (CC) text from broadcast news. Most news agencies that provide CC text as part of their broadcast use a special syntax in the CC text (e.g., a “>>>” delimiter to indicate changes in story line or subject, a “>>” delimiter to indicate changes in speakers, etc.). Given the presence of this kind of information in the CC text information included in the multimedia information, the present invention incorporates these features in the text displayed in panel 324-1. For example, a “>>>” delimiter may be displayed to indicate changes in story line or subject, a “>>” delimiter may be displayed to indicate changes in speakers, additional spacing may be displayed between text portions related to different story lines to clearly demarcate the different stories, etc. This enhances the readability of the text information displayed in panel 324-1.
Panel 324-2 depicted in GUI 300 corresponds to thumbnail image 312-2 and displays a representation of video information corresponding to the area of thumbnail image 312-2 emphasized or covered by thumbnail viewing area lens 314. Accordingly, panel 324-2 displays a representation of video information included in the multimedia information stored by the multimedia document and occurring between times t1 and t2 associated with thumbnail viewing area lens 314. In the embodiment depicted in
Various different techniques may be used to display video keyframes in panel 324-2. According to an embodiment of the present invention, the time segment between time t1 and time t2 is divided into sub-segments of a predetermined time period. Each sub-segment is characterized by a start time and an end time associated with the sub-segment. According to an embodiment of the present invention, the start time of the first sub-segment corresponds to time t1 while the end time of the last sub-segment corresponds to time t2. Server 104 then extracts, for each sub-segment, a set of one or more video keyframes from the portion of the video information stored by the multimedia document that occurs between the start time and end time associated with the sub-segment. For example, according to an embodiment of the present invention, for each sub-segment, server 104 may extract a video keyframe at 1-second intervals between a start time and an end time associated with the sub-segment.
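The sub-segment division and per-sub-segment sampling described above can be sketched as follows (a minimal illustration in Python; the function name and signature are assumptions, not part of the described embodiment):

```python
def extract_keyframe_times(t1, t2, sub_segment_len, sample_rate=1.0):
    """Divide the time segment [t1, t2] into sub-segments of
    sub_segment_len seconds and return, for each sub-segment, a
    (start, end, sample_times) tuple giving the times at which
    keyframes would be extracted (here at sample_rate intervals,
    e.g., 1 second)."""
    segments = []
    start = t1
    while start < t2:
        end = min(start + sub_segment_len, t2)
        times = []
        t = start
        while t < end:
            times.append(t)
            t += sample_rate
        segments.append((start, end, times))
        start = end
    return segments
```

The first sub-segment starts at t1 and the last ends at t2, matching the boundary conditions stated above; the actual decoding of frames at those times is left to the video pipeline.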
For each sub-segment, server 104 then selects one or more keyframes from the set of extracted video keyframes for the sub-segment to be displayed in panel 324-2. The number of keyframes selected to be displayed in panel 324-2 for each sub-segment is user-configurable. Various different techniques may be used for selecting the video keyframes to be displayed from the extracted set of video keyframes for each time sub-segment. For example, if the set of video keyframes extracted for a sub-segment comprises 24 keyframes and if six video keyframes are to be displayed for each sub-segment (as shown in
In another embodiment, the video keyframes to be displayed for a sub-segment may be selected based upon the sequential positions of the keyframes in the set of keyframes extracted for the sub-segment. For example, if the set of video keyframes extracted for a sub-segment comprises 24 keyframes and if six video keyframes are to be displayed for each sub-segment, then the 1st, 5th, 9th, 13th, 17th, and 21st keyframe may be selected. In this embodiment, a fixed number of keyframes are skipped.
In yet another embodiment, the video keyframes to be displayed for a sub-segment may be selected based upon time values associated with the keyframes in the set of keyframes extracted for the sub-segment. For example, if the set of video keyframes extracted for a sub-segment comprises 24 keyframes extracted at a sampling rate of 1 second and if six video keyframes are to be displayed for each sub-segment, then the first frame may be selected and subsequently a keyframe occurring 4 seconds after the previously selected keyframe may be selected.
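The position-based selection embodiment can be sketched as follows (illustrative Python; the function name is an assumption). With 24 extracted keyframes and six to display, the fixed skip yields the 1st, 5th, 9th, 13th, 17th, and 21st keyframes, as in the example above:

```python
def select_by_position(keyframes, num_to_display):
    """Pick num_to_display keyframes at evenly spaced sequential
    positions in the extracted set (a fixed number of keyframes is
    skipped between selections)."""
    step = len(keyframes) // num_to_display
    return [keyframes[i * step] for i in range(num_to_display)]
```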
In an alternative embodiment of the present invention, server 104 may select keyframes from the set of keyframes based upon differences in the contents of the keyframes. For each sub-segment, server 104 may use special image processing techniques to determine differences in the contents of the keyframes extracted for the sub-segment. If six video keyframes are to be displayed for each sub-segment, server 104 may then select six keyframes from the set of extracted keyframes based upon the results of the image processing techniques. For example, the six most dissimilar keyframes may be selected for display in panel 324-2. It should be apparent that various other techniques known to those skilled in the art may also be used to perform the selection of video keyframes.
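One way the content-difference embodiment could operate is a greedy most-dissimilar selection, sketched below (an illustrative Python assumption; the text does not prescribe a particular image-difference measure, so the `distance` argument stands in for whatever image processing technique is used):

```python
def select_most_dissimilar(keyframes, num_to_display, distance):
    """Greedy selection sketch: start with the first keyframe, then
    repeatedly add the keyframe whose minimum distance to those
    already selected is largest. `distance` is any pairwise
    image-difference measure produced by image processing."""
    selected = [keyframes[0]]
    remaining = list(keyframes[1:])
    while len(selected) < num_to_display and remaining:
        best = max(remaining,
                   key=lambda k: min(distance(k, s) for s in selected))
        remaining.remove(best)
        selected.append(best)
    return selected
```

With six keyframes to display, this would tend to pick the six most mutually dissimilar keyframes; other selection criteria known to those skilled in the art could equally be plugged in.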
The selected keyframes are then displayed in panel 324-2. Various different formats may be used to display the selected keyframes in panel 324-2. For example, as shown in
In an alternative embodiment of the present invention, the entire multimedia document is divided into sub-segments of a pre-determined time period. Each sub-segment is characterized by a start time and an end time associated with the sub-segment. According to an embodiment of the present invention, the start time of the first sub-segment corresponds to the start time of the multimedia document while the end time of the last sub-segment corresponds to the end time of the multimedia document. As described above, server 104 then extracts a set of one or more video keyframes from the video information stored by the multimedia document for each sub-segment based upon the start time and end time associated with the sub-segment. Server 104 then selects one or more keyframes for display for each sub-segment. Based upon the position of thumbnail viewing area lens 314, keyframes that have been selected for display and that occur between t1 and t2 associated with thumbnail viewing area lens 314 are then displayed in panel 324-2.
It should be apparent that various other techniques may also be used for displaying video information in panel 324-2 in alternative embodiments of the present invention. According to an embodiment of the present invention, the user may configure the technique to be used for displaying video information in third viewing area 306.
In GUI 300 depicted in
It should be apparent that, in alternative embodiments of the present invention, the number of panels displayed in third viewing area 306 may be more or less than the number of thumbnail images displayed in second viewing area 304. According to an embodiment of the present invention, the number of panels displayed in third viewing area 306 is user-configurable.
According to the teachings of the present invention, a viewing lens or window 322 (hereinafter referred to as “panel viewing area lens 322”) is displayed covering or emphasizing a portion of third viewing area 306. According to the teachings of the present invention, multimedia information corresponding to the area of third viewing area 306 emphasized by panel viewing area lens 322 is displayed in fourth viewing area 308. A user may change the position of panel viewing area lens 322 by sliding or moving lens 322 along third viewing area 306. In response to a change in the position of panel viewing area lens 322 from a first location in third viewing area 306 to a second location, the multimedia information displayed in fourth viewing area 308 is automatically updated such that the multimedia information displayed in fourth viewing area 308 continues to correspond to the area of third viewing area 306 emphasized by panel viewing area lens 322. Accordingly, a user may use panel viewing area lens 322 to change the multimedia information displayed in fourth viewing area 308.
As described above, a change in the location of panel viewing area lens 322 also causes a change in the location of sub-lens 316 such that the area of second viewing area 304 emphasized by sub-lens 316 continues to correspond to the area of third viewing area 306 emphasized by panel viewing area lens 322. Likewise, as described above, a change in the location of sub-lens 316 also causes a change in the location of panel viewing area lens 322 over third viewing area 306 such that the area of third viewing area 306 emphasized by panel viewing area lens 322 continues to correspond to the changed location of sub-lens 316.
According to an embodiment of the present invention, a particular line of text (or one or more words from the last line of text) emphasized by panel viewing area lens 322 may be displayed on a section of lens 322. For example, as depicted in
According to an embodiment of the present invention, special features may be attached to panel viewing area lens 322 to facilitate browsing and navigation of the multimedia document. As shown in
According to an embodiment of the present invention, window 336 has transparent borders so that portions of the underlying third viewing area 306 (e.g., the keyframes displayed in panel 324-2) can be seen. This helps to maintain the user's location focus while viewing third viewing area 306. The user may use play/pause button 332 to start and stop the video displayed in window 336. The user may change the location of panel viewing area lens 322 while the video is being played back in window 336. A change in the location of panel viewing area lens 322 causes the video played back in window 336 to change corresponding to the new location of panel viewing area lens 322. The video played back in window 336 corresponds to the new time values t3 and t4 associated with panel viewing area lens 322.
As described above, multimedia information corresponding to the section of third viewing area 306 covered by panel viewing area lens 322 (i.e., multimedia information occurring in the time segment between t3 and t4) is displayed in fourth viewing area 308. As depicted in
For example, as depicted in
In alternative embodiments of the present invention, instead of playing back video information, a video keyframe from the video keyframes emphasized by panel viewing area lens 322 in panel 324-2 is displayed in sub viewing area 340-1. According to an embodiment of the present invention, the keyframe displayed in area 340-1 represents a keyframe that is most representative of the keyframes emphasized by panel viewing area lens 322.
According to an embodiment of the present invention, text information (e.g., CC text, transcript of audio information, etc.) emphasized by panel viewing area lens 322 in third viewing area 306 is displayed in sub viewing area 340-2. According to an embodiment of the present invention, sub viewing area 340-2 displays text information that is displayed in panel 324-1 and emphasized by panel viewing area lens 322. As described below, various types of information may be displayed in sub viewing area 340-3.
Additional information related to the multimedia information stored by the multimedia document may be displayed in fifth viewing area 310 of GUI 300. For example, as depicted in
According to an embodiment of the present invention, GUI 300 provides features that enable a user to search for one or more words that occur in the text information (e.g., CC text, transcript of audio information) extracted from the multimedia information. For example, a user can enter one or more query words in input field 354 and upon selecting “Find” button 356, server 104 analyzes the text information extracted from the multimedia information stored by the multimedia document to identify all occurrences of the one or more query words entered in field 354. The occurrences of the one or more words in the multimedia document are then highlighted when displayed in second viewing area 304, third viewing area 306, and fourth viewing area 308. For example, according to an embodiment of the present invention, all occurrences of the query words are highlighted in thumbnail image 312-1, in panel 324-1, and in sub viewing area 340-2. In alternative embodiments of the present invention, occurrences of the one or more query words may also be highlighted in the other thumbnail images displayed in second viewing area 304, panels displayed in third viewing area 306, and sub viewing areas displayed in fourth viewing area 308.
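The query-word search described above can be sketched as follows (a minimal Python illustration; the function name and the word-normalization details are assumptions). Server 104 would scan the extracted text information for every occurrence of the query words so that those occurrences can be highlighted in the various viewing areas:

```python
def find_occurrences(lines, query_words):
    """Return (line_index, word_index) pairs for every occurrence of
    a query word in the extracted text information. `lines` is a
    list of lines, each a list of words; matching is
    case-insensitive and ignores trailing punctuation."""
    queries = {w.lower() for w in query_words}
    hits = []
    for li, line in enumerate(lines):
        for wi, word in enumerate(line):
            if word.lower().strip('.,;:!?"') in queries:
                hits.append((li, wi))
    return hits
```

The returned positions would then drive highlighting in thumbnail image 312-1, panel 324-1, and sub viewing area 340-2.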
The user may also specify one or more words to be highlighted in the multimedia information displayed in GUI 300. For example, a user may select one or more words to be highlighted from area 352. All occurrences of the keywords selected by the user in area 352 are then highlighted in second viewing area 304, third viewing area 306, and fourth viewing area 308. For example, as depicted in
According to an embodiment of the present invention, lines of text 360 that comprise the user-selected word(s) (or query words entered in field 354) are displayed in sub viewing area 340-3 of fourth viewing area 308. For each line of text, the time 362 when the line occurs (or the timestamp associated with the line of text) in the multimedia document is also displayed. The timestamp associated with the line of text generally corresponds to the timestamp associated with the first word in the line.
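The display of matching lines with their timestamps can be sketched as follows (illustrative Python; the function name and data layout are assumptions). Each matching line contributes its start timestamp together with a snippet of words surrounding the matched word:

```python
def context_snippets(lines, keyword, n_context):
    """`lines` is a list of (start_time, list_of_words) pairs, the
    start time being the timestamp of the first word in the line.
    For each occurrence of `keyword`, return (start_time, snippet)
    where the snippet holds n_context words on each side of the
    matched word."""
    out = []
    kw = keyword.lower()
    for start_time, words in lines:
        for i, w in enumerate(words):
            if w.lower() == kw:
                lo = max(0, i - n_context)
                snippet = words[lo:i + n_context + 1]
                out.append((start_time, " ".join(snippet)))
    return out
```

The n_context parameter corresponds to the user-configurable number of surrounding words described below.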
For each line of text, one or more words surrounding the selected or query word(s) are displayed. According to an embodiment of the present invention, the number of words surrounding a selected word that are displayed in area 340-3 is user-configurable. For example, in GUI 300 depicted in
Further, GUI 300 depicted in
As shown in
According to an embodiment of the present invention, multimedia information displayed in GUI 300 that is relevant to user-specified topics of interest is highlighted or annotated. The annotations provide visual indications of information that is relevant to or of interest to the user. GUI 300 thus provides a convenient tool that allows a user to readily locate portions of the multimedia document that are relevant to the user.
According to an embodiment of the present invention, information specifying topics that are of interest or are relevant to the user may be stored in a user profile. One or more words or phrases may be associated with each topic of interest. Presence of the one or more words and phrases associated with a particular user-specified topic of interest indicates presence of information related to the particular topic. For example, a user may specify two topics of interest—“George W. Bush” and “Energy Crisis”. Words or phrases associated with the topic “George W. Bush” may include “President Bush,” “the President,” “Mr. Bush,” and other like words and phrases. Words or phrases associated with the topic “Energy Crisis” may include “industrial pollution,” “natural pollution,” “clean up the sources,” “amount of pollution,” “air pollution,” “electricity,” “power-generating plant,” or the like. Probability values may be associated with each of the words or phrases indicating the likelihood of the topic of interest given the presence of the word or phrase. Various tools may be provided to allow the user to configure topics of interest, to specify keywords and phrases associated with the topics, and to specify probability values associated with the keywords or phrases.
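One plausible shape for such a user profile is sketched below (a hypothetical Python structure; the field names, colors, and probability values are illustrative assumptions, not prescribed by the described embodiment). Each topic carries its associated keywords and phrases together with their probability values:

```python
# Hypothetical user-profile structure: each topic of interest maps to
# a display color and a dict of keyword/phrase -> probability value
# (likelihood of the topic given presence of the word or phrase).
user_profile = {
    "George W. Bush": {
        "color": "blue",
        "keywords": {"President Bush": 0.9, "the President": 0.6,
                     "Mr. Bush": 0.8},
    },
    "Energy Crisis": {
        "color": "green",
        "keywords": {"air pollution": 0.7, "electricity": 0.5,
                     "power-generating plant": 0.8},
    },
}

def keywords_for(profile, topic):
    """Return the set of keywords/phrases associated with a topic."""
    return set(profile[topic]["keywords"])
```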
It should be apparent that various other techniques known to those skilled in the art may also be used to model topics of interest to the user. These techniques may include the use of Bayesian networks, relevance graphs, or the like. Techniques for determining sections relevant to user-specified topics, techniques for defining topics of interest, techniques for associating keywords and/or key phrases and probability values are described in U.S. application Ser. No. 08/995,616, filed Dec. 22, 1997, the entire contents of which are herein incorporated by reference for all purposes.
According to an embodiment of the present invention, in order to identify locations in the multimedia document related to user-specified topics of interest, server 104 searches the multimedia document to identify locations within the multimedia document of words or phrases associated with the topics of interest. As described above, the presence of words and phrases associated with a particular user-specified topic of interest in the multimedia document indicates presence of information related to the particular topic that is relevant to the user. The words and phrases that occur in the multimedia document and that are associated with user-specified topics of interest are annotated when displayed by GUI 300.
In the embodiment depicted in
According to an embodiment of the present invention, server 104 searches the text information (either CC text or transcript of audio information) extracted from the multimedia information to locate words or phrases relevant to the user topics. If server 104 finds a word or phrase in the text information that is associated with a topic of interest, the word or phrase is annotated when displayed in GUI 800. As described above, several different techniques may be used to annotate the word or phrase. For example, the word or phrase may be highlighted, bolded, underlined, demarcated using sidebars or balloons, displayed in a different font, etc.
Keyframes (representing video information of the multimedia document) that are displayed by the GUI and that are related to user specified topics of interest may also be highlighted. According to an embodiment of the present invention, server system 104 may use OCR techniques to extract text from the keyframes extracted from the video information included in the multimedia information. The text output of the OCR techniques may then be compared with words or phrases associated with one or more user-specified topics of interest. If there is a match, the keyframe containing the matched word or phrase (i.e., the keyframe from which the matching word or phrase was extracted by OCR techniques) may be annotated when the keyframe is displayed in GUI 800 either in second viewing area 304, third viewing area 306, or fourth viewing area 308 of GUI 800. Several different techniques may be used to annotate the keyframe. For example, a special box may be drawn around a keyframe that is relevant to a particular topic of interest. The color of the box may correspond to the color associated with the particular topic of interest. The matching text in the keyframe may also be highlighted or underlined or displayed in reverse video. As described above, the annotated keyframes displayed in second viewing area 304 (e.g., the keyframes displayed in thumbnail image 312-2 in
According to an embodiment of the present invention, as shown in
According to an embodiment of the present invention, the relevancy score for a particular topic may be calculated based upon the frequency of occurrences of the words and phrases associated with the particular topic in the multimedia information. Probability values associated with the words or phrases associated with the particular topic may also be used to calculate the relevancy score for the particular topic. Various techniques known to those skilled in the art may be used to determine relevancy scores for user-specified topics of interest based upon the frequency of occurrences of words and phrases associated with a topic in the multimedia information and the probability values associated with the words or phrases. Various other techniques known to those skilled in the art may also be used to calculate the degree of relevancy of the multimedia document to the topics of interest.
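One plausible scoring scheme combining occurrence frequency and probability values is sketched below (an assumption for illustration in Python, not the prescribed formula of the described embodiment):

```python
def relevancy_score(text, topic_keywords):
    """Sketch of a relevancy score: sum, over the topic's keywords,
    the number of occurrences of the keyword in the extracted text
    weighted by the keyword's probability value.

    topic_keywords -- dict of keyword/phrase -> probability value
    """
    lower = text.lower()
    return sum(p * lower.count(kw.lower())
               for kw, p in topic_keywords.items())
```

The resulting per-topic scores could then drive a relevance indicator such as the one described below.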
As previously stated, a relevance indicator is used to display the degree of relevancy or relevancy score to the user. Based upon information displayed by the relevance indicator, a user can easily determine relevance of multimedia information stored by a multimedia document to topics that may be specified by the user.
A user may specify a topic of interest in field 902. A label identifying the topic of interest can be specified in field 910. The label specified in field 910 is displayed in the GUI generated according to the teachings of the present invention to identify the topic of interest. A list of keywords and/or phrases associated with the topic specified in field 902 is displayed in area 908. A user may add new keywords to the list, modify one or more keywords in the list, or remove one or more keywords from the list of keywords associated with the topic of interest. The user may specify new keywords or phrases to be associated with the topic of interest in field 904. Selection of “Add” button 906 adds the keywords or phrases specified in field 904 to the list of keywords previously associated with a topic. The user may specify a color to be used for annotating information relevant to the topic of interest by selecting the color in area 912. For example, in the embodiment depicted in
According to the teachings of the present invention, various different types of information included in multimedia information may be displayed by the GUI generated by server 104.
The multimedia information stored by the meeting recording may comprise video information, audio information and possibly CC text information, and slides information. The slides information may comprise information related to slides (e.g., PowerPoint presentation slides) presented during the meeting. For example, slides information may comprise images of slides presented at the meeting. As shown in
Third viewing area 306 comprises three panels 324-1, 324-2, and 324-3. Panel 324-1 displays text information corresponding to the section of thumbnail image 312-1 emphasized or covered by thumbnail viewing area lens 314. Panel 324-2 displays video keyframes corresponding to the section of thumbnail image 312-2 emphasized or covered by thumbnail viewing area lens 314. Panel 324-3 displays one or more slides corresponding to the section of thumbnail image 312-3 emphasized or covered by thumbnail viewing area lens 314. The panels are temporally aligned with one another.
Fourth viewing area 308 comprises three sub-viewing areas 340-1, 340-2, and 340-3. Sub viewing area 340-1 displays video information corresponding to the section of panel 324-2 covered by panel viewing area lens 322. As described above, sub-viewing area 340-1 may display a keyframe corresponding to the emphasized portion of panel 324-2. Alternatively, video based upon the position of panel viewing area lens 322 may be played back in area 340-1. According to an embodiment of the present invention, time t3 associated with lens 322 is used as the start time for playing the video in area 340-1 of fourth viewing area 308. A panoramic shot 1002 of the meeting room (which may be recorded using a 360 degrees camera) is also displayed in area 340-1 of fourth viewing area 308. Text information emphasized by panel viewing area lens 322 in panel 324-1 is displayed in area 340-2 of fourth viewing area 308. One or more slides emphasized by panel viewing area lens 322 in panel 324-3 are displayed in area 340-3 of fourth viewing area 308. According to an embodiment of the present invention, the user may also select a particular slide from panel 324-3 by clicking on the slide. The selected slide is then displayed in area 340-3 of fourth viewing area 308.
According to an embodiment of the present invention, the user can specify the types of information included in the multimedia document that are to be displayed in the GUI. For example, the user can turn on or off slides related information (i.e., information displayed in thumbnail 312-3, panel 324-3, and area 340-3 of fourth viewing area 308) displayed in GUI 1000 by selecting or deselecting “Slides” button 1004. If a user deselects slides information, then thumbnail 312-3 and panel 324-3 are not displayed by GUI 1000. Thumbnail 312-3 and panel 324-3 are displayed by GUI 1000 if the user selects button 1004. Button 1004 thus acts as a switch for displaying or not displaying slides information. In a similar manner, the user can also control other types of information displayed by a GUI generated according to the teachings of the present invention. For example, features may be provided for turning on or off video information, text information, and other types of information that may be displayed by GUI 1000.
The multimedia document whose contents are displayed in GUI 1100 comprises video information, audio information or CC text information, slides information, and whiteboard information. The whiteboard information may comprise images of text and drawings drawn on a whiteboard. As shown in
Third viewing area 306 comprises four panels 324-1, 324-2, 324-3, and 324-4. Panel 324-1 displays text information corresponding to the section of thumbnail image 312-1 emphasized or covered by thumbnail viewing area lens 314. Panel 324-2 displays video keyframes corresponding to the section of thumbnail image 312-2 emphasized or covered by thumbnail viewing area lens 314. Panel 324-3 displays one or more slides corresponding to the section of thumbnail image 312-3 emphasized or covered by thumbnail viewing area lens 314. Panel 324-4 displays one or more whiteboard images corresponding to the section of thumbnail image 312-4 emphasized or covered by thumbnail viewing area lens 314. The panels are temporally aligned with one another.
Fourth viewing area 308 comprises three sub-viewing areas 340-1, 340-2, and 340-3. Area 340-1 displays video information corresponding to the section of panel 324-2 covered by panel viewing area lens 322. As described above, sub-viewing area 340-1 may display a keyframe or play back video corresponding to the emphasized portion of panel 324-2. According to an embodiment of the present invention, time t3 (as described above) associated with lens 322 is used as the start time for playing the video in area 340-1 of fourth viewing area 308. A panoramic shot 1102 of the location where the multimedia document was recorded (which may be recorded using a 360 degrees camera) is also displayed in area 340-1 of fourth viewing area 308. Text information emphasized by panel viewing area lens 322 in panel 324-1 is displayed in area 340-2 of fourth viewing area 308. Slides emphasized by panel viewing area lens 322 in panel 324-3 or whiteboard images emphasized by panel viewing area lens 322 in panel 324-4 may be displayed in area 340-3 of fourth viewing area 308. In the embodiment depicted in
As described above, according to an embodiment of the present invention, the user can specify the types of information from the multimedia document that are to be displayed in the GUI. For example, the user can turn on or off a particular type of information displayed by the GUI. “WB” button 1104 allows the user to turn on or off whiteboard related information (i.e., information displayed in thumbnail image 312-4, panel 324-4, and area 340-3 of fourth viewing area 308) displayed in GUI 1000.
As depicted in
As depicted in
Video keyframes are then extracted from the video information stored by the multimedia document for each group of lines based upon the timestamps associated with lines in the group. According to an embodiment of the present invention, server 104 determines a start time and an end time associated with each group of lines. A start time for a group corresponds to the time associated with the first (or earliest) line in the group while an end time for a group corresponds to the time associated with the last (or latest) line in the group. In order to determine keyframes to be displayed in panel 324-2 corresponding to a particular group of text lines, server 104 extracts a set of one or more video keyframes from the portion of the video information occurring between the start and end time associated with the particular group. One or more keyframes are then selected from the extracted set of video keyframes to be displayed in panel 324-2 for the particular group. The one or more selected keyframes are then displayed in panel 324-2 proximal to the group of lines displayed in panel 324-1 for which the keyframes have been extracted.
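The grouping step can be sketched as follows (illustrative Python; the function name and tuple layout are assumptions). Each group's start and end times delimit the window of video information from which that group's keyframes are extracted:

```python
def group_lines(lines, group_size):
    """`lines` is a list of (start_time, end_time, text) tuples in
    order of occurrence. Group consecutive lines and return, per
    group, (group_start, group_end, lines_in_group), where the group
    start is the start time of its earliest line and the group end
    is the end time of its latest line."""
    groups = []
    for i in range(0, len(lines), group_size):
        chunk = lines[i:i + group_size]
        groups.append((chunk[0][0], chunk[-1][1], chunk))
    return groups
```

As noted below, the group size (number of text lines per group) is user-configurable.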
For example, in
The number of text lines to be included in a group is user configurable. Likewise, the number of video keyframes to be extracted for a particular group of lines is also user configurable. Further, the video keyframes to be displayed in panel 324-2 for each group of lines can also be configured by the user of the present invention.
The manner in which the extracted keyframes are displayed in panel 324-2 is also user configurable. Different techniques may be used to show the relationships between a particular group of lines and video keyframes displayed for the particular group of lines. For example, according to an embodiment of the present invention, a particular group of lines displayed in panel 324-1 and the corresponding video keyframes displayed in panel 324-2 may be color-coded or displayed using the same color to show the relationship. Various other techniques known to those skilled in the art may also be used to show the relationships.
GUI Generation Technique According to an Embodiment of the Present Invention
The following section describes techniques for generating a GUI (e.g., GUI 300 depicted in
As depicted in
Server 104 then extracts text information from the multimedia information accessed in step 1402 (step 1404). If the multimedia information accessed in step 1402 comprises CC text information, then the text information corresponds to CC text information that is extracted from the multimedia information. If the multimedia information accessed in step 1402 does not comprise CC text information, then in step 1404, the audio information included in the multimedia information accessed in step 1402 is transcribed to generate a text transcript for the audio information. The text transcript represents the text information extracted in step 1404.
The text information determined in step 1404 comprises a collection of lines with each line comprising one or more words. Each word has a timestamp associated with it indicating the time of occurrence of the word in the multimedia information. The timestamp information for each word is included in the CC text information. Alternatively, if the text represents a transcription of audio information, the timestamp information for each word may be determined during the audio transcription process.
As part of step 1404, each line is assigned a start time and an end time based upon words that are included in the line. The start time for a line corresponds to the timestamp associated with the first word occurring in the line, and the end time for a line corresponds to the timestamp associated with the last word occurring in the line.
The text information determined in step 1404, including the timing information, is then stored in a memory location accessible to server 104 (step 1406). In one embodiment, a data structure (or memory structure) comprising a linked list of line objects is used to store the text information. Each line object comprises a linked list of words contained in the line. Timestamp information associated with the words and the lines is also stored in the data structure. The information stored in the data structure is then used to generate GUI 300.
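The line and word bookkeeping described above can be sketched in Python (a minimal illustration; the class names `Word` and `Line` and their fields are assumptions, and a simple list stands in for the linked lists described in this embodiment):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Word:
    text: str
    timestamp: float  # time of occurrence (in seconds) in the multimedia information

@dataclass
class Line:
    # a plain list stands in for the linked list of words contained in the line
    words: List[Word] = field(default_factory=list)

    @property
    def start_time(self) -> float:
        # start time = timestamp of the first word occurring in the line
        return self.words[0].timestamp

    @property
    def end_time(self) -> float:
        # end time = timestamp of the last word occurring in the line
        return self.words[-1].timestamp

line = Line([Word("hello", 12.0), Word("world", 12.4)])
```

The GUI-generation steps that follow read line and word timestamps from a structure of this general shape.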
Server 104 then determines a length or height (in pixels) of a panel (hereinafter referred to as "the text canvas") for drawing the text information (step 1408). In order to determine the length of the text canvas, the duration ("duration") of the multimedia information (or the duration of the multimedia document storing the multimedia information) in seconds is determined. A vertical pixels-per-second of time ("pps") value is also defined. The "pps" value determines the distance between lines of text drawn in the text canvas. The value of pps thus depends on how close the user wants the lines of text to be to each other when displayed and upon the size of the font used to display the text. According to an embodiment of the present invention, a 5 pps value is specified with a 6 point font. The overall height (in pixels) of the text canvas ("textCanvasHeight") is determined as follows:
textCanvasHeight=duration*pps
For example, if the duration of the multimedia information is 1 hour (i.e., 3600 seconds) and a pps value of 5 is used, the height of the text canvas (textCanvasHeight) is 18000 pixels (3600*5).
Multipliers are then calculated for converting pixel locations in the text canvas to seconds and for converting seconds to pixel locations in the text canvas (step 1410). A multiplier "pix_m" is calculated for converting a given time value (in seconds) to a particular vertical pixel location in the text canvas. The pix_m multiplier can be used to determine a pixel location in the text canvas corresponding to a particular time value. The value of pix_m is determined as follows:
pix_m=textCanvasHeight/duration
For example, if duration=3600 seconds and textCanvasHeight=18000 pixels, then pix_m=18000/3600=5.
A multiplier “sec_m” is calculated for converting a particular pixel location in the text canvas to a corresponding time value. The sec_m multiplier can be used to determine a time value for a particular pixel location in the text canvas. The value of sec_m is determined as follows:
sec_m=duration/textCanvasHeight
For example, if duration=3600 seconds and textCanvasHeight=18000 pixels, then sec_m=3600/18000=0.2.
The multipliers calculated in step 1410 may then be used to convert pixels to seconds and seconds to pixels. For example, the pixel location in the text canvas of an event occurring at time t=1256 seconds in the multimedia information is: 1256*pix_m=1256*5=6280 pixels from the top of the text canvas. The number of seconds corresponding to a pixel location p=231 in the text canvas is: 231*sec_m=231*0.2=46.2 seconds.
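The height and multiplier computations of steps 1408 and 1410 can be illustrated with a short sketch (Python is used purely for illustration; the function and variable names are hypothetical):

```python
def canvas_geometry(duration, pps):
    """Text-canvas height and the second/pixel conversion multipliers."""
    text_canvas_height = duration * pps        # textCanvasHeight = duration * pps
    pix_m = text_canvas_height / duration      # seconds -> canvas pixels
    sec_m = duration / text_canvas_height      # canvas pixels -> seconds
    return text_canvas_height, pix_m, sec_m

# 1 hour of multimedia information drawn at 5 pixels per second
height, pix_m, sec_m = canvas_geometry(3600, 5)
```

For the 1-hour example above this yields a canvas height of 18000 pixels, pix_m=5, and sec_m=0.2, so an event at t=1256 seconds maps to pixel 6280 from the top of the text canvas.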
Based upon the height of the text canvas determined in step 1408 and the multipliers generated in step 1410, positional coordinates (horizontal (X) and vertical (Y) coordinates) are then calculated for words in the text information extracted in step 1404 (step 1412). As previously stated, information related to words and lines and their associated timestamps may be stored in a data structure accessible to server 104. The positional coordinate values calculated for each word might also be stored in the data structure.
The Y (or vertical) coordinate (Wy) for a word is calculated by multiplying the timestamp (Wt) (in seconds) associated with the word by multiplier pix_m determined in step 1410. Accordingly:
Wy(in pixels)=Wt*pix_m
For example, if a particular word has Wt=539 seconds (i.e., the word occurs 539 seconds into the multimedia information), then Wy=539*5=2695 vertical pixels from the top of the text canvas.
The X (or horizontal) coordinate (Wx) for a word is calculated based upon the word's location in the line and the width of the previous words in the line. For example, if a particular line (L) has four words, i.e., L: W1 W2 W3 W4, then
Wx of W1=0
Wx of W2=(Wx of W1)+(Width of W1)+(Spacing between words)
Wx of W3=(Wx of W2)+(Width of W2)+(Spacing between words)
Wx of W4=(Wx of W3)+(Width of W3)+(Spacing between words)
The words in the text information are then drawn on the text canvas in a location determined by the X and Y coordinates calculated for the words in step 1412 (step 1414).
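The coordinate calculation of step 1412 can be sketched as follows, assuming each word's rendered pixel width is known and assuming a hypothetical inter-word spacing of 4 pixels (neither value is prescribed by this embodiment):

```python
def word_positions(words, widths, pix_m, spacing=4):
    """words: list of (text, timestamp-in-seconds) for one line; widths: rendered
    pixel width of each word. Returns (text, Wx, Wy) for each word."""
    positions, x = [], 0
    for (text, wt), width in zip(words, widths):
        wy = wt * pix_m               # Wy = Wt * pix_m
        positions.append((text, x, wy))
        x += width + spacing          # Wx of next word = Wx + width + spacing
    return positions

# line L: W1 W2 W3 W4, all timestamped 539 seconds, with pix_m = 5
pos = word_positions([("W1", 539), ("W2", 539), ("W3", 539), ("W4", 539)],
                     [30, 42, 25, 38], 5)
```

Each word in the line shares the same vertical coordinate (2695 pixels for the 539-second example) while the horizontal coordinates accumulate the preceding widths and spacing.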
Server 104 then determines a height of thumbnail 312-1 that displays text information in second viewing area 304 of GUI 300 (step 1416). The height of thumbnail 312-1 ("ThumbnailHeight") depends on the height of the GUI window used to display the multimedia information and the height of second viewing area 304 within the GUI window. The value of ThumbnailHeight is set such that thumbnail 312-1 fits within second viewing area 304 of the GUI.
Thumbnail 312-1 is then generated by scaling the text canvas such that the height of thumbnail 312-1 is equal to ThumbnailHeight and the thumbnail fits entirely within the size constraints of second viewing area 304 (step 1418). Thumbnail 312-1, which represents a scaled version of the text canvas, is then displayed in second viewing area 304 of GUI 300 (step 1420).
Multipliers are then calculated for converting pixel locations in thumbnail 312-1 to seconds and for converting seconds to pixel locations in thumbnail 312-1 (step 1422). A multiplier “tpix_m” is calculated for converting a given time value (in seconds) to a particular pixel location in thumbnail 312-1. Multiplier tpix_m can be used to determine a pixel location in the thumbnail corresponding to a particular time value. The value of tpix_m is determined as follows:
tpix_m=ThumbnailHeight/duration
For example, if duration=3600 seconds and ThumbnailHeight=900, then tpix_m=900/3600=0.25.
A multiplier “tsec_m” is calculated for converting a particular pixel location in thumbnail 312-1 to a corresponding time value. Multiplier tsec_m can be used to determine a time value for a particular pixel location in thumbnail 312-1. The value of tsec_m is determined as follows:
tsec_m=duration/ThumbnailHeight
For example, if duration=3600 seconds and ThumbnailHeight=900, then tsec_m=3600/900=4.
Multipliers tpix_m and tsec_m may then be used to convert pixels to seconds and seconds to pixels in thumbnail 312-1. For example, the pixel location in thumbnail 312-1 of a word occurring at time t=1256 seconds in the multimedia information is: 1256*tpix_m=1256*0.25=314 pixels from the top of thumbnail 312-1. The number of seconds represented by a pixel location p=231 in thumbnail 312-1 is: 231*tsec_m=231*4=924 seconds.
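The thumbnail multipliers of step 1422 follow the same pattern as the canvas multipliers; a brief sketch (illustrative names only):

```python
def thumbnail_multipliers(duration, thumbnail_height):
    tpix_m = thumbnail_height / duration   # seconds -> thumbnail pixels
    tsec_m = duration / thumbnail_height   # thumbnail pixels -> seconds
    return tpix_m, tsec_m

# 1 hour of multimedia information, thumbnail 900 pixels tall
tpix_m, tsec_m = thumbnail_multipliers(3600, 900)
```

With these values a word occurring at t=1256 seconds maps to pixel 314 of the thumbnail, and thumbnail pixel 231 maps back to 924 seconds, matching the examples above.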
For purposes of simplicity, it is assumed that thumbnail 312-1 displaying text information has already been displayed according to the flowchart depicted in
The video keyframes extracted in step 1502 and their associated timestamp information are stored in a data structure (or memory structure) accessible to server 104 (step 1504). The information stored in the data structure is then used for generating thumbnail 312-2.
The video keyframes extracted in step 1502 are then divided into groups (step 1506). A user-configurable time period ("groupTime") is used to divide the keyframes into groups. According to an embodiment of the present invention, groupTime is set to 8 seconds. In this embodiment, each group comprises video keyframes extracted within an 8 second time period window. For example, if the duration of the multimedia information is 1 hour (3600 seconds) and 3600 video keyframes are extracted from the video information using a sampling rate of 1 frame per second, then if groupTime is set to 8 seconds, the 3600 keyframes will be divided into 450 groups, with each group comprising 8 video keyframes.
A start and an end time are calculated for each group of frames (step 1508). For a particular group of frames, the start time for the particular group is the timestamp associated with the first (i.e., the keyframe in the group with the earliest timestamp) video keyframe in the group, and the end time for the particular group is the timestamp associated with the last (i.e., the keyframe in the group with the latest timestamp) video keyframe in the group.
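The grouping of steps 1506 and 1508 can be sketched as follows (a minimal illustration; the bucketing by `groupTime` windows and the dictionary-based grouping are assumptions about one possible implementation):

```python
def group_keyframes(keyframes, group_time=8.0):
    """keyframes: list of (timestamp, frame) sorted by timestamp. Returns, per
    group, the group's frames plus its start and end times."""
    buckets = {}
    for ts, frame in keyframes:
        # place each keyframe into its groupTime-wide window
        buckets.setdefault(int(ts // group_time), []).append((ts, frame))
    return [{"frames": g,
             "start": g[0][0],    # timestamp of the earliest keyframe in the group
             "end": g[-1][0]}     # timestamp of the latest keyframe in the group
            for _, g in sorted(buckets.items())]

# 3600 keyframes sampled at 1 frame per second over 1 hour
groups = group_keyframes([(float(t), None) for t in range(3600)])
```

For the 1-hour, 1-frame-per-second example this produces 450 groups of 8 keyframes each, as described above.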
For each group of keyframes, server 104 determines a segment of pixels on a keyframe canvas for drawing one or more keyframes from the group of keyframes (step 1510). Similar to the text canvas, the keyframe canvas is a panel on which keyframes extracted from the video information are drawn. The height of the keyframe canvas (“keyframeCanvasHeight”) is the same as the height of the text canvas (“textCanvasHeight”) described above (i.e., keyframeCanvasHeight==textCanvasHeight). As a result, multipliers pix_m and sec_m (described above) may be used to convert a time value to a pixel location in the keyframe canvas and to convert a particular pixel location in the keyframe canvas to a time value.
The segment of pixels on the keyframe canvas for drawing keyframes from a particular group is calculated based upon the start time and end time associated with the particular group. The starting vertical (Y) pixel coordinate (“segmentStart”) and the end vertical (Y) coordinate (“segmentEnd”) of the segment of pixels in the keyframe canvas for a particular group of keyframes is calculated as follows:
segmentStart=(Start time of group)*pix_m
segmentEnd=(End time of group)*pix_m
Accordingly, the height of each segment ("segmentHeight") in pixels of the keyframe canvas is:
segmentHeight=segmentEnd−segmentStart
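The segment calculation of step 1510 reduces to a pair of multiplications; a brief sketch (function name is illustrative):

```python
def segment_bounds(group_start, group_end, pix_m):
    """Pixel segment on the keyframe canvas for one group of keyframes."""
    segment_start = group_start * pix_m   # segmentStart = start time * pix_m
    segment_end = group_end * pix_m       # segmentEnd = end time * pix_m
    return segment_start, segment_end, segment_end - segment_start

# a group spanning seconds 16..24, drawn at pix_m = 5
bounds = segment_bounds(16, 24, 5)
```

An 8-second group at pix_m=5 thus occupies a 40-pixel segment, which matches the segmentHeight figure used in the embodiment below.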
The number of keyframes from each group of frames to be drawn in each segment of pixels on the keyframe canvas is then determined (step 1512). The number of keyframes to be drawn on the keyframe canvas for a particular group depends on the height of the segment ("segmentHeight") corresponding to the particular group. If the value of segmentHeight is small, only a small number of keyframes may be drawn in the segment such that the drawn keyframes remain comprehensible to the user when displayed in the GUI. The value of segmentHeight depends on the value of pps. If pps is small, then segmentHeight will also be small. Accordingly, a larger value of pps may be selected if more keyframes are to be drawn per segment.
According to an embodiment of the present invention, if the segmentHeight is equal to 40 pixels and each group of keyframes comprises 8 keyframes, then 6 out of the 8 keyframes may be drawn in each segment on the keyframe canvas. The number of keyframes to be drawn in a segment is generally the same for all groups of keyframes, for example, in the embodiment depicted in
After determining the number of keyframes to be drawn in each segment of the keyframe canvas, for each group of keyframes, server 104 identifies one or more keyframes from keyframes in the group of keyframes to be drawn on the keyframe canvas (step 1514). Various different techniques may be used for selecting the video keyframes to be displayed in a segment for a particular group of frames. According to one technique, if each group of video keyframes comprises 8 keyframes and if 6 video keyframes are to be displayed in each segment on the keyframe canvas, then server 104 may select the first two video keyframes, the middle two video keyframes, and the last two video keyframes from each group of video keyframes to be drawn on the keyframe canvas. As described above, various other techniques may also be used to select one or more keyframes to be displayed from the group of keyframes. For example, the keyframes may be selected based upon the sequential positions of the keyframes in the group of keyframes, based upon time values associated with the keyframes, or based upon other criteria.
According to another technique, server 104 may use special image processing techniques to determine similarity or dissimilarity between keyframes in each group of keyframes. If six video keyframes are to be displayed from each group, server 104 may then select six keyframes from each group of keyframes based upon the results of the image processing techniques. According to an embodiment of the present invention, the six most dissimilar keyframes in each group may be selected to be drawn on the keyframe canvas. It should be apparent that various other techniques known to those skilled in the art may also be used to perform the selection of video keyframes.
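One way the positional selection technique described above (first two, middle two, last two) might be coded; the index arithmetic is an assumption, as the embodiment does not prescribe it:

```python
def select_keyframes(group, count=6):
    """Select the first, middle, and last count//3 keyframes of a group
    (e.g., the first two, middle two, and last two of an 8-keyframe group)."""
    n = len(group)
    if count >= n:
        return list(group)
    k = count // 3
    mid_lo = n // 2 - k // 2          # start of the middle run of k keyframes
    return group[:k] + group[mid_lo:mid_lo + k] + group[n - k:]

# an 8-keyframe group labeled 0..7; select 6 keyframes for the segment
chosen = select_keyframes(list(range(8)))
```

A dissimilarity-based selection, as in the image-processing variant, would replace the positional slicing with a ranking of pairwise frame differences.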
Keyframes from the groups of keyframes identified in step 1514 are then drawn on the keyframe canvas in their corresponding segments (step 1516). Various different formats may be used for drawing the selected keyframes in a particular segment. For example, as shown in
Server 104 then determines a height (or length) of thumbnail 312-2 that displays the video keyframes in GUI 300 (step 1518). According to the teachings of the present invention, the height of thumbnail 312-2 is set to be the same as the height of thumbnail 312-1 that displays text information (i.e., the height of thumbnail 312-2 is set to ThumbnailHeight).
Thumbnail 312-2 is then generated by scaling the keyframe canvas such that the height of thumbnail 312-2 is equal to ThumbnailHeight and thumbnail 312-2 fits entirely within the size constraints of second viewing area 304 (step 1520). Thumbnail 312-2, which represents a scaled version of the keyframe canvas, is then displayed in second viewing area 304 of GUI 300 (step 1522). Thumbnail 312-2 is displayed in GUI 300 next to thumbnail image 312-1 and is temporally aligned or synchronized with thumbnail 312-1 (as shown in
Multipliers are calculated for thumbnail 312-2 for converting pixel locations in thumbnail 312-2 to seconds and for converting seconds to pixel locations in thumbnail 312-2 (step 1524). Since thumbnail 312-2 is the same length as thumbnail 312-1 and is aligned with thumbnail 312-1, multipliers “tpix_m” and “tsec_m” calculated for thumbnail 312-1 can also be used for thumbnail 312-2. These multipliers may then be used to convert pixels to seconds and seconds to pixels in thumbnail 312-2.
According to the method displayed in
As depicted in
For each group selected in step 1609, server 104 identifies one or more keyframes from the group to be drawn on the keyframe canvas (step 1610). As described above, various techniques may be used to select keyframes to be drawn on the keyframe canvas.
The keyframe canvas is then divided into a number of equal-sized row portions, where the number of row portions is equal to the number of groups selected in step 1609 (step 1612). According to an embodiment of the present invention, the height of each row portion is approximately equal to the height of the keyframe canvas (“keyframeCanvasHeight”) divided by the number of groups selected in step 1609.
For each group selected in step 1609, a row portion of the keyframe canvas is then identified for drawing one or more video keyframes from the group (step 1614). According to an embodiment of the present invention, row portions are associated with groups in chronological order. For example, the first row is associated with a group with the earliest start time, the second row is associated with a group with the second earliest start time, and so on.
For each group selected in step 1609, one or more keyframes from the group (identified in step 1610) are then drawn on the keyframe canvas in the row portion determined for the group in step 1614 (step 1616). The sizes of the selected keyframes for each group are scaled to fit the row portion of the keyframe canvas. According to an embodiment of the present invention, the height of each row portion is greater than the heights of the selected keyframes, and the height of the selected keyframes is increased to fit the row portion. This increases the size of the selected keyframes and makes them more visible when drawn on the keyframe canvas. In this manner, keyframes from the groups selected in step 1609 are drawn on the keyframe canvas.
The keyframe canvas is then scaled to form thumbnail 312-2 that is displayed in second viewing area 304 according to steps 1618, 1620, and 1622. Since the height of the keyframes drawn on the keyframe canvas is increased according to an embodiment of the present invention, as described above, the keyframes are also more recognizable when displayed in thumbnail 312-2. Multipliers are then calculated according to step 1624. Steps 1618, 1620, 1622, and 1624 are similar to steps 1518, 1520, 1522, and 1524, depicted in
As depicted in
A section of the text canvas (generated in the flowchart depicted in
Time values corresponding to the boundaries of the section of the text canvas identified in step 1704 (marked by pixel locations Pstart and Pend) are then determined (step 1706). The multiplier sec_m is used to calculate the corresponding time values. A time t1 (in seconds) corresponding to pixel location Pstart is calculated as follows:
t1=Pstart*sec_m
A time t2 (in seconds) corresponding to pixel location Pend is calculated as follows:
t2=Pend*sec_m
A section of the keyframe canvas corresponding to the selected section of the text canvas is then identified (step 1708). Since the height of the keyframe canvas is the same as the height of the text canvas, the selected section of the keyframe canvas also lies between pixel locations Pstart and Pend in the keyframe canvas corresponding to times t1 and t2.
The portion of the text canvas identified in step 1704 is displayed in panel 324-1 in third viewing area 306 (step 1710). The portion of the keyframe canvas identified in step 1708 is displayed in panel 324-2 in third viewing area 306 (step 1712).
A panel viewing area lens 322 is displayed covering a section of third viewing area 306 (step 1714). Panel viewing area lens 322 is displayed such that it emphasizes or covers sections of panels 324-1 and 324-2 displayed in third viewing area 306 between times t3 and t4 where (t1≦t3<t4≦t2). The top edge of panel viewing area lens 322 corresponds to time t3 and the bottom edge of panel viewing area lens 322 corresponds to time t4. The height of panel viewing area lens 322 (expressed in pixels) is equal to: (Vertical pixel location in the text canvas corresponding to t4)−(Vertical pixel location in the text canvas corresponding to t3). The width of panel viewing area lens 322 is approximately equal to the width of third viewing area 306 (as shown in
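The lens geometry of step 1714 can be sketched as follows (illustrative names; the lens height is the pixel distance between the canvas locations of t3 and t4):

```python
def panel_lens_geometry(t3, t4, pix_m):
    """Top edge, bottom edge, and height (in canvas pixels) of the panel
    viewing area lens for the time window [t3, t4]."""
    top = t3 * pix_m       # vertical pixel location corresponding to t3
    bottom = t4 * pix_m    # vertical pixel location corresponding to t4
    return top, bottom, bottom - top

# lens emphasizing the window from t3 = 100 s to t4 = 140 s, pix_m = 5
lens = panel_lens_geometry(100, 140, 5)
```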
A portion of thumbnail 312-1 corresponding to the section of text canvas displayed in panel 324-1 and a portion of thumbnail 312-2 corresponding to the section of keyframe canvas displayed in panel 324-2 are then determined (step 1716). The portion of thumbnail 312-1 corresponding to the section of the text canvas displayed in panel 324-1 is characterized by vertical pixel coordinate (TNstart) marking the starting pixel location of the thumbnail portion, and a vertical pixel coordinate (TNend) marking the ending pixel location of the thumbnail portion. The multiplier tpix_m is used to determine pixel locations TNstart and TNend as follows:
TNstart=t1*tpix_m
TNend=t2*tpix_m
Since thumbnails 312-1 and 312-2 are of the same length and are temporally aligned to one another, the portion of thumbnail 312-2 corresponding to the sections of keyframe canvas displayed in panel 324-2 also lies between pixel locations TNstart and TNend on thumbnail 312-2.
Thumbnail viewing area lens 314 is then displayed covering portions of thumbnails 312-1 and 312-2 corresponding to the section of text canvas displayed in panel 324-1 and the section of keyframe canvas displayed in panel 324-2 (step 1718). Thumbnail viewing area lens 314 is displayed covering portions of thumbnails 312-1 and 312-2 between pixel locations TNstart and TNend of the thumbnails. The height of thumbnail viewing area lens 314 in pixels is equal to (TNend−TNstart). The width of thumbnail viewing area lens 314 is approximately equal to the width of second viewing area 304 (as shown in
A portion of second viewing area 304 corresponding to the section of third viewing area 306 emphasized by panel viewing area lens 322 is then determined (step 1720). In step 1720, server 104 determines a portion of thumbnail 312-1 and a portion of thumbnail 312-2 corresponding to the time period between t3 and t4. The portion of thumbnail 312-1 corresponding to the time window between t3 and t4 is characterized by vertical pixel coordinate (TNSubstart) corresponding to time t3 and marking the starting vertical pixel of the thumbnail portion, and a vertical pixel coordinate (TNSubend) corresponding to time t4 and marking the ending vertical pixel location of the thumbnail portion. Multiplier tpix_m is used to determine pixel locations TNSubstart and TNSubend as follows:
TNSubstart=t3*tpix_m
TNSubend=t4*tpix_m
Since thumbnails 312-1 and 312-2 are of the same length and are temporally aligned to one another, the portion of thumbnail 312-2 corresponding to the time period between t3 and t4 also lies between pixel locations TNSubstart and TNSubend on thumbnail 312-2.
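Steps 1716 and 1720 apply the same time-to-thumbnail-pixel mapping, once to the lens window [t1, t2] and once to the sub-lens window [t3, t4]; a single sketch covers both (names illustrative):

```python
def thumbnail_portion(t_start, t_end, tpix_m):
    """Pixel span on a thumbnail for a time window; yields (TNstart, TNend)
    for [t1, t2] or (TNSubstart, TNSubend) for [t3, t4]."""
    return t_start * tpix_m, t_end * tpix_m

# a 400-second window from t = 400 s to t = 800 s, with tpix_m = 0.25
span = thumbnail_portion(400, 800, 0.25)
```

Because thumbnails 312-1 and 312-2 are aligned and equal in length, one computed span positions the lens or sub-lens on both thumbnails.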
Sub-lens 316 is then displayed covering portions of thumbnails 312-1 and 312-2 corresponding to the time window between t3 and t4 (i.e., corresponding to the portion of third viewing area 306 emphasized by panel viewing area lens 322) (step 1722). Sub-lens 316 is displayed covering portions of thumbnails 312-1 and 312-2 between pixel locations TNSubstart and TNSubend. The height of sub-lens 316 in pixels is equal to (TNSubend−TNSubstart). The width of sub-lens 316 is approximately equal to the width of second viewing area 304 (as shown in
Multimedia information corresponding to the portion of third viewing area 306 emphasized by panel viewing area lens 322 is displayed in fourth viewing area 308 (step 1724). For example, video information starting at time t3 is played back in area 340-1 of fourth viewing area 308 in GUI 300. In alternative embodiments, the starting time of the video playback may be set to any time between and including t3 and t4. Text information corresponding to the time window between t3 and t4 is displayed in area 340-2 of fourth viewing area 308.
The multimedia information may then be analyzed and the results of the analysis are displayed in fifth viewing area 310 (step 1726). For example, the text information extracted from the multimedia information may be analyzed to identify words that occur in the text information and the frequency of individual words. The words and their frequency may be printed in fifth viewing area 310 (e.g., information printed in area 352 of fifth viewing area 310 as shown in
Multimedia Information Navigation
As previously described, a user of the present invention may navigate and scroll through the multimedia information stored by a multimedia document and displayed in GUI 300 using thumbnail viewing area lens 314 and panel viewing area lens 322. For example, the user can change the location of thumbnail viewing area lens 314 by moving thumbnail viewing area lens 314 along the length of second viewing area 304. In response to a change in the position of thumbnail viewing area lens 314 from a first location in second viewing area 304 to a second location along second viewing area 304, the multimedia information displayed in third viewing area 306 is automatically updated such that the multimedia information displayed in third viewing area 306 continues to correspond to the area of second viewing area 304 emphasized by thumbnail viewing area lens 314 in the second location.
Likewise, the user can change the location of panel viewing area lens 322 by moving panel viewing area lens 322 along the length of third viewing area 306. In response to a change in the location of panel viewing area lens 322, the position of sub-lens 316, and also possibly thumbnail viewing area lens 314, is updated to continue to correspond to the new location of panel viewing area lens 322. The information displayed in fourth viewing area 308 is also updated to correspond to the new location of panel viewing area lens 322.
As depicted in
Server 104 then determines time values corresponding to the second position of thumbnail viewing area lens 314 (step 1806). A time value t1 is determined corresponding to pixel location TNstart and a time value t2 is determined corresponding to pixel location TNend. The multiplier tsec_m is used to determine the time values as follows:
t1=TNstart*tsec_m
t2=TNend*tsec_m
Server 104 then determines pixel locations in the text canvas and the keyframe canvas corresponding to the time values determined in step 1806 (step 1808). A pixel location Pstart in the text canvas is calculated based upon time t1, and a pixel location Pend in the text canvas is calculated based upon time t2. The multiplier pix_m is used to determine the locations as follows:
Pstart=t1*pix_m
Pend=t2*pix_m
Since the text canvas and the keyframe canvas are of the same length, time values t1 and t2 correspond to pixel locations Pstart and Pend in the keyframe canvas.
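Steps 1806 and 1808 chain the two multipliers: thumbnail pixels are converted to time values, and the time values are converted to canvas pixels. A brief sketch (names illustrative):

```python
def lens_to_canvas(tn_start, tn_end, tsec_m, pix_m):
    """Map the thumbnail lens edges to canvas pixel locations via time values."""
    t1 = tn_start * tsec_m           # t1 = TNstart * tsec_m
    t2 = tn_end * tsec_m             # t2 = TNend * tsec_m
    return t1 * pix_m, t2 * pix_m    # (Pstart, Pend) on the text/keyframe canvas

# lens edges at thumbnail pixels 100 and 200, with tsec_m = 4 and pix_m = 5
canvas_span = lens_to_canvas(100, 200, 4, 5)
```

The resulting (Pstart, Pend) span selects the canvas sections displayed in panels 324-1 and 324-2.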
A section of the text canvas between pixel locations Pstart and Pend is displayed in panel 324-1 (step 1810). The section of the text canvas displayed in panel 324-1 corresponds to the portion of thumbnail 312-1 emphasized by thumbnail viewing area lens 314 in the second position.
A section of the keyframe canvas between pixel locations Pstart and Pend is displayed in panel 324-2 (step 1812). The section of the keyframe canvas displayed in panel 324-2 corresponds to the portion of thumbnail 312-2 emphasized by thumbnail viewing area lens 314 in the second position.
When thumbnail viewing area lens 314 is moved from the first position to the second position, sub-lens 316 also moves along with thumbnail viewing area lens 314. Server 104 then determines a portion of second viewing area 304 emphasized by sub-lens 316 in the second position (step 1814). As part of step 1814, server 104 determines pixel locations (TNSubstart and TNSubend) in thumbnail 312-1 corresponding to the edges of sub-lens 316 in the second position. TNSubstart marks the starting vertical pixel location in thumbnail 312-1, and TNSubend marks the ending vertical pixel location of sub-lens 316 in thumbnail 312-1. Since thumbnails 312-1 and 312-2 are of the same length and are temporally aligned to one another, the portion of thumbnail 312-2 corresponding to the second position of sub-lens 316 also lies between pixel locations TNSubstart and TNSubend.
Server 104 then determines time values corresponding to the second position of sub-lens 316 (step 1816). A time value t3 is determined corresponding to pixel location TNSubstart and a time value t4 is determined corresponding to pixel location TNSubend. The multiplier tsec_m is used to determine the time values as follows:
t3=TNSubstart*tsec_m
t4=TNSubend*tsec_m
Server 104 then determines pixel locations in the text canvas and the keyframe canvas corresponding to the time values determined in step 1816 (step 1818). A pixel location PSubstart in the text canvas is calculated based upon time t3, and a pixel location PSubend in the text canvas is calculated based upon time t4. The multiplier pix_m is used to determine the locations as follows:
PSubstart=t3*pix_m
PSubend=t4*pix_m
Since the text canvas and the keyframe canvas are of the same length, time values t3 and t4 correspond to pixel locations PSubstart and PSubend in the keyframe canvas.
Panel viewing area lens 322 is drawn over third viewing area 306 covering a portion of third viewing area 306 between pixel locations PSubstart and PSubend (step 1820). The multimedia information displayed in fourth viewing area 308 is then updated to correspond to the new position of panel viewing area lens 322 (step 1822).
As depicted in
t3=(Pixel location of top edge of panel viewing area lens 322)*sec_m
t4=(Pixel location of bottom edge of panel viewing area lens 322)*sec_m
Server 104 then determines pixel locations in second viewing area 304 corresponding to the time values determined in step 1904 (step 1906). A pixel location TNSubstart in a thumbnail (either 312-1 or 312-2, since they are aligned and of the same length) in second viewing area 304 is calculated based upon time t3, and a pixel location TNSubend in the thumbnail is calculated based upon time t4. The multiplier tpix_m is used to determine the locations as follows:
TNSubstart=t3*tpix_m
TNSubend=t4*tpix_m
Sub-lens 316 is then updated to emphasize a portion of thumbnails 312 in second viewing area 304 between the pixel locations determined in step 1906 (step 1908). As part of step 1908, the position of thumbnail viewing area lens 314 may also be updated if pixel positions TNSubstart or TNSubend lie beyond the boundaries of thumbnail viewing area lens 314 when panel viewing area lens 322 was in the first position. For example, if a user uses panel viewing area lens 322 to scroll third viewing area 306 beyond the PanelHeight, then the position of thumbnail viewing area lens 314 is updated accordingly. If the second position of panel viewing area lens 322 lies within PanelHeight, then only sub-lens 316 is moved to correspond to the second position of panel viewing area lens 322 and thumbnail viewing area lens 314 is not moved.
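Steps 1904 and 1906 perform the reverse chain of the thumbnail-lens update: panel lens edges (canvas pixels) are converted to time values, and the time values to thumbnail pixels. A brief sketch (names illustrative):

```python
def update_sub_lens(panel_top_px, panel_bottom_px, sec_m, tpix_m):
    """From the new panel lens edges (canvas pixels), compute the sub-lens
    span (TNSubstart, TNSubend) on the thumbnails."""
    t3 = panel_top_px * sec_m         # step 1904: canvas pixels -> time values
    t4 = panel_bottom_px * sec_m
    return t3 * tpix_m, t4 * tpix_m   # step 1906: time values -> thumbnail pixels

# panel lens edges at canvas pixels 2000 and 4000, sec_m = 0.2, tpix_m = 0.25
sub_span = update_sub_lens(2000, 4000, 0.2, 0.25)
```

The returned span positions sub-lens 316; a separate check against the current lens boundaries would decide whether thumbnail viewing area lens 314 must also move.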
As described above, panel viewing area lens 322 may be used to scroll the information displayed in third viewing area 306. For example, a user may move panel viewing area lens 322 to the bottom of third viewing area 306 and cause the contents of third viewing area 306 to be automatically scrolled upwards. Likewise, the user may move panel viewing area lens 322 to the top of third viewing area 306 and cause the contents of third viewing area 306 to be automatically scrolled downwards. The positions of thumbnail viewing area lens 314 and sub-lens 316 are updated as scrolling occurs.
Multimedia information corresponding to the second position of panel viewing area lens 322 is then displayed in fourth viewing area 308 (step 1910). For example, video information corresponding to the second position of panel viewing area lens 322 is displayed in area 340-1 of fourth viewing area 308 and text information corresponding to the second position of panel viewing area lens 322 is displayed in area 340-2 of fourth viewing area 308.
According to an embodiment of the present invention, in step 1910, server 104 selects a time “t” having a value equal to either t3 or t4 or some time value between t3 and t4. Time “t” may be referred to as the “location time”. The location time may be user-configurable. According to an embodiment of the present invention, the location time is set to t4. The location time is then used as the starting time for playing back video information in area 340-1 of fourth viewing area 308.
According to an embodiment of the present invention, GUI 300 may operate in two modes: a “full update” mode and a “partial update” mode. The user of the GUI may select the operation mode of the GUI.
When GUI 300 is operating in “full update” mode, the positions of thumbnail viewing area lens 314 and panel viewing area lens 322 are automatically updated to reflect the position of the video played back in area 340-1 of fourth viewing area 308. Accordingly, in “full update” mode, thumbnail viewing area lens 314 and panel viewing area lens 322 keep up or reflect the position of the video played in fourth viewing area 308. The video may be played forwards or backwards using the controls depicted in area 342 of fourth viewing area 308, and the positions of thumbnail viewing area lens 314 and panel viewing area lens 322 change accordingly. The multimedia information displayed in panels 324 in third viewing area 306 is also automatically updated (shifted upwards) to correspond to the position of thumbnail viewing area lens 314 and reflect the current position of the video.
When GUI 300 is operating in “partial update” mode, the positions of thumbnail viewing area lens 314 and panel viewing area lens 322 are not updated to reflect the position of the video played back in area 340-1 of fourth viewing area 308. In this mode, the positions of thumbnail viewing area lens 314 and panel viewing area lens 322 remain static as the video is played in area 340-1 of fourth viewing area 308. Since the position of thumbnail viewing area lens 314 does not change, the multimedia information displayed in third viewing area 306 is also not updated. In this mode, a “location pointer” may be displayed in second viewing area 304 and third viewing area 306 to reflect the current position of the video played back in area 340-1 of fourth viewing area 308. The position of the location pointer is continuously updated to reflect the position of the video.
Ranges
According to an embodiment, the present invention provides techniques for selecting or specifying portions of the multimedia information displayed in the GUI. Each portion is referred to as a “range.” A range may be manually specified by a user of the present invention or may alternatively be automatically selected by the present invention based upon range criteria provided by the user of the invention.
A range refers to a portion of the multimedia information between a start time (RS) and an end time (RE). Accordingly, each range is characterized by an RS and an RE that define the time boundaries of the range. A range comprises the portion of the multimedia information occurring between times RS and RE associated with the range.
As depicted in
Each range specified by selecting a portion of thumbnail 2008-2 is bounded by a top edge (Rtop) and a bottom edge (Rbottom). The RS and RE times for a range may be determined from the pixel locations of Rtop and Rbottom as follows:
RS = Rtop * tsec_m
RE = Rbottom * tsec_m
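The pixel-to-time mapping above can be sketched as follows. This is a minimal illustration; the function name is hypothetical, and the multiplier (written tsec_m here) is read as the number of seconds of multimedia information represented by each vertical pixel of the thumbnail.

```python
# Sketch: convert the top/bottom pixel edges of a selection on the
# thumbnail into the range start (RS) and end (RE) times.
# Assumption: tsec_m is the number of seconds represented by each
# vertical pixel of the thumbnail.
def range_from_pixels(r_top, r_bottom, tsec_m):
    """Return (RS, RE) in seconds for a selection bounded by pixel rows."""
    return (r_top * tsec_m, r_bottom * tsec_m)
```

For example, if each pixel represents half a second, a selection from pixel row 10 to row 50 maps to the range from 5.0 seconds to 25.0 seconds.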
It should be apparent that various other techniques may also be used for specifying a range. For example, in alternative embodiments of the present invention, a user may specify a range by providing the start time (RS) and end time (RE) for the range.
In GUI 2000 depicted
According to the teachings of the present invention, various operations may be performed on the ranges displayed in GUI 2000. A user can edit a range by changing the RS and RE times associated with the range. Editing a range may change the time span (i.e., the value of (RE−RS)) of the range. In GUI 2000 depicted in
The user can also edit a range by selecting a range in area 2010 and then selecting “Edit” button 2020. In this scenario, selecting “Edit” button 2020 causes a dialog box to be displayed to the user (e.g., dialog box 2050 depicted in
The user can also move the location of a displayed range by changing the position of the displayed range along thumbnail 2008-2. Moving a range changes the RS and RE values associated with the range but maintains the time span of the range. In GUI 2000, the user can move a range by first selecting “Move” button 2022 and then selecting and moving a range. As described above, the time span for a range may be edited by selecting “Edit” button and then dragging an edge of the bar representing the range.
The user can remove or delete a previously specified range. In GUI 2000 depicted in
As indicated above, each range refers to a portion of the multimedia information occurring between times RS and RE associated with the range. The multimedia information corresponding to a range may be output to the user by selecting “Play” button 2028. After selecting “Play” button 2028, the user may select a particular range displayed in GUI 2000 whose multimedia information is to be output to the user. The portion of the multimedia information corresponding to the selected range is then output to the user. Various different techniques known to those skilled in the art may be used to output the multimedia information to the user. According to an embodiment of the present invention, video information corresponding to multimedia information associated with a selected range is played back to the user in area 2030. Text information corresponding to the selected range may be displayed in area 2032. The positions of thumbnail viewing area lens 314 and panel viewing area lens 322, and the information displayed in third viewing area 306 are automatically updated to correspond to the selected range whose information is output to the user in area 2030.
The user can also select a range in area 2010 and then play information corresponding to the selected range by selecting “Play” button 2028. Multimedia information corresponding to the selected range is then displayed in area 2030.
The user may also instruct GUI 2000 to sequentially output information associated with all the ranges specified for the multimedia information displayed by GUI 2000 by selecting “Preview” button 2034. Upon selecting “Preview” button 2034, multimedia information corresponding to the displayed ranges is output to the user in sequential order. For example, if six ranges have been displayed as depicted in
Multimedia information associated with a range may also be saved to memory. For example, in the embodiment depicted in
Various other operations may also be performed on a range. For example, according to an embodiment of the present invention, multimedia information corresponding to one or more ranges may be printed on a paper medium. Details describing techniques for printing multimedia information on a paper medium are discussed in U.S. application Ser. No. 10/001,895, filed Nov. 19, 2001, the entire contents of which are herein incorporated by reference for all purposes.
Multimedia information associated with a range may also be communicated to a user-specified recipient. For example, a user may select a particular range and request communication of multimedia information corresponding to the range to a user-specified recipient. The multimedia information corresponding to the range is then communicated to the recipient. Various different communication techniques known to those skilled in the art may be used to communicate the range information to the recipient including faxing, electronic mail, wireless communication, and other communication techniques.
Multimedia information corresponding to a range may also be provided as input to another application program such as a search program, a browser, a graphics application, a MIDI application, or the like. The user may select a particular range and then identify an application to which the information is to be provided. In response to the user's selection, multimedia information corresponding to the range is then provided as input to the application.
As previously stated, ranges may be specified manually by a user or may be selected automatically by the present invention. The automatic selection of ranges may be performed by software modules executing on server 104, hardware modules coupled to server 104, or combinations thereof.
As depicted in
The multimedia information stored in the multimedia document is then analyzed to identify locations (referred to as “hits”) in the multimedia information that satisfy the criteria received in step 2102 (step 2104). For example, if the user has specified that one or more words selected by the user in area 2044 are to be used as the range creation criteria, then the locations of the selected words are identified in the multimedia information. Likewise, if the user has specified topics of interest as the range creation criteria, then server 104 analyzes the multimedia information to identify locations in the multimedia information that are relevant to the topics of interest specified by the user. As described above, server 104 may analyze the multimedia information to identify locations of words or phrases associated with the topics of interest specified by the user. Information related to the topics of interest may be stored in a user profile file that is accessible to server 104. It should be apparent that various other techniques known to those skilled in the art may also be used to identify locations in the multimedia information that satisfy the range criteria received in step 2102.
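As a hypothetical illustration of step 2104 for word-based criteria, a time-stamped transcript of the multimedia information could be scanned for the user-selected words, recording the time of each occurrence. The transcript representation and the function name below are illustrative assumptions, not part of the described embodiment.

```python
# Hypothetical sketch of step 2104: scan a time-stamped transcript for
# user-selected words and record the time of each occurrence ("hit").
def find_hits(transcript, keywords):
    """transcript: iterable of (time_seconds, word) pairs.
    Returns the sorted times at which any keyword occurs (case-insensitive)."""
    wanted = {w.lower() for w in keywords}
    return sorted(t for t, w in transcript if w.lower() in wanted)
```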
One or more ranges are then created based upon the locations of the hits identified in step 2104 (step 2106). Various different techniques may be used to form ranges based upon locations of the hits. According to one technique, one or more ranges are created based upon the times associated with the hits. Hits may be grouped into ranges based on the proximity of the hits to each other. One or more ranges created based upon the locations of the hits may be combined to form larger ranges.
The ranges created in step 2106 are then displayed to the user using GUI 2000 (step 2108). Various different techniques may be used to display the ranges to the user. In
As depicted in
Server 104 then determines if there are any additional hits in the multimedia information (step 2206). Processing ends if there are no additional hits in the multimedia information. The ranges created for the multimedia information may then be displayed to the user according to step 2108 depicted in
Server 104 then determines if the time gap between the end time of the range including the previous hit and the time determined in step 2208 exceeds a threshold value (step 2210). Accordingly, in step 2210 server 104 determines if:
(time determined in step 2208) − (RE of the range including the previous hit) > threshold value
If it is determined in step 2210 that the time gap between the end time of the range including the previous hit and the time determined in step 2208 exceeds the threshold value, then a new range is created to include the next hit such that RS for the new range is set to the time determined in step 2208, and RE for the new range is set to some time value after the time determined in step 2208 (step 2212). According to an embodiment of the present invention, RE is set to the time of occurrence of the hit plus 5 seconds. Processing then continues with step 2206.
If it is determined in step 2210 that the time gap between the end time of the range including the previous hit and the time determined in step 2208 does not exceed the threshold value, then the range including the previous hit is extended by changing the end time RE of the range to the time determined in step 2208 (step 2214). Processing then continues with step 2206.
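The range-creation loop described above (steps 2206 through 2214) can be sketched as follows. The function and parameter names are illustrative; initial_span reflects the 5-second value mentioned for step 2212, and the max() clamp when extending a range is an added guard (not stated above) so a hit falling inside an existing range never shrinks it.

```python
# Sketch of steps 2206-2214: hits are processed in time order; a hit
# within gap_threshold seconds of the current range's end time extends
# that range, otherwise it starts a new range.
def create_ranges(hit_times, gap_threshold, initial_span=5):
    """Return a list of [RS, RE] ranges covering the given hit times."""
    ranges = []
    for t in sorted(hit_times):
        if ranges and (t - ranges[-1][1]) <= gap_threshold:
            # Step 2214: gap within threshold, so extend the range's end
            # time out to this hit (clamped: an added guard).
            ranges[-1][1] = max(ranges[-1][1], t)
        else:
            # Step 2212: gap exceeds threshold (or first hit), so start
            # a new range; RE is set initial_span seconds past the hit.
            ranges.append([t, t + initial_span])
    return ranges
```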
According to the method depicted in
According to an embodiment of the present invention, after forming one or more ranges based upon the times associated with the hits (e.g., according to flowchart 2200 depicted in
In order to describe the processing performed in
As depicted in
Server 104 then determines if range Ri selected in step 2304 qualifies as a small range. According to an embodiment of the present invention, a threshold value “SmallRangeSize” is defined and a range is considered a small range if the time span of the range is less than or equal to threshold value SmallRangeSize. Accordingly, in order to determine if range Ri qualifies as a small range, the time span of range Ri selected in step 2304 is compared to threshold time value “SmallRangeSize” (step 2306). The value of SmallRangeSize may be user-configurable. According to an embodiment of the present invention, SmallRangeSize is set to 8 seconds.
If it is determined in step 2306 that the range Ri selected in step 2304 does not qualify as a small range (i.e., the time span (RE−RS) of range Ri is greater than the threshold value SmallRangeSize), then the range is not a candidate for combination with another range. The value of variable “i” is then incremented by one (step 2308) to facilitate selection of the next range in the set of “N” ranges. Accordingly, in the embodiment depicted in
After step 2308, server 104 determines if all the ranges in the set of “N” ranges have been processed. This is done by determining if the value of “i” is greater than the value of “N” (step 2310). If the value of “i” is greater than “N”, it indicates that all the ranges in the set of ranges for the multimedia information have been processed and processing of flowchart 2300 ends. If it is determined in step 2310 that “i” is less than or equal to “N”, then it indicates that the set of “N” ranges comprises at least one range that has not been processed according to flowchart 2300. Processing then continues with step 2304 wherein the next range Ri is selected.
If it is determined in step 2306 that range Ri selected in step 2304 qualifies as a small range (i.e., the time span (RE−RS) of range Ri is less than or equal to the threshold value SmallRangeSize), the present invention then performs processing to identify a range that is a neighbor of range Ri (i.e., a range that occurs immediately before or after range Ri selected in step 2304) with which range Ri can be combined. In order to identify such a range, server 104 initializes variables to facilitate selection of ranges that are neighbors of range Ri selected in step 2304 (step 2312). A variable “j” is set to the value (i+1) and a variable “k” is set to the value (i−1). Variable “j” is used to refer to a range that is a neighbor of range Ri and occurs after range Ri, and variable “k” is used to refer to a range that is a neighbor of range Ri and occurs before range Ri.
Server 104 then determines if the set of “N” ranges created for the multimedia information includes a range that is a neighbor of range Ri selected in step 2304 and occurs before range Ri, and a range that is a neighbor of range Ri and occurs after range Ri. This is done by determining the values of variables “j” and “k”. If the value of “j” is greater than “N”, it indicates that the range Ri selected in step 2304 is the last range in the set of “N” ranges created for the multimedia information implying that there is no range that occurs after range Ri. If the value of “k” is equal to zero, it indicates that the range Ri selected in step 2304 is the first range in the set of “N” ranges created for the multimedia information implying that there is no range that occurs before range Ri.
Accordingly, server 104 determines if range Ri has a neighboring range that occurs before Ri and a neighboring range that occurs after Ri. This is done by determining if the value of “j” is less than “N” and if the value of “k” is not equal to zero (step 2314). If the condition in step 2314 is satisfied, then it indicates that the set of “N” ranges comprises a range that is a neighbor of range Ri selected in step 2304 and occurs before range Ri, and a range that is a neighbor of range Ri and occurs after range Ri. In this case, processing continues with step 2316. If the condition in step 2314 is not satisfied, then it indicates that range Ri selected in step 2304 is either the first range in the set of “N” ranges implying that there is no range that occurs before range Ri, and/or that range Ri selected in step 2304 is the last range in the set of “N” ranges implying that there is no range that occurs after range Ri. In this case, processing continues with step 2330.
If the condition in step 2314 is determined to be true, server 104 then determines time gaps between ranges Ri and Rk and between ranges Ri and Rj (step 2316). The time gap (denoted by Gik) between ranges Ri and Rk is calculated by determining the time between RS of range Ri and RE of Rk, (see
Gik=(RS of Ri)−(RE of Rk)
The time gap (denoted by Gij) between ranges Ri and Rj is calculated by determining the time between RE of range Ri and RS of Rj, (see
Gij=(RS of Rj)−(RE of Ri)
According to the teachings of the present invention, a small range is combined with a neighboring range only if the gap between the small range and the neighboring range is less than or equal to a threshold gap value. The threshold gap value is user configurable. Accordingly, server 104 then determines the sizes of the time gaps to determine if range Ri can be combined with one of its neighboring ranges.
Server 104 then determines which time gap is larger by comparing the values of time gap Gik and time gap Gij (step 2318). If it is determined in step 2318 that Gik is greater than Gij, it indicates that range Ri selected in step 2304 is closer to range Rj than to range Rk, and processing continues with step 2322. Alternatively, if it is determined in step 2318 that Gik is not greater than Gij, it indicates that the time gap between range Ri selected in step 2304 and range Rk is equal to or less than the time gap between ranges Ri and Rj. In this case processing continues with step 2320.
If it is determined in step 2318 that Gik is not greater than Gij, server 104 then determines if the time gap (Gik) between range Ri and range Rk is less than or equal to a threshold gap value “GapThreshold” (step 2320). The value of GapThreshold is user configurable. According to an embodiment of the present invention, GapThreshold is set to 90 seconds. It should be apparent that various other values may also be used for GapThreshold.
If it is determined in step 2320 that the time gap (Gik) between range Ri and range Rk is less than or equal to threshold gap value GapThreshold (i.e., Gik≦GapThreshold), then ranges Ri and Rk are combined to form a single range (step 2324). The process of combining ranges Ri and Rk involves changing the end time of range Rk to the end time of range Ri (i.e., RE of Rk is set to RE of Ri) and deleting range Ri. Processing then continues with step 2308 wherein the value of variable “i” is incremented by one.
If it is determined in step 2320 that time gap Gik is greater than GapThreshold (i.e., Gik>GapThreshold), it indicates that both ranges Rj and Rk are outside the threshold gap value and as a result range Ri cannot be combined with either range Rj or Rk. In this scenario, processing continues with step 2308 wherein the value of variable “i” is incremented by one.
Referring back to step 2318, if it is determined that Gik is greater than Gij, server 104 then determines if the time gap (Gij) between ranges Ri and Rj is less than or equal to the threshold gap value “GapThreshold” (step 2322). As indicated above, the value of GapThreshold is user configurable. According to an embodiment of the present invention, GapThreshold is set to 90 seconds. It should be apparent that various other values may also be used for GapThreshold.
If it is determined in step 2322 that the time gap (Gij) between ranges Ri and Rj is less than or equal to threshold gap value GapThreshold (i.e., Gij≦GapThreshold), then ranges Ri and Rj are combined to form a single range (step 2326). The process of combining ranges Ri and Rj involves changing the start time of range Rj to the start time of range Ri (i.e., RS of Rj is set to RS of Ri) and deleting range Ri. Processing then continues with step 2308 wherein the value of variable “i” is incremented by one.
If it is determined in step 2322 that time gap Gij is greater than GapThreshold (i.e., Gij>GapThreshold), it indicates that both ranges Rj and Rk are outside the threshold gap value and as a result range Ri cannot be combined with either range Rj or Rk. In this scenario, processing continues with step 2308 wherein the value of variable “i” is incremented by one.
If server 104 determines that the condition in step 2314 is not satisfied, server 104 then determines if the value of “k” is equal to zero (step 2330). If the value of “k” is equal to zero, it indicates that the range Ri selected in step 2304 is the first range in the set of “N” ranges created for the multimedia information, which implies that there is no range in the set of “N” ranges that occurs before range Ri. In this scenario, server 104 then determines if the value of variable “j” is greater than “N” (step 2332). If the value of “j” is also greater than “N”, it indicates that the range Ri selected in step 2304 is not only the first range but also the last range in the set of “N” ranges created for the multimedia information, which implies that there is no range in the set of ranges that comes after range Ri. If it is determined in step 2330 that “k” is equal to zero and in step 2332 that “j” is greater than “N”, it indicates that the set of ranges for the multimedia information comprises only one range (i.e., N==1). Processing depicted in flowchart 2300 is then ended since no ranges can be combined.
If it is determined in step 2330 that “k” is equal to zero and that “j” is not greater than “N” in step 2332, it indicates that the range Ri selected in step 2304 represents the first range in the set of “N” ranges created for the multimedia information, and that the set of ranges includes at least one range Rj that is a neighbor of range Ri and occurs after range Ri. In this case, the time gap Gij between range Ri and range Rj is determined (step 2334). As indicated above, time gap Gij is calculated by determining the time between RE of range Ri and RS of Rj, i.e.,
Gij=(RS of Rj)−(RE of Ri)
Processing then continues with step 2322 as described above.
If it is determined in step 2330 that “k” is not equal to zero, it indicates that the range Ri selected in step 2304 represents the last range in the set of “N” ranges created for the multimedia information, and that the set of ranges includes at least one range Rk that is a neighbor of range Ri and occurs before range Ri. In this case, the time gap Gik between range Ri and range Rk is determined (step 2336). As indicated above, time gap Gik is calculated by determining the time gap between RS of range Ri and RE of Rk, i.e.,
Gik=(RS of Ri)−(RE of Rk)
Processing then continues with step 2320 as described above.
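The combination logic of flowchart 2300 can be sketched roughly as follows. This is an illustrative simplification of the steps above: ranges are held in a time-sorted list and a combined range is removed from the list directly rather than tracked by the index bookkeeping of steps 2308 through 2336; the default values reflect the 8-second SmallRangeSize and 90-second GapThreshold mentioned above.

```python
# Rough sketch of flowchart 2300: each range no longer than
# small_range_size is merged into its nearer neighbor when the gap to
# that neighbor is within gap_threshold. Ties favor the preceding
# range, matching the "Gik is not greater than Gij" branch above.
def combine_small_ranges(ranges, small_range_size=8, gap_threshold=90):
    """ranges: time-sorted (RS, RE) pairs; returns the combined ranges."""
    ranges = [list(r) for r in ranges]
    i = 0
    while i < len(ranges):
        rs, re = ranges[i]
        if re - rs > small_range_size:
            i += 1                      # not a small range; leave it alone
            continue
        gap_prev = rs - ranges[i - 1][1] if i > 0 else None
        gap_next = ranges[i + 1][0] - re if i + 1 < len(ranges) else None
        if gap_prev is not None and (gap_next is None or gap_prev <= gap_next):
            if gap_prev <= gap_threshold:
                ranges[i - 1][1] = re   # merge into the preceding range
                del ranges[i]
                continue
        elif gap_next is not None and gap_next <= gap_threshold:
            ranges[i + 1][0] = rs       # merge into the following range
            del ranges[i]
            continue
        i += 1                          # no neighbor within the gap threshold
    return [tuple(r) for r in ranges]
```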
As indicated above, the processing depicted in
According to an alternative embodiment of the present invention, after combining ranges according to flowchart 2300 depicted in
A buffer is provided at the start of a range by changing the RS time of the range as follows:
RS of range=(RS of range before adding buffer)−BufferStart
A buffer is provided at the end of a range by changing the RE time of the range as follows:
RE of range=(RE of range before adding buffer)+BufferEnd
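The buffer step above can be sketched as follows. The parameter names mirror BufferStart and BufferEnd, and the clamp at zero is an added assumption so a range near the beginning of the multimedia document does not acquire a negative start time.

```python
# Sketch: widen each range by buffer_start seconds before RS and
# buffer_end seconds after RE, per the formulas above. The max(0, ...)
# clamp is an added assumption, not stated in the formulas.
def add_buffers(ranges, buffer_start, buffer_end):
    """ranges: (RS, RE) pairs; returns the buffered ranges."""
    return [(max(0, rs - buffer_start), re + buffer_end) for rs, re in ranges]
```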
Although specific embodiments of the invention have been described, various modifications, alterations, alternative constructions, and equivalents are also encompassed within the scope of the invention. The described invention is not restricted to operation within certain specific data processing environments, but is free to operate within a plurality of data processing environments. Additionally, although the present invention has been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that the scope of the present invention is not limited to the described series of transactions and steps. For example, the processing for generating a GUI according to the teachings of the present invention may be performed by server 104, by client 102, by another computer, or by the various computer systems in association.
Further, while the present invention has been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present invention. The present invention may be implemented only in hardware, or only in software, or using combinations thereof.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims.
6026409, | Sep 26 1996 | System and method for search and retrieval of digital information by making and scaled viewing | |
6028601, | Apr 01 1997 | Apple Computer, Inc.; Apple Computer, Inc | FAQ link creation between user's questions and answers |
6055542, | Oct 29 1997 | International Business Machines Corporation | System and method for displaying the contents of a web page based on a user's interests |
6061758, | Dec 22 1989 | AVID TECHNOLOGY, INC | System and method for managing storage and retrieval of media data including dynamic linkage of media data files to clips of the media data |
6094648, | Jan 11 1995 | MOBILE ENHANCEMENT SOLUTIONS LLC | User interface for document retrieval |
6098082, | Jul 15 1996 | AT&T Corp | Method for automatically providing a compressed rendition of a video program in a format suitable for electronic searching and retrieval |
6101503, | Mar 02 1998 | International Business Machines Corp. | Active markup--a system and method for navigating through text collections |
6108656, | Nov 08 1996 | NM, LLC | Automatic access of electronic information through machine-readable codes on printed documents |
6115718, | Apr 01 1998 | Xerox Corporation | Method and apparatus for predicting document access in a collection of linked documents featuring link probabilities and spreading activation |
6125229, | Jun 02 1997 | U S PHILIPS CORPORATION | Visual indexing system |
6151059, | Aug 06 1996 | Rovi Guides, Inc | Electronic program guide with interactive areas |
6160633, | Aug 07 1996 | Olympus Optical Co., Ltd. | Code printing apparatus for printing an optically readable code image at set positions on a print medium |
6182090, | Apr 28 1995 | Ricoh Company, Ltd. | Method and apparatus for pointing to documents electronically using features extracted from a scanned icon representing a destination |
6193658, | Jun 24 1999 | Method and kit for wound evaluation | |
6199048, | Jun 20 1995 | NM, LLC | System and method for automatic access of a remote computer over a network |
6211869, | Apr 04 1997 | CERBERUS BUSINESS FINANCE, LLC, AS COLLATERAL AGENT | Simultaneous storage and network transmission of multimedia data with video host that requests stored data according to response time from a server |
6222532, | Feb 03 1997 | U S PHILIPS CORPORATION | Method and device for navigating through video matter by means of displaying a plurality of key-frames in parallel |
6262724, | Apr 15 1999 | Apple Inc | User interface for presenting media information |
6340971, | Feb 03 1997 | U S PHILIPS CORPORATION | Method and device for keyframe-based video displaying using a video cursor frame in a multikeyframe screen |
6369811, | Sep 09 1998 | Ricoh Company Limited | Automatic adaptive document help for paper documents |
6421067, | Jan 16 2000 | JLB Ventures LLC | Electronic programming guide |
6430554, | Feb 01 1999 | NM, LLC | Interactive system for investigating products on a network |
6434561, | May 09 1997 | NM, LLC | Method and system for accessing electronic resources via machine-readable data on intelligent documents |
6452615, | Mar 24 1999 | FUJI XEROX CO , LTD ; Xerox Corporation | System and apparatus for notetaking with digital video and ink |
6504620, | Mar 25 1997 | FUJIFILM Corporation | Print ordering method, printing system and film scanner |
6505153, | May 22 2000 | DATA QUILL LIMITED | Efficient method for producing off-line closed captions |
6518986, | Oct 17 1997 | Sony Corporation; Sony Electronics Inc. | Method and apparatus for providing an on-screen guide for a multiple channel broadcasting system |
6529920, | Mar 05 1999 | LIVESCRIBE, INC | Multimedia linking device and method |
6535639, | Mar 12 1999 | FUJI XEROX CO , LTD ; Xerox Corporation | Automatic video summarization using a measure of shot importance and a frame-packing method |
6542933, | Apr 05 1999 | NM, LLC | System and method of using machine-readable or human-readable linkage codes for accessing networked data resources |
6544294, | May 27 1999 | Write Brothers, Inc.; SCREENPLAY SYSTEMS, INC | Method and apparatus for creating, editing, and displaying works containing presentation metric components utilizing temporal relationships and structural tracks |
6546385, | Aug 13 1999 | International Business Machines Corporation | Method and apparatus for indexing and searching content in hardcopy documents |
6567980, | Aug 14 1997 | VIRAGE, INC | Video cataloger system with hyperlinked output |
6596031, | Apr 04 1997 | CERBERUS BUSINESS FINANCE, LLC, AS COLLATERAL AGENT | News story markup language and system and process for editing and processing documents |
6608563, | Jan 26 2000 | MQ GAMING, LLC; MQ Gaming, LLC | System for automated photo capture and retrieval |
6623528, | Jul 22 1998 | Eastman Kodak Company | System and method of constructing a photo album |
6628303, | Jul 29 1996 | CERBERUS BUSINESS FINANCE, LLC, AS COLLATERAL AGENT | Graphical user interface for a motion video planning and editing system for a computer |
6636869, | Dec 22 1989 | Avid Technology, Inc. | Method, system and computer program product for managing media data files and related source information |
6647535, | Mar 18 1999 | Xerox Corporation | Methods and systems for real-time storyboarding with a web page and graphical user interface for automatic video parsing and browsing |
6651053, | Feb 01 1999 | NM, LLC | Interactive system for investigating products on a network |
6675165, | Feb 28 2000 | NM, LLC | Method for linking a billboard or signage to information on a global computer network through manual information input or a global positioning system |
6684368, | Nov 13 1998 | Ricoh Company, LTD | Method for specifying delivery information for electronic documents |
6728466, | Feb 27 1998 | FUJIFILM Corporation | Image processing apparatus, image printing control apparatus, print image designation method and image printing control method |
6745234, | Sep 11 1998 | RPX Corporation | Method and apparatus for accessing a remote location by scanning an optical code |
6750978, | Apr 27 2000 | LEAPFROG ENTERPRISES, INC | Print media information system with a portable print media receiving unit assembly |
6760541, | May 20 1998 | DROPBOX INC | Information processing device and method, distribution medium, and recording medium |
6766363, | Feb 28 2000 | NM, LLC | System and method of linking items in audio, visual, and printed media to related information stored on an electronic network using a mobile device |
6781609, | May 09 2000 | International Business Machines Corporation | Technique for flexible inclusion of information items and various media types in a user interface |
6865608, | Mar 31 2000 | NM, LLC | Method and system for simplified access to internet content on a wireless device |
6865714, | Sep 22 1999 | Siemens Corporation | Automatic generation of card-based presentation documents from multimedia data |
6993573, | Jun 06 2003 | NM, LLC | Automatic access of internet content with a camera-enabled cell phone |
7131058, | Dec 01 1999 | Silverbrook Research Pty LTD | Method and system for device control |
20010005203, | |||
20010013041, | |||
20010037408, | |||
20010043789, | |||
20010044810, | |||
20020010641, | |||
20020036800, | |||
20020047870, | |||
20020048224, | |||
20020059342, | |||
20020070982, | |||
20020095460, | |||
20020099452, | |||
20020116575, | |||
20020135808, | |||
20020169849, | |||
20020171857, | |||
20020185533, | |||
20020199149, | |||
20030007776, | |||
20030052897, | |||
20030065665, | |||
20030076521, | |||
20030117652, | |||
20030156589, | |||
20030184598, | |||
20030220988, | |||
20040006577, | |||
20040008209, | |||
20040015524, | |||
20040037540, | |||
20040064338, | |||
20040064339, | |||
20040071441, | |||
20040095376, | |||
20040098671, | |||
20040100506, | |||
20040103372, | |||
20040143602, | |||
20040175036, | |||
20040247298, | |||
20040249650, | |||
20050064935, | |||
20070033419, | |||
20080106597, | |||
EP248403, | |||
EP378848, | |||
EP459174, | |||
EP737927, | |||
EP762297, | |||
EP788063, | |||
EP788064, | |||
EP802492, | |||
GB2137788, | |||
GB2156118, | |||
GB2234609, | |||
GB2290898, | |||
JP2000253337, | |||
JP2000516006, | |||
JP2001111963, | |||
JP2001176246, | |||
JP2001326910, | |||
JP2002158936, | |||
JP2004023787, | |||
JP2004199696, | |||
JP4021165, | |||
JP5081327, | |||
JP8297677, | |||
RE36801, | Apr 18 1996 | Motorola Mobility LLC | Time delayed digital video system using concurrent recording and playback |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Feb 08 2002 | GRAHAM, JAMEY | Ricoh Company, LTD | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 012697 | /0158 | |
Feb 21 2002 | Ricoh Company, Ltd. | (assignment on the face of the patent) | / | |||
May 08 2002 | GRAHAM, JAMEY | Ricoh Company, LTD | CORRECTIVE ASSIGNMENT TO CORRECT THE EXECUTION OF THE ASSIGNOR, PREVIOUSLY RECORDED ON REEL 012697 FRAME 0158 | 013487 | /0763 |
Date | Maintenance Fee Events |
Jan 08 2010 | ASPN: Payor Number Assigned. |
Mar 14 2013 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity. |
May 12 2017 | REM: Maintenance Fee Reminder Mailed. |
Oct 30 2017 | EXP: Patent Expired for Failure to Pay Maintenance Fees. |
Date | Maintenance Schedule |
Sep 29 2012 | 4 years fee payment window open |
Mar 29 2013 | 6 months grace period start (w surcharge) |
Sep 29 2013 | patent expiry (for year 4) |
Sep 29 2015 | 2 years to revive unintentionally abandoned end. (for year 4) |
Sep 29 2016 | 8 years fee payment window open |
Mar 29 2017 | 6 months grace period start (w surcharge) |
Sep 29 2017 | patent expiry (for year 8) |
Sep 29 2019 | 2 years to revive unintentionally abandoned end. (for year 8) |
Sep 29 2020 | 12 years fee payment window open |
Mar 29 2021 | 6 months grace period start (w surcharge) |
Sep 29 2021 | patent expiry (for year 12) |
Sep 29 2023 | 2 years to revive unintentionally abandoned end. (for year 12) |