A methodology may be provided that automatically annotates web conference application sharing (e.g., sharing scenes and/or slides) based on voice and/or web conference data. In one specific example, a methodology may be provided that threads the annotations and assigns authorship to the correct resources.
13. A computer-implemented web conference system in which at least one page of a presentation is shared remotely between at least a first user and a second user during a web conference, the system comprising:
a processor; and
a memory storing computer readable instructions that, when executed by the processor, implement:
an input element configured to receive spoken input from at least the first and second users;
a transcription element configured to: (a) transcribe the received spoken input from the first user into first user transcribed text; and (b) transcribe the received spoken input from the second user into second user transcribed text;
an analysis element configured to analyze the page to identify a location of at least one feature of the page;
a correlation element configured to: (a) correlate a plurality of first user text elements from the first user transcribed text to the feature of the page; and (b) correlate a plurality of second user text elements from the second user transcribed text to the feature of the page; and
an annotation element configured to: (a) annotate the page by displaying in the vicinity of each location of the feature the correlated first user text elements; and (b) annotate the page by displaying in the vicinity of each location of the feature the correlated second user text elements;
wherein the location is an X-Y coordinate location;
wherein the annotating associated with the first user text elements displays a plurality of annotations;
wherein the annotating associated with the second user text elements displays a plurality of annotations; and
wherein the annotations are displayed in a threaded manner for each of the first user and the second user in a synchronous order and such that each subsequent annotation is displayed at least partially overlapping a previous annotation.
1. A method implemented in a computer system for annotating during a web conference, wherein the web conference is carried out via a web conference application in which at least one page of a presentation is shared remotely between at least a first user and a second user during the web conference, the method comprising the steps of:
receiving, with the computer system, spoken input from at least the first and second users;
transcribing, with the computer system, the received spoken input from the first user into first user transcribed text;
transcribing, with the computer system, the received spoken input from the second user into second user transcribed text;
analyzing, with the computer system, the page to identify a location of at least one feature of the page;
correlating, with the computer system, a plurality of first user text elements from the first user transcribed text to the feature of the page;
correlating, with the computer system, a plurality of second user text elements from the second user transcribed text to the feature of the page;
annotating, with the computer system, the page by displaying in the vicinity of the location of the feature the correlated first user text elements; and
annotating, with the computer system, the page by displaying in the vicinity of the location of the feature the correlated second user text elements;
wherein the location is an X-Y coordinate location;
wherein the annotating associated with the first user text elements displays a plurality of annotations;
wherein the annotating associated with the second user text elements displays a plurality of annotations; and
wherein the annotations are displayed in a threaded manner for each of the first user and the second user in a synchronous order and such that each subsequent annotation is displayed at least partially overlapping a previous annotation.
9. A non-transitory computer readable storage medium, tangibly embodying a program of instructions executable by the computer for annotating during a web conference, wherein the web conference is carried out via a web conference application in which at least one page of a presentation is shared remotely between at least a first user and a second user during the web conference, the program of instructions, when executing, performing the following steps:
receiving, with the computer, spoken input from at least the first and second users;
transcribing, with the computer, the received spoken input from the first user into first user transcribed text;
transcribing, with the computer, the received spoken input from the second user into second user transcribed text;
analyzing, with the computer, the page to identify a location of at least one feature of the page;
correlating, with the computer, a plurality of first user text elements from the first user transcribed text to the feature of the page;
correlating, with the computer, a plurality of second user text elements from the second user transcribed text to the feature of the page;
annotating, with the computer, the page by displaying in the vicinity of the location of the feature the correlated first user text elements; and
annotating, with the computer, the page by displaying in the vicinity of the location of the feature the correlated second user text elements;
wherein the location is an X-Y coordinate location;
wherein the annotating associated with the first user text elements displays a plurality of annotations;
wherein the annotating associated with the second user text elements displays a plurality of annotations; and
wherein the annotations are displayed in a threaded manner for each of the first user and the second user in a synchronous order and such that each subsequent annotation is displayed at least partially overlapping a previous annotation.
3. The method of
the analyzing, with the computer system, the page to identify the location of at least one feature of the page comprises: (a) analyzing, with the computer system, the page to identify a first location of a first feature of the page; and (b) analyzing, with the computer system, the page to identify a second location of a second feature of the page;
the correlating, with the computer system, the plurality of first user text elements comprises correlating each of the plurality of first user text elements to one of the first and second features of the page;
the correlating, with the computer system, the plurality of second user text elements comprises correlating each of the plurality of second user text elements to one of the first and second features of the page;
the annotating, with the computer system, the page by displaying the correlated first user text elements comprises displaying in the vicinity of each location of each of the first and second features one of the correlated first user text elements; and
the annotating, with the computer system, the page by displaying the correlated second user text elements comprises displaying in the vicinity of each location of each of the first and second features one of the correlated second user text elements.
4. The method of
5. The method of
6. The method of
7. The method of
10. The computer readable storage medium of
11. The computer readable storage medium of
the analyzing, with the computer, the page to identify the location of at least one feature of the page comprises: (a) analyzing, with the computer, the page to identify a first location of a first feature of the page; and (b) analyzing, with the computer, the page to identify a second location of a second feature of the page;
the correlating, with the computer, the plurality of first user text elements comprises correlating each of the plurality of first user text elements to one of the first and second features of the page;
the correlating, with the computer, the plurality of second user text elements comprises correlating each of the plurality of second user text elements to one of the first and second features of the page;
the annotating, with the computer, the page by displaying the correlated first user text elements comprises displaying in the vicinity of each location of each of the first and second features one of the correlated first user text elements; and
the annotating, with the computer, the page by displaying the correlated second user text elements comprises displaying in the vicinity of each location of each of the first and second features one of the correlated second user text elements.
12. The computer readable storage medium of
14. The system of
15. The system of
the analysis element is configured to: (a) analyze the page to identify a first location of a first feature of the page; and (b) analyze the page to identify a second location of a second feature of the page;
the correlation element is configured to: (a) correlate each of the plurality of first user text elements to one of the first and second features of the page; and (b) correlate each of the plurality of second user text elements to one of the first and second features of the page; and
the annotation element is configured to: (a) annotate the page by displaying in the vicinity of each location of each of the first and second features one of the correlated first user text elements; and (b) annotate the page by displaying in the vicinity of each location of each of the first and second features one of the correlated second user text elements.
The present disclosure relates generally to the field of web conferencing.
In various examples, automated collaborative annotation of converged web conference objects may be implemented in the form of systems, methods and/or algorithms.
When collaborating with others, a web conference is often an ideal tool for sharing a screen and editing collaboratively. The primary presenter must typically coordinate the audio and visual components to collaborate effectively; these audio and visual components are often referred to as converged. With multiple participants, the primary presenter's task typically becomes unwieldy and requires a high level of coordination.
In various embodiments, methodologies may be provided that automatically annotate web conference application sharing (e.g., sharing scenes and/or slides) based on voice and/or web conference data.
In one specific example, methodologies may be provided that thread the annotations and assign authorship to the correct resources.
In one embodiment, a method implemented in a computer system for annotating during a web conference, wherein the web conference is carried out via a web conference application in which at least one page of a presentation is shared remotely between at least two people during the web conference is provided, the method comprising: receiving, with the computer system, spoken input from at least one of the two people; transcribing, with the computer system, the received spoken input into transcribed text; analyzing, with the computer system, the page to identify a location of at least one feature of the page; correlating, with the computer system, at least one text element from the transcribed text to the feature of the page; and annotating, with the computer system, the page by displaying in the vicinity of the location of the feature the correlated text element.
In another embodiment, a computer readable storage medium, tangibly embodying a program of instructions executable by the computer for annotating during a web conference, wherein the web conference is carried out via a web conference application in which at least one page of a presentation is shared remotely between at least two people during the web conference is provided, the program of instructions, when executing, performing the following steps: receiving, with the computer, spoken input from at least one of the two people; transcribing, with the computer, the received spoken input into transcribed text; analyzing, with the computer, the page to identify a location of at least one feature of the page; correlating, with the computer, at least one text element from the transcribed text to the feature of the page; and annotating, with the computer, the page by displaying in the vicinity of the location of the feature the correlated text element.
In another embodiment, a computer-implemented web conference system in which at least one page of a presentation is shared remotely between at least two people during a web conference is provided, the system comprising: an input element configured to receive spoken input from at least one of the two people; a transcription element configured to transcribe the spoken input into transcribed text; an analysis element configured to analyze the page to identify a location of at least one feature of the page; a correlation element configured to correlate at least one text element from the transcribed text to the feature of the page; and an annotation element configured to annotate the page by displaying in the vicinity of the location of the feature the correlated text element.
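The method, medium, and system embodiments above share one pipeline: receive spoken input, transcribe it, analyze the page for feature locations, correlate text elements to features, and annotate near each feature. The following is a minimal, non-limiting sketch of that pipeline; every function name and the page model are hypothetical assumptions introduced solely for illustration, not part of any embodiment or real API.

```python
# Hypothetical sketch of the annotate-during-conference pipeline described
# above. Transcription is stubbed (the "spoken input" is already text), and
# a page is modeled as a dict mapping feature names to X-Y coordinates.

def transcribe(spoken_input):
    # Stand-in for a speech-to-text service.
    return spoken_input

def analyze_page(page):
    # Identify each feature of the page and its X-Y coordinate location.
    return [(name, xy) for name, xy in page.items()]

def correlate(text, features):
    # Associate a text element with each feature it literally matches.
    matches = []
    for name, xy in features:
        if name.lower() in text.lower():
            matches.append((name, xy, text))
    return matches

def annotate(page_annotations, matches):
    # Display each correlated text element in the vicinity of its feature.
    for name, (x, y), text in matches:
        page_annotations.append({"near": (x, y), "feature": name, "text": text})
    return page_annotations

page = {"logo": (40, 25), "title": (10, 5)}
spoken = "The logo is not correct"
annotations = annotate([], correlate(transcribe(spoken), analyze_page(page)))
```

In this toy run, only the "logo" feature matches the transcribed comment, so a single annotation is placed at that feature's X-Y location.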
Various objects, features and advantages of the present invention will become apparent to one skilled in the art, in view of the following detailed description taken in combination with the attached drawings, in which:
In one example, one or more systems may provide for automated collaborative annotation of converged web conference objects. In another example, one or more methods may provide for automated collaborative annotation of converged web conference objects. In another example, one or more algorithms may provide for automated collaborative annotation of converged web conference objects.
In one specific example, methodologies may be provided that automatically annotate web conference application sharing (e.g., sharing scenes and/or slides) based on voice and/or web conference data.
In another specific example, methodologies may be provided that thread the annotations and assign authorship to the correct resources.
For the purposes of describing and claiming the present invention the term “web conference” is intended to refer to a remote sharing of information (e.g., audio and video) between two or more people (e.g., via use of web browsers).
For the purposes of describing and claiming the present invention the term “web conference objects” is intended to refer to features having associated annotations positioned on a canvas, screen or page which is presented.
For the purposes of describing and claiming the present invention the term “transcribing spoken input into text” is intended to refer to converting spoken voice into corresponding text.
For the purposes of describing and claiming the present invention the term “text element” is intended to refer to one or more words or word fragments in transcribed text.
For the purposes of describing and claiming the present invention the term “feature” (as used in the context of a feature of a page) is intended to refer to one or more identifiable characteristics (e.g., content, shape, object name and/or object description).
For the purposes of describing and claiming the present invention the term “correlate” (as used in the context of correlating a text element to a feature) is intended to refer to associating a text element to a feature as a result of a match between the text element and the feature. In one example, the match may be a literal exact match. In another example, the match may be based on an idea or a concept.
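The two kinds of match noted above can be illustrated with a short sketch. The predicate below and its synonym table are hypothetical stand-ins: the literal branch checks for an exact or substring match, while the concept branch uses a small lookup table in place of a real concept-matching service.

```python
# Illustrative sketch of the two match types described above: a literal
# match, and a concept-based match via a small hypothetical synonym table.

CONCEPTS = {"logo": {"logo", "emblem", "brand mark"}}

def correlates(text_element, feature_name):
    text = text_element.lower()
    name = feature_name.lower()
    if name in text:                          # literal match
        return True
    related = CONCEPTS.get(name, set())      # concept-based match
    return any(word in text for word in related)
```

For example, "I dislike that emblem" would correlate to a "logo" feature through the concept branch even though the word "logo" never appears.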
Referring now to
In one example, any steps may be carried out in the order recited or the steps may be carried out in another order.
Referring now to
Still referring to
Of note, while
In addition, in other examples, various entities (e.g., presenter(s) and/or attendee(s)) may be associated with the same organization or one or more different organizations.
Referring now to
Referring now to
Referring now to
Still referring to
In another example, the speaker associated with each annotation may be identified (e.g., indicating the speaker's name along with each annotation).
In another example, a plurality of pages of a presentation may be shared remotely between at least two people during a web conference. In this case, the following may be provided: (a) analyzing (e.g., with a computer system), a first one of the plurality of pages to identify a location of at least one feature of the first one of the plurality of pages; (b) correlating (e.g., with a computer system), at least a first text element (e.g., associated with transcribed text) to the feature of the first one of the plurality of pages; (c) annotating (e.g., with a computer system), the first one of the plurality of pages by displaying in the vicinity of the location of the first feature the first correlated text element; (d) analyzing (e.g., with a computer system), a second one of the plurality of pages to identify a location of at least one feature of the second one of the plurality of pages; (e) correlating (e.g., with a computer system), at least a second text element (e.g., associated with transcribed text) to the feature of the second one of the plurality of pages; and (f) annotating (e.g., with a computer system), the second one of the plurality of pages by displaying in the vicinity of the location of the feature the second correlated text element.
In another example, a page of a presentation may be shared remotely between a presenter and a plurality of attendees during a web conference. In this case, the following may be provided: (a) receiving (e.g., with a computer system) spoken input from each of the plurality of attendees and transcribing the spoken input from each of the plurality of attendees into respective transcribed text; (b) correlating (e.g., with a computer system), at least one text element from the transcribed text to the feature of the page; and (c) annotating (e.g., with a computer system), the page by displaying in the vicinity of the location of the feature the correlated text element.
Referring now to
Reference will now be made to operation according to one embodiment. In this example, operation may proceed as follows: (a) a Presenter begins a web conference; (b) a User1 and a User2 join the web conference; (c) the Presenter, User1 and User2 join an audio discussion; (d) the Presenter displays Slide1 in the web conference; (e) the User1 starts a discussion, for example, by commenting verbally to the effect that “The logo is not correct”; (f) the User2 adds a comment (e.g., by verbally agreeing or verbally disagreeing with User1); and (g) the Presenter sees an overlay of a threaded annotation (e.g., containing the text of User1 and User2 verbal comments) over the logo on the Slide1.
In one example, the embodiment described above may be implemented as follows: (a) a Presenter starts a web conference using, e.g., IBM SmartCloud Meeting or any other appropriate mechanism; (b) the Presenter shares a screen or slide; (c) the Presenter joins an audio conference bridge; (d) the Presenter links the audio conference to the web conference; (e) an Attendee (e.g., User1, User2) navigates to the web conference page; (f) each Attendee enters profile information (e.g., name, phone, email); (g) each Attendee joins the conference (audio and web). In one specific example, a system such as Premiere Conference may be used (whereby an audio conference system can be managed via a web interface and whereby a user is able to tell who is speaking).
In one example, the embodiment described above may be implemented by utilizing one or more of the following: (a) an Annotation Service (e.g., a computer implemented software service); (b) a Scene Analysis Service (e.g., a computer implemented software service); and/or (c) a Slide Analysis Service (e.g., a computer implemented software service).
In one specific example, an Annotation Service may operate as follows: (a) a Presenter activates the Annotation Service; (b) the Presenter changes a scene/slide; (c) the system detects the scene/slide change—if a slide, the system submits the slide to a slide analysis service (discussed in more detail below); if a scene, the system captures the image, and submits the captured image to a scene analysis service (discussed in more detail below); (d) the system returns one or more annotation markers (e.g., X-Y coordinates of one or more web conference objects and related reference data, such as the topic of each web conference object); (e) the Attendee(s) and Presenter(s) speak; (f) the system transcribes the voices to text; (g) the system analyzes the text of the voices and detects one or more coincidences of reference data and text; (h) the system marks the appropriate X-Y coordinate(s); (i) the system puts text into the appropriate annotation marker; (j) the system continues to thread Attendee/Presenter speech-to-text. In one example, while there is no new coincidence of reference data and speech-to-text, the system may continue to thread. In another example, if there is a scene change or slide change, the system begins a new analysis. In yet another example, the system may asynchronously load the annotations as the analysis completes.
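Steps (d) through (j) above can be sketched as follows. In this hypothetical illustration, an annotation marker carries the X-Y coordinates and reference data (here, a topic) returned by an analysis service; each piece of transcribed speech that coincides with a marker's reference data is threaded onto that marker in order. The class and method names are assumptions for illustration only.

```python
# Hypothetical sketch of Annotation Service steps (d)-(j): markers hold X-Y
# coordinates plus reference data, and coinciding speech-to-text is threaded.

class AnnotationMarker:
    def __init__(self, xy, topic):
        self.xy = xy          # X-Y coordinate of the web conference object
        self.topic = topic    # reference data from the analysis service
        self.thread = []      # threaded speech-to-text annotations, in order

    def maybe_thread(self, speaker, text):
        # Detect a coincidence of reference data and transcribed text.
        if self.topic.lower() in text.lower():
            self.thread.append((speaker, text))
            return True
        return False

markers = [AnnotationMarker((40, 25), "logo"), AnnotationMarker((10, 5), "title")]

for speaker, text in [("User1", "The logo is not correct"),
                      ("User2", "I agree, the logo colors are off"),
                      ("User1", "The title looks fine")]:
    for marker in markers:
        marker.maybe_thread(speaker, text)
```

After this run, the "logo" marker holds a two-entry thread attributed to User1 and User2, while the "title" marker holds a single entry, mirroring the threaded, authored annotations described above.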
In another specific example, a Scene Analysis Service may operate as follows: (a) the system analyzes an image for one or more recognized objects; (b) the system determines where the object(s) are on the page, the system also determines the name of the object(s); (c) the system returns one or more annotation markers (e.g., X-Y coordinates of one or more web conference objects and related reference data, such as the name of each web conference object); (d) the system returns to the annotation service.
In another specific example, a Slide Analysis Service may operate as follows: (a) the system analyzes the markup of the slide (e.g., ODF, HTML, PDF); (b) the system checks the shapes on the slide; (c) the system determines the X-Y Coordinates of each shape and text of each shape; (d) the system returns to the annotation service.
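The Slide Analysis Service steps above can be sketched with a toy markup format. The XML below stands in for real slide markup (ODF, HTML, PDF); the element names, attributes, and coordinates are invented for illustration, and the function returns the X-Y coordinates and text of each shape as annotation-marker data.

```python
# Illustrative sketch of the Slide Analysis Service: parse slide markup,
# check the shapes, and return each shape's X-Y coordinates and text.
import xml.etree.ElementTree as ET

SLIDE_MARKUP = """
<slide>
  <shape x="40" y="25">Acme Logo</shape>
  <shape x="10" y="5">Quarterly Results</shape>
</slide>
"""

def analyze_slide(markup):
    markers = []
    for shape in ET.fromstring(markup).iter("shape"):
        xy = (int(shape.get("x")), int(shape.get("y")))
        markers.append({"xy": xy, "text": shape.text})
    return markers

slide_markers = analyze_slide(SLIDE_MARKUP)
```

The returned list would then be handed back to the annotation service as step (d) of its flow, one marker per shape.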
In another example, a system (or corresponding method or algorithm) may provide for presenter/attendee coordination of actions.
In another example, a system (or corresponding method or algorithm) may provide for effective collaboration between/among a plurality of generated annotations.
In another example, a system (or corresponding method or algorithm) may provide for annotations related to words, shapes and/or actions.
In another example, a system (or corresponding method or algorithm) may provide for real time annotations.
In another example, a system (or corresponding method or algorithm) may provide for relevant context in a live threading environment.
In another example, a system (or corresponding method or algorithm) may provide for threading and placement of annotated threads for a group of people participating in a presentation or discussion associated with the presentation.
In another example, a system (or corresponding method or algorithm) may be provided for aiding collaborative screen sharing, comprising: a web conference application available to multiple participants sharing a common collaborative screen; one or more slides and/or screens having voice and/or web data associated therewith and comprising the web conference application; at least a portion of a respective one or more of the slides/screens trackable for discussion thread(s) associated therewith; and annotations comprised of the associated discussion thread(s) provided to the participants in the shared collaborative screen.
In another example, a system (or corresponding method or algorithm) may provide for automatically annotating a multimedia presentation based on comment(s) by user(s).
In another example, a system (or corresponding method or algorithm) may provide for automatically identifying and annotating content in a multimedia presentation, based on user comments converted from voice to text.
In another example, a system (or corresponding method or algorithm) may store annotated updates for replay, or further review.
In another example, the user may have the option to make changes to the presentation material based on the annotation in real time, or make changes later during playback.
In another example, a system (or corresponding method or algorithm) may scope annotations to subconferences or groups of users. In one specific example, scopes may be used to hide annotations from others. In another specific example, scopes may be used to automatically limit annotations. In another specific example, scopes may be automatically expanded or limited to a dynamic set of users.
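Scoping as described above can be sketched as a filter applied at display time. In this hypothetical illustration, each annotation carries a scope (a set of user identifiers), an empty scope means conference-wide visibility, and the field names are assumptions for illustration only.

```python
# Hypothetical sketch of annotation scoping: annotations scoped to a
# subconference or group are hidden from viewers outside that scope.

def visible_annotations(annotations, viewer):
    # An empty scope means the annotation is visible to everyone.
    return [a for a in annotations
            if not a["scope"] or viewer in a["scope"]]

annotations = [
    {"text": "Fix the logo", "scope": set()},                # conference-wide
    {"text": "Side note for the design subgroup",
     "scope": {"user1", "user3"}},                           # subconference
]
```

Expanding or limiting a scope to a dynamic set of users would then amount to mutating the `scope` set before the filter runs.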
In another example, a system (or corresponding method or algorithm) may enable drag-and-drop for generated annotations.
In another example, a system (or corresponding method or algorithm) may enable modifications of generated annotations.
In another example, a system (or corresponding method or algorithm) may use modifications to further enhance the system (or corresponding method or algorithm).
In another example, a system (or corresponding method or algorithm) may scope enhancements to users.
In another example, a system (or corresponding method or algorithm) may enable type-ahead (e.g., for frequent authors of an annotation).
In another example, a system (or corresponding method or algorithm) may further define keywords per slide or scene or overlay at the change of scene or slide.
In another example, a system (or corresponding method or algorithm) may prompt a user to redact annotations.
In another example, a system (or corresponding method or algorithm) may define a charge back model for the annotation feature.
In another example, a system (or corresponding method or algorithm) may define a policy to enable the annotation feature for a select set of users.
In another example, a system (or corresponding method or algorithm) may display annotations beyond the time of scene or slide change in order for users to appropriately respond to an annotation in text or verbal form.
In another example, a system (or corresponding method or algorithm) may generate a graphical report which summarizes the action items generated through the annotations at the end of the meeting and send it to all the meeting attendee(s) and/or presenter(s). In one specific example, the report can contain information such as: User 1 and User 2 will meet regarding subject A; User 3 needs to follow up with User 4 via email regarding subject A(1).
In another example, a system (or corresponding method or algorithm) may further be used to generate enhanced reporting, thereby providing a speaker, presenter, or company with information on the items of greatest importance in the discussion/annotations. Reports may be given for individual objects and shapes based on the location of the annotations.
In another example, a system (or corresponding method or algorithm) may enable selective enablement of the web conference annotation.
In another example, a system (or corresponding method or algorithm) may autogenerate tags, whereby the system (or corresponding method or algorithm) or users may correlate annotations across slides and scenes.
In another example, once a system (or corresponding method or algorithm) analyzes the annotation and associates it with the object on the screen, the system (or corresponding method or algorithm) may display the object to be edited in a lightbox so the user(s) can focus on editing that object directly.
In another example, a system (or corresponding method or algorithm) is not limited to displaying the annotations in a particular way. The system (or corresponding method or algorithm) may display these annotations as cards, which can pop up and be iterated through.
In another example, a system (or corresponding method or algorithm) may generate threads with topics, whether or not in synchronous order.
In another example, a system (or corresponding method or algorithm) may add recommendation and/or voting to the generated topics, objects and/or threads.
As described herein, various embodiments may provide one or more of the following advantages: (a) improved user experience; (b) reduced coordination of user data and voice input; and/or (c) automated threading of discussions.
In other embodiments one or more of the following may be used to implement various aspects: (a) the IBM AbilityLab Voice Transcription Manager Research Project (http://www-03.ibm.com/able/accessibility_research_projects/vtm.html); (b) IBM SmartCloud Meeting/Web Dialogs/Cisco WebEx/Citrix Meetings; (c) Optical Character Recognition (OCR); (d) Image/Object Recognition (e.g., http://bing.com); and/or (e) Dragon NaturallySpeaking.
In other examples, any steps described herein may be carried out in any appropriate desired order.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any programming language or any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like or a procedural programming language, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention may be described herein with reference to flowchart illustrations and/or block diagrams of methods, systems and/or computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus or other devices provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It is noted that the foregoing has outlined some of the objects and embodiments of the present invention. This invention may be used for many applications. Thus, although the description is made for particular arrangements and methods, the intent and concept of the invention are suitable and applicable to other arrangements and applications. It will be clear to those skilled in the art that modifications to the disclosed embodiments can be effected without departing from the spirit and scope of the invention. The described embodiments should be construed as merely illustrative of some of the features and applications of the invention. Other beneficial results can be realized by applying the disclosed invention in a different manner or modifying the invention in ways known to those familiar with the art. In addition, all of the examples disclosed herein are intended to be illustrative, and not restrictive.
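The claimed pipeline (claim 13) transcribes each user's speech, correlates the resulting text elements to an X-Y feature location on the shared page, and displays the annotations threaded per user in synchronous order, each subsequent annotation partially overlapping the previous one. The following Python sketch is purely illustrative, not the patented implementation: the class names, the fixed pixel `offset`, and the diagonal stacking are assumptions of this sketch (the claim does not specify an overlap geometry).

```python
# Illustrative sketch of the claim 13 annotation flow (hypothetical design;
# names and overlap geometry are this sketch's assumptions).
from dataclasses import dataclass, field

@dataclass
class Annotation:
    author: str  # which user's transcribed speech produced this text element
    text: str    # the correlated transcribed text element
    x: int       # X coordinate where the annotation is drawn
    y: int       # Y coordinate where the annotation is drawn

@dataclass
class Feature:
    x: int  # X-Y coordinate location of the identified page feature
    y: int
    thread: list = field(default_factory=list)  # annotations in arrival order

    def annotate(self, author: str, text: str, offset: int = 12) -> Annotation:
        """Place the next annotation in the vicinity of the feature, shifted
        so it partially overlaps the previous annotation in the thread."""
        n = len(self.thread)
        ann = Annotation(author, text, self.x + n * offset, self.y + n * offset)
        self.thread.append(ann)
        return ann

# Usage: text elements from two users, both correlated to the same feature.
feature = Feature(x=100, y=200)
a1 = feature.annotate("user1", "this chart shows Q3 revenue")
a2 = feature.annotate("user2", "note the dip in September")
# a2 is offset by 12 px in x and y, partially overlapping a1.
```

The single per-feature `thread` list preserves the synchronous (arrival) order across both users, while the `author` field keeps each annotation attributable to the correct speaker.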
Lu, Fang, Loredo, Robert E., Bastide, Paul R., Broomhall, Matthew E., LeBlanc, Ralph E.
Patent | Priority | Assignee | Title
---|---|---|---
10630738 | Sep 28 2018 | RingCentral, Inc | Method and system for sharing annotated conferencing content among conference participants
11201906 | Aug 29 2017 | International Business Machines Corporation | Providing instructions during remote viewing of a user interface
11206300 | Aug 29 2017 | International Business Machines Corporation | Providing instructions during remote viewing of a user interface
Patent | Priority | Assignee | Title
---|---|---|---
4339745 | May 14 1980 | General Electric Company | Optical character recognition
6100882 | Jan 19 1994 | International Business Machines Corporation | Textual recording of contributions to audio conference using speech recognition
6237025 | Oct 01 1993 | Pragmatus AV LLC | Multimedia collaboration system
6546405 | Oct 23 1997 | Microsoft Technology Licensing, LLC | Annotating temporally-dimensioned multimedia content
6917965 | Sep 15 1998 | Microsoft Technology Licensing, LLC | Facilitating annotation creation and notification via electronic mail
7028253 | Oct 10 2000 | Monument Peak Ventures, LLC | Agent for integrated annotation and retrieval of images
7269787 | Apr 28 2003 | International Business Machines Corporation | Multi-document context aware annotation system
7296218 | Feb 08 2006 | NORTHWEST EDUCATIONAL SOFTWARE, INC | Instant note capture/presentation apparatus, system and method
7305436 | May 17 2002 | SAP SE | User collaboration through discussion forums
7412383 | Apr 04 2003 | Microsoft Technology Licensing, LLC | Reducing time for annotating speech data to develop a dialog application
7562288 | Feb 08 2006 | NORTHWEST EDUCATIONAL SOFTWARE, INC | System for concurrent display and textual annotation of prepared materials by voice-to-text converted input
7733366 | Jul 01 2002 | ZHIGU HOLDINGS LIMITED | Computer network-based, interactive, multimedia learning system and process
8315430 | Nov 07 2007 | Google Technology Holdings LLC | Object recognition and database population for video indexing
8495496 | Mar 02 2011 | International Business Machines Corporation | Computer method and system automatically providing context to a participant's question in a web conference
20020087595 | | |
20020099552 | | |
20040205547 | | |
20060031755 | | |
20060064342 | | |
20060265665 | | |
20070100937 | | |
20080008458 | | |
20080098295 | | |
20090089055 | | |
20090210778 | | |
20090306981 | | |
20100306018 | | |
20110307550 | | |
20110307805 | | |
20110313754 | | |
20120005588 | | |
20120005599 | | |
20120039505 | | |
20120260195 | | |
20130091205 | | |
20130091440 | | |
Executed on | Assignor | Assignee | Conveyance | Reel/Frame
---|---|---|---|---
May 07 2012 | LOREDO, ROBERT E. | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 028219/0657
May 07 2012 | BROOMHALL, MATTHEW E. | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 028219/0657
May 07 2012 | LU, FANG | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 028219/0657
May 08 2012 | BASTIDE, PAUL R. | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 028219/0657
May 08 2012 | LEBLANC, RALPH E., JR. | International Business Machines Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 028219/0657
May 16 2012 | International Business Machines Corporation | (assignment on the face of the patent) | |
Date | Maintenance Fee Events
---|---
Aug 19 2019 | REM: Maintenance Fee Reminder Mailed.
Feb 03 2020 | EXP: Patent Expired for Failure to Pay Maintenance Fees.
Date | Maintenance Schedule
---|---
Dec 29 2018 | 4 years fee payment window open
Jun 29 2019 | 6 months grace period start (w surcharge)
Dec 29 2019 | patent expiry (for year 4)
Dec 29 2021 | 2 years to revive unintentionally abandoned end. (for year 4)
Dec 29 2022 | 8 years fee payment window open
Jun 29 2023 | 6 months grace period start (w surcharge)
Dec 29 2023 | patent expiry (for year 8)
Dec 29 2025 | 2 years to revive unintentionally abandoned end. (for year 8)
Dec 29 2026 | 12 years fee payment window open
Jun 29 2027 | 6 months grace period start (w surcharge)
Dec 29 2027 | patent expiry (for year 12)
Dec 29 2029 | 2 years to revive unintentionally abandoned end. (for year 12)