A system and methodology for collaboration. The illustrated system comprises a plurality of computing appliances, each having a user input apparatus through which its respective user provides annotation data as input, permitting the respective user to provide annotations which appear within a display presentation as an image representative of the annotations. A first subset of the plurality of computing appliances, comprising at least two of the computing appliances, forms members of a group, wherein separate selective communication of the annotation data is provided among the members of the group. There is selective display of the annotation data to selected ones of the members of the group. The annotation data is selectively coupled for distribution to other ones of the plurality of computing appliances in the group for selective viewing at those other computing appliances. The system further comprises selection logic permitting at least one of the users to selectively choose which of the other users within the group are selected ones whose corresponding annotations are viewed as part of the display presentation for the at least one of the users. The selective display is visible only at the computing appliances of the selected ones of the members of the group.
14. A method for use by a plurality of users, each of the users having a display apparatus and an input apparatus, the method comprising:
associating a plurality of the users into at least one group, each said group comprised of at least two of the users;
providing a display presentation to all of the users in all of the groups, of a same base image display;
permitting concurrent input of annotations by a plurality of the users, while the users are viewing the same base image display;
structuring storage areas in a non-transitory memory as a plurality of addressable areas of memory;
associating each said addressable area of memory with one said user, for each of a plurality of the users;
storing the annotation data for a respective said user in the addressable area of memory associated with said user, for each of the users;
for each said group, selecting the addressable areas of memory associated with the users in the group as a selected set of areas associated with the group;
for each said group, generating a modified display presentation shown to at least two of the users in said group, responsive to the selected set of areas associated with said group.
1. A system for use by a plurality of users, each of the users having a display apparatus and an input apparatus, the system comprising:
grouping logic associating a plurality of the users into at least one group, each said group comprised of at least two of the users;
display presentation logic providing a display presentation to all of the users in all of the groups, of a same base image display;
input control logic permitting concurrent input of annotations by a plurality of the users, each said input generating corresponding annotation data, wherein said input is made while the users are viewing the same base image display;
non-transitory memory structured as storage comprised of a plurality of addressable areas of memory;
mapping logic associating each said addressable area of memory with one said user, for each of a plurality of the users;
storage logic storing the annotation data for a respective said user in the addressable area of memory associated with said user, for each of the users, responsive to the mapping logic and the input control logic;
group mapping logic, providing for each said group, selection of the addressable areas of memory associated with the users in said group to comprise a selected set of areas associated with said group;
group display presentation logic, providing for each said group, generation of a modified display presentation shown to the users in said group, responsive to the selected set of areas associated with said group.
10. A system for use by a first user to collaborate with at least one other user, each of the users having an input apparatus and a display apparatus, the system comprising:
display presentation logic providing a display presentation of a same base image display to each of the users;
input control logic permitting concurrent input via the input apparatus of user annotations by each of the users, each said input generating corresponding annotation data, wherein said input is made while the users are viewing the same base image display;
wherein said user annotations have an associated image representative of the annotations appearing at a user-defined position relative to the display of the base image;
non-transitory memory structured as storage comprised of a plurality of addressable areas of memory;
mapping logic associating each said addressable area of memory with one said user, for each of the users;
communications logic providing selective communication of the annotation data of the first user to the at least one other user and providing communication of the annotation data of the at least one other user to the first user;
storage logic storing the annotation data for each said user, in the addressable area of memory associated with the user, responsive to the mapping logic, the communications logic and the input control logic;
selection logic, selecting at least two of the addressable areas of memory as a selected set of areas of memory; and
display presentation logic, generating a combined presentation of the base image combined with the associated images for the user annotations responsive to the annotation data from the selected set of areas of memory.
2. The system as in
communications logic providing selective communication of the annotation data associated with the at least two of the users among a plurality of the users of at least two said groups;
wherein the group display presentation logic provides a display of the modified display presentation on the display apparatus of at least one of the users of the group.
3. The system as in
inter-group logic associatively mapping at least one of the users of each of the groups to also be a user of an inter-group group; and,
inter-group display presentation logic generating a modified display presentation shown to at least two of the users in said inter-group group, responsive to the selected set of areas associated with the users in said inter-group group.
4. The system as in
wherein the addressable areas of the memory are comprised of a plurality of mapped data layers of storage, each of the data layers associatively mapped to store the annotation data input by a respective one of the users of the group;
wherein each of the data layers stores the annotation data as input by the associated users responsive to the mapping logic;
wherein the annotation data that is stored in at least two of the data layers is combined with the base data and utilized by the display generation logic to generate a local said display presentation at the display apparatus for at least two of the users of the group.
5. The system as in
display control logic enabling at least one of the users to determine which of the data layers are utilized to generate the modified display presentation that is provided to at least one of the users.
6. The system as in
wherein at least one said member for each of the groups also provides communication of at least some of the annotations for said group to at least one said member of at least one other of the groups,
wherein the group display presentation logic generates a separate said modified display presentation for viewing on the display apparatus of said at least one said member of said at least one other of said groups, responsive to the communication.
7. The system as in
wherein at least one of the plurality of users in a first said group controls selection of selected ones of the annotations to select associated said annotation data of the users of said first said group, to be used in generating the display presentation for at least two of the users in the first said group.
8. The system as in
user input control logic controlling selection of at least two of the users as selected users for which said input apparatus is permitted to provide the input of the annotations.
9. The system as in
wherein the display of the annotations appears in the modified display presentation at a position as input by the user relative to the base image, said position comprised of at least one of: a position relative to a page in a document comprised of multiple pages; a position relative to a frame in a movie comprised of multiple frames; a position relative to an image in a gallery of multiple images; a position relative to a top portion of a page in a document; a position relative to a display from an interactive game; and a position relative to a bottom portion of a page in a document.
11. The system as in
wherein the selection logic selects selected annotations from the addressable areas of memory associated with the annotations from users from within a plurality of the groups;
wherein the group display presentation logic generates a separate said modified display presentation for each said group, responsive to the selected annotations.
12. The system as in
grouping logic associating a plurality of the users into a plurality of groups, each said group comprised of at least two of the users;
display presentation logic providing a display presentation to all of the users in all of the groups, of a same base image display;
wherein the mapping logic further provides association of each said addressable area of memory with one said user, for each of a plurality of the users for a plurality of the groups;
the system further comprising:
group mapping logic, providing for each said group, selection of the addressable areas of memory associated with the users in said group to comprise a selected set of areas associated with said group;
group display presentation logic, providing for each said group, generation of a modified display presentation shown to the users in said group, responsive to the selected set of areas associated with said group.
13. The system as in
wherein the base image display is comprised of a display presentation from an interactive game.
15. The method as in
generating multiple different display presentations responsive to utilizing a plurality of different combinations of the annotation data that are combined with the base image, to generate the multiple separate display presentations.
16. The method as in
wherein the annotations have an associated display image and an associated location determined responsive to the user;
wherein each said associated display image appears at the associated location relative to the display of the base image in the modified display presentation.
17. The method as in
wherein the plurality of addressable areas of memory are structured as a plurality of data layers;
wherein each said addressable area of memory is comprised of one said data layer from the plurality of data layers;
wherein said associating each said addressable area of memory is comprised of associating each said data layer with one said user, for each of a plurality of the users.
18. A method as in
providing separate local non-transitory memory,
associating each said separate local memory with one said user for at least two of the users, said local memory comprising a plurality of local data layers,
wherein in each set of the local memory, each one of the users is associated with at least one said local data layer providing for storage of the annotation data for the annotations as input at said one of the users, and
wherein each said local memory stores the annotation data for its said associated user in a first said data layer, and stores the annotation data associated with at least one other said user in at least one other said local data layer, each said other local data layer associated with the annotations as input by said other user at said other user's computing appliance,
the method further comprising:
displaying a local presentation on the display apparatus to at least one said user responsive to the local data layers in the local memory that is associated with said user.
19. The method as in
wherein the modified display presentation is comprised of the display of the base image combined with a display of the annotations as made by a plurality of the users from a plurality of the groups.
20. The method as in
wherein for each of the groups, the modified display presentation is provided for viewing by the users in the group, and each said group is provided with a different display presentation that is separately generated;
wherein each said different display presentation is comprised of the display of the base image combined with a display of the annotations as made by a plurality of the users from the group.
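The layered storage model recited in the claims above can be sketched in code. The following is a minimal illustration, assuming simple in-memory structures; the class, method, and field names are hypothetical and not part of the claims.

```python
# Minimal sketch of the claimed model: each user's annotations live in a
# separate addressable area ("layer"), and a group's display presentation
# is generated by combining the base image with the layers of selected
# members of that group. All names here are illustrative assumptions.

class CollaborationStore:
    def __init__(self, base_image):
        self.base_image = base_image
        self.layers = {}   # user_id -> list of annotation records
        self.groups = {}   # group_id -> set of user_ids

    def add_user_to_group(self, group_id, user_id):
        # Grouping logic: a group comprises at least two users in practice.
        self.groups.setdefault(group_id, set()).add(user_id)
        self.layers.setdefault(user_id, [])

    def store_annotation(self, user_id, annotation):
        # Storage logic: each user's input goes only to that user's layer.
        self.layers[user_id].append(annotation)

    def group_presentation(self, group_id, selected_users=None):
        # Group display logic: combine the base image with the selected
        # members' layers; selection defaults to all members of the group.
        members = self.groups[group_id]
        chosen = members if selected_users is None else members & set(selected_users)
        overlay = [a for uid in sorted(chosen) for a in self.layers[uid]]
        return {"base": self.base_image, "annotations": overlay}
```

Passing `selected_users` corresponds to the selection logic of the claims: a user chooses which members' layers contribute to the modified display presentation.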
The present invention relates generally to the use of computer systems and applications as tools in working with documents, and more particularly to a family of systems, methods and apparatus for facilitating and managing concurrent viewing of and collaboration on a document (or documents); for providing navigation, editing of images, user interfaces, and data storage and management infrastructures and mechanisms, such that the present invention provides for multiple-user real-time collaboration; and to apparatus, systems and methods by which multiple individual users each separately and concurrently modify, as a group, a core graphical image, and selectively choose and display chosen ones of the users' modifications along with the core graphical image.
There are computer programs that permit a single user to type text and/or draw via a computer keyboard and/or mouse or other pointing device. An example is a Word Processor (such as Word by Microsoft Corporation, Redmond, Wash., as well as other programs such as WordPerfect, OpenOffice, etc.).
These Word Processor programs often permit tracking of changes made by a user to a document. Thus, a first version of a base document from a first user can be saved as a new and separate document file (a base version of the base document), which file is then shared with a second user (or multiple other users). That second user then creates and saves a new and separate document file (a new second version of the base document), making edits to the base document with tracking turned on, so that the new second version is a red-lined markup version of the first version of the base document. Then, a next user (such as either the first user or a third user) can receive and open that second version of the base document, and that next user creates and saves a new and separate document file (a new third version of the base document), making edits to the second version with tracking turned on, so that the new third version is a red-lined markup version of the second version of the base document. This process can keep repeating over and over, and so on and so on, creating more and more new and separate document files (a new next version of the base document), wherein each next user makes edits to the previous version of the base document with tracking turned on, so that each new next version is a red-lined markup version of the previous version. Then, when desired, at some point in this process, the latest of the red-lined document versions can be “accepted” and saved as a new and separate document file, which is a clean version of that latest red-lined version with no red-lining showing, only the final result of the deletions and additions of the totality of the red-lining in the accepted version.
During this process, there are multiple new and separate document files created, one new and separate document file for each user's turn, holding the set of separate edits made by that user during that turn. And, this process inherently causes delays, because there is a need to wait for each user's turn to be completed before a next user can begin his/her turn of making edits and inputs.
Furthermore, there is also the case where the base version of the base document goes to multiple other users. Then, each one of the multiple other users individually and separately creates his/her own new and separate document file (creating multiple ones of a second version of the base document), wherein each one of the multiple other users makes his/her own set of separate edits to the base version of the base document (making the edits with tracking turned on) so that he/she creates a different one of multiple ones of a second version of the base document, each one of which is a red-lined markup version of the first version of the base document. At that point, there are real problems, because now each and every one of the multiple users needs to look at each different one of multiple ones of a second version of the base document for each of the other ones of the multiple users, while also looking at their own separate one of the second version of the base document, in order to understand the inputs made by each of the multiple users. This is a slow, inefficient and frustrating manner to work. And it leads to a loss of momentum and to confusion. This process is again a step at a time, back and forth, seriatim, and not concurrent.
An alternative to this process with a Word Processor and tracking, and sending new and separate document file versions of a base document back and forth, is to work online as a group with a shared file that keeps being updated with changes as they occur, but still with only one user in control at a time (making his/her edits/inputs one at a time, in a seriatim usage manner). [An example of such a tool with one user in control at a time, and seriatim use, is “GoogleDocs”, available at docs.google.com, or at www.google.com, owned by Google, Inc., of Mountain View, Calif.]
Initially, a first version of a base document from a first user is saved as a new and separate document file (a base version of the base document), which file is then centrally stored on a Google computer server, and which file is then shared via that server and an Internet coupling with multiple other users. Any one of the other users can select to take control and make an edit to the shared document. As the edit is made, the shared file on the server is updated to create and save a new and separate document file (a new second version of the base document) that contains those edits to the base document. Then, a next user (such as either the first user or a third user) takes control, and he/she can edit that shared server document file (the second version of the base document), and when those edits are made, the shared file on the server is again updated to create and save another new and separate document file (a new third version of the base document) that contains those edits to the base document. And this process can keep repeating over and over, and so on and so on, creating more and more new and separate document files (a new next version of the base document).
During this process, there are multiple new and separate document files created and saved and stored on the server, one new and separate document file for each turn taken by each user. And, this process inherently causes delays because there is a need to wait for each turn of a user to be completed before a next user can begin his/her turn of making edits and inputs. And, it leads to a loss of momentum and to confusion. This process is again a step at a time, back and forth, seriatim, and not concurrent.
There are drawing programs and illustration programs that are single user with a single document on a single computer, which permit multiple layers to be utilized to create an image. However, these are for single-user use, and do not work for multiple-user collaboration. [An example of such a tool with one user in control at a time, and seriatim use, is “Photoshop”, available from Adobe at www.adobe.com (Adobe Systems Incorporated, of San Jose, Calif.).] This is a slow, inefficient and frustrating manner to work. This process is again seriatim, and not concurrent.
There are also programs that permit communications via email permitting sending and receiving of communications (text with or without attached files) to be sent back and forth between users. [An example of such a tool with one user in control at a time, and seriatim use, is “Thunderbird”, available from www.Mozilla.org.] This is a slow, inefficient and frustrating manner to work. And, it leads to a loss of momentum and to confusion. This process is again a step at a time, back and forth, seriatim, and not concurrent.
There are also programs that permit communications via instant messaging to permit multiple users to text message back and forth. These communicate text messages back and forth, but do not permit collaborative work upon a common base document text or image. This is a slow, inefficient and frustrating manner to work. And, it leads to a loss of momentum and to confusion. This process is again a step at a time, back and forth, seriatim, and not concurrent.
There are also programs that permit conferencing communications via voice (using a microphone and speaker) or via video (using a video or still camera) among multiple users. These permit voice communication or camera-based video communication in a very limited manner, but do not permit concurrent collaborative edits and inputs in real-time to be performed upon a common base document (text or image).
There are also programs that permit communications wherein there is conferencing where one specific user at a time is in control (often referred to as that user having the token), wherein that one specific user can show what is on his/her computer screen, to be viewed by other viewing users who can only passively watch based upon that one specific user's display. [An example of such a web-conferencing tool with one user in control at a time, and seriatim use, is “WEBEX” at www.webex.com, owned by Cisco Systems, Inc., of San Jose, Calif.] At some point, that one specific user can decide to give up control, and can decide to select a document file stored on that one specific user's computer, or that one specific user can choose to save a first version of a base document from that one specific user's computer, and that first version of a base document is then shared with one or multiple other users.
Then, the control (the token) is taken over by another user. That other user can then show what is on his/her computer screen to be viewed by other viewing users who can only passively watch based upon that specific another user's display. That display can be something independent of what the first user was showing, or can be a display of the first version of a base document from that one specific user's computer.
At some point, that specific another user can decide to give up control, and can decide to select a document file stored on that specific another user's computer, or can choose to save another version of the base document (which is an edited version of the first version of the base document from that one specific user's computer), and that another version of the base document can then be shared with one or multiple other users. And, this process can keep repeating over and over, and so on and so on, creating more and more new and separate document files (a new another version of the base document), wherein a next another user makes edits to the previous version of the base document, so that it creates that new next another version of the base document.
This alternative is a slow, inefficient and frustrating manner to work. And, it leads to a loss of momentum and to confusion. This process is again a step at a time, back and forth, seriatim, and not concurrent.
This invention provides for an efficient, real-time document collaboration system that provides a unique ability to separate the input of users and provide customized and dynamic presentations of the document with edits to each user.
Document collaboration (“DC”) is a powerful paradigm. Document collaboration provides a unique vehicle to concurrently work with others: (1) all users simultaneously view a same base document image, (2) any or all users can annotate at the same time, and (3) all users can see the real-time annotations of all other users that are in a same group. Its embodiment is a powerful tool to its users. It provides a new user interface paradigm, like FaceBook. Document collaboration is an enabling medium upon which can be built a set of usage practices and protocols to allow the medium to be adapted to the operations of a target use.
In accordance with another aspect of the present invention, the concurrent use of document collaboration is used in conjunction with and concurrently with conferencing (such as audio, video, screen sharing, application sharing, etc.).
In accordance with another aspect of the present invention, “document collaboration” is combined in various permutations with the “conferencing solutions” and results in special synergy.
The document collaboration solution works with a wide-range of many different target markets (each which for separate reasons cares about document collaboration features). With document collaboration, users can focus on working directly on the core base document. Each user can write, draw or type text as user annotations that appear in the display presentation that is made viewable to all users in the working group/team.
The legal market is a good fit, because legal users are not focused on the giving of presentations; rather, the focus is on working with documents and tracking of “who said what”.
As used herein, the term “conferencing solutions” refers primarily to a screen sharing and/or audio/video conferencing tool.
As used herein, the term “screen sharing” refers to an arrangement in which a selected window of the presenting user's desktop screen display image is communicated to, and displayed upon, the displays of all other users.
As used herein, an “audio/video tool” provides all equipment and tools for people to be connected to one another, ranging from web-cams and microphones to audio-only phone calls. Just as there can be split-screen video of different users' subsets of annotations, there is a parallel analogy in the audio and audio/video areas (such as using multiple channels (switched/controlled) for multiple chats at once). Audio can be separately sent to other people on the team through the computing system hosting the document collaboration, or via a separate phone conference (e.g., POTS (Plain Old Telephone System) or Internet or cable).
As used herein, “document collaboration” permits users (each at a separate computer display) to all commonly view, collaborate upon and work on (discuss and annotate) documents, and manage a library of documents.
All users commonly view the same image display for an underlying document being worked on. As annotations are made by a first user that appear in that first user's display as markings showing atop the image of the underlying document, the annotations can also be seen simultaneously by other users as appearing in each of the other users' displays also aligned for viewing atop the image of the underlying document.
Conferencing solutions are about people interaction and transitory visuals that are momentarily displayed or audio sounds that are momentarily played. Conferencing solutions do not permit management of documents or groups of documents.
With document collaboration, the users are concerned about the development of a document.
With conference solutions, the users' concern is to discuss something (e.g., a subject or document).
The collaboration technology of the present invention maintains information about the development and evolution of the document (layers of annotation data mapped and stored by User Identification and by Annotation Timing).
With the present invention, each user's annotations are logically mapped and correlated to a document. This is one focus of the collaboration. However, another novel perspective is how the annotations are correlated to the document and presented to the users.
Annotations for each of the users are stored and separated into user layers or user Data Layers. In addition to the annotation data stored in the user layers, there is also stored meta-data as to when (date, time) those annotations were made, and by whom. This provides a time line and ownership of annotation data, and related meta-data defining how the documents were created and evolved.
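The per-user Data Layer with “who and when” metadata described above might be represented as follows. This is a hedged sketch: the field names and types are assumptions for illustration, not prescribed by the specification.

```python
# Illustrative representation of a user Data Layer: each annotation record
# carries the annotation content, its position relative to the base image,
# the owning user's ID, and a timestamp, giving the time line and
# ownership meta-data described in the text. Names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AnnotationRecord:
    user_id: str
    content: str     # e.g. stroke data or typed text
    position: tuple  # location relative to the base image
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class DataLayer:
    user_id: str
    records: list = field(default_factory=list)

    def add(self, content, position):
        # Every record in this layer is stamped with this layer's user ID.
        rec = AnnotationRecord(self.user_id, content, position)
        self.records.append(rec)
        return rec
```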
Another novel area is utilizing the perspective and paradigm of working on the viewable image of a document and using “images of annotations” aligned atop the image of the document. The annotation data is thus representative of the display presentation for annotations of a respective user as aligned to and written atop the underlying document.
With user ID and timing information for the annotation data, it is easier to reconstruct “who said what when”, and to maintain information about the development and evolution of the document.
This information (about the development and evolution of the document) that is maintained can then be utilized for selective viewing of annotations by user or group or sub-group of users, and/or by time of entry or by other criteria.
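The selective viewing described above, by user or group or sub-group of users and/or by time of entry, can be sketched as a filter over stored annotation records. The record shape used here (a dict with `user` and `time` keys) is an assumption for illustration only.

```python
# One possible realization of selective viewing: filter annotation
# records by a set of users and/or a time-of-entry window. Records are
# assumed to be dicts with comparable "time" values.

def select_annotations(records, users=None, after=None, before=None):
    """Return records matching the given user set and time window."""
    out = []
    for rec in records:
        if users is not None and rec["user"] not in users:
            continue  # not one of the selected users or group members
        if after is not None and rec["time"] < after:
            continue  # entered before the requested window
        if before is not None and rec["time"] > before:
            continue  # entered after the requested window
        out.append(rec)
    return out
```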
By contrast, the “conferencing solutions” are one user at a time. All annotations are in the same single layer in a conferencing solution. There is limited tracking of annotations made by members of the conference, and even where annotations are available it is not easy to separate the various users' contributions from the final result, because conferencing solutions do not maintain each user's markings in a separate Data Layer. The conferencing solutions may not even provide a final markup of a document, and they do not provide any concurrent markup of a document.
Conferencing solutions may permit a user who is presenting at the time to mark up or annotate his/her screen. Others may be given the opportunity to further mark up the document, but the markup does not get maintained in the document. The screen that was annotated can be saved outside the document on a “per page” basis. These saved screens with annotations must be manually correlated later by the user who maintains the work product.
Conferencing solutions do allow all annotations to be placed on one layer over an underlying document. Individual annotations can be removed, but since all users contribute to the same single layer, the individual contributions by user are lost, and the order of the development of the document with annotations is lost.
The collaboration technology of the present invention is a better tool than standard conference tools in those instances where the document itself is the center of attention (that needs to be changed) [rather than the focus being a presentation and personal interaction].
There are other document collaboration tools available besides the collaboration technology of the present invention, but many do not provide real-time editing. None allow for multiple Data Layers for separating the users' additions, or for creating a new Base Data Layer from previous edits. They provide a single Data Layer for providing the annotations, markings and edits to the existing document.
Also, the collaboration technology of the present invention is a peer-to-peer solution, whereas the conference solutions and other document collaboration solutions use a client/server model. (The collaboration technology of the present invention can also be implemented in a client/server model as well.) The collaboration technology of the present invention can operate on a local area network where every appliance can communicate with any of the other appliances. This provides the flexibility to operate even when Internet access is not available.
One of the difficulties with a peer-to-peer solution is that communications over the Internet are conducted in a server-to-server or client-to-server manner. Client-to-client communications are generally not possible directly on the Internet for a variety of practical and technical reasons. The collaboration technology of the present invention avoids this by using an Internet server that the peer-to-peer clients connect to. The server then allows messages from one client to be passed to other clients that are connected to the same server. The peer-to-peer messages are maintained, but the client-to-client connection is simulated by the client/server connections. The server does not have significant computational requirements, so it can handle more clients. The server also does not store large amounts of information for all the users; this is maintained on the users' systems. Security issues with centralized storage of information are minimized. Still, the redundancy of the data is maintained at each user's local appliance. This allows each user to access the documents, albeit perhaps not in direct collaboration with others, any time, anywhere.
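The relay arrangement above can be sketched, for illustration only, as a server that merely fans peer-to-peer messages out to per-client inboxes; the class and method names are assumptions for the sketch:

```python
import queue
from collections import defaultdict

class RelayServer:
    """Relays messages between connected clients so that peers can
    exchange annotation data without a direct client-to-client
    connection. The server only forwards; it does not store document
    content, so user data remains on each user's own appliance.
    """
    def __init__(self):
        # client id -> queue of (sender, payload) messages awaiting pickup
        self.inboxes = defaultdict(queue.Queue)

    def connect(self, client_id):
        self.inboxes[client_id]  # create an inbox for the new client

    def send(self, sender, recipients, payload):
        # A peer-to-peer message addressed to several peers is fanned
        # out by the server to each connected recipient's inbox.
        for r in recipients:
            if r in self.inboxes:
                self.inboxes[r].put((sender, payload))

    def receive(self, client_id):
        # Drain and return all messages waiting for this client.
        inbox = self.inboxes[client_id]
        messages = []
        while not inbox.empty():
            messages.append(inbox.get())
        return messages
```

Because the server holds only transient message queues, its computational and storage load stays small even as the number of clients grows.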
Our document collaboration system provides control of the operation in each system, with data also flowing between systems. Client/server systems generally require that you be able to access the server to access the documents, unless local storage for all documents is provided on the user's appliance. This either makes it impractical to access the documents or reduces the benefits of centralized storage of document information. The server, in addition, controls most aspects of the system. This does allow for better centralized control of the use of the system, but it also puts a “middle-man” between the users and getting their job done. If control of the users by a centralized server is minimized, then the benefits of that control are minimized for a central server, and a peer-to-peer solution is more appropriate.
When our systems are used in a local network, the bandwidth is not limited by Internet connectivity or server bandwidth, only by the user appliances. On the other hand, our system, like all peer-to-peer systems, has more data synchronization issues, since the data is generally replicated across several systems. If a system is not connected to the team while activities take place, that system must be brought up-to-date with the rest of the team, and the team must be brought up-to-date with changes made by the user.
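The reconnection scenario just described can be sketched, purely for illustration, as a two-way reconciliation of per-user data layers; the function name and layer representation are assumptions, and the sketch relies on each user writing only to their own layer so annotation identifiers never conflict:

```python
def synchronize(local_layers, remote_layers):
    """Bring two replicas up to date after an appliance reconnects.

    Each replica maps user id -> {annotation id -> annotation}.
    Each side simply adopts any annotations it is missing; because
    every user writes only into their own data layer, two appliances
    never produce conflicting entries for the same annotation id.
    """
    for side_a, side_b in ((local_layers, remote_layers),
                           (remote_layers, local_layers)):
        for user, notes in side_b.items():
            mine = side_a.setdefault(user, {})
            for note_id, note in notes.items():
                mine.setdefault(note_id, note)
```

After the exchange, both appliances hold the union of the layers, mirroring the state the team reached while the appliance was offline.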
Even where the collaboration technology of the present invention is a better solution for a customer than the conferencing above, there are also many situations where some or all of the features of the conferencing solution (e.g., audio and/or video) are beneficially added to the document collaboration. This combines the best of conferencing and document collaboration to allow real-time discussions to occur as the documents are edited. This minimizes miscommunication and provides instant feedback.
In its simplest form, the document collaboration is utilizable as a tool used in conjunction with a concurrent telephone or video conference. Alternatively, online conferencing can be utilized for voice or video.
For instance, Skype is an online audio and/or video conferencing system that provides the typical conference solutions. This could be done in a split screen mode with the document collaboration on one part of the split-screen display and the conferencing being shown in another part of the split-screen display. Furthermore, if desired, the base document being worked on (e.g., a Word document) can be shown in yet another part of the split-screen display.
A starting document provides an underlying canvas referenced and utilized in common by all users for joint collaboration. The starting document can be a Word document file, or Excel file, or image file (e.g., JPEG, PDF) or any computer file. The starting document is converted into an importable format for an equivalent image file for its associated display presentation.
This starting document has a respective associated display presentation, which forms the underlying image file utilized as the underlying canvas for the collaborative display of the underlying image of the starting document.
In document drafting, the starting document file format is usually a text or word processing [“.doc”—Word] file, such as for the Word text document corresponding to the associated display presentation.
This is the base document, which has a corresponding underlying image upon which are overlaid all annotations thereafter drawn, written, typed-in, or otherwise provided, responsive to input by each of multiple users, each at a respective one of the plurality of computing systems. This collaboration continues iterating, generating a respective updated version of a collaborative display output, until the result evolves into a final consensus of what the document should be (as shown in the updated version of the collaborative display output and its associated display presentation, wherein ultimately consensus is reached in the form of a final collaborative display output with an associated display presentation).
This results in generating a final agreed-to document [whether it be an agreement, a patent application, a prospectus, litigation papers, one or more drawings, or any multimedia object (audio and/or audio-visual)]. The display presentation for the final collaborative display output comprises, as the underlying image, the respective associated display presentation for the original starting (base) document, upon which are overlaid the respective associated display presentations for each of multiple image layers. Each such layer is comprised of a semi-transparent overlay image for the video presentation, representative of the respective annotations of each user as stored in a respective one of the multiple layers, which annotations are drawn, written and typed-in by each of the multiple users.
The end result of this joint collaboration, with multiple users at multiple respective computing systems, is a final result obtained by consensus reached by collaboratively annotating relative to the image of the starting document, and relative to the overlaid annotations of each and all of the multiple users. This final result is provided as a video presentation that is the final collaborative display output, representing the end result of multiple users annotating relative to a common display of a current updated version of a collaborative display output, progressing to generate the display presentation for the final collaborative display output.
With our technology, all users make modifications individually and in parallel, concurrently, in real-time, and those modifications are stored in an associatively mapped data layer in memory as associated with the respective user making the input of annotations. Those modifications may or may not be sent to all users, and may or may not be sent to one or more other users. The annotation data for the modifications (or annotations) is selectively sent to other users based on the defined Role of each of the other users in the defined Team.
For instance, in an Education Team, the appliances with a defined Role of Student send the modifications (edits) made at those Student appliances only to the appliances with a defined Role of Teacher. The Student appliances do not send their modifications to other Students' appliances.
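This role-based routing can be sketched, for illustration only, as a small function that selects recipients from the Team roster; the function name and the teacher broadcast rule shown here are assumptions for the sketch:

```python
def recipients_for(sender, team):
    """Return which members of a Team receive a sender's edits.

    team: mapping of user id -> role ("teacher" or "student").
    Student appliances send their edits only to Teacher appliances;
    here teachers are assumed to broadcast to all other members
    (the exact rules being a design choice per Team definition).
    """
    role = team[sender]
    if role == "student":
        return sorted(u for u, r in team.items() if r == "teacher")
    return sorted(u for u in team if u != sender)
```

The same Team roster can thus drive different distribution rules per Role without the sender's appliance needing to know each recipient individually.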
The third component is that each of those appliances can merge the modifications that it has for display on a local display. As an example, in the education mode, an appliance with a Student role always displays the teacher layers/images and also always displays its own user layer. An appliance with a Teacher role, by contrast, lets its user select among views: the user can select to see only the display of the teacher layer (or layers); or the display can be of the image from the teacher layer along with the image from a respective selected student's layer; or the user can see all of the displays present at each of the students' appliances, shown as multiple small images, plus the display for the teacher layer selectively merged with the core/base document. The teacher can thus select from three different views (more or fewer views being a design choice), and the teacher's appliance displays the user's choice among those views. Whichever view the teacher is in also determines what modifications the teacher can make to the displayed document from that view. In the first example, looking at the teacher-only display, the teacher is able to modify the teacher layer. If the teacher is viewing the display for a particular student, then the teacher is able to edit or modify that respective student's layer. If the teacher is viewing a display of all the students, then the teacher cannot make any modifications to any of them from the teacher's appliance in this mode.
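The per-role merge rules above can be sketched, for illustration only, as a function returning the ordered list of data layers to merge for display; the layer and view names are assumptions for the sketch:

```python
def layers_for_display(viewer_role, view, viewer_id, selected_student=None):
    """Return the ordered list of data layers merged into the display.

    A student appliance always merges the base document, the teacher
    layer, and the student's own layer. A teacher appliance merges
    layers according to the selected view/mode.
    """
    if viewer_role == "student":
        return ["base", "teacher", viewer_id]
    if view == "teacher-only":
        # Teacher-only view: the teacher sees (and may edit) the
        # teacher layer over the base document.
        return ["base", "teacher"]
    if view == "one-on-one":
        # One-on-one view: the selected student's layer is merged in,
        # and it is that student's layer the teacher may edit.
        return ["base", "teacher", selected_student]
    raise ValueError("unknown view: " + view)
```

The multiple-screen (checkerboard) view would merge these per-student lists into thumbnails and permit no edits, consistent with the mode restrictions described above.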
With a Team, there are several things: a team is made up of multiple members or users (at least two), each user at a respective appliance. Each member has a role in the team, and has capabilities that are permitted based on that role within the team. Examples of these capabilities include: 1) what modifications a user can make based on their specific role; 2) what sort of data layer mergers are going to be made in providing a display; 3) to which other members/users they communicate their modifications; and 4) in which specific data layers their modifications are stored, and what data is stored in which data layers. It is not just about managing which appliances to send the modifications to, but also in which data layer of the receiving appliance the respective data is to be stored.
Consider an example of an education team with teacher and student roles. There are multiple views, where the view that is selected determines other feature sets. For example, the teacher can write and display, but the feature set that the teacher activates is dependent on the mode or view that the teacher puts their appliance into. If the teacher goes into teacher mode, then the teacher views a display presentation using only the teacher layer, but the modifications the teacher makes will be sent to all students' appliances, to be stored in a teacher data layer therein and provided as part of the display presentation thereat. In fact, all other appliances of all students and all teachers will store and display those modifications. In a checkerboard multiple student and teacher mode, by contrast, each teacher will see a checkerboard layout of the screen displays for all students and teachers, each screen display shown as a thumbnail or filmstrip-type display; only the teacher will be able to select one of the checkerboard images to bring it to full screen and switch to a one-on-one interactive mode. In the multiple screen view mode, there is selection of a screen and/or viewing mode, but there are no modifications, changes or layers communicated. The teacher can simply move to another view, with the exception that any modifications happening in real time on a screen being viewed will be shown on the checkerboard. So the teacher can see the other users' modifications happening in real time, but in the multiple screen view mode the teacher cannot make any modifications in her role.
A third mode for the teacher is private communication, where the teacher touches the screen at one of the students' thumbnails (instead of the teacher's). By touching a specific student's thumbnail, the teacher selects a one-on-one mode, where the teacher's modifications appear on that student's screen for viewing by the student in real time, even if the student is also concurrently writing. The student can erase what the teacher has written in the one-on-one mode, but not what the teacher wrote while the teacher was in the teacher layer mode. The role of teacher and the mode of operation not only affect what the teacher's appliance in that view can do, but also affect the rights and privileges of what the receiving appliance can do with the modifications it receives. So if the teacher sent the modification in teacher mode, with the teacher layer only, then the student, upon receiving it, cannot erase or change that modification. But if the teacher is in one-on-one private mode and makes a change, the student can erase that change.
On the student appliance, the user can always modify the student's layer and can communicate the student layer changes/modifications to all the teachers. So the student's role does not change, and its user can always selectively see displayed the changes of the student layer. The teacher, however, can select different modes that change which data layer the teacher's appliance is modifying, from the teacher layer in teacher mode to a particular student's layer in the one-on-one mode.
In one-on-one mode, the teacher is modifying the student's layer, and therefore the student has the ability to un-modify it. The mode thus determines which layers are being modified.
In different embodiments of the present invention, the roles determine the specific limitations on which layers are modified, which modifications to which layers are sent to other appliances, and which layers the other appliances merge to provide as a display forming the image of the collaboratively determined document.
When an ad-hoc meeting is conducted for collaborative work, each appliance has one user layer file. The user layer file for a respective appliance can be stored locally at that respective appliance, or can be stored as the user layer file for that respective appliance but in one or more of any of the plurality of the appliances. Each appliance has its own respective associated user layer file for storing data for that user/appliance, i.e., a respective one user data layer file (out of a plurality of data layers stored in a layer storage memory) associated with that respective appliance. In user edit mode, the respective appliance can make modifications to its own respective user data layer.
A communicating user of an appliance can select one of the other appliances, and can select a portion of their own data layer to communicate to that other appliance; the selected portion is then stored in the respective associated layer in the appliance to which it is communicated. In this mode, the selected user data layer file is stored in respective data layers at each appliance as the data is received. Thus, the received data is stored in the respective one user data layer file as associated by the receiving appliance. So it is the receiving appliance that stores received edit data for each respective user in respective user data layers.
Each user has their own user layer selected. The user draws a lasso around some annotations that they have made on their display and selects them. That is the selected portion. Then they click on “send”, and they can send the selected portion to one or all of the appliances that are in networked communication. The receiving appliance takes those annotations and puts them in the same data layer that it is currently using for storage of similarly originated edits for that user. The user can thereafter delete or edit them from there.
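The lasso-and-send operation can be sketched, for illustration only, as follows; the data shapes and the rule that each receiver files incoming annotations under the sending user's layer are assumptions for the sketch:

```python
def send_selection(sender_id, annotations, selected_ids, recipients, appliances):
    """Send a lasso-selected subset of the sender's annotations.

    appliances: mapping of appliance id -> {layer name -> annotations}.
    Each receiving appliance stores the incoming annotations in the
    data layer it associates with the sending user, so the receiver
    can later edit or delete them from that layer.
    """
    selection = [a for a in annotations if a["id"] in selected_ids]
    for r in recipients:
        layer = appliances[r].setdefault(sender_id, [])
        layer.extend(selection)
    return selection
```

Because the selection is stored per sending user rather than in a single shared layer, each contribution remains individually removable on the receiving appliance.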
User 1 is using an appliance (Appliance 1). Appliance 1 has an associated user data layer (named “D”). User 2 is using Appliance 2, which has its own associated user layer, named “R”. The two appliances communicate. User 2 makes edits to Page 4 of his music. The portion of Appliance 2's user layer R associated with Page 4 is selected, and that portion is sent to Appliance 1, where it is written into Appliance 1's user layer D (associated with Page 4). In this mode, Appliance 1 does not store user data layer R separately and then merge data layer R (for Page 4) with data layer D (for Page 4). That could, however, also be done, in order to gain the ability to keep the edits sent to you separate from the edits that you were making (in the music mode, essentially that is done). For the feature set of being able to distinguish edits shared in the document in real time, the modifications made to a selected portion of a user's layer, associated with a respective selected portion of the common underlying document, would be communicated and stored as a separate user layer, the R layer for example. In that case, Appliance 1, when connected to Appliance 2, would add an R layer to Appliance 1, and a D layer would be added to Appliance 2. All the appliances would then contain data layers for all networked appliances in the collaboration, so that users could do and undo the edits as needed.
Let us suppose a person, user A, made changes to a number of particular pages, and has sent the annotation data for the modifications to a first page to another person (user B). User A also sends the annotation data for modifications to page two to user C, and the annotation data for modifications to the third page to user D. User A asks all three of those people, users B, C and D, to modify the pages. Users B, C and D have also been making some changes, independently, on their own. With the present invention, all that individual input can be collaboratively integrated to result in a complete record of the activity. Each user's annotations are stored in an associatively mapped data layer. All input of annotations (edits) by a user persists in the storage of the respective annotation data in the respective data layer. The result of combining the layers is the completed all-in-one document. Each user sends back to the other users the annotation data as stored in that user's respective data layers. By creating user data layers on each appliance, each appliance has a mirror of the content in the set of data layers in local memory of all the other appliances, and the users can track changes by user and by time, back and forward.
Thus, in the example of prior art document sharing, after multiple users' entries a user is unable to individually take out and remove certain edits made by one user or another, because the shared document just keeps adding all users' changes over time to a single common layer. By contrast, in accordance with one aspect of the present invention, the document's modifications are separated, stored and organized by user data layer. Prior art document sharing creates one layer for the entire multi-user environment. Whereas in accordance with this aspect of the present invention, in an ad-hoc mode, every user that logs on has their own user layer that they are editing. They may also get some edits from some other people, and some not. And everybody actually has a chance to have their own set of edits on that document. With prior art document sharing, a letter with some blanks left in it is a common base document that everyone sees, and everyone makes their own independent (concurrent) edits to that document. Suppose that everyone is customizing the document for a letter which will be going out to companies A, B and C. Each user has companies A, B and C to which they are sending it, and they are making their own custom edits to change it. There is no set of associative data layer storage to individually store annotations. Rather, one data layer overwrites all users' annotations to a common element atop one another. If user A selects a word “Red”, and user B changes it to “Blue”, and user C changes it to “Brown”, and user D changes it to “Green”, then user A sees only “Green” and none of the other contributions from the other users. Whereas with the present invention and associatively mapped data layer storage, each user's edits are viewable and can be turned “on” or “off” selectively. You can make changes and send that change to everybody, or just to certain people in certain groups, to change that clause, and everybody gets it.
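The “Red/Blue/Brown/Green” contrast above can be sketched, for illustration only, in a few lines; the dictionary representation and function name are assumptions for the sketch:

```python
# Single shared layer (prior art): each user's change to the same
# element overwrites the previous one, so only the last edit survives.
shared = {}
for user, value in [("A", "Red"), ("B", "Blue"), ("C", "Brown"), ("D", "Green")]:
    shared["word"] = value

# Per-user data layers: every user's edit persists in that user's own
# layer and can be shown or hidden selectively.
layers = {}
for user, value in [("A", "Red"), ("B", "Blue"), ("C", "Brown"), ("D", "Green")]:
    layers.setdefault(user, {})["word"] = value

def merged_view(layers, visible_users):
    """Merge only the layers the viewer has turned 'on'.

    Later users in the list override earlier ones for the same
    element, but every layer's content remains stored intact.
    """
    view = {}
    for user in visible_users:
        view.update(layers.get(user, {}))
    return view
```

With per-user layers, user A can view only user B's suggestion, or compare any subset of suggestions, because nothing was overwritten.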
For example, for a particular company, somebody else might want to go and override the change and change it back, because of some contractual relationship or the like. That is peer-to-peer communication, implemented by each user having a layer and communicating.
Data layer storage can be centralized or distributed, with each local appliance having a layer locally stored. Alternatively, each appliance could have an associated one of the data layers for storing that user's annotations, with the set of data layers then centrally stored.
In an appliance mode embodiment, each local appliance has computing power and stores the database in the set of data layers (locally storing the multiple data layers in that database).
In a centrally stored embodiment, the central database must be maintained so that the contained layers (or sets of layers) are stored for every user, and a merged output is provided to each user comprised of the global or common layer, plus either the individual layer for just that user, or the individual layers for all users or a subset of users. This requires that the system differentiate based on which user appliance is communicating and, where the display is provided through a browser, what part of the merged database output is to be locally stored for display.
The advantage of a central server is that it always has all the storage for all the appliances available to it, so it could potentially do a few more things. It also has fewer synchronization issues, since with a central server all storage is in one place, or a few places, with the data layers all stored centrally. The disadvantage is that the system always has to be connected to that central server in order to do anything; users cannot work independently at all. There are also some potential speed issues, because you have to connect and go over a communication line to the central server.
In a preferred embodiment, there is also provided voice communication between at least two of the users concurrently with the collaboration via annotations. This further parallels a same-place/same-time work environment.
A voice communication happening at the same time as the document collaboration provides for discussing the collaboration and the suggested changes while they are being made by the user viewing the computer display presentation.
In one embodiment, a PDF file of the printout of the final collaborative display output represents the end result of using the collaboration system in collaborative sessions by multiple users concurrently viewing and annotating relative to a same video display presentation, to create a then-current version of the final collaborative display presentation output.
In a preferred embodiment, the collaboration system continuously updates from its initial (or then-current) starting document to a new next starting document image, utilized as the underlying display presentation from which to create a next final collaborative display presentation output representative of the next current base underlying image.
In one embodiment, an administrative user who is typing revisions to the starting document file, (e.g., in Word format), can utilize a split-screen display presentation, displaying the final collaborative output display on one part of a large LCD screen in a split screen mode and on the other part of the large screen (in the LCD split screen mode of the display apparatus) there is displayed the display presentation for the in-process starting document file [e.g., such as an open Word document running in Word as the starting document].
With one part of the display presentation screen used for the Word document, the other part of the display presentation screen is used to see the results of the collaboration and the substance of the annotations in the final collaborative display presentation output. The administrative user can utilize that information to decide what changes need to be made to the starting document, and then actually make those changes to the starting document itself (e.g., the Word document) so that it corresponds to the final collaborative display presentation output.
Where annotations made by a user during collaboration are input by typing of text, that text is stored in the collaborative document file format as a portion thereof that is usable in text format (e.g., to copy/paste between documents). The collaborative system permits the administrative user who is revising the starting document to copy and paste to or from any text that was typed by any of the multiple users into the collaborative file, and permits that user to take the copied text from within the file format of the collaboration technology of the present invention and thereafter paste the copied text into a Word (or other) document as text, in the proper respective location in the starting document relative to the same corresponding location in the final collaborative display presentation output [since that “output” represents the image of the display presentation of the respective starting document]. Alternatively, or additionally, text can be copied from an external document (e.g., Word, Excel, text, or other document, or an Internet web-page) and pasted into a document in the collaboration technology of the present invention.
This saves a lot of time (especially for longer phrases), both in eliminating retyping of the text and in eliminating having to re-proof the re-typed text in the Word document.
Consider the case where a Word file corresponds to an original base or starting document, from which an initial version of a collaborative display presentation output was obtained as the underlying image.
Starting from this initial version of a collaborative display presentation output (having a corresponding respective data file format structure and logic), multiple users annotate over many hours, creating an updated multilayer file version having a continuously updated collaborative display presentation output, which continues to be updated until a final consensus is reached, having a corresponding respective final collaborative display presentation output.
In one embodiment, there are multiple users who are respective primary deal workers and one or more administrative support workers. The primary deal workers are the ones that concurrently use the collaborative document system to create the final collaborative display presentation output.
A server or server-less networking system can be used.
The collaboration technology of the present invention bridges the “physical presence gap” that makes it hard for multiple users in different locations to work with the same documents, as compared to working with those documents in the same way as if the multiple users were all in a single room in the same location working on the same physical document(s).
The collaboration technology of the present invention utilizes the starting document to provide an underlying canvas which the multiple users can reference, and individually or concurrently work upon, to annotate an overlaid image layer that is visually aligned relative to the respective associated display presentation for the starting document. The corresponding display presentation output is created by layers of overlaid user annotations relative to the underlying image. The final collaborative display presentation output is created utilizing the underlying canvas, and overlays representative of a final collaborative mutual consensus and agreement, which is represented by the display presentation of the final collaborative display output.
The collaboration technology of the present invention enables users at remote locations (“remote users”) to concurrently work together on a common document as though the remote users were physically in the same location working on the same physical document. And, beyond just working on a static common underlying document, there is additionally provided a sense of an evolving document, with users' annotations overlaid upon the common document.
In accordance with another aspect of the collaboration technology, users in a same physical location (“local users”) are able to precisely, visually communicate with annotations overlaid atop a specific visually seen location in a selected document. This is a new level that does not even exist without the collaboration technology of the present invention.
The collaboration technology of the present invention provides a user the ability to precisely communicate specific thoughts as a visual overlay of that user's input of annotations, appearing at the user's selected specific location/position within (aligned atop the display presentation of) a document being worked on collaboratively by a plurality of users as a group (or team, or additionally, members of groups within groups-sub-groups, and members of teams within teams-sub-teams). A user can instantly highlight, for all to see, a specific selected location, and let other users know of it (e.g., words in a sentence that has been written which bother that user). A user can look at an image such as a CAT-Scan, and can circle or highlight to effectively point all other users to focus on a specific selected region of the CAT-Scan image; this enables each user to immediately mark up and communicate to other users looking at that same common core image document (in this example, the CAT-Scan image). This would also be highly beneficial to remote users linked to other users and/or local databases, and/or remote databases that can be accessed. For example, an emergency worker could get access to and work with a schematic of the wiring of a phone closet, or of the water piping or ventilation in a public building, etc. This provides an ability to collaborate when an emergency situation occurs, such as between firemen and crew, through working and discussing with someone how to fix a problem.
The collaboration technology of the present invention can also be used in business to communicate among remotely located individuals [such as for use in a shareholders' meeting, for example to communicate with the officers of the company, by a plurality of users each participating via a computer display subsystem to collaborate at a conference, providing a way for the attendees to communicate with the presenters, and vice-versa].
It can also be used for purposes such as meetings (or conference calls/collaboration sessions) for document preparation, and use in lawsuits and discovery (where there are thousands of pages of documents, and the collaboration technology of the present invention provides the ability to precisely pinpoint exactly which discovery document, or which word, or which part of what drawing, or what part of an agreement, is at issue).
In each of these cases, there is a common core document, where the document can be an agreement (or proposal, or legal brief, or prospectus, or marketing plan, etc.) in which there is specific language that is not acceptable. The collaboration technology of the present invention allows each and all of multiple users to show the other users precisely the location of (by circles or highlights, etc.) what is not acceptable, and to also precisely correct it (e.g., insert typed or written annotations with an arrow pointing to the location point for insertion of the annotations), and lets all users see instantly what correction is being suggested (precisely showing the other users, in their display presentations, what the suggested correction looks like and where it is to appear within the document).
In a preferred embodiment, each user is provided with voice (or video with voice) conferencing concurrently with being provided a related collaborative document display presentation. Multiple people/users can all concurrently type, hand-write via stylus, or otherwise annotate or provide markings of their own ideas atop the display presentation of the core document. This allows each and all of them to instantly share their ideas with the other users. This also provides a display presentation wherein some or all users (on the team) can instantly see all the other users' ideas and specific suggestions.
The collaboration technology of the present invention creates a new environment, a new paradigm that did not previously exist. It creates the ability to collaborate, enhanced in ways that enable concurrent markings by each and all of the users, and provides a display of the markings that concurrently appear within the display presentation provided to all the users. The display presentation is shared for viewing by a selected one, some, or all users (preferably in real-time).
In a preferred embodiment, the collaboration technology of the present invention provides for concurrent entry, by each of a plurality of the users, of that user's annotations, and provides a concurrently updated display presentation comprised of a base core document image and an image of the users' annotations (appearing aligned atop the core document image within the display presentation of a combined image display presentation).
In a preferred embodiment, each user's markings are uniquely identifiable (e.g., by an assigned respective color that identifies the respective user's markings) within a combined display presentation provided to all the users within the group/team. The ideas of the users are concurrently expressed, displayed (in a way that identifies them with the user), and shared (ideally in real-time), wherein the annotations are integrated into an updated combined display provided to multiple selected users for all to see, concurrently with the ongoing voice and/or video-conference discussion of the shared document (image).
Thus, each user is provided with precision of communication at levels of clarity and achievement that were not really attainable heretofore.
Best of all, the collaboration technology of the present invention provides a key team tool that allows opposing teams of people (users) to actually resolve all issues to closure for a shared document (such as one representative of text, graphics, images, multi-media, etc.).
The collaboration technology of the present invention provides a way to synchronize and track user markings/edits, in a time-stamped tracking mode, and can maintain a continuous history of the activity of at least one (up to all) of the users.
All the people working together on documents (and also concurrently talking, either in person in a same room or remotely via phone or video conference) can collaborate in real-time via hand-written annotations, typed edits, inserted images and/or talking [while each and all are viewing (concurrently) the same physical document display images]. People can collaborate together whether within a same room with all local users, or remotely linked to collaboratively couple users at multiple different locations (with one or more users at each site, e.g., local or remote relative to each other user). This can be used within a company or law firm, or any group or organization. This can also be used between different groups or organizations. Teams can be formed to communicate within the team with one another. Sub-teams with members from within the team can be formed, to communicate among members of the sub-team, independently of other communications between the team's members. And, there can be multiple sub-teams within a team.
Another use of this technology is to facilitate group collaboration, annotation and use of large amounts of documents. Any user in the group can use bookmarks to locate (mark a location for later reference) things for later discussion (label/name them for use in a table of bookmarks). Thereafter, all the users benefit from this locating and labeling as bookmarks, which simplifies document review so that any user can use only the labeled (e.g., 50) bookmarks to find things instantly and benefit from that organization. Thus, only the relevant 50 bookmarks are needed and used, instead of having to physically go through 10,000 or 100,000 pages. This has beneficial uses in law, research of any kind, engineering, marketing, sales, medicine, music, etc. Plus, it allows any one user of the group to be a leader of the group and to control which page is to be displayed at one, or all, of the users' displays, such as controlling a jump to page X for everyone to be at the same place in the same document (e.g., Go To Bookmark, or Go To Page #).
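The bookmark table described above can be sketched minimally as follows; the function names and the (document, page) tuple layout are illustrative assumptions, not the actual implementation.

```python
# Hypothetical sketch: a shared table of named bookmarks lets any group
# member jump to a labeled location (e.g., "Go To Bookmark").
bookmarks = {}

def add_bookmark(label, document, page):
    """Label a document location so any group member can find it later."""
    bookmarks[label] = (document, page)

def go_to_bookmark(label):
    """'Go To Bookmark': return the stored location for display."""
    return bookmarks[label]

add_bookmark("Clause 7", "agreement.pdf", 42)
print(go_to_bookmark("Clause 7"))  # -> ('agreement.pdf', 42)
```

In a real system the table would be replicated to every appliance in the group, so one user's labeling benefits all.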
With the collaboration technology of the present invention, any user can quickly (nearly instantly) find content and share it with other users on the same team. They can share everything they want. In a preferred embodiment, each user is collaboratively linked together. The users can be in the same room, and/or users can be at remote locations. In a preferred embodiment, each user is also coupled for communicating (e.g., voice, or video) with the users in the group. For example, each user can be coupled on to a conference call (e.g., video or voice only) with other users on the same team (or on the same sub-team within a team). Alternatively, if all users are local, they can sit in a room together and talk and concurrently provide annotation input via a respective user input responsive to each single user's control. Each user has their own said respective computing appliance (e.g., laptop PC, tablet PC, desktop PC, touch-screen PC systems, etc.). Each user can create and use bookmarks by marking a location in the present display presentation (or use section marks or page jumps) to instantly jump to any place within a library of stored documents, providing for a local display presentation to that user (or to all users including that user). Then, any user can mark annotations via their user input, using their local touchscreen display, relative to a base display image. A user can start with a highlighter transparency setting, select the pen type=marker, and mark up what the presenter wants to be seen by others in a group of users (e.g., while drawing a circle, saying, "see this area here" within the commonly displayed image of a specific page of a specific document). Prior solutions required saying "third line on the page, the 12th sentence".
However, now with the collaboration technology of the present invention, a user can simply mark a location (such as circle it) (and everyone instantly sees the marking) and say "see this", and everybody is in the same place. When one user on a team marks the display presentation (e.g., highlights it), everyone on the same team gets the highlights on their display presentation screens.
Alternatively or additionally, each user can have one or multiple separate open display presentation windows, with one or more application software display presentation windows using the collaboration technology of the present invention, and with one or more other application software display presentation windows (such as running a word processor (e.g., Word, NotePad, TextEdit, Quark Express), an image processor (e.g., Adobe PhotoShop, Adobe Acrobat, Adobe Illustrator, Corel Draw, PowerPoint), or an image viewer (e.g., Preview, Acrobat, QuickTime, etc.)).
A user can also select an area to copy from a document in one window (either one) and paste it into another document within another window, such as pasting it into a collaborative work in the window that is being worked on.
A user can also cite to the bookmarks within the collaborative document (or otherwise) or to a page number associated with the bookmark.
In accordance with one embodiment, any one (to all) of the users can operate with multiple windows open on that user's computer display, comprised of a display presentation in one window using the collaboration technology of the present invention, and a document, or graphic, or image (e.g., Word, PhotoShop, etc.) display presentation in another window.
The system described is composed of a plurality of appliances on a network forming a team that is working together on a common project. Users collaborate with the team by interacting with Layer Data, which is stored in Data Layers that are shared with other team members. Each user accesses a display of information that can be composed of image, video, text, audio and other forms that the appliance can provide and the user can view. Each user views a customized display of the information based on selectively accessing the Data Layers. These Data Layers are combined responsive to the Layer Data. The display is thus customized for each user based on the ordering of the Data Layers and the selection of Data Layers. The storage of Layer Data in Data Layers, and their selective combination on a per-user basis, provides a flexible and powerful mechanism to facilitate collaboration in the team and to meet the changing requirements of each user's need to view different Layer Data as the collaboration occurs.
The Layer Data in the Data Layers is composed of Layer Data Elements, which include two items: context information and content information. The latter is the content that is displayed for the user. The content information can be in the form of vector line drawings, graphics, images, tables of data, text, audio, video, gaming data, and other data. The context information provides the display logic with the information needed to properly display the content. Context information provides context parameters for the display of content information in either relative or absolute terms with respect to other Layer Data Elements, in the same or another Data Layer, or to an entire Data Layer. The context parameters can be spatial locations or references to other Layer Data Elements that imply an ordering of Layer Data Elements. For instance, several Layer Data Elements containing text, e.g., "This i", "s a te", "st.", could be referenced in a particular order using context information. The display would then display them to the user as "This is a test." Also, a Layer Data Element could contain an X,Y coordinate that refers to a location on the screen of the display, or it could refer to an offset from another Layer Data Element. The context information could also contain other information such as a name, which could be used by the display generation to include or eliminate Layer Data Elements based on the context information. If context parameters provide invalid information, such as an X,Y coordinate that would be off-screen or a reference to another Layer Data Element that is not visible or no longer exists, then the display generation can choose to include the said Layer Data Element with default information or not include it in the display.
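The context/content split and the ordering-by-reference example above can be sketched as follows. This is a minimal illustration; the class name, field names, and chain-walking logic are assumptions for exposition, not the patent's actual implementation.

```python
# Sketch (assumed names): a Layer Data Element pairs content information
# (here, text) with context information (an ordering reference to the
# element it follows).
from dataclasses import dataclass
from typing import Optional

@dataclass
class LayerDataElement:
    content: str                   # content information shown to the user
    elem_id: str                   # identifier other elements can reference
    follows: Optional[str] = None  # context: id of the element this follows

def render_chain(elements):
    """Assemble display text by following the ordering context references."""
    by_follows = {e.follows: e for e in elements}
    out, cur = [], by_follows.get(None)     # the head element follows nothing
    while cur is not None:
        out.append(cur.content)
        cur = by_follows.get(cur.elem_id)   # next element that follows this one
    return "".join(out)

parts = [
    LayerDataElement("st.", "c", follows="b"),
    LayerDataElement("This i", "a", follows=None),
    LayerDataElement("s a te", "b", follows="a"),
]
print(render_chain(parts))  # -> This is a test.
```

The same context slot could instead hold an absolute X,Y coordinate or an offset from another element, with the display logic falling back to a default (or omitting the element) when the reference is invalid.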
The embodiment of Data Layer structure is illustrated in
An alternate embodiment would implement as illustrated in
The context information can include the time that a document was created, the time that an annotation was created, modified or accessed, the visibility of an annotation, the size of an annotation, image properties, user name, page number, section, chapter, bookmark, Layer Data Element that it is linked to (may be in another data layer), company name, physical location of appliance, location relative to another Layer Data Element, color, font, font size, font style, line width, . . . .
An appliance may be associated with more than one Data Layer. This allows many possibilities for the operation of a team. A team could be composed of multiple subteams. Each subteam would have its own members. Each subteam would only view the Data Layers used by that subteam and a few other Data Layers used to collaborate between subteams. A coordinator appliance would control those Data Layers. This would allow several subteams to operate independently but publish their results on Data Layers that are visible to other subteams. This both provides security and minimizes distractions as multiple subteams work together as one large team. Teams can include more subteams as needed and can be included in larger teams without changing their structure.
Each appliance is assigned a "Role" that it performs in the team, and each team is defined by its "Team Type", e.g., how the team should function together. The Team Type and Role provide the rules for the appliance to manage the appliance's use of Data Layers and the rules for combining the Data Layers for the display. This includes managing which Data Layer a user can edit and modify, which Data Layers an appliance combines for the display, options for the Data Layer combination, and what part of the Data Layer the appliance uses for the display.
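One way to picture the Team Type/Role rules is as a lookup table keyed by the (Team Type, Role) pair. The table contents and layer names below are hypothetical assumptions chosen to illustrate the idea, not the actual rule set.

```python
# Hypothetical rule table (all names assumed): the (Team Type, Role) pair
# determines which Data Layers an appliance may edit and which it combines
# for its display.
TEAM_RULES = {
    ("Education", "Teacher"): {"edit": ["teacher"],
                               "view": ["base", "teacher", "student:*"]},
    ("Education", "Student"): {"edit": ["student:self"],
                               "view": ["base", "teacher", "student:self"]},
}

def allowed_layers(team_type, role, action):
    """Look up the Data Layers permitted for an action ('edit' or 'view')."""
    return TEAM_RULES[(team_type, role)][action]

print(allowed_layers("Education", "Student", "edit"))  # -> ['student:self']
```

A table-driven design like this lets new Team Types and Roles be added without changing the combining logic itself.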
Lawyers routinely work with documents that are shared with others and many times need to discuss and modify a document before an agreement can be concluded. Many times everyone is not physically available in the same location. They are also not necessarily available at the same time. The use of the collaboration technology of the present invention allows both of these constraints to be eliminated, allowing simultaneous collaboration regardless of location and time. The modifications to the document provide the opportunity to record a time line of what modifications were made, by whom, and when. This time line could also be erased when the parties reach an agreement and the time line is no longer necessary.
Contracts, litigation and licensing involve at least two parties, typically composed of lawyers, plaintiffs, defendants and clients. Each party needs its own subgroup to discuss, modify and propose changes to the contract, suit or patent. The proposed changes by a party are then communicated to the other parties, each of which holds its own private discussions. The lawyer tends to be the person that proposes the changes to the other subgroups and communicates the reasons and significance to the clients in their subgroup. Therefore the lawyer will input data in two layers: one private layer for communicating with the others in their subgroup, and a second subgroup layer for communicating with the other subgroups. Everyone would be able to view the subgroup layer of all subgroups. Everyone would be able to view the private layer for their own subgroup only.
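The two-layer visibility rule just described can be sketched as a small function; the subgroup names and the `"name:kind"` layer-naming scheme are assumptions for illustration only.

```python
# Sketch of the visibility rule above (identifiers assumed): every party
# sees all subgroup layers, but only its own subgroup's private layer.
def visible_layers(viewer_subgroup, all_subgroups):
    layers = [f"{sg}:subgroup" for sg in all_subgroups]  # shared with everyone
    layers.append(f"{viewer_subgroup}:private")          # own subgroup only
    return layers

print(visible_layers("plaintiff", ["plaintiff", "defendant"]))
# -> ['plaintiff:subgroup', 'defendant:subgroup', 'plaintiff:private']
```

Under this rule the lawyer's proposals (written to the subgroup layer) reach all parties, while internal discussion (written to the private layer) never leaves the subgroup.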
Patents require the development of documents that require the input of the patent attorney and the inventors. Notes by the inventor and review of the patents with the patent attorney can be conducted in real time, despite location differences.
Litigation also provides a unique opportunity for providing a real-time discussion of court documents in a trial. The judge would have access to all layers, which provide input from all the lawyers in the courtroom. Each lawyer and the judge would have their own layer in which they could provide input. The layers that the jury could see are controlled by the judge, so that only the appropriate layers for the jury are shown to them after being approved by the judge for their viewing.
Discovery using the collaboration technology of the present invention would allow all parties to review documents, mark their objections and have them reviewed by the judge. This would not require that all the parties be physically in the same room so it can speed up the preparation for trial.
Doctors are increasingly dealing with documents in their practice. These documents are shared between physicians, specialists, pharmacies, insurance representatives, billing departments and of course the patient. Documents include medical records, bills, insurance forms, admitting forms, medical releases, prescriptions and medical results. Many times the people are in very different locations. A medical system using the collaboration technology of the present invention could reduce the time required to collaborate on these documents. Also, data rights management can be applied to documents so that only those authorized to view a document are allowed to view or modify it. Patients, nurses, administrators, insurance agents and doctors can fill in forms with all the information about who provided what input at what time. Physicians can collaborate in real time on x-rays, CAT scans and other medical tests.
Engineers and architects create many documents that need to be reviewed by their clients, project managers, manufacturing, construction, procurement and each other. The collaboration technology of the present invention provides the ability to have each person, regardless of location, provide real-time input, review, analysis and reference to the latest documents. In addition, each change is recorded so each party's input can be compared, reviewed and approved.
Shareholder meetings, churches, synagogues and public meetings are real-time events where there can be user interaction with the audience. Questions are routinely asked, and documents are routinely shared with the audience. Both can be communicated via an appliance which has a large screen or a projector that is viewable by the audience.
Live production requires a team that is in close coordination. As the saying goes, "The show must go on", which is the result of something not going according to the rehearsed plan. The ability of the collaboration technology of the present invention to rapidly communicate the issue and then rapidly communicate the changes allows the entire production team to continue while maintaining the best performance.
Other examples of the Team Type are “Music”, “Education”, “Meeting”, “Ad Hoc” and “Social”. Teams will be prefixed by their Team Type later in this description such as “Music Team”, “Education Team”, “Meeting Team”, “Ad Hoc Team” and “Social Team”.
A Music Team can have the Roles of "Leader", "Member" and "Listener". An Ad Hoc Team uses the same Roles as the Music Team: "Leader", "Member" and "Listener". The Member and Listener Roles are identical in operation when operating within a Music Team. The Leader and Member Roles are identical in operation when operating within an Ad Hoc Team. Otherwise the Roles differ in operation in an Ad Hoc Team or Music Team. The Education Team can have the Roles of "Teacher" and "Student". The Meeting Team can have the Roles of "Presenter", "Facilitator" and "Attendee".
The number of Roles for a Team Type is not limited and may be one, two or more. Later in this document we will refer to an appliance by its Role, e.g., Leader Appliance, Member Appliance, Listener Appliance, Teacher Appliance, Student Appliance, Presenter Appliance, Facilitator Appliance or Attendee Appliance.
The appliances working on a team must be operating in a role that is included in a Team Type and may include a “Coordinator Appliance”. A Coordinator Appliance is an appliance operating in a Role with “Coordinator” functionality that allows it to define such things as which appliances are included in the team, their Roles and access rights. A “Non-Coordinator Appliance” is simply an appliance that is not a Coordinator Appliance. The Leader Appliances, Teacher Appliances and Facilitator Appliances are examples of Coordinator Appliances for their respective Team Types. There can be many appliances operating on the network so some Team Types require that at least one appliance on the team be a Coordinator Appliance.
Each Teacher Appliance defines which Student and Teacher Appliances are on their Education Team, thereby creating various “Classrooms” of Education Teams. Student Appliances only communicate with the Teacher Appliances in their Classroom by sending their annotations in the particular student's drawing layer. Teacher Appliances communicate with all the appliances in the Classroom or a particular Student Appliance. The Teacher Appliance either sends annotations in the student's drawing layer to a particular Student Appliance and any other Teacher Appliances in the Classroom or a common teacher layer annotation to all the Student Appliances and any other Teacher Appliances in the Classroom.
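The Classroom routing rules above can be sketched as two recipient-selection functions. The dictionary representation of appliances and the function names are assumed for illustration; the logic follows the text (students send only to teachers; a teacher sends either to one targeted student plus other teachers, or to the whole Classroom).

```python
# Illustrative routing sketch (names assumed) for an Education Team.
def student_recipients(classroom, sender):
    """A student's drawing-layer annotations go only to Teacher Appliances."""
    return [a for a in classroom if a["role"] == "Teacher"]

def teacher_recipients(classroom, sender, target_student=None):
    """A teacher sends to one student (plus other teachers) or to everyone."""
    if target_student is not None:
        return [a for a in classroom
                if a is not sender
                and (a["role"] == "Teacher" or a["id"] == target_student)]
    return [a for a in classroom if a is not sender]

room = [{"id": "T1", "role": "Teacher"},
        {"id": "S1", "role": "Student"},
        {"id": "S2", "role": "Student"}]
print([a["id"] for a in student_recipients(room, room[1])])  # -> ['T1']
```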
Likewise, Leader Appliances define the Leader Appliances, Member Appliances and Listener Appliances on their Music Teams. The Leader sends its annotations to all the other appliances in the team in its own layer. This layer is only modifiable by that particular Leader Appliance. The Leader Appliances can also send other messages, such as page turns, to the Music Team. The Member Appliances and Listener Appliances display all the Leader Appliance drawing layers in their team. Member Appliances and Listener Appliances operate the same on a Music Team; their operation differs on the Ad Hoc Team described later.
Facilitator Appliances define the Presenter Appliances, Facilitator Appliances and Attendee Appliances on the Meeting Teams. The Presenter Appliance has control of the other appliances in terms of what page they are viewing. The Facilitator Appliance controls which appliances are the Presenter Appliances and Attendee Appliances, what data layers are viewed by each appliance, and which data layers are editable by each appliance. The Attendee Appliances may be able to turn pages on their own, as determined by the Facilitator Appliance. The Attendee Appliances will be controlled as to what layers they can view and edit based on control from the Presenter and Facilitator Appliances. The Ad Hoc Team has no Coordinator Appliances and is composed of Leader Appliances, Member Appliances and Listener Appliances. The Ad Hoc Team uses all the appliances of these Roles available on the network. Leader Appliances and Member Appliances can send annotations to any other appliance in the Ad Hoc Team, but the Listener Appliances only receive annotations.
By utilizing the teachings of the present invention, a method is provided for displaying collaborative work as input by a plurality of users.
In a first embodiment, the method is comprised of providing annotation data for each of the plurality of users which is representative of the respective annotations by the respective user; storing the annotation data for each respective said user in a memory as associated with said each respective said user; enabling at least one user of the plurality of users to select which of the annotations are selected annotations that are used in generating the display presentation; and providing a combined display presentation to at least one user, the combined display presentation comprised of the selected annotations combined with a base core image.
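The first-embodiment steps above (per-user storage, selection, combination with a base core image) can be sketched minimally. The storage dictionary, function names, and string stand-ins for image/annotation data are assumptions for exposition.

```python
# Minimal sketch (names assumed) of the first embodiment: store annotation
# data per user, then combine a selected subset with the base core image.
annotations = {}  # per-user annotation storage

def store(user, data):
    """Store annotation data as associated with the respective user."""
    annotations.setdefault(user, []).append(data)

def combined_display(base_image, selected_users):
    """Combine the base core image with the selected users' annotations."""
    layers = [base_image]
    for user in selected_users:
        layers.extend(annotations.get(user, []))
    return layers

store("alice", "circle@(10,20)")
store("bob", "highlight@(5,5)")
view = combined_display("base.png", ["alice"])  # bob's marks excluded
print(view)  # -> ['base.png', 'circle@(10,20)']
```

The selection step is what lets each viewer receive a different combined display presentation from the same stored layers.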
The annotation data can be provided by any of multiple means, such as via a user input apparatus such as a keyboard, mouse, touchscreen input, stylus and digitizer input, voice recognition, camera recognition, import of images, scans, vector drawings, 2D and 3D models, audio, video or text data. This is discussed in further detail with relation to
The association of each user with his/her respective annotation data is provided by mapping logic whose configuration is responsive to the mapping control that is responsive to one or more control processors. The mapping logic is discussed in further detail with relation to
The generation of the display presentation is provided by the user display which is responsive to the display logic. The display logic is responsive to the aforementioned layer storage and mapping logic. The display logic is discussed in further detail with relation to
In a further embodiment to the one embodiment, the method is further comprised of enabling at least one user of the plurality of users to select which of the plurality of users is enabled to input the respective annotations therefor.
The global control processor obtains input from the at least one user of the plurality of users and communicates with all the mapping controls to control mapping logic and display logic for selecting the annotation that is the destination for user input from the respective input device. This is discussed in further detail with relation to
In a further embodiment to the one embodiment, the method is further comprised of enabling at least one user of the plurality of users to select which of the users are selected to have their respective annotations selected for use in generating the display presentation.
The global control processor obtains input from the at least one user of the plurality of users and communicates with all the mapping controls to control mapping logic and display logic for selecting which of the users are selected to have their respective annotations selected for use in generating the display presentation. This is discussed in further detail with relation to
In a further embodiment to the one embodiment, the method is further comprised of enabling at least one user, of the plurality of users, to do one of selectively enable and selectively disable utilizing of selected ones of the plurality of users said respective annotations in generating the display presentation provided for viewing to at least one of the plurality of users.
The global control processor obtains input from the at least one user of the plurality of users and communicates with all the mapping controls to control mapping logic and display logic to selectively enable and selectively disable utilizing of selected ones of the plurality of users said respective annotations. This is discussed in further detail with relation to
In a further embodiment to the one embodiment, the method is further comprised of enabling at least one user, of the plurality of users, to do one of selectively enable and selectively disable utilizing of selected ones of the plurality of users said respective annotations in generating the display presentation provided for viewing by another said user that is not the at least one user.
The global control processor obtains input from the at least one user of the plurality of users and communicates with all the mapping controls to control mapping logic and display logic to selectively enable and selectively disable utilizing of selected ones of the plurality of users said respective annotations in generating the display presentation provided for viewing by another said user that is not the at least one user. This is discussed in further detail with relation to
In a further embodiment to the one embodiment, the method is further comprised of displaying the user annotations for each respective said user within the combined display presentation as separately identifiable with the respective user as shown in the combined display presentation.
The control processors communicate setup information to the display logic that configures the display logic to add separately identifiable information for each respective user. The separately identifiable information can be in the form of a different color, a visual label added, a mouse over popup visual label, blinking visual effects, a different text font or character effect, modifying the location of the annotation on the display, 3D layer visualizations and other forms. This is discussed in further detail with relation to
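One of the identification forms listed above, a per-user color plus a visual label, can be sketched as a small decoration step in the display logic. The color table, function name, and output dictionary shape are assumptions for illustration.

```python
# Sketch (names assumed): the display logic tags each user's annotations
# with separately identifiable information, here a per-user color and label.
USER_COLORS = {"alice": "red", "bob": "blue"}

def decorate(user, annotation):
    """Attach identifying presentation attributes to a user's annotation."""
    return {"annotation": annotation,
            "color": USER_COLORS.get(user, "black"),  # default when unassigned
            "label": user}

d = decorate("alice", "circle@(10,20)")
print(d["color"])  # -> red
```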
In a further embodiment to the one embodiment, the method is further comprised of generating a separate and independent version of the combined display presentation for each of at least two of the plurality of users. Each said independent version of the combined display presentation is comprised of the respective said annotations of respective selected ones of the plurality of users as combined with the base core image.
The layer storage contains the base core image that is associated with all users. The mapping logic is configured to include the base core image or common data layer and data layers for annotations of respective selected ones of the plurality of users. This is discussed in further detail with relation to
In a further embodiment to the one embodiment, the method is further comprised of associating each respective said subgroup with at least two respective ones of the plurality of individual data layers which are associated with said respective subgroup. Each respective user of the plurality of users is associated with a respective said individual data layer. The method further enables each said respective user to selectively create associated respective said annotations. And the method stores the respective said annotation data for the respective said annotations in the respective said individual data layer associated with the respective user.
The control processors contain the information associating each user with a subgroup. A data layer is associated with each user. The edit level for each respective mapping logic user is set up to point to the respective data layer for each user. This is discussed in further detail with relation to
In a further embodiment to the one embodiment, the method is further comprised of associating each respective said subgroup with at least two respective said individual data layers associated with the respective said subgroup; and associating with an editable layer with each of the respective said subgroups, selectively enabling each respective said editable layer for each of said respective said subgroups, to permit at least one user of the respective plurality of users, within the respective subgroup, to create the annotations provided for display to the users of the respective subgroup.
The control processors contain the information associating a subgroup edit data layer with a subgroup. The edit level for at least one user in the respective mapping logic is set up to point to a subgroup edit data layer. Said at least one user can provide change information to the respective subgroup edit data layer. All users associated with the said subgroup include the subgroup edit data layer in the mapping table for display of the subgroup edit data layer and other data layers as needed. This is discussed in further detail with relation to
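The mapping-table setup just described can be pictured as follows; the dictionary shapes and the `"<subgroup>-edit"` naming convention are hypothetical assumptions, not the actual data structures.

```python
# Hypothetical mapping-table sketch: each user's edit level points at their
# subgroup's shared edit data layer, and their display mapping includes it.
def build_mapping(users_by_subgroup):
    mapping = {}
    for subgroup, users in users_by_subgroup.items():
        edit_layer = f"{subgroup}-edit"   # one shared edit layer per subgroup
        for user in users:
            mapping[user] = {"edit": edit_layer,
                             "display": ["base", edit_layer]}
    return mapping

m = build_mapping({"sg1": ["alice", "bob"], "sg2": ["carol"]})
print(m["alice"]["edit"])  # -> sg1-edit
```

Because every member of the subgroup includes the same edit layer in its display mapping, a change written by one member appears for all of them.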
In a preferred embodiment, at least one user selects a plurality of the annotations for a respective plurality of users, for use in generating the combined display presentation.
At least one of the control processors is responsive to said at least one user and communicates the selection of a plurality of the annotations for a respective plurality of users. Said at least one of the control processors communicates the selection to all other control processors. The control processors set up the mapping control for the respective users for use in generating the combined display presentation. This is discussed in further detail with relation to
In the preferred embodiment, each one of the respective plurality of users is separately identifiable in the combined display presentation.
The control processors communicate setup information to the display logic that configures the display logic to add separately identifiable information for each respective user. This is discussed in further detail with relation to
In an alternate embodiment, the method is further comprised of identifying at least one sub-grouping comprised of at least two of the respective plurality of users which form members of a respective separate subgroup of users; selecting the respective annotation data for the respective separate subgroup of users for the at least one said sub-grouping of the plurality of users, to be utilized in generating a first combined display presentation presented for viewing to only those said users in the subgroup of users; generating the first combined display presentation comprising the annotations of all the members of the respective separate subgroup provided for viewing by at least one of the respective plurality of users in the respective separate subgroup; generating a second combined display presentation for viewing by at least one other one of the plurality of users that are not members of the respective separate subgroup, wherein the second combined display presentation is comprised of the annotations of only one of the members of said respective separate subgroup and excludes the annotations of all the members except the only one member.
Thus, there can be subgroups of users that interact within themselves independently of the larger group of multiple ones of the subgroups. Within each subgroup, the users/members can see the annotations of some or all of the other members of the subgroup. However, only the designated one (or more if so designated) can see the annotations of the other subgroups, and the other subgroups can only see the annotations of the designated one (or more if so designated) user/member of a respective subgroup. This can be done for one subgroup with multiple members, and the rest of the plurality of users are not in subgroups, or there can be multiple subgroups, each as described above, or there can be multiple subgroups, each as described above plus the rest of the plurality of users are not in subgroups. The present invention works equally well in each of these scenarios as above. This aspect of the invention is discussed in further detail with relation to
With this embodiment of subgroups, the method is further comprised of generating the first combined display presentation comprising the annotations of all the members of the respective separate subgroup provided for viewing by all of the respective plurality of users in the respective separate subgroup.
The control processors contain the information associating each user with a subgroup. A data layer is associated with each user. The mapping table in each respective user's mapping logic is set up to point to the respective data layers for the respective plurality of users having the same subgroup as the said user. This is discussed in further detail with relation to
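By way of a non-limiting illustration, the mapping-table setup described above can be sketched as follows; all identifiers in this sketch are illustrative assumptions and not part of the specification:

```python
# Illustrative sketch: each user's mapping table points to the data layers
# of every user in the same subgroup. Names are assumptions for illustration.

def build_mapping_table(users, subgroup_of):
    """Map each user to the data layers used for that user's display."""
    table = {}
    for user in users:
        table[user] = [
            f"layer:{other}"
            for other in users
            if subgroup_of[other] == subgroup_of[user]
        ]
    return table

subgroup_of = {"alice": "A", "bob": "A", "carol": "B"}
mapping = build_mapping_table(["alice", "bob", "carol"], subgroup_of)
# alice's presentation draws on the layers of alice and bob (subgroup A),
# while carol's draws only on her own layer (subgroup B).
```

In practice the mapping could equally be realized as pointer lists or database tables, as noted elsewhere herein.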
In one embodiment, there are at least two separate subgroups of users comprised of at least two said sub-groupings, each said respective separate subgroup comprised of members comprising at least two of the plurality of users, and the method is further comprised of linking one said member of a first one of the at least two subgroups to a different one said member of a second one of the at least two subgroups; providing communication between the one said member and the different one said member, of the annotations of the different one said member and the one said member, respectively; generating a first linked and combined display presentation comprising the annotations of a plurality of the members of the first one of the at least two subgroups combined with the annotations of the different one member; and, displaying the combined display presentation responsive to the generating.
The control processors contain the information associating a subgroup edit data layer with a subgroup. The edit level for the first said member of a first one of the at least two subgroups in the respective mapping logic is set up to point to the first subgroup edit data layer. The first said member of the first one of the at least two subgroups can provide change information to the first subgroup edit data layer. The edit level for the second said member of a second one of the at least two subgroups in the respective mapping logic is set up to point to the second subgroup edit data layer. The second said member of the second one of the at least two subgroups can provide change information to the second subgroup edit data layer. The first said member of the first one of the at least two subgroups and the second said member of the second one of the at least two subgroups include in their respective mapping tables, for display, the first subgroup edit data layer, the second subgroup edit data layer, and the respective data layers for the respective plurality of users having the same subgroup as the said user. This is discussed in further detail with relation to
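A non-limiting sketch of the subgroup edit data layers described above follows; the class and method names are illustrative assumptions:

```python
# Illustrative sketch (names assumed): each subgroup has an edit data layer;
# the linked member of each subgroup writes change information to its own
# subgroup's edit layer, and both edit layers appear in each linked member's
# display mapping alongside the member layers of that member's own subgroup.

class EditLink:
    def __init__(self):
        self.edit_layers = {}  # subgroup id -> list of change records

    def write(self, subgroup, change):
        """A linked member provides change information to its edit layer."""
        self.edit_layers.setdefault(subgroup, []).append(change)

    def display_layers(self, own_subgroup, linked_subgroup, member_layers):
        """Layers used in a linked member's combined display presentation."""
        return member_layers + [("edit", own_subgroup),
                                ("edit", linked_subgroup)]

link = EditLink()
link.write("A", "circle paragraph 3")     # linked member of subgroup A
link.write("B", "underline paragraph 3")  # linked member of subgroup B
layers_for_a = link.display_layers("A", "B", ["layer:a1", "layer:a2"])
```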
In a further embodiment to the one embodiment, the method is further comprised of generating a second linked and combined display presentation comprising the annotations of a plurality of the members of the second one of the at least two subgroups combined with the annotations of the one member; and displaying to the different one said member the second linked and combined display presentation, responsive to the generating. For example, the education team has two groups, a teacher group and a student group. The teacher can display the annotations of all the students, together with the teacher annotations, on the teacher display simultaneously. This is discussed in further detail with relation to
In an addition to the first embodiment, the method is further comprised of organizing the memory as a plurality of mapped data layers; associating each individual data layer of the plurality of mapped data layers with at least one respective one of the plurality of users; storing the annotation data for each respective said user in a respective said individual data layer that is associated with the respective said user; enabling at least one user of the plurality of users to select which of the individual data layers are chosen as selected data layers that are used in generating the display presentation; and providing the combined display presentation comprised of the respective annotations for the respective said users associated with the respective selected data layers, combined with the base core image, responsive to the selected data layers. The users of appliances in a social team can change settings on the appliance to hide one or all of the team member layers. This is discussed in further detail with relation to
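As a non-limiting sketch of the layer selection just described (e.g., a user hiding team-member layers), under the assumption that annotations are stored per-user in named data layers:

```python
# Minimal sketch: combine the base core image with every data layer the
# viewing user has not hidden. All names are illustrative assumptions.

def compose(base_image, layers, hidden=()):
    """Return the combined display presentation for one viewer."""
    visible = {user: notes for user, notes in layers.items()
               if user not in hidden}
    return {"base": base_image, "annotations": visible}

layers = {"ann": ["arrow"], "ben": ["note"], "cat": ["star"]}
view = compose("worksheet.png", layers, hidden={"ben"})
# ben's layer is excluded from this viewer's combined presentation
```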
In the one embodiment, the method is further comprised of generating a same combined display presentation for viewing by at least two users of the plurality of users for the respective subgroup. For example, the education team has two groups, a teacher group and a student group. The teacher can display the annotations of a single student, together with the teacher annotations, on the teacher display. These are the same annotations shown on the said student's display, so each views exactly the same display. This is discussed in further detail with relation to
In the one embodiment, the method is further comprised of generating a different separate respective combined display presentation for respective viewing by at least two respective users of the plurality of users for the respective subgroup. For example, the teacher can display the annotations of just the teacher on the teacher display, while the students display the annotations of the teacher and the student's own annotations. Every appliance has a different display. This is discussed in further detail with relation to
In a further embodiment to the one embodiment, the method is further comprised of associating each said individual user with a respective computing appliance; providing the input of the respective annotations for the respective individual user responsive to each respective said individual user; and providing the respective combined display presentation to the respective individual user associated therewith on a respective display apparatus at each respective one of the plurality of computing appliances. For example, the teacher can display the annotations of just the teacher on the teacher display, while the students display the annotations of the teacher and the student's own annotations. Every appliance has a different display. The input device for each appliance is responsive to the respective annotation data layer that is being displayed, e.g., the teacher modifies the teacher layer and every student modifies their own layer. This is discussed in further detail with relation to
Multiple alternative systems are illustrated and described herein for implementing the one embodiment (as well as other embodiments), utilizing a plurality of computing appliances each with a display apparatus for displaying collaborative work comprised of a display presentation to at least one user of a base core image in combination with selected annotations as input by a plurality of users. One such system for the one embodiment is comprised of input apparatus providing annotation data for each of the plurality of users which is representative of the respective annotations by the respective user responsive to user input at said respective input apparatus; memory storing the annotation data for each respective said user in an area of the memory as associated with said each respective said user; logic enabling at least one user of the plurality of users to select which of the annotations for which of the plurality of users are selected annotations, that are used in generating the display presentation; and the display apparatus providing a combined display presentation comprised of the selected annotations combined with the base core image. The overall system is described in
The annotation/user selection logic enables at least one user of the plurality of users to select which of the annotations for which of the plurality of users are selected annotations, that are used in generating the display presentation. The annotation/user selection logic can be comprised of pointer lists, database tables, tables, vector lists, and is described in greater detail in relation to
The display apparatus provides a combined display presentation comprised of the selected annotations combined with the base core image. The display apparatus can be comprised of graphic adapters, software code for creating the combined image, specialized graphic processors, and is described in greater detail in relation to
In this system, at least one user selects a plurality of data layers for at least two users of the plurality of users, for use in generating the combined display presentation.
In a preferred embodiment of this system, each of the respective plurality of users is separately identifiable in the combined display presentation.
The preferred embodiment of this system is further comprised of logic for identifying a plurality of sub-groupings of the respective plurality of users as respective separate subgroups of users; memory storing data for each one of the plurality of users in a respective one of a plurality of data layers for at least one said subgroup of users, for use in generating a first combined display presentation, for viewing by only those said users in the subgroup of users; display logic generating the first display presentation comprising the annotations of all the members of the subgroup for viewing by at least one (or some or all) of the respective said users in the subgroup; and wherein the display logic generates a second combined display presentation for viewing by other users who are not in the subgroup that utilizes the annotations of only one said user of the plurality of users in the subgroup.
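The two presentations described (the first for subgroup members, the second for non-members) can be sketched, by way of non-limiting illustration, as follows; all names are assumptions:

```python
# Illustrative sketch: subgroup members see every member's annotations
# (first combined display presentation); users outside the subgroup see
# only the designated member's annotations (second combined display
# presentation).

def presentation_for(viewer, members, designated, annotations):
    chosen = members if viewer in members else [designated]
    return {user: annotations[user] for user in chosen}

annotations = {"t1": ["hint on q1"], "t2": ["hint on q2"], "out": []}
inside = presentation_for("t1", ["t1", "t2"], "t1", annotations)
outside = presentation_for("out", ["t1", "t2"], "t1", annotations)
# inside shows t1 and t2; outside shows only the designated member t1
```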
In an alternate embodiment of the one embodiment, there is one said separate subgroup; wherein each member of that one said separate subgroup is linked to see all members' annotations from other members of that one said separate subgroup, and wherein one member of that one said separate subgroup is also linked to at least one, up to all, of the other users who are not members of that one said separate subgroup.
In another embodiment, there are at least two separate subgroups, wherein within each of the at least two separate subgroups each member of that said separate subgroup is linked to see all members' annotations from other members of that said separate subgroup, and wherein one member of each of the two said separate subgroups is also linked to show its annotations from that one user/member for viewing by at least one, up to all, of the other users who are not members of that one said separate subgroup.
In yet another embodiment, there are the at least two separate subgroups, wherein within each of the at least two separate subgroups each member of that said separate subgroup is linked to see all members' annotations from other members of that said separate subgroup, and wherein one member of each of the two said separate subgroups is also linked to show its annotations from that one user/member for viewing by a respective said one member of the other one of the at least two separate subgroups. Optionally, each of the one member of each of the two said separate subgroups is also linked to show its annotations from that one user/member for viewing by at least one, up to all, of the other users who are not members of either one of the two said separate subgroups.
Thus, there can be multiple separate subgroups of users, wherein the members of each such separate subgroup interact among themselves independently of the larger group of multiple ones of the subgroups. Within each subgroup, the users/members can see the annotations of some or all of the other members of the subgroup. However, only the designated one (or more if so designated) can see the annotations of the other subgroups, and the other subgroups can only see the annotations of the designated one (or more if so designated) user/member of a respective subgroup. This can be done for one subgroup with multiple members, and the rest of the plurality of users are not in subgroups, or there can be multiple subgroups, each as described above, or there can be multiple subgroups, each as described above plus the rest of the plurality of users are not in subgroups. The present invention works equally well in each of these scenarios as above. This aspect of the invention is discussed in further detail with relation to
In yet another embodiment, the another member of the other one (the second of the at least two subgroups) of the at least two subgroups has a display presentation generated of a second linked and combined display presentation comprising the annotations of a plurality of the members of one to all of the other one(s) of the at least two subgroups, combined with a display of the annotations of the one member of the first of the at least two subgroups. This aspect of the invention is discussed in further detail with relation to
In a further extension of the system in the one embodiment, the memory is comprised of a plurality of mapped data layers. Each of the plurality of mapped data layers is associated with at least one respective one of the plurality of users. The storage provides storing of the annotation data for each respective said user in a respective mapped data layer associated with the respective said user. In this further extension, the system further comprises control logic enabling at least one user of the plurality of users to select which of the mapped data layers are selected data layers that are used in generating the display presentation. The display apparatus provides a combined display presentation comprised of the selected users' annotations combined with the base core image, responsive to the selected data layers. This aspect of the invention is discussed in further detail with relation to
In a preferred embodiment of the systems illustrating the embodiments of the present inventions, there is provided the ability for a user (or more than one user) to select which user or users are permitted to make annotations. This embodiment provides input logic enabling at least one user to select which of plurality of users is enabled to input the annotations for use in generating the display presentation. This aspect of the invention is discussed in further detail with relation to
In an alternate embodiment or additional aspect of this preferred embodiment, there is provided the ability for a user (or more than one user) to select which user or users annotations (for which of the users) is to be utilized in the generation of the display presentation, either for that one user, or for one or more other users. This embodiment provides input logic enabling at least one user to select which of the users are selected to have their respective annotations selected for use in generating the display presentation. This aspect of the invention is discussed in further detail with relation to
In accordance with one aspect of this preferred embodiment, the system is further comprised of control logic enabling at least one user to selectively enable and disable selected ones of the plurality of users to have their respective annotations selected for use in generating the display presentation viewed by at least one of the plurality of users.
Thus, at least one of the users (or more) can select, turning on and off at will, which of the users' annotations will be utilized in generating the display presentation for that one user or for other one or ones of the users. This aspect of the invention is discussed in further detail with relation to
In another alternate embodiment of this preferred embodiment, the user annotations for each respective said user is provided within the combined display presentation as separately identifiable with the respective user as shown in the combined display presentation. Thus, for example one user can be red, another user blue, etc. Or, users who are members of a first subgroup can have a first set of colors (either different families of colors, or different hues within a same color) and each other subgroup has its own unique and identifiable set of colors. Thus, when viewing the display presentation, it is readily and easily identifiable as to which user of which subgroup made which annotations in the combined display presentation. This aspect of the invention is discussed in further detail previously herein in regard to context information.
In one alternate embodiment, a separate and independent version of the combined display presentation is provided to at least two of the plurality of users within the respective subgroup; and each said independent version of the combined display presentation is comprised of the respective said annotations of selected ones of the plurality of users as combined with a base core image. Thus, one user can see one subset of users' annotations as overlaid atop of and aligned to the base core image in a first independent version of a combined display presentation, while another user can see yet another subset of (the same or overlapping or completely different ones of) users' annotations as overlaid atop of and aligned to the base core image in a second independent version of a combined display presentation. And this can be done without limitations. It can be done for users within a same subgroup, or for users within different subgroups, or for independent users not within any subgroups at all. The users of appliances in a social team can change settings on the appliance to hide one or all of the team member layers. This is discussed in further detail with relation to
In an alternate embodiment of the subgrouping aspect of the present inventions, each of the subgroups is comprised of at least two of the plurality of mapped data layers associated with said subgroup; and each user is associated with at least one of the mapped data layers that is an editable data layer that can be selectively enabled (or disabled) to permit (or not allow) the respective user to create the annotations for the respective user. For example, the education team has two groups, a teacher group and a student group. The teacher can display just the teacher annotations on the teacher display, or choose to display the teacher annotations combined with a particular student's data layer. The teacher's input changes from the teacher layer to the said student's data layer in the latter display. This is discussed in further detail with relation to
In the alternate embodiment of the subgrouping aspect of the present inventions, the system can additionally be implemented such that each of the subgroups is associated with at least two of the plurality of mapped data layers associated with said subgroup; and such that each of the subgroups is associated with an editable layer that is selectively enabled to permit at least one of the respective plurality of users to create the annotations for the respective subgroup. This aspect of the invention is discussed in further detail with relation to
In an option to the alternate embodiment of the subgrouping aspect of the present inventions, a same combined display presentation is generated at at least two of the plurality of users for the respective subgroup. Thus, those at least two of the plurality of users for the respective subgroup view the combined display presentation of selected annotations and the base core image concurrently while they work in real time together on editing/annotating relative to the same base core image, providing for a collaborative work result. This aspect of the invention is discussed in further detail with relation to
In an alternate option to the alternate embodiment of the subgrouping aspect of the present inventions, a different separate combined display presentation is generated at at least two of the plurality of users for the respective subgroup. Thus, each of those at least two of the plurality of users for the respective subgroup views a different combined display presentation of different ones of selected annotations for respective users, combined with the display of the base core image, concurrently while they work in real time on editing/annotating relative to the same base core image, providing for a collaborative work result. Thus, each of the subgroups can view selected members/users of their respective group independently of the other subgroups' users, or different users (whether or not within a same subgroup) can view selected users' annotations independently of what another user is viewing of a separate set of selected users (some the same, or all different). This aspect of the invention is discussed in further detail with relation to
Multiple embodiments as described hereinafter are illustrated in and described relative to
As illustrated in a first embodiment, there is a plurality of computing appliances. Each respective one of the plurality of computing appliances has a user input apparatus to provide respective annotation data as input for the respective user to permit the respective user to provide annotations which appear within a display presentation as an image for viewing and representative of the annotations. A first subset of the plurality of computing appliances comprising at least two of the computing appliances form members of a group. There is provided separate selective communication of the annotation data among the members of the group and there is selective display of the annotation data to selected ones of the members of the group. The annotation data is selectively coupled for distribution to other ones of the plurality of computing appliances in the group for selective viewing at said certain other ones of the plurality of computing appliances. Selection logic is included that permits at least one of the users to selectively choose which of the other users within the group are selected ones that are to have their corresponding annotations viewed as part of the display presentation for the at least one of the users. The selective display is visible only at the computing appliances of the selected ones of the members of the group. The subset groups in an education team are the teacher/student subgroups and the teacher/classroom subgroups as illustrated in
In a second embodiment, there are a plurality of separate groups, each said separate group comprising at least two of the plurality of computing appliances. Each of the plurality of computing appliances is associated with a respective separate one of the plurality of users. Annotations made by members of a respective said separate group are communicated only amongst the respective said members of the respective said separate group, for selective storage in an associatively mapped data layer in memory, and for utilization in the display presentation of the annotations made by members of a same respective said separate group. A separate display presentation for that group is generated and is commonly viewable at each said respective computing appliance for each said respective user that is a member of the group, responsive to the communicated annotations.
In a further embodiment, logic associatively maps at least one said member of each of the separate groups as also being an inter-group member of an inter-group group. Annotation data for each said inter-group member is communicated to the respective members of the same said separate respective group as the respective inter-group member, and said annotation data is also communicated to other ones of the inter-group members associated with other said separate groups, for each of the other separate groups. Control logic associatively maps members of at least two separate groups into respective separate teams, each said respective said team comprising a separate respective plurality of members of the respective group. Communications among the members of the separate team are provided and used in selective generation of a local display presentation for each respective member of the respective separate team. Also, at least one of the members for each separate team is also an inter-team member that concurrently communicates with all users within the group.
In one embodiment, a coordinator function permits a user to select (e.g., touch) the text that someone else typed (or select the markings of someone else) as selected data, and have the selected data become sub-team-to-sub-team or team-to-team communicating data that is sent (such as by simply touching it, selecting it, and have a button to touch to send it). In an alternate embodiment, at least one of the plurality of users acts as a coordinator which selects selected annotations from the user annotations for each of the plurality of users within a separate group within a group (e.g., such as illustrated in
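A non-limiting sketch of the coordinator function just described, forwarding a selected annotation as team-to-team communicating data, follows; all names are illustrative assumptions:

```python
# Illustrative sketch: the coordinator selects another user's annotation and
# forwards it to each target team as team-to-team communicating data.

def coordinator_send(selected, source_team, target_teams, outbox):
    """Queue the selected annotation for delivery to each target team."""
    for team in target_teams:
        outbox.setdefault(team, []).append((source_team, selected))
    return outbox

outbox = {}
coordinator_send("typed answer", "team_a", ["team_b", "team_c"], outbox)
# team_b and team_c each receive ("team_a", "typed answer")
```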
In a preferred embodiment, there are at least two groups. Each said group is comprised of members from the plurality of users, and communications of respective said annotation data is provided between the members of the group, for each of the groups. A common display presentation is provided to each of the members of a said group, which display presentation is viewable in common by members of that said group only, and is not for viewing by any said members of any other said group, respectively, for each of the groups, respectively.
For each of the groups, there is at least one member that also functions as a representative for the respective group to provide group-to-group communication of selected ones of the annotations for each said respective group and viewed by all members of other said groups, for all the groups working in collaboration.
In an alternate embodiment, one of the users of a respective group coordinates and controls selection of which, if any, of the respective annotation data for the respective users of the respective group is utilized in generating of a combined display presentation for each of the respective users in the respective group. Alternatively or additionally, at least one user of the plurality of users controls selection of and combination of annotation layers used in generating the display presentation for each of the computing appliances to provide at each of the computing appliances a respective display presentation comprised of selected ones of the annotation layers combined together to generate a display presentation comprising a display image combined atop a core-base image as an overlay atop it, of all of the users' annotations shown as aligned relative to and atop of an underlying image of the core base image display presentation.
In another embodiment, a respective user can control how many separate split-screen display screens are displayed as the display presentation, at that user's computing appliance only, or at each and all respective said computing appliances in a group.
In a further alternate embodiment, one user controls which combination of user annotations (preferably selected “by user”) are combined and overlaid with the core image and integrated into a combined video display presentation for each of the split-screen display presentations, and which of the users' annotations are combined in which of the display presentations of the split-screen display presentations.
In still another embodiment, there are a plurality of separate screen display presentations provided in a split-screen combined presentation to each of the respective users at each said respective computing appliance. Coordinator logic selects which of the users' annotations are utilized in generating each of the display presentations for each of the respective users' computing appliances. In a further embodiment, a plurality of the screen display presentations are generated in separate (display) windows, comprising at least one window displaying just the respective user's annotations combined with the core image as utilized to generate the respective display presentation, for all of the members of the respective group, responsive to collaborating together. A display presentation is generated in a second (display) window of the separate windows, displaying the annotations as made by less than all the members of the respective group, shown as aligned atop the underlying core-base image. In a further preferred embodiment, at least one user provides selection of users for selective groupings or sub-group(s) within a group of users within the groups (or sub-groups). The second window provides a display presentation of the image of the annotations of only the users of the respective sub-grouping combined and aligned atop the image of the underlying core-base image for inclusion within the generated display presentation. There can be multiple windows, of multiple combinations and permutations of users' annotation displays, and even some windows concurrently displaying additional application software (such as web-browsers, word processing, etc.).
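The split-screen windows described above can be sketched, by way of non-limiting illustration, under the assumption that each window pairs the core image with its own subset of users' annotations:

```python
# Illustrative sketch: one combined presentation per window, each window
# drawing on its own selection of users. All names are assumptions.

def build_windows(core_image, annotations, window_specs):
    """window_specs: one list of selected usernames per window."""
    return [
        {"base": core_image,
         "annotations": {user: annotations[user] for user in spec}}
        for spec in window_specs
    ]

annotations = {"me": ["draft"], "peer1": ["edit"], "peer2": ["comment"]}
windows = build_windows(
    "doc.png", annotations,
    [["me"], ["me", "peer1", "peer2"]],  # own-work window, whole-group window
)
```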
In another embodiment, the display presentations of annotations made by the respective members of a respective group are shown on the screen display presentation for all of the plurality of screen display presentations for the members of the respective group, so that all have the same said display presentation. In an alternate embodiment, at least one of the members of the group is provided with a display of a screen display presentation that is at least in part different than the display presentation provided to other members of the respective group. In another alternate embodiment, the display presentation is comprised of annotations for all members of the respective group as results from concurrently collaborating together. The display presentation is provided as a common display presentation of said annotations provided to all members of the respective group. In yet another alternate embodiment, only the annotations of the selected ones of the members of the respective group are utilized in generating the combined display presentation. In still yet another alternate embodiment, only the annotations of a selected said subset of users are utilized in generating the combined display presentation utilized in at least one of said separate windows of the plurality of display presentations.
In another alternate embodiment, multiple different ones of the windows of the display presentation are generated utilizing the respective annotations for respective ones of multiple different selected sub-groupings of users within the respective grouping, to provide the combined display presentation for each of the multiple different ones of the windows.
In a further preferred embodiment, logic controls selective enablement (to permit input of annotations) as to a respective at least one of the plurality of users, to permit creation by that user of associated annotations made relative to a base image display presentation, and provides control of communication of data (and display) representative of the annotations. The annotations are input by each respective user while that user is concurrently being shown a display presentation of the base image. An image of the annotations that are made concurrently by the users of at least two of the computing appliances can also be displayed (while being made, and selectively thereafter) for viewing shown as aligned atop of the display presentation of the base image.
The coordinator logic controls enabling as to each of the users, permitting said user to input annotations and generate annotation data, and determines whether and when to utilize the respective annotation data in the generation of the respective display presentation for each said respective user.
Group control logic controls the selective enabling of use of the data layers, determining which ones of the users are selected members that are part of a respective group, which of the data layers (and therefore which associated users) to use for permitting edit input, and which of the data layers to use to generate display presentations.
In a fourth embodiment, means for collaboration is provided among a plurality of users at a plurality of computing appliances, and includes displaying a display presentation, inputting respective user annotation data, viewing of user annotation data, storing in a memory, selecting the user layer data, and generating a combined display presentation. The display presentation is of a base image to be viewed. User annotation data is generated (via user input) by each of a plurality of users, each associated with a respective one of the plurality of computing appliances, each with its own input apparatus to permit input of respective user annotations made relative to the display presentation of the base image, and the user annotation data is viewed by the respective user. The respective user annotation data for each respective user is stored in a memory as respective user layer data in a respective data layer that is mapped to and associated with the respective user. The user layer data is selected for the associated said respective user for at least two of the computing appliances. A combined display presentation is generated that utilizes a combination of the base image display presentation aligned to and overlaid with the display presentation of the image of the annotation data generated by the at least two of the plurality of users.
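The per-user data layers and the combined overlay presentation described above can be sketched as follows. This is a minimal illustration only; the class and method names (`DataLayer`, `LayerStore`, `combined_presentation`) are assumptions, not terms from the specification.

```python
from dataclasses import dataclass, field

@dataclass
class DataLayer:
    """Annotation data for one user, mapped to that user's identity."""
    user_id: str
    strokes: list = field(default_factory=list)  # annotation primitives

class LayerStore:
    """Memory structured as per-user data layers atop a common base image."""
    def __init__(self, base_image):
        self.base_image = base_image
        self.layers = {}  # user_id -> DataLayer

    def annotate(self, user_id, stroke):
        # Each user's input goes only into that user's own layer.
        self.layers.setdefault(user_id, DataLayer(user_id)).strokes.append(stroke)

    def combined_presentation(self, selected_users):
        """Base image overlaid with the layers of the selected users only."""
        frame = [self.base_image]
        for uid in selected_users:
            layer = self.layers.get(uid)
            if layer:
                frame.extend(layer.strokes)
        return frame
```

Because each user's annotations live in a separate layer, selecting which members' annotations appear is just a matter of choosing which layers to composite.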
In one embodiment, a presentation of multiple separate display windows is displayed to at least one user (and preferably at least two) as a split-screen display. In a further embodiment, the split-screen display also shows different combinations of multiple users' annotations combined with the base image display, as separate ones of the multiple separate display windows.
The present invention may be better understood by reference to the following drawings along with their detailed description.
The synchronization logic can operate in different synchronization modes; three modes are illustrated. The first is the Change Sync Mode, in which all changes made to the respective Layer Data in a respective Data Layer, responsive to a respective user's edits, are communicated to all layer storage locations that store the same layer, regardless of user. In the Full Sync Mode, all Layer Data in a respective Data Layer is communicated to all layer storage locations that store the same layer, regardless of user. This mode is used only infrequently, and usually at one location. There are times when synchronization fails, due to network failures, some layer storage not being available during operation, or other failures; in such cases a layer storage is no longer an exact duplicate of the others, the synchronization logic has lost track of what to do, and the best copy of the layer storage needs to be communicated to the layer storage that is out of synchronization. The Full Sync Mode restores synchronization, after which the mode can be returned to the normal state of Change Sync Mode. Because the Full Sync Mode requires significant bandwidth to communicate all Layer Data to all the duplicate copies, it is generally impractical to perform continuously and is done only when necessary. The last mode is the No Sync Mode, which turns off synchronization for a particular layer storage location. This can be used as a graceful way to remove layer storage from the system, or to make changes offline and then implement them quickly by making changes to this layer storage, changing the mode to Full Sync Mode, and finally returning to Change Sync Mode. Each layer storage location, and each Data Layer stored in said location, can be set to one of these modes independently; usually, however, all are set to Change Sync Mode.
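The three modes can be sketched as a small state machine. This is an illustrative sketch under assumed names (`SyncMode`, `LayerStorage`, `apply_change`, `full_sync`); the specification describes the modes but not any particular implementation.

```python
from enum import Enum

class SyncMode(Enum):
    CHANGE = "change"  # normal state: propagate only edits as they occur
    FULL = "full"      # bandwidth-heavy recovery: copy all layer data wholesale
    NONE = "none"      # synchronization off for this storage location

class LayerStorage:
    """One storage location holding a duplicate copy of a data layer."""
    def __init__(self):
        self.mode = SyncMode.CHANGE
        self.data = {}

    def apply_change(self, key, value, peers):
        self.data[key] = value
        if self.mode is SyncMode.CHANGE:
            for p in peers:          # send only the delta, not the whole layer
                p.data[key] = value

    def full_sync(self, peers):
        """Restore out-of-sync peers from this (best) copy, then return to normal."""
        self.mode = SyncMode.FULL
        for p in peers:
            p.data = dict(self.data)  # replace each duplicate wholesale
        self.mode = SyncMode.CHANGE   # back to the normal Change Sync Mode
```

A desynchronized peer (say, one that was offline during an edit) is repaired by running `full_sync` once from the best copy, rather than paying the full-copy cost on every edit.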
This illustration shows both voice and video used simultaneously with the display computing appliances; however, only the voice or only the video could be used with the display computing appliance. The connections 2306, 2356, 2308, 2358 and 2310 can be separate physical connections, or can be combined into a single connection, fewer connections, or a networking-type physical network connection. The signals 2306, 2356, 2308 and 2358 can be analog or digital in nature. Other multimedia collaboration systems can be added with voice and video capabilities. In cases of 3 or more multimedia collaboration systems, the microphone and camera signals, similar to 2306 and 2308, can be optionally coupled to the display and speakers of one or a plurality of other multimedia collaboration systems, providing video conferencing capabilities for more than one user. In addition, the audio could be provided by a source other than a microphone, such as a digital audio file, video file, tape recorder, MIDI file or other audio source. Likewise, the video does not need to be supplied by a video camera and could be provided by a digital video file, video tape, or other video source.
The tables are coupled together by common columns. When a row in one table and a row in another table have a common column and the values are the same, then the other columns in each of the tables are related; this is called a join in a relational database. The doc table is coupled, 4515, to the docpage table via a common column docid. The docpage table is coupled, 4525, to the value table, 4530, and pageimage table, 4550, via a common column pagid. The pageimage table is coupled, 4555, to the image table, 4560, via the common column imgid. The rows of the doc table represent a set of documents in a particular order, much like books carefully placed in a bookshelf. The columns in the doc table, 4511, are docid, title, visible and time. The docid is a unique identifier of the document. The title is the title of the document. Visible determines whether the document will be shown or hidden. Time is the date the document was created. Other document attributes can also be stored in other columns. The rows of the docpage table each represent a page in a document, and the order of the rows puts the pages in order as they are in a book. The columns are docid, pagid, and visible. There are usually many rows with the same docid, one for each page in the document, but each row has a different pagid. This allows pages to be quickly reordered using just this table. The visible column allows an individual page to be shown or hidden. The value table has the following columns: pagid, pageno, split, doodle_dir, doodle_esf, doodle_file, section and pagemark. The pagid is used to couple the rows of the docpage table, and the pagid values are unique in each row of the value table. The pageno column provides a page number for the page which is used for the display for the user. The split value provides a location on the page where it can be split between the top and bottom halves for display.
The doodle_dir provides a directory in the file system, 4540, where a plurality of Data Layers store their layer data for this page in a doodle file. The doodle_esf provides for a container file, like a zip file, that contains the doodle file. The doodle_file provides the name of the file in the file system or in the doodle_esf container. Section provides a descriptive name for this page and all pages that follow in this document, until a new section is provided, for the user. Pagemark provides a descriptive name of this page only for the user. The pageimage table, 4550, has two columns: pagid and imgid. The pagid values are unique in each row of the pageimage table. The imgid usually is unique but not always; this allows an image to be duplicated by using the same imgid value for two different pagid's. The image table, 4560, provides the location and attributes of the base common images used as the common layer data for the page. The columns in the image table are imgid, imgdir, imgname, imgesf, imgwidth, imgheight, imgpageno, and time. The imgid is a unique identifier for a set of representations of an image that is visually identical to the user. Multiple images are stored at different resolutions but visually look the same to the user. This reduces the computations required by the appliance to resize an image to fit on a particular display: the closest best fit can be selected from what has been precomputed and stored. The imgdir provides the directory in the file system where the image is located. Imgesf is the file name of a container that can hold many files, like a zip file. Imgname is the file name of the image on the file system directly, or the name in the imgesf if provided. Imgwidth and imgheight provide the width and height of the image so the optimum image can be selected.
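The table relationships above can be sketched with an in-memory database. The column names and the join chain (doc → docpage on docid, docpage → pageimage on pagid, pageimage → image on imgid) come from the description; the column types and the sample rows are illustrative assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE doc       (docid INTEGER PRIMARY KEY, title TEXT, visible INTEGER, time TEXT);
CREATE TABLE docpage   (docid INTEGER, pagid INTEGER, visible INTEGER);
CREATE TABLE "value"   (pagid INTEGER PRIMARY KEY, pageno INTEGER, split INTEGER,
                        doodle_dir TEXT, doodle_esf TEXT, doodle_file TEXT,
                        section TEXT, pagemark TEXT);
CREATE TABLE pageimage (pagid INTEGER PRIMARY KEY, imgid INTEGER);
CREATE TABLE image     (imgid INTEGER, imgdir TEXT, imgname TEXT, imgesf TEXT,
                        imgwidth INTEGER, imgheight INTEGER, imgpageno INTEGER, time TEXT);
""")

# One document with one page backed by one stored image (sample data).
conn.execute("INSERT INTO doc VALUES (1, 'Sonata', 1, '2011-05-06')")
conn.execute("INSERT INTO docpage VALUES (1, 10, 1)")
conn.execute("INSERT INTO pageimage VALUES (10, 100)")
conn.execute("INSERT INTO image VALUES (100, '/img', 'p1.png', NULL, 800, 600, 1, NULL)")

# Walk the join chain from document title to base-image file name.
row = conn.execute("""
    SELECT d.title, i.imgname
    FROM doc d
    JOIN docpage p   ON d.docid = p.docid
    JOIN pageimage pi ON p.pagid = pi.pagid
    JOIN image i     ON pi.imgid = i.imgid
""").fetchone()
```

Because pages are ordered by rows in docpage alone, reordering a document touches only that one table, as the description notes.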
Since there can be multiple rows with the same value of imgid, the image index indicates which of the rows contains the selected image for a page and display. Find best image, 4460, uses the imgid, 4565, the matching image table rows, 4562, and the desired size, 4471, to find the best image location, 4467. The location is combined into the full_path_name, 4545. The file system, 4540, is responsive to the full_path_name coupled by 4543 and outputs the file data, 4541, via 4542, which is the image data to be sent to the display. The “Is page info in library?”, 4450, is responsive to the pagid coupled by 4525 and 4451. Its output, 4457, provides information about the page, including the location of the Layer Storage except the base Common Data Layer (doodle). The full_path_name, 4545, is responsive to the page information and creates a description of the location of the Layer Storage for the file system, 4540. The file system is responsive to the coupling signal 4543, and outputs via 4542 the file data, 4541, which is the Layer Storage except the base Common Data Layer, for processing by the display logic in the coordinator control logic or non-coordinator control logic.
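The "find best image" step can be sketched as follows: among the stored representations sharing one imgid, pick the precomputed resolution closest to the desired display size, so the appliance avoids a costly resize. The function name and the width-only comparison are simplifying assumptions; an implementation could also weigh height or prefer downscaling.

```python
def find_best_image(rows, desired_width):
    """rows: (imgname, imgwidth) entries sharing one imgid.
    Return the precomputed representation whose width is closest
    to the desired display width."""
    return min(rows, key=lambda r: abs(r[1] - desired_width))

# Three precomputed resolutions of the same visually identical image.
candidates = [("p1_small.png", 400), ("p1_med.png", 800), ("p1_large.png", 1600)]
```

For a 900-pixel-wide display, the 800-pixel representation is the closest fit among these candidates.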
The my team table has one column, teammember, and lists the members of the appliance's team. This table is only present on a coordinator appliance. The program table has two columns, prog_param and prog_value. Each row of the program table has a prog_param value which defines a parameter for the application; the prog_value in the same row is the value for that parameter. These parameters define a number of items for the appliance, like its name, the current document and page being displayed, and many other items that need to be remembered when the appliance is turned off and then back on. The registry table has two columns, reg_param and reg_value. It is similar to the program table but is used for installation parameters, and in some cases may be stored in a different manner than the other tables. The key table has the following columns: docid, appid, time, key. This table is used to store encryption keys that some of the content may be using. The key column contains the key for a document specified by docid and for a specific appliance specified by appid. A group of documents may have been created at the same time and can share the same key, so they are identified with the time column.
In a music team, there can be a leader appliance, 4710, which is the coordinator; the leader appliance can make changes and communicate these changes (ranging from edits to a display document, to page jumps, to document ordering, to import or export, etc.) to all the other appliances. There may be multiple leader appliances on a team.
A member appliance, 4720, is one that generally listens to what the leader appliance is doing but may be able to have some other limited functionality (such as making local-only edits, or local-only page jumps, or local-only document order changes).
A listener appliance, 4730 and 4740, is similar to a member appliance, but it is strictly only able to listen to commands and input from the members and leaders, and cannot make any changes by itself.
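The three appliance roles can be summarized as a small permission check. This is an illustrative sketch; the names `Role` and `may_edit` are assumptions, and the description allows a member appliance's exact "limited functionality" to vary.

```python
from enum import Enum, auto

class Role(Enum):
    LEADER = auto()    # coordinator: changes propagate to every appliance
    MEMBER = auto()    # follows the leader; may make local-only changes
    LISTENER = auto()  # strictly receives; cannot make any changes itself

def may_edit(role, local_only):
    """Whether an appliance with this role may make a change.
    local_only=True means the change affects only this appliance's display."""
    if role is Role.LEADER:
        return True                # leaders may make global or local changes
    if role is Role.MEMBER:
        return local_only          # members are limited to local-only changes
    return False                   # listeners never make changes
```

Under this sketch, a member appliance can make a local-only page jump but cannot push a document reordering to the team, while a listener can do neither.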
Sitrick, David H., Fling, Russell T.