A peer-to-peer (p2p) communication system is provided. One or both of audio and video can be transferred among a number of user terminals in the p2p system. The user terminals include at least one master terminal. The master terminal is identified, based on a determined topology, using obtained information provided in a data parameters table. Each user terminal includes a control for processing video and/or audio that is sent and/or received by the user terminal. The master terminal control can process the same video and/or audio and generate different video and/or audio data to be provided to different user terminals. Such different video data is a function of the communication paths between the master terminal and the different user terminals. Such different audio data can be a function of virtual relative positions associated with the user terminals. Audio volumes can also be separately controlled by each user terminal. Video and/or audio can be transferred in an aggregate manner, such as when the master terminal sends audio originating from a number of user terminals to another user terminal. This p2p system can be utilized with online game playing.
1. A method for communicating at least voice, comprising:
providing a first plurality of user terminals including at least a first master and a number of slaves, including at least a first slave and a second slave, at least said first master, said first slave and said second slave being parts of a particular peer-to-peer system, with said at least first master functioning differently than each of said first and second slaves, each of said plurality of user terminals having at least an audio input device and an audio output device, said first plurality of user terminals being associated with a first determined topology that relates to connections among them in said particular peer-to-peer system,
wherein each master including said at least first master is a user terminal that has at least the following master functions:
(i) provides its own audio data;
(ii) combines at least (a) audio data provided by at least one other user terminal and (b) its own audio data or audio data from a second other user terminal;
(iii) combines audio data only when said user terminal outputs processed audio data to at least one slave such that the communication path therebetween is without any other user terminal intermediate thereof; and
(iv) outputs audio data using its own audio output device;
and each slave including said first slave and said second slave is a user terminal that has at least the following slave functions:
(i) provides its own audio data;
(ii) processes only its own audio data, including not combining audio data from any other user terminal;
(iii) does not send another user terminal's audio data to any other user terminal; and
(iv) outputs audio data using its own audio output device;
processing audio data from said first slave using said at least first master to generate processed first slave audio data;
processing audio data from said second slave using said at least first master to generate processed second slave audio data;
sending at least said processed first slave audio data to said second slave using said at least first master;
sending at least said processed second slave audio data to said first slave using said at least first master; and
transitioning, while communications are occurring utilizing said first determined topology, from said first determined topology to a second determined topology associated with a second plurality of user terminals and with said second plurality of user terminals being different from said first plurality of user terminals, wherein said transitioning comprises:
dropping out by said first master from said first plurality of user terminals and with said second plurality of user terminals including said first and second slaves of said first determined topology and not including said first master,
determining which one of said second plurality of user terminals is a second master and in which said second master replaces said first master for communicating audio data with other of said second plurality of user terminals, wherein said second master performs at least master functions in said second determined topology that are comparable to master functions performed by said first master in said first determined topology involving communications with at least one of said first and second slaves, said determining including: (i) utilizing a first bandwidth obtained using data sent from said second master to said first slave; (ii) utilizing a second bandwidth obtained using data sent from said first slave to said second master; (iii) utilizing a third bandwidth obtained using data sent from said second master to said second slave; (iv) utilizing a fourth bandwidth obtained using data sent from said second slave to said second master; (v) utilizing a fifth bandwidth obtained using data sent from said first slave to said second slave; and (vi) utilizing a sixth bandwidth obtained using data sent from said second slave to said first slave, and
using said second determined topology for communicating by said second plurality of user terminals after said determining.
2. A method of
3. A method of
4. A method of
5. A method of
6. A method of
7. A method of
8. A method of
obtaining audio in of said first master;
generating first audio data of said first master using said first master audio in; and
generating second audio data of said first master using said audio in, wherein said first audio data is different from said second audio data based on more compression of said first master audio in to generate said first audio data than compression of said first master audio in to generate said second audio data.
9. A method of
controlling volume of audio out that is output by said audio output devices of said first and second slaves, wherein said volume that is output by said audio output device of said first slave depends on a first position associated with said first slave relative to said second slave, and said volume that is output by said audio output device of said second slave depends on a second position associated with said second slave relative to said first slave.
10. A method of
11. A method of
12. A method of
controlling separately audio volume of each of at least said first and second slaves that is to be provided to said audio output device of said first master, wherein said controlling includes controlling each of said audio volumes to be provided to said audio output device of said first master so that said audio volume of said first slave is turned off and said audio volume of said second slave is not turned off.
13. A method of
14. A method of
controlling separately audio volume of each of at least said first master and said second slave that is to be provided to said audio output device of said first slave, wherein said controlling includes controlling each of said audio volumes to be provided to said audio output device of said first slave so that said audio volume of said first master is turned off and said audio volume of said second slave is not turned off.
15. A method of
16. A method of
17. A method of
obtaining audio in from said audio input device of said first master;
generating first audio data of said first master using said first master audio in;
generating second audio data of said first master using said first master audio in, wherein said first audio data is different from said second audio data;
obtaining audio in from said first slave audio input device;
obtaining audio in from said second slave audio input device;
receiving said first slave processed firstly audio data by said first master;
receiving said second slave processed firstly audio data by said first master;
aggregating at least said second slave processed secondly audio data and said first master first audio data into a first frame using a control of said first master;
aggregating at least said first slave processed secondly audio data and said first master second audio data into a second frame using said control of said first master;
receiving said second slave processed secondly audio data and said first master first audio data by said first slave;
receiving said first slave processed secondly audio data and said first master second audio data by said second slave;
inputting an audio out, based on at least said second slave processed secondly audio data and said first master first audio data, to said first slave audio output device; and
inputting an audio out, based on at least said first slave processed secondly audio data and said first master second audio data, to said second slave audio output device.
18. A method of
19. A method of
obtaining video in from said video input device of said first master;
generating first video data of said first master using said first master video in;
generating second video data of said first master using said first master video in, wherein said first video data is different from said second video data;
obtaining video in from said first slave video input device;
generating processed video data of said first slave using said first slave video in;
obtaining video in from said second slave video input device;
generating processed video data of said second slave using said second slave video in;
receiving said first slave processed video data by said first master;
receiving said second slave processed video data by said first master;
aggregating at least said first master second video data and video data based on said first slave processed video data into at least one packet of at least a first frame using a video/audio control of said first master;
aggregating at least said first master first video data and video data based on said second slave processed video data into at least one packet of at least a second frame using said video/audio control of said first master;
receiving said first master first video data and said video data based on said second slave processed video data by said first slave;
receiving said first master second video data and said video data based on said first slave processed video data by said second slave;
inputting a video out based on at least said second slave processed video data and said first master first video data to said first slave video output device; and
inputting a video out based on at least said first slave processed video data and said first master second video data to said second slave video output device.
20. A method of
said video data based on said first slave processed video data is less than said first slave processed video data; and said video data based on said second slave processed video data is less than said second slave processed video data.
21. A method of
22. A method of
23. A method of
24. A method of
25. A method of
26. A method of
27. A method of
obtaining a first audio in from said audio input device of said first master; and
obtaining a second audio in from said audio input device of said second slave;
wherein said audio data output by said audio output device of said first master does not include using said first audio in of said first master; and
wherein said audio data output by said audio output device of said second slave does not include using said second audio in of said second slave.
28. A method of
29. A method of
30. A method of
generating first video data of said first master using said first video in of said first master and said video/audio control of said first master; and
generating second video data of said first master using the same said first video in of said first master and said video/audio control of said first master;
wherein a difference between said first video data and said second video data depends on a first communications path, having one bandwidth associated therewith, between said first master and said first slave and also depends on a second communications path, having another bandwidth associated therewith, between said first master and said second slave, which is different than said first communications path.
31. A method of
32. A method of
33. A method of
(a) transition frame information related to said first master being dropped; (b) a transition count associated with a most recent topology that is to be used by each of said first plurality of user terminals; and (c) transition status associated with each of said first plurality of terminals that has its audio data in said audio packet data, said transition status related to a stage of said transitioning that said each of said first plurality of user terminals having its said audio data in said audio data packet has finished.
34. A method of
35. A method of
The present invention relates to voice and/or video communications involving at least one peer-to-peer system that includes a number of user terminals and, in particular, to providing one or both of voice and video communications among the user terminals based on their communication capabilities.
Online communications, particularly by means of the Internet, include streaming video and audio. Products that enable the presentation of such video and audio include the Adobe Flash Player. The Flash Player can be used in conjunction with a web browser to provide video and audio on a web site for selectable viewing. Also available from Adobe is its Flash Media Server, which can function as a control or central server for a number of Flash Players, such as when playing an online game. In such an application, the Flash Media Server and the Flash Players are part of a network in which users of the Flash Players play a Flash-based multi-player, multimedia game. Game play is communicated among the Flash Players utilizing the Flash Media Server as a hub.
Known peer-to-peer communication systems provide multi-party communications by means of voice-over-Internet protocol (VoIP). In that regard, a communication system has been devised for communicating video, audio and file data, among a plurality of communication devices or nodes, while not relying on a central server, as disclosed in Pub. No. US 2010/0262709 A1 to Hiie et al., published Oct. 14, 2010 and entitled “Optimising Communications.”
In view of the increasing popularity of online audio and/or video communications, it would be advantageous to utilize at least certain aspects of available technologies to effectively, and with reduced complexity and cost, provide one or more of audio and video communications in a peer-to-peer system.
In accordance with the present invention, a peer-to-peer (p2p) system is provided which includes user terminals that are able to bi-directionally communicate with each other using at least one of audio and video. A user terminal capable of video communications has a video input device, such as a video camera or webcam, and a video output device, such as a display device or video monitor. A user terminal capable of audio communications has an audio input device, such as a microphone, and an audio output device, such as one or more audio output speakers. Each user terminal has a control, or a video/audio (one or both of video and audio) control, which controls the video and/or audio to be output by that user terminal to at least one other user terminal. The video/audio control preferably includes a Flash Player, or at least portions thereof or features associated therewith, which is responsible for handling or managing the video that it receives via its user terminal video input device. Typically, the Flash Player, as part of its control functions, encodes the video in so that it is in a desirable or acceptable video format for sending to another user terminal. In addition to the Flash Player, the control includes a control module that communicates with the Flash Player. The control module basically comprises software, as does the Flash Player. With respect to the video encoding operations, the control module has a control output or signal that it provides to the Flash Player for use in regulating the extent of encoding, such as the amount or degree of video compression including any deletion of determined video portions. Such encoding depends on the communication capabilities or characteristics associated with the communications paths or channels between the particular user terminal providing such encoded video and the user terminal that is to receive it.
In that regard, in an embodiment in which at least video is being communicated, different video data can be generated from the same video that is received or provided by the originating video input device. Such different video data can be sent to two or more other user terminals. More specifically, at least one user terminal in the p2p system is determined to be a master or control terminal, which is responsible for video to be sent to other user terminals including video originating from the master terminal's video input device. In addition to its own video data, the master terminal may receive video data from two or more other user terminals (e.g., "slave" or non-master terminals) in the system. Such video data is processed or controlled by the master terminal to generate processed video data to be sent to user terminals for use in displaying video. For example, the master terminal might be responsible for sending its own processed video data to first and second user terminals, as well as sending processed video data associated with the first user terminal to the second user terminal for display and sending processed video data associated with the second user terminal to the first user terminal for display. The master terminal can process the video data so that the processed video data sent to one user terminal is different from the processed video data sent to another user terminal. The determined differences are essentially based on the communication paths between the master terminal and the other user terminals, particularly the bandwidths, latencies, user terminal computational or processing power and/or other performance characteristics or factors associated with each of such paths.
Based on these factors, by way of example, the processed video data associated with the master terminal and sent by it to a first user terminal, in comparison with the processed video data also associated with the master terminal and sent by it to a second user terminal, may have experienced or undergone different processing, such as more or less video compression, which can include deletion of different video portions depending on which terminal the processed video is to be sent. As a result, the video data, which is based on the video received by the master terminal video input device, that is used by the first and second user terminals to provide video displays can be different, even though the video data used by the master terminal to generate the different processed video data is the same.
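The per-recipient processing described above can be sketched as follows. This is a minimal illustrative sketch only: the specification does not give concrete thresholds or compression levels, so the bandwidth/latency cutoffs and the level names below are assumptions.

```python
def choose_compression(bandwidth_kbps, latency_ms):
    """Pick a compression level for one communication path.

    A hypothetical policy: lower bandwidth or higher latency leads to
    heavier compression (more video data deleted before sending).
    """
    if bandwidth_kbps >= 2000 and latency_ms <= 50:
        return "light"
    if bandwidth_kbps >= 500:
        return "medium"
    return "heavy"


def process_for_recipients(master_video, paths):
    """Generate different processed video data from the SAME master video,
    one variant per recipient, based on that recipient's path."""
    return {terminal: (master_video, choose_compression(bw, lat))
            for terminal, (bw, lat) in paths.items()}


# The same source video yields differently processed data per recipient:
variants = process_for_recipients(
    "frame-0", {"slave-1": (3000, 20), "slave-2": (300, 120)})
```

As in the example above, the video data used by the first and second user terminals can differ even though the master terminal started from the same input.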
In an embodiment in which audio is output by audio output devices of the user terminals, the video/audio control of each user terminal, including the master terminal, can be used to regulate the volume of the audio that is output by the user terminal audio output devices, such as their audio speakers. Such audio volume control can result in audio volumes being dependent on virtual positions that are associated with each user terminal relative to the other terminals in the particular p2p system. When playing an online game, for example, each user terminal (and accordingly the player utilizing that user terminal) can be assigned a relative table position prior to the game, or a round of the game, being played. Second and third players, respectively, could be positioned to the left and right of a first player. A fourth player could be positioned directly across from the first player. When playing an actual game, the voice or audio received by the first player depends on a number of factors including the relative positions of the players. The first player hears voice from the second player differently than voice from the third player due, at least in part, to their virtual table positions. The user terminal control can be used to emulate such differences. Audio volume that is output by the master terminal to another user terminal can be controlled so that the audio volume associated with a first speaker for such a user terminal is different than the audio volume associated with a second speaker of that user terminal. The difference in audio volumes is based on the relative positions of that user terminal and the user terminal that is the source of the audio. By means of such position determinations and controls, audio heard by online game players emulates actual game play voice communications.
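The position-dependent volume control described above can be emulated with a simple pan law. The angle convention and the constant-power pan formula below are illustrative assumptions, not the specification's own method:

```python
import math


def stereo_gains(listener_angle_deg):
    """Map a speaker's virtual table position, expressed as an angle
    relative to the listener (-90 = seated to the left, 0 = directly
    across, +90 = seated to the right), to left/right speaker gains.

    Uses a constant-power pan law so that a player to the listener's
    left is heard mostly from the left speaker, emulating actual
    around-the-table game play.
    """
    pan = (listener_angle_deg + 90.0) / 180.0   # 0.0 = full left, 1.0 = full right
    left = math.cos(pan * math.pi / 2)
    right = math.sin(pan * math.pi / 2)
    return left, right
```

For example, a second player seated to the first player's left (-90 degrees) is reproduced entirely through the left speaker, while a fourth player directly across (0 degrees) is reproduced equally through both.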
Regulation of audio also includes being able to turn off the volume associated with the audio that can be received by each player from other players, as well as adjusting the audio volume received from one or more other players to a desired level. More specifically, the level, degree or magnitude of the audio volume of one or more user terminals can be controlled. The communications user (or player when a game is being played during the communications) may want to individually, separately or independently control the audio volume it is to receive or use from one or more of the user terminals that are part of that user's particular communication system. That is, the communications user may want the level of the audio volume that is output by his or her user terminal audio output device to be different in respect to one or more of the audio volumes that it uses from other user terminals, including having the control to separately turn off (audio volume becomes essentially zero or no audio volume is able to be heard) the audio volume associated with one or more user terminals. As can be understood, controls can additionally, or alternatively, be provided that allow each user terminal to similarly turn off, not display, not send/receive, or otherwise not use one or more video images related to identity-related information associated with other user terminals. Such identity-related information can take the form of a user's face, an avatar, or any other image that the user may provide related to his or her identity.
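The separate, per-terminal volume regulation described above amounts to a mixer with an independent gain per source, where a gain of zero turns that source off. The function below is an illustrative sketch, assuming uncompressed sample lists rather than any particular audio format:

```python
def mix(sources, gains):
    """Mix audio samples from several user terminals, applying the
    listener's independently chosen per-terminal volume.

    sources: {terminal_name: [samples]}; gains: {terminal_name: gain},
    where 0.0 mutes that terminal and a missing entry defaults to 1.0.
    """
    length = max(len(samples) for samples in sources.values())
    out = [0.0] * length
    for name, samples in sources.items():
        gain = gains.get(name, 1.0)
        for i, value in enumerate(samples):
            out[i] += gain * value
    return out
```

Setting one terminal's gain to 0.0 corresponds to turning off that terminal's audio volume while leaving the others audible.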
In the embodiment in which at least audio is communicated, the user terminals process audio data. Such processing is important to achieve desired audio quality, regardless of the communications paths among the various user terminals. The master terminal receives audio data from other user terminals after being processed by them, including audio data that has been compressed based on the communication path to the master terminal. Such compressed audio data is typically decoded (e.g., decompressed) and encoded again by the master terminal control, which further encoding depends on the communications/performance capabilities/factors related to the communication paths between the master terminal and the other user terminals to which audio data is to be sent. Such audio data compression (by a user terminal), and decompression and another, but typically different, compression (by a master terminal) contrasts with the video data processing embodiment. That is, video data is usually compressed one time by producing a reduced quantity of data that represents the originally provided video data, while allowing for such reduced data to be decompressed so that all original data is substantially recoverable with acceptable image degradation. Such compression is done by means of the user terminal from which the video data originates; however, the master terminal has the capability of deleting video data that it has received from one terminal before sending it to another terminal. Additionally, in the embodiment in which both audio and video are communicated, to assist in aligning audio data with concurrently generated video data so that the video ultimately being displayed corresponds with its associated audio output from the audio speaker(s), it is preferred that the audio data and video data, when both are being generated, be aggregated and output together. Whenever aggregated, the sending of audio data has priority over the sending of video data.
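The audio-over-video priority when aggregating into frames can be sketched as below. The item/size representation and the frame capacity are illustrative assumptions (abstract byte counts), not a concrete packet format from the specification:

```python
def build_frame(audio_items, video_items, capacity):
    """Fill one outgoing frame for a recipient.

    Audio data always goes in first; video data is included only in
    whatever space remains, matching the rule that sending audio data
    has priority over sending video data.

    audio_items / video_items: lists of (payload, size_in_bytes).
    """
    frame, used = [], 0
    for payload, size in audio_items:        # audio has priority
        frame.append(payload)
        used += size
    for payload, size in video_items:        # video fills leftover space
        if used + size <= capacity:
            frame.append(payload)
            used += size
    return frame
```

Video items that do not fit in the current frame would simply wait for a later frame with available space.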
Video data is included in the packets of frames of data being sent when such packets have space for the video data. Related to synchronizing audio data and its associated video data, when the master terminal encodes or compresses audio data after first decoding it, it is usually the case that the associated video data is located in one or more positions in one or more packets of one or more packet frames that are different than their position(s) when such video was received with its associated audio by the master terminal. Because of this encoding transformation, and in order to maintain an accurate sync between audio and video data, irrespective of the different video data packet locations, it is necessary to keep track of such video data in the packets by utilizing a time correlation marker or markers, a time stamp or the like, so that the user terminal receiving such data can properly match such video data with its associated audio data, regardless of their differing locations.
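The time-stamp matching described above can be sketched as follows; the (timestamp, payload) tuple shape and the matching tolerance are illustrative assumptions:

```python
def match_av(audio_packets, video_packets, tolerance_ms=20):
    """Pair each audio packet with the video packet whose time stamp is
    closest, regardless of where the video data ended up within its
    frames after the master terminal's re-encoding.

    Packets are (timestamp_ms, payload) tuples.  Pairs outside the
    tolerance are dropped rather than mis-matched.
    """
    pairs = []
    for a_ts, a_payload in audio_packets:
        best_ts, best_payload = min(video_packets,
                                    key=lambda v: abs(v[0] - a_ts))
        if abs(best_ts - a_ts) <= tolerance_ms:
            pairs.append((a_payload, best_payload))
    return pairs
```

Because matching is by time stamp rather than packet position, the receiving user terminal can keep audio and video in sync even when the video data's packet locations changed in transit.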
With regard to providing communication (one or both of audio and video) paths among particular user terminals, a peer-to-peer system or network is established using data stored in a data parameters table. Such data relates to communication factors or characteristics associated with each communication path, such as the previously noted bandwidth, latency and/or computational or processing power. Such data is used with one or more algorithms, which can be termed a "fitness" function, to determine fitness values. There is one fitness value determined for each topology that can be associated with the particular user terminals that wish to communicate with each other. Each topology for particular user terminals of a particular peer-to-peer system relates to connections among such user terminals. Based on possible connections, more than one possible topology can be defined. A determined topology, which is the topology to be used for the particular user terminals, is determined using the fitness values found using different topologies for such user terminals. From a comparison of the fitness values, a selected fitness value is determined. Based on the selected fitness value which, in one embodiment, may be the lowest or smallest fitness value of such determined fitness values, the determined topology, including its master(s), is determined or identified. Once a particular topology is determined and communications are occurring using such topology, updated data continues to be obtained and provided with the data parameters table. Further similar determinations are made, but using such new data, related to determining a possibly different topology for the same user terminals.
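The fitness-value comparison described above can be sketched as follows. The specification does not define the fitness function itself, so the scoring formula below (summing latency and rewarding bandwidth on each master-to-slave path, lower being better) is purely an illustrative stand-in, as is restricting candidates to single-master star topologies:

```python
def fitness(topology, params):
    """Score one candidate topology using the data parameters table.

    topology: (master, [slaves]); params: {(src, dst): (bandwidth_kbps,
    latency_ms)}.  Lower scores are better, matching the embodiment in
    which the smallest fitness value selects the determined topology.
    """
    master, slaves = topology
    score = 0.0
    for slave in slaves:
        bandwidth, latency = params[(master, slave)]
        score += latency - 0.01 * bandwidth   # hypothetical weighting
    return score


def determine_topology(terminals, params):
    """Evaluate each possible choice of master and pick the topology
    with the lowest (selected) fitness value."""
    candidates = [(m, [t for t in terminals if t != m]) for m in terminals]
    return min(candidates, key=lambda topology: fitness(topology, params))
```

Re-running `determine_topology` as updated data arrives in the data parameters table corresponds to the further determinations made for the same user terminals.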
If or when the particular peer-to-peer system changes, e.g., one of the user terminals is no longer part of that system and/or a new user terminal is to communicate with other user terminals already communicating using the previously determined topology, another new topology is similarly determined, which is different than the immediately previous topology. With regard to changing to a different topology, whether such change is made due to the addition and/or deletion of a user terminal or due to a “better” topology being determined for the same user terminals, any communications during the transition seamlessly occur. That is, each user of the user terminals continues to communicate as if no change was occurring and each such user does not notice any change being made. Generally, each individual user terminal continues to utilize the present topology until such user terminal receives information to the effect that the topology has changed. After a particular user terminal has such information indicative of a different topology, such user terminal begins utilizing the different topology including sending communications based thereon, instead of the immediately previous topology.
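The per-terminal switchover described above can be sketched with the transition count that the audio packet data carries (see the transition count recited in the claims). The class shape below is an illustrative assumption:

```python
class TerminalState:
    """Tracks which topology a single user terminal is using.

    The terminal keeps communicating with its present topology until it
    receives information indicating a newer one, so the transition is
    seamless from the user's point of view.
    """

    def __init__(self, topology, transition_count=0):
        self.topology = topology
        self.transition_count = transition_count

    def on_topology_info(self, new_topology, new_count):
        # Switch only when the received transition count is newer than
        # the one currently in use; stale information is ignored.
        if new_count > self.transition_count:
            self.topology = new_topology
            self.transition_count = new_count
```

A terminal that has not yet received the new information simply continues with the immediately previous topology, which is why users notice no change during the transition.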
The present invention, therefore, provides novel ways of communicating one or both of voice and video in a p2p system. Each p2p system is established with a number of user terminals, at least one of which is determined, based on a determined topology, to be a master terminal for handling communications involving other user terminals, including such communications to/from the master terminal itself. The user terminals communicate with each other via communication channels that can differ in their performance capabilities. Consequently, quality characteristics of the outputs from the user terminals can vary depending on such factors as bandwidths and delays. User terminal audio outputs can be controlled. Such control can include audio heard by game players, or other users, being a function of their relative positions, such as positions around a virtual game-playing table. Such control can comprise, additionally or alternatively, independently controlling the audio level associated with each audio volume from each user terminal, including turning off one or more audio volumes. Such audio control(s) can be provided with a p2p system, or can be provided in one or more systems that are not p2p systems. Transmitting audio outputs through a master terminal also results in bandwidth savings: the master terminal is able to aggregate or combine audio from more than one slave terminal before such aggregated audio is sent to a desired slave terminal. This contrasts with a p2p system in which each audio stream is sent using its own separate audio channel. Related to aggregating audio, in an embodiment when both are being generated, audio and video are combined in connection with achieving desired correspondence between them at the eventual audio and video outputs.
Although other embodiments can be provided, it is preferred that, when communicating video, each user terminal control includes a Flash Player, which is already configured for use with a variety of webcams. With respect to establishing each particular p2p system, a particular topology is determined using data associated with a data parameters table. Based on the determined topology, one or more master terminals is determined or identified. Numerous applications of the present invention are contemplated including use during online game playing, as well as for typical voice, as well as voice with video, conversations (such as conference calls) over the Internet. New applications can also be developed, such as to provide communications among various users while they are playing slots in casinos.
Additional advantages of the present invention are readily apparent, particularly when taken together with following descriptions including the accompanying drawings.
With reference to
Regarding the establishment of p2p communications among the user terminals 100, the p2p server 108 communicates with the game server 104 including providing the p2p server 108 with player information, such as appropriate credentials indicative of such players being part of a group that has been set up by the game server 104 to play a particular game. The communication exchanges also include information related to the UTs 100 of the players so that the p2p server 108 can communicate directly with one or more of them, as part of the process (described later) that enables them to communicate with each other.
Referring to
With respect to the video and audio signals, each user terminal 100 can include one or more video input devices, audio input devices, video output devices and audio output devices. In one embodiment, the video input device is a digital camera or webcam 116; the audio input device is a microphone 120; the video output device is a video display or screen 124, such as a computer video screen; and the audio output device includes two stereophonic speakers 128, although such a device could include more speakers or only one speaker. The user terminal 100 also has an operating system 140, which typically is part of the communication apparatus that constitutes the user terminal 100. The operating system 140 manages the video and/or audio inputs and outputs relative to the devices 116, 120, 124, 128 and supervises the signal processing functions associated with such video and audio, which is conducted using the UT control 112. Generally, the operating system 140 manages, organizes and directs, among other things, the video and/or audio signals relative to their respective input and output devices. In that regard, video and/or audio are communicated between the operating system 140 and the video/audio control 112 of the user terminal 100, pursuant to the managing, organizing and directing of the operating system 140. Prior to being encoded, and after being decoded, using the video/audio control 112, such video and audio signals are handled by the operating system 140 as part of its responsibilities in facilitating communications from/to the video and audio input devices and output devices 116, 120, 124, 128.
With respect to the illustrated user terminal control 112, in a preferred embodiment it can be described as including a Flash Player 144 and a separate control module 148, which is compatible or workable with the Flash Player 144. The Flash Player 144 is a known and commercially available unit that executes using the ActionScript programming language. One of the conventional characteristics of this unit is that it is useful in receiving and sending video to a variety of different video input and output devices. Among its functions, the Flash Player 144 compresses and decompresses video data and is able to communicate compressed video data to other apparatuses. For example, in one prior art application, the Flash Player is able to communicate video and audio data with a Flash Media Server, which controls transfer of such data relative to other apparatuses having Flash Players. With regard to the present invention, the Flash Player 144 need not have or utilize all features or functionalities of the commercially available Flash Player 144; however, the Flash Player 144 does include functions associated with being able to handle video inputs and provide video outputs that are compatible with different video input devices, such as commercially available webcams.
In another embodiment, the user terminal control 112 does not include the Flash Player 144, or any portion thereof. Rather, the control module 148 by itself is devised to provide any functions or features that were provided by the Flash Player 144, such as encoding/decoding video information. As previously described, the user terminal control 112 is preferably a control that can control both video and audio. However, in other embodiments, the present invention could have a user terminal control that functions with video, and not audio. Likewise, a user terminal control could be provided that functions with audio, and not video.
Continuing to refer to
Regarding the basic steps related to determining which of the user terminals 100 is to be the master terminal 100-n and which are to be the slave terminals 100-1 . . . 100-m . . . , reference is made to the step diagram of
After completion of the data parameters table, the peer-to-peer server 108 initiates steps to identify a topology manager as indicated by block 208. The topology manager is one of the user terminals 100 of the p2p network and is chosen or determined by the peer-to-peer server 108. In the illustrated embodiment, the user terminal m 100-m is designated by the peer-to-peer server 108 to be the topology manager. In one embodiment, the peer-to-peer server 108 identifies the topology manager in an arbitrary or random manner. That is, any one of the user terminals 100 could be arbitrarily designated as the topology manager. In another embodiment, a determination of which user terminal 100 is to be the topology manager is accomplished by relying on one or more predetermined factors, such as related to the geographic location of such a topology manager relative to the other user terminals and/or its bandwidth capabilities. Once designated as the topology manager, the user terminal 100-m is responsible for maintaining the data parameters table of its p2p network as it may change with time. The designated topology manager is provided with the complete initial data parameters table by the peer-to-peer server 108. This process speeds up the initial creation of the p2p network. The peer-to-peer server 108 also provides information to the topology manager related to any user terminal 100 being added later to the previously established p2p network and information related to any user terminal 100 being dropped from, or discontinued as being part of, the particular p2p network.
With reference also to
It should be appreciated that fewer than all n parameters could be utilized in the analysis for determining that the master terminal is user terminal 100-n. For example, the computational power associated with each of the user terminals may be one of the n provided parameters but in some embodiments might not be used in ascertaining the master terminal. Related to this, it should be appreciated that one or more data parameters might not be determined in arriving at the contents of the data parameters table. As another example, the data parameters table might comprise a single parameter, such as related to the bandwidth associated with the communication path for each pair of user terminals 100 that are part of the particular peer-to-peer network. Based on such bandwidth determinations, this stored information that makes up the data parameters table can be analyzed using the software included with the user terminal control 112 of the topology manager 100-m in order to determine that the user terminal 100-n should be the master terminal. It should be further appreciated that factors or parameters not presently used might be included in determining a selected fitness value in other embodiments, such as arriving at communication cost related data and incorporating that into the selected fitness value determination.
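By way of a non-limiting illustration, the topology manager's selection of a master terminal from the data parameters table can be sketched in C as follows; the structure layout mirrors only part of the table described herein, and the particular additive fitness weighting (uplink bandwidth minus round-trip delay) and all names are assumptions offered for illustration rather than a definitive implementation:

```c
#include <assert.h>
#include <string.h>

#define MAXNODES 10

/* Hypothetical per-node record mirroring part of the data parameters table. */
struct node_params {
    unsigned short out_bandwidth[MAXNODES]; /* uplink to each peer, Kbits/s   */
    unsigned short rtt[MAXNODES];           /* round trip delay, tenths of ms */
};

/* Illustrative fitness value: reward aggregate uplink bandwidth and
 * penalize aggregate round-trip delay.  As discussed above, a real
 * selection could use fewer, more, or different parameters. */
static long fitness(const struct node_params *p, int nodes)
{
    long f = 0;
    for (int i = 0; i < nodes; i++)
        f += (long)p->out_bandwidth[i] - (long)p->rtt[i];
    return f;
}

/* The user terminal with the highest fitness value becomes the master. */
int pick_master(const struct node_params table[], int nodes)
{
    int best = 0;
    for (int i = 1; i < nodes; i++)
        if (fitness(&table[i], nodes) > fitness(&table[best], nodes))
            best = i;
    return best;
}
```

A single-parameter table, such as pairwise bandwidth alone, reduces to the same comparison with the delay term omitted.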
Subsequent to the steps for determining the topology for the particular p2p network including the master terminal indicated by block 212, the p2p network can be utilized or implemented (denoted by block 220), including sending/receiving video and/or audio from each of the user terminals 100 having the hardware and software combinations of the present invention. Generally, video and/or audio can be sent from the video and audio input devices of the respective user terminals 100 to the other of such user terminals 100 that are part of the particular p2p network. In accordance with the established peer-to-peer network of the embodiment of
Referring next to
Referring to block 300 of
With respect to such video encoding or other video processing in which certain video portions are dropped or deleted altogether using a master terminal, the processed video can be defined or characterized as being part of a “key frame” or an “inter-frame.” A key frame is a complete image. An inter-frame contains only the changes since the most recent key frame or inter-frame. A droppable inter-frame likewise contains only the changes since the most recent key frame or inter-frame; the difference between an inter-frame and a droppable inter-frame is that nothing references a droppable inter-frame. In other words, if a droppable inter-frame were dropped, the next frame would either be a key frame, which is a complete image, or would contain all changes since the most recent key frame or inter-frame, so that if it were displayed all information for the current image would be updated. If an ordinary inter-frame were dropped, one would need to wait for the next key frame to be able to update the image. Each such droppable inter-frame can be deleted or dropped, typically to save bandwidth while preserving desired video quality. Each droppable inter-frame is typically marked as part of the operation of the video encoder/decoder so that each such droppable inter-frame is known or designated.
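The frame categories just described can be sketched as follows; the enumeration names and the filtering helper are illustrative assumptions and are not tied to any particular encoder:

```c
#include <assert.h>

/* Frame categories as characterized above (names are assumptions). */
enum frame_type { KEY_FRAME, INTER_FRAME, DROPPABLE_INTER_FRAME };

/* Because nothing references a droppable inter-frame, the encoder's mark
 * is all that is needed to decide whether a frame may be deleted to save
 * bandwidth.  Dropping a KEY_FRAME or an INTER_FRAME would instead force
 * a wait for the next key frame before the image could be updated. */
int can_drop(enum frame_type t)
{
    return t == DROPPABLE_INTER_FRAME;
}

/* Filter a frame sequence, keeping everything except droppable
 * inter-frames; returns the number of surviving frames. */
int drop_droppable(const enum frame_type in[], int n, enum frame_type kept[])
{
    int m = 0;
    for (int i = 0; i < n; i++)
        if (!can_drop(in[i]))
            kept[m++] = in[i];
    return m;
}
```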
After the video in is controllably encoded, the resulting video data that is generated is output by the control module 148 utilizing user datagram protocol (UDP), as indicated by block 308. Such video data is sent based on the communications capabilities previously determined between the slave terminal 1 100-1 and the master terminal n 100-n.
In continuing with the representative example that includes the second slave terminal 2 100-2 and referring to block 312, like that just described concerning the slave terminal 100-1, the video input device of the second slave terminal 100-2 provides video in, by means of its operating system 140, to its user terminal control 112. The video in of this slave terminal 100-2 is processed and/or controlled to generate the video data that it will output to the master terminal 100-n, according to block 316. Like the video associated with the slave terminal 100-1, such video data output by the slave terminal 100-2, is encoded depending on the communications capabilities involving the slave terminal 100-2 and the master terminal 100-n. Such dependence involves utilization of the information in the data parameters table related to known or test data transfers between the slave terminal 100-2 and the master terminal 100-n. Such parameters can be different from those associated with the slave terminal 100-1 and the master terminal n 100-n. For example, the bandwidth may be greater resulting in a different degree of compression based on the video in from the webcam 116 of the slave terminal 100-2. The generated video data from the user terminal control 112 of the slave terminal 100-2 is output to the master terminal 100-n, as denoted by block 320.
Referring next to block 324, and continuing with the example, the master terminal 100-n video input device 116 outputs its video in, which is received by its user terminal control 112. As indicated by block 328, such video in of the master terminal 100-n is processed, or otherwise controlled, in order to generate video data-1 and also to generate video data-2. Video data-1 is subsequently sent to the slave terminal 100-1 and video data-2 is subsequently sent to the slave terminal 100-2. In connection with such processing, the control 112 of the master terminal 100-n relies on information in the data parameters table related to communications capabilities involving it and each of the slave terminals 100-1, 100-2 in order to generate the processed video data for sending to the slave terminals 100-1, 100-2, which processing or controlling steps are similar to those described in conjunction with block 224.
In addition to handling the video in from its own video camera 116, the user terminal control 112 of the master terminal 100-n is also responsible for processing or controlling the video data-1 that it receives from the slave terminal 100-1 and the video data-1 that it receives from the slave terminal 100-2. More specifically, and referring to block 340 of
With regard to possible video processing by the master terminal control 112, the slave terminal 100-1 video data-1 may have video portions that are dropped and not transferred, such as when the performance factors or characteristics associated with the communications path between the master terminal 100-n and the slave terminal 100-2 require that less video data be sent. However, there is no further or different compression of such video data. That is, the video data-1 of the slave terminal 100-1 is not decompressed or decoded by the master terminal 100-n for subsequent different compression before sending such video, in the form of video data-2, to the slave terminal 100-2. As an alternative though, the master terminal's control 112 may simply control or pass the same video data that it receives from the slave terminal 100-1, in the form of video data-1, to the slave terminal 100-2. As another variation, instead of the master terminal's control 112 determining that video portions are to be dropped before sending the video data to the slave terminal 100-2, a different determination might be made. More specifically, the p2p network involving the controls 112 of one or more of the user terminals 100, including the control 112 of the slave terminal 100-1, could determine a less than optimum transfer, given the performance capabilities of the particular communication path, of such data from the slave terminal 100-1. Rather than optimally, or substantially optimally, utilizing the communication path between the slave terminal 100-1 and the master terminal 100-n, a less than optimum or less than usual transfer might be determined so that video portions are not dropped or lost when video data from one slave terminal (e.g., slave terminal 100-1) is transferred by the master terminal to another slave terminal (e.g., slave terminal 100-2), even though on a relative basis the performance capabilities (e.g., bandwidth, delay) associated with the communication path between the one slave terminal and the master terminal are higher or greater than those between the master terminal and the other slave terminal.
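The forwarding behavior described above — pass the already-compressed data through unchanged or drop only droppable portions, never decode and recompress — can be sketched as follows; the structure and function names are assumptions for illustration:

```c
#include <assert.h>
#include <string.h>

/* Minimal sketch of the master's forwarding decision for one received,
 * already-compressed frame.  The frame is either copied through verbatim
 * or dropped entirely (only if the encoder marked it droppable); it is
 * never decoded and re-encoded at a different compression level. */
struct enc_frame {
    const unsigned char *data;
    int len;
    int droppable;   /* mark set by the originating encoder */
};

/* Returns the number of bytes written to out (0 = frame dropped).
 * out must be able to hold f->len bytes. */
int forward_frame(const struct enc_frame *f, int path_congested,
                  unsigned char *out)
{
    if (path_congested && f->droppable)
        return 0;                   /* drop to fit the slower path        */
    memcpy(out, f->data, f->len);   /* pass through without recompression */
    return f->len;
}
```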
Likewise and as noted by block 348, the video data-1 of the slave terminal 100-2 is also received by the master terminal 100-n using its user terminal control 112. After receiving such video data and as indicated by block 352, it is processed and/or controlled to generate video data-2 associated with the slave terminal 100-2. Such control is usually accomplished based essentially on the communications or performance capabilities involving the master terminal 100-n and the slave terminal 100-1, as found in the data parameters table of
With respect to blocks 356 and 360, the video data from these three user terminals 100-1, 100-2 and 100-n are provided to certain other of the user terminals 100-1, 100-2, 100-n. In particular, both the video data-2 associated with the slave terminal 100-1 and the video data associated with the master terminal 100-n are output to the other user terminal 100 of this representative example, namely, the slave terminal 100-2. Similarly, the video data-2 associated with the slave terminal 100-2 and the video data-1 associated with the master terminal 100-n are output to the other slave terminal 100-1. Preferably, both outputs use an aggregate packet transfer in which the frames defined as containing the packets include video data obtained from more than one user terminal and, more preferably, when audio data is also being communicated, the packets include aggregated audio data and video data.
Regarding such outputs and referring to
Like the slave terminal 100-1, as indicated by block 382, the slave terminal 100-2 receives video data provided by the slave terminal 100-1 and the master terminal 100-n. As noted by block 386, its user terminal control (e.g. video/audio control) 112 processes and controls such video data. More particularly, the video data-2 associated with the slave terminal 100-1 is decoded, either substantially at the same time or at different times, with the decoding of the video data-2 associated with the master terminal 100-n. As a result of such decoding, the resulting processed information or signals (video out) can be applied to the video display 124 of the slave terminal 100-2 (referring to block 390), whereby the video originating from each of the slave terminal 100-1 and the master terminal 100-n are seen using the video display 124 of the second slave terminal 100-2.
Additionally, in furtherance of this example involving two slave terminals 100-1, 100-2 and the single master terminal 100-n, block 394 indicates that the master terminal 100-n using its video output device, such as its video display 124, displays the video originating from each of the first and second slave terminals 100-1, 100-2, based on the video out information or signals that were obtained.
In addition to providing video communications, audio can also be communicated among the user terminals 100 that are part of the inventive peer-to-peer network. Fundamental steps involved in such communications are illustrated in the diagrams of
In the embodiment in which the slave terminal 100-1 includes the Flash Player 144 and the control module 148, such encoding is done using the control module 148 and not the Flash Player 144. The audio output by the microphone 120 is managed by the operating system 140 to provide the audio in that is to be encoded using the control module 148. With respect to such encoding, unlike the video, the encoding may involve some audio compression but preferably does not involve any dropping or deleting of any audio in, except for audio losses due to use of lossy audio encoders/decoders, so that adequate and desirable audio is transferred among the different user terminals 100. Loss of audio is avoided in order to maintain desired audio quality output from the audio output devices. Furthermore, the bandwidth required for audio is significantly less than that typically required by video data.
When audio and video are being sent from a particular user terminal 100 at the same time using frames continuously provided, each of which is typically comprised of a number of data-containing packets, the determinations related to filling the packets with audio and video data depend essentially on timely transfers of the audio data. That is, the packets are first filled with audio data that achieves the adequate quality criterion for each particular or predetermined time period. Then, remaining unfilled packets can be provided with processed video data that include video corresponding, or substantially corresponding, in time to the audio data in those same packets, or packets in the same frames, to be sent. In one embodiment, one or more frames of audio data-containing packets are sent from the subject user terminal every predetermined period of time, such as every 20 milliseconds, and corresponding or related video data that fills such packets or frames is also sent at that rate. In the representative example involving the slave terminal 100-1, audio data (as well as any packet filling video data) that is generated using its control module 148 is output for sending to the master terminal 100-n, as indicated by block 418.
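The audio-first packing rule just described can be sketched as follows; the byte counts and the capacity value are purely illustrative, and the predetermined (e.g., 20 millisecond) cadence is assumed to be handled outside this helper:

```c
#include <assert.h>

/* For each frame sent every predetermined period, pending audio data is
 * placed first; video data then fills whatever capacity remains, so that
 * timely audio transfer is never sacrificed to video. */
struct fill_result { int audio_bytes; int video_bytes; };

struct fill_result fill_frame(int capacity, int audio_pending, int video_pending)
{
    struct fill_result r;
    r.audio_bytes = audio_pending < capacity ? audio_pending : capacity;
    int room = capacity - r.audio_bytes;
    r.video_bytes = video_pending < room ? video_pending : room;
    return r;
}
```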
Comparable steps are utilized in conjunction with the audio being provided by the slave terminal 100-2, as conveyed by the blocks 422, 426, 430. That is, audio in is obtained from the microphone 120 of the slave terminal 100-2 by the operating system 140 for processing using the control module 148 of the slave terminal 100-2. The audio in for this slave terminal 100-2 is encoded to facilitate communication with the master terminal 100-n. The software of the control module 148 is used in determining the level of encoding or compression of the audio in, which can depend on factors or parameters included in the data parameters table of
In accordance with blocks 434, 438, audio can also be provided by the master terminal 100-n. Audio that is output by the microphone 120 or other audio input device of the master terminal 100-n is sent to its control module 148 utilizing its operating system 140. This audio in is processed or controlled to generate encoded audio data including audio data-1 and audio data-2. The audio data-1 from the master terminal 100-n is to be sent to the slave terminal 100-1, while the audio data-2 is to be sent to the slave terminal 100-2. Audio data-2 can be different from audio data-1 because the audio in was encoded differently based on differences in communication capabilities. The communication capabilities, such as bandwidth and/or latency, involving the master terminal 100-n and the slave terminal 100-1 may be different than that available for communications between the master terminal 100-n and the slave terminal 100-2.
In addition or alternatively, the resulting encoded audio data-1 and encoded audio data-2 may be different, even though both rely on or utilize the same audio in, in order to possibly provide different audio volumes to the slave terminals 100-1, 100-2. Such different audio volumes are based on, or are associated with, the audio being provided by, or originating from, the master terminal 100-n microphone 120. Such audio volume difference depends on, or otherwise is a function of or relates to, a determined, simulated or virtual position associated with the user terminals 100 that are part of the peer-to-peer network. More specifically, such as when the player/users of the user terminals 100-1, 100-2, 100-n are part of a group playing a game, each of the players can be determined or be defined (using one or more means, such as software being executed using one or more of the user terminals) as having positions relative to each other, e.g., around a virtual game-playing table. By way of example, a first slave terminal 100-1 player may be determined to be at a virtual position to the left of the master terminal 100-n player, while the second slave terminal 100-2 player may have a determined simulated position directly across, or opposite, from the master terminal 100-n player. In order to simulate the voice or audio from the master terminal 100-n player, which is heard by the slave terminal players and based on their relative positions, the audio volumes are different. That is, it can be beneficial to provide left-right spatial audio control and front-back spatial audio control, as well as control of the audio volume from each player or user. 
With respect to the first slave terminal 100-1, audio associated with or originating from the master terminal 100-n is “heard” primarily from its player's “right channel”; whereas the second slave terminal 100-2 player “hears” such audio essentially equally “from both channels or in both ears” because of the direct across virtual position. To achieve this desired “hearing”, the video/audio control 112 of the master terminal 100-n arranges or otherwise controls the audio in to develop encoded audio data that can be used by the first and second slave terminals 100-1, 100-2 to provide such desired audio outputs. In one embodiment, each of the audio output devices of the slave terminals 100-1, 100-2 includes first and second speakers, which for example are associated with right and left audio outputs, respectively. In the case of the first slave terminal 100-1 player, the audio volume is controlled such that the first slave terminal 100-1 player's “right audio output” receives a greater audio output (relatively louder) by means of controlling the output from the speaker more near or associated with this player's right audio output. Accordingly, the voice or audio heard by the first terminal 100-1 player simulates what such a player would hear when that player's position is essentially to the left of the master terminal 100-n player. Dissimilarly, because the second slave terminal 100-2 player is located across from the master terminal 100-n, its first and second speakers would output essentially the same audio volume to be heard by that player. Other potential embodiments that may benefit from the directional-related audio control by which users receive audio information based on their positions relative to other users include possible military applications. 
Military battlefield personnel utilizing such audio features can have the ability to determine positions of their comrades relative to their own positions, including in real time and relative to the direction one's head is facing. Based on audio inputs from their comrades, determinations can be made by each particular individual related to the positions of his or her comrades that might be located along a 360 degree path defined around that particular individual. Such ability can promote desired awareness and enhance safety of military personnel. With respect to the entertainment genre, another potential application involves team-play action or adventure software games. During play it may be advantageous to have information regarding positions of various team members. Utilizing the audio control associated with player positions, as described herein, team members can be made aware of their relative positions, thereby potentially enhancing their successes as a team during the playing of the game.
With reference now to
Comparable steps are conducted related to the audio data-1 associated with or originating from the slave terminal 100-2. At block 458, the master terminal 100-n receives such audio data. Then, this audio data is processed and/or controlled, as indicated by block 460, using the video/audio control 112 of the master terminal 100-n to generate the slave terminal 100-2 audio data-2. Again, the objective is to encode, or otherwise provide, such audio data-2 so that it is compatible with or acceptable for transfer via the communication path between the master terminal 100-n and the slave terminal 100-1, and/or has been properly prepared for desired audio volume output, which takes into account player virtual positions. Such encoding can include making determinations utilizing the data parameters table of
Referring now to blocks 462, 466, outputting of the audio data processed by the master terminal 100-n occurs. That is, the audio data-2 of the first slave terminal 100-1, together with audio data-2 of the master terminal 100-n, are sent, preferably aggregated so that they are transferred at essentially the same time, to the second slave terminal 100-2 for use by that terminal. Similarly, the audio data-2 of the slave terminal 100-2 and the audio data-1 of the master terminal 100-n are preferably aggregated for sending to the first slave terminal 100-1 as noted by block 466, for use by that terminal. As previously described, such audio data transfers are typically accompanied by simultaneous transfers of video data that corresponds with, or properly relates to, such audio data, if or when the particular embodiment involves video data communications.
Continuing with this example involving the audio data being output by the master terminal 100-n, reference is made to
Continuing with
Completing the description of
Referring to
With respect to
From the above descriptions regarding audio communications, it should be understood that use of one or more master terminals, such as master terminal 100-n, in a p2p system results in desirable bandwidth savings due to the master terminal 100-n acting like a central controller for audio that is transmitted among numerous slave terminals. Instead of such audio information being communicated directly between each of the slave terminals, thereby requiring additional bandwidth to achieve such direct communications, the master terminal receives, processes and combines audio for sending to the slave terminals. Accordingly, such an aggregated signal to be sent to a particular slave terminal, with audio from more than one terminal (master and/or slave terminal audio), requires less bandwidth than the bandwidth required to send that same audio directly between slave terminals. Additionally, when possible and desired, the master terminal 100-n can synthesize more audio information using audio that it processes. For example, when sending aggregated audio to a particular slave terminal that has quadraphonic sound capability, the audio output by the master terminal 100-n to that slave terminal might include audio information compatible with the quadraphonic speakers of that particular slave terminal.
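A rough count of directed audio streams illustrates the bandwidth saving; this sketch assumes every terminal talks, a single master, and no hybrid slave-to-slave connections:

```c
#include <assert.h>

/* Full mesh: each of n terminals sends its audio to the n-1 others. */
int mesh_streams(int n) { return n * (n - 1); }

/* Star through one master: each of the n-1 slaves sends one stream up,
 * and the master returns one aggregated stream to each slave. */
int star_streams(int n) { return 2 * (n - 1); }
```

For five terminals, the full mesh requires 20 directed streams versus 8 with a master, and the gap widens as terminals are added.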
It should be further noted that the present invention does not require that every player or user involved in a particular group interaction, such as playing a game, have a user terminal that provides video and/or audio communications. Accordingly, one or more players not having such audio and/or video communications capabilities can play the game with players who utilize user terminals that do have such capabilities. More specifically, one or more players may not have a microphone 120 and/or camera 116. Each player that does not have a camera 116 and a microphone 120 can receive video because the player has a video display 124 and can receive audio because the player has one or more speakers 128. In such a case, the player is able to receive video and voice but cannot send his/her own video and voice. Related to that case, the option remains available to prevent or otherwise control the sending of video and/or voice to such a player, i.e., reception of video and/or voice by that player could be turned off for one or more suitable reasons, such as resulting from some rules-based or policing system.
In addition to voice being controlled using the control module 148, game audio data is also input to the control module 148 from the Flash Player 144 so that the control module 148 can be involved in regulating audio that might be output by the speaker(s) 128 and picked up by the microphone 120. The control module 148 includes an echo canceller component so that any remote player's speech will be removed from the sound picked up by the player's microphone 120 and thus not returned as an echo to the remote user. Similarly, because the game data sounds are also received by the echo canceller of the control module 148, those game sounds will be removed as well from the audio picked up by the player's microphone 120. Consequently, players will not hear other players' game sounds.
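The echo-cancellation idea can be sketched with a toy normalized least-mean-squares (NLMS) adaptive filter; a production echo canceller is far more elaborate, and every name and constant below is an assumption rather than a description of the control module 148 itself:

```c
#include <assert.h>

#define TAPS 64

/* The reference is what is sent to the speaker 128 (remote voice plus
 * game sounds); the filter adapts toward the speaker-to-microphone echo
 * path so that its output can be subtracted from the microphone signal. */
struct aec {
    double w[TAPS];   /* estimated echo path                    */
    double x[TAPS];   /* recent reference samples, newest first */
};

double aec_process(struct aec *a, double ref, double mic)
{
    for (int i = TAPS - 1; i > 0; i--)
        a->x[i] = a->x[i - 1];
    a->x[0] = ref;

    double echo = 0.0, energy = 1e-6;
    for (int i = 0; i < TAPS; i++) {
        echo   += a->w[i] * a->x[i];
        energy += a->x[i] * a->x[i];
    }
    double err = mic - echo;            /* echo-reduced microphone signal */
    double mu  = 0.5 / energy;          /* normalized step size           */
    for (int i = 0; i < TAPS; i++)
        a->w[i] += mu * err * a->x[i];  /* adapt toward the echo path     */
    return err;
}
```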
With reference to
Referring to
In view of the descriptions involving these different embodiments, some rules concerning slave terminals and master terminals are noted. First, a slave terminal must connect to one and only one master terminal. Related to that, a slave terminal processes only its own data and sends or outputs only its own data. Secondly, a master terminal can connect to any number of slave terminals including no (zero) slave terminals. Thirdly, a master terminal must connect to all other master terminals, with the “net” associated with master terminals being fully interconnected. Fourth, a slave terminal may connect to one or more other slave terminals, so long as they all connect to the same master terminal (hybrid embodiment). Based on these rules, the route video/audio data takes from one player/user to another is unambiguous and no video and/or audio data passes through more than two master terminals.
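The first and third rules can be checked mechanically, as sketched below; the adjacency-array representation is an assumption for illustration, and the second and fourth rules impose no additional constraint checkable in this simplified form:

```c
#include <assert.h>
#include <string.h>

#define MAXNODES 10

/* conn[i][j] != 0 means node i maintains a connection to node j;
 * is_master[i] flags master terminals. */
int topology_ok(int conn[MAXNODES][MAXNODES], int is_master[MAXNODES], int n)
{
    for (int i = 0; i < n; i++) {
        if (is_master[i]) {
            /* Rule three: masters are fully interconnected. */
            for (int j = 0; j < n; j++)
                if (j != i && is_master[j] && !conn[i][j])
                    return 0;
        } else {
            /* Rule one: a slave connects to exactly one master. */
            int masters = 0;
            for (int j = 0; j < n; j++)
                if (is_master[j] && conn[i][j])
                    masters++;
            if (masters != 1)
                return 0;
        }
    }
    return 1;
}
```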
With respect to the embodiments of
Similar to the determinations of data parameters that were made when the initial or original version of the particular network was formed, after the peer-to-peer server 108 completes creation of the data parameters table, the designated topology manager assumes responsibility for determining each master terminal and each slave terminal. Regarding one or more user terminals 100 no longer being part of the previously established peer-to-peer system, and depending on which one or more user terminals 100 are no longer part of such a network, determinations may be made with respect to the new network being established. If the user terminal that discontinues being part of the previously established network is the designated topology manager, then a new topology manager must be selected from the user terminals that are part of the new network to be established. Similarly, if the user terminal 100 that discontinues being part of the previously established network is a master terminal, then a new master terminal is determined as previously discussed. In the case of a particular slave terminal dropping out or being removed from the network, it may be that no network determinations need be made, other than that video and/or audio transfers no longer occur involving such a slave terminal.
With regard to the various stages or steps associated with video and audio communications that are illustrated by the blocks of
The following provides in step, summary, outline and/or comment form additional or further descriptive information concerning the structures, routines and/or operations of the present invention:
Players Joining & Exiting Games
P2PS—Voice & Video Peer-to-Peer Server
{
    unsigned short out_bandwidth[MAXNODES]; // Node to node uplink bandwidth (Kbits/second)
    unsigned short RTT[MAXNODES];           // Round trip delay (tenths of milliseconds)
    unsigned short total_in_bandwidth;      // Total downlink bandwidth (Kbits/second)
    unsigned short unreliable_map;          // Map of which connections are unreliable
    unsigned char horsepower;               // How long the audio thread takes to do a significant portion of its work (in tenths of milliseconds)
};
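Two helper routines illustrate how the fields of such a table entry might be consulted; the entry name, the downlink-cap rule, and the bit layout assumed for unreliable_map are illustrative assumptions:

```c
#include <assert.h>
#include <string.h>

#define MAXNODES 10

/* The data parameters table entry as listed above, one per node. */
struct dpt_entry {
    unsigned short out_bandwidth[MAXNODES]; /* node-to-node uplink, Kbits/s       */
    unsigned short rtt[MAXNODES];           /* round trip delay, 0.1 ms units     */
    unsigned short total_in_bandwidth;      /* total downlink, Kbits/s            */
    unsigned short unreliable_map;          /* bit i set: link to node i unreliable */
    unsigned char  horsepower;              /* audio-thread work time, 0.1 ms units */
};

/* Usable bandwidth from node a to node b, capped by a's measured uplink
 * on that path and b's total downlink (a simplifying assumption). */
unsigned path_bandwidth(const struct dpt_entry t[], int a, int b)
{
    unsigned up = t[a].out_bandwidth[b];
    unsigned down = t[b].total_in_bandwidth;
    return up < down ? up : down;
}

/* Bit test against the unreliable-connection map. */
int link_unreliable(const struct dpt_entry t[], int a, int b)
{
    return (t[a].unreliable_map >> b) & 1;
}
```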
Establish P2P Communications Among the Nodes
Obtain Data, Populate the DPT, Send a Copy of the DPT to all Nodes
Designate a Topology Manager and Establish a Topology
Layer 1, network management, one of the following:
  STUN message:
    3 bits: 000
    ? bits: STUN data
  Measure RTT query message:
    5 bits: 00100
    3 bits: message ID
    64 bits: message time
    8 bits: CRC-8 of previous 9 octets
  Measure RTT reply message:
    5 bits: 00101
    3 bits: message ID
    64 bits: message time
    8 bits: CRC-8 of previous 9 octets
  Measure input bandwidth reply message:
    5 bits: 00110
    3 bits: CRC-3 of congestion
    8 bits: congestion (percentage of packets not received by sender)
  Measure output bandwidth record message:
    6 bits: 001110
    2 bits: 00 (reserved)
  Low bitrate media packet: 1 byte plus layers 3-5
    2 bits: 01
    6 bits: packet index fragment, bits 13-18
    8 bits: congestion (percentage of packets not received by sender)
  High bitrate media packet: 4 bytes plus layers 2-5
    1 bit: 1
    1 bit: layer 2 (FEC) present; 0: no, 1: yes
    30 bits: packet index
    8 bits: congestion (percentage of packets not received by sender)
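The RTT query and reply messages each protect their 9 leading octets (one type/ID octet plus the 8-octet message time) with a CRC-8. The specification does not name the polynomial or initial value, so the sketch below assumes the common CRC-8 with polynomial 0x07 and zero init; with that construction, a receiver can validate a message simply by running the CRC over all 10 octets and checking for zero.

```c
#include <assert.h>
#include <stdint.h>

/* Bitwise CRC-8. The polynomial (0x07) and init value (0) are assumptions;
 * the source only says "CRC-8 of previous 9 octets". */
uint8_t crc8(const uint8_t *data, int len)
{
    uint8_t crc = 0;
    for (int i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07) : (uint8_t)(crc << 1);
    }
    return crc;
}
```

For any CRC of this form (no reflection, zero init, zero xor-out), appending the CRC of a message to the message makes the CRC of the whole come out zero, which is the receiver-side check used in the test below.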
Layer 2, forward error correction (FEC), optional: 3 bytes
  5 bits: number of data packets in group
  5 bits: number of redundant packets in group
  6 bits: group index
  8 bits: number of bytes of aggregated audio data − 1
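The layer-2 header counts the data packets and redundant packets in each FEC group, but the source does not specify the redundancy code itself. As a simplified sketch of the idea, a single redundant packet formed as the XOR of the group's data packets lets a receiver rebuild any one lost packet; the function names here are illustrative.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PKT_LEN 8 /* illustrative payload size */

/* Build one redundant packet as the XOR of the group's data packets. */
void make_parity(uint8_t parity[PKT_LEN], uint8_t data[][PKT_LEN], int n)
{
    memset(parity, 0, PKT_LEN);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < PKT_LEN; j++)
            parity[j] ^= data[i][j];
}

/* Recover a single lost packet by XOR-ing the parity with the survivors. */
void recover(uint8_t out[PKT_LEN], uint8_t data[][PKT_LEN], int n,
             int lost, const uint8_t parity[PKT_LEN])
{
    memcpy(out, parity, PKT_LEN);
    for (int i = 0; i < n; i++)
        if (i != lost)
            for (int j = 0; j < PKT_LEN; j++)
                out[j] ^= data[i][j];
}
```

A scheme supporting several redundant packets per group, as the 5-bit count allows, would need a stronger code (e.g. Reed-Solomon); XOR parity only survives one loss per group.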
Layer 3, frame management, one of the following:
  Normal frame: 2-3 bytes
    1 bit: 0
    1 bit: first packet in frame; 0: no, 1: yes
    1 bit: last packet in frame; 0: no, 1: yes
    13 bits: frame index
    if first packet in frame:
      8 bits: subchannel
  Transition frame: 2-12 bytes
    1 bit: 1
    1 bit: first packet in frame; 0: no, 1: yes
    1 bit: last packet in frame; 0: no, 1: yes
    13 bits: frame index
    if first packet in frame:
      2 bits: transition phase:
        00: R
        01: T
        10: E
        11: X
      1 bit: packet is whole; 0: no, 1: yes
      1 bit: number of node present indicators (NPIs):
        0: indicators for nodes 0-5
        1: indicators for nodes 0-9
      2 bits: NPI for node 0:
        00: node not present
        01: node present in network
        10: data not present (but normally would be)
        11: data present in packet
      2 bits: NPI for node 1
      2 bits: NPI for node 2
      2 bits: NPI for node 3
      2 bits: NPI for node 4
      2 bits: NPI for node 5
      if number of NPIs is 1:
        2 bits: NPI for node 6
        2 bits: NPI for node 7
        2 bits: NPI for node 8
        2 bits: NPI for node 9
      if transition phase is E or X:
        8 bits: topology count
        1 bit: packet is pure; 0: no, 1: yes
        network topology, one of the following:
          1 bit: 0 (2-6 node graph)
            14 bits: topology lookup table index
          3 bits: 100 (7 node graph)
            7 bits: interior node map
            21 bits: edge map
          3 bits: 101 (8 node graph)
            8 bits: interior node map
            28 bits: edge map
          2 bits: 11 (9 node graph)
            9 bits: interior node map
            36 bits: edge map
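The 7-, 8-, and 9-node topology encodings carry one interior-node bit per node plus one edge bit per unordered node pair, and the stated edge-map widths (21, 28, and 36 bits) are exactly n·(n−1)/2. A quick check of that arithmetic, together with one possible pair-to-bit mapping (the ordering convention in `edge_bit` is an assumption, since the source does not specify it):

```c
#include <assert.h>

/* One edge-map bit per possible connection between n nodes. */
int edge_map_bits(int n) { return n * (n - 1) / 2; }

/* Map an unordered node pair (i < j) to a bit position, enumerating edges
 * as (0,1),(0,2),...,(0,n-1),(1,2),... This ordering is illustrative. */
int edge_bit(int i, int j, int n)
{
    return i * n - i * (i + 1) / 2 + (j - i - 1);
}
```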
Layer 4, data stream:
  Individual audio/video data, zero or more times:
    1 bit: 1
    2 bits: data type:
      01: audio data
      10: video data, second through last fragment
      11: video data, first fragment
    if audio data:
      1 bit: private audio; 0: no, 1: yes
      4 bits: node ID (0-9)
      if private audio:
        8 bits: topology map of the excluded nodes (node ID bit removed)
      8 bits: number of bytes of audio data − 1
      ? bits: audio data
    if video data, second through last fragment:
      1 bit: last video fragment; 0: no, 1: yes
      4 bits: node ID (0-9)
      9 bits: number of bytes of video data − 1
      7 bits: video fragment number
      ? bits: video data
    if video data, first fragment (implied fragment number of zero):
      1 bit: last video fragment; 0: no, 1: yes
      4 bits: node ID (0-9)
      9 bits: number of bytes of video data − 1
      15 bits: number of milliseconds to delay display (twos complement)
      ? bits: video data
  Aggregated audio data, one of the following:
    aggregated audio present:
      ? bits: aggregated audio data
      if second bit of aggregated audio data is 1:
        8 bits: topology map of the nodes with audio present (node ID bit removed)
    aggregated audio not present:
      6 bits: 100000
      2 bits: 00 (reserved)
    test message:
      4 bits: 1000
      2 bits: test command:
        01: send input bandwidth reply message
        10: send test messages, slow start
        11: send test messages, fast start
      2 bits: 00 (reserved)
      ? bits: random data
Layer 5, packet validity check: 4 bytes
  32 bits: first 4 HMAC-MD5 digest bytes over the previous packet data
Notes:
  1) If audio is present in a frame, the first packets in the frame will contain audio data.
  2) The low bitrate media packet can be used when there is only one packet in a frame and FEC is not wanted. The packet index is constructed as follows:
       bits 0-12: frame index
       bits 13-18: packet index fragment
       bits 19-29: implied
  3) Audio data always starts with a zero bit.
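Per Note 2, the low bitrate media packet reconstructs the 30-bit packet index from the 13-bit frame index (bits 0-12) and the 6-bit packet index fragment carried in the layer-1 header (bits 13-18), with bits 19-29 implied from context. A sketch of composing and splitting that index (function names are illustrative, and the implied high bits are taken as zero here):

```c
#include <assert.h>
#include <stdint.h>

/* Compose the 30-bit packet index: frame index in bits 0-12, packet index
 * fragment in bits 13-18. Bits 19-29 are implied by the receiver's context
 * and are left zero in this sketch. */
uint32_t make_packet_index(uint16_t frame_index, uint8_t fragment)
{
    return ((uint32_t)(fragment & 0x3F) << 13) | (frame_index & 0x1FFF);
}

uint16_t frame_index_of(uint32_t idx) { return (uint16_t)(idx & 0x1FFF); }
uint8_t  fragment_of(uint32_t idx)    { return (uint8_t)((idx >> 13) & 0x3F); }
```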
LSB of frame index: 0: Command/Notify
  Bits 3-1 of frame index:
    000: Change volume command
    001: New volume value
    010: Delay/bandwidth measurement command
    011: Unreliable channel notification
    100: Change volume command
    101: New volume value
    110: Reserved
    111: Enabled video streams indication
LSB of frame index: 1: Data
  Bits 6-1 of frame index: Data packet index
    0-50: Remote Volume/Delay/bandwidth data
    51-59: Local Volume data
    60-63: Reserved
Change Volume command format:
  4 bits: 0000: No command
          Node ID + 1: Node of next new volume value
  1 bit: Packet is pure; 0: no, 1: yes
  1 bit: Packet is whole; 0: no, 1: yes
  1 bit: Network transition complete; 0: no, 1: yes
  1 bit: 0 (reserved)
New volume value format:
  8 bits: Volume value
Delay/bandwidth measurement command format:
  4 bits: 0000: No command
          Node ID + 1: Command originator node
  4 bits: 0000: Abort measurement
          Node ID + 1: Node to be measured
Unreliable channel notification format:
  4 bits: 0000: No notification
          Node ID + 1: Notification originator node
  4 bits: Channel ID: Unreliable channel
Enabled video streams indication format:
  8 bits: Topology map of the enabled video output streams (with sending node bit removed)
Remote Volume/Delay/bandwidth data packet:
  4 bits: 0000: No data
          Node ID + 1: Data from remote node
  4 bits: 0000 (reserved)
  8 bits: CPU speed
  16 bits: Unreliable channel map
  16 bits: Total input bandwidth
  For each node except the remote node (9 times):
    16 bits: Output bandwidth to node
    16 bits: Delay to node
  For each node except the remote node (9 times):
    8 bits: Volume of node
Local Volume data packet:
  For each node except the local node (9 times):
    8 bits: Volume of node
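The field widths above line up with the data-packet index ranges: the Remote Volume/Delay/bandwidth packet totals 4+4+8+16+16 + 9·(16+16) + 9·8 = 408 bits = 51 bytes, matching the 51 indices (0-50) reserved for it, and the Local Volume packet is 9 bytes, matching indices 51-59. The check below verifies that arithmetic; the assumption that one byte is carried per data packet index is an inference from the matching counts, not stated in the source.

```c
#include <assert.h>

/* Total payload bits of the Remote Volume/Delay/bandwidth data packet. */
int remote_packet_bits(void)
{
    return 4 + 4          /* node ID nibble + reserved nibble */
         + 8              /* CPU speed */
         + 16             /* unreliable channel map */
         + 16             /* total input bandwidth */
         + 9 * (16 + 16)  /* per-node output bandwidth and delay */
         + 9 * 8;         /* per-node volume */
}

/* Total payload bits of the Local Volume data packet. */
int local_packet_bits(void) { return 9 * 8; }
```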
Narrowband mono codec:
  1 bit: 0
  1 bit: channel audio present map follows codec data; 0: no, 1: yes
  1 bit: any audio present; 0: no, 1: yes
  1 bit: 0
  ? bits: codec data
Wideband mono and stereo codec:
  1 bit: 0
  1 bit: channel audio present map follows codec data; 0: no, 1: yes
  1 bit: any audio present; 0: no, 1: yes
  2 bits: 10
  1 bit: 0: monaural stream
         1: stereo stream
  Audio data, one of the following:
    constant bit rate:
      1 bit: 0
      1 bit: 0 (reserved)
      ?/2 bits: first 10 ms of codec data
      ?/2 bits: second 10 ms of codec data
    variable bit rate:
      1 bit: 1
      1 bit: 0 (reserved)
      n*8 bits: first 10 ms of codec data
      ? bits: second 10 ms of codec data
      8 bits: n
Null codec:
  8 bits: 00011000
Industry Standard wideband codec:
  1 bit: 0
  1 bit: channel audio present map follows codec data; 0: no, 1: yes
  1 bit: any audio present; 0: no, 1: yes
  5 bits: 11001
  ? bits: codec data
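All four audio codec headers share the same first three bits (a leading 0, the channel-map flag, and the any-audio flag) and differ in the discriminator that follows: 0 for narrowband, 10 for wideband, and for the 11 prefix three further bits selecting null (000, giving the full byte 00011000) or the industry-standard wideband codec (001, completing 11001). A sketch parser over the first header byte; MSB-first bit order is an assumption, as the source does not state it.

```c
#include <assert.h>
#include <stdint.h>

enum codec { NARROWBAND, WIDEBAND, NULL_CODEC, INDUSTRY_WB, UNKNOWN };

/* Decode the codec type from the first header byte, reading bits MSB-first
 * (an assumed convention). Bits 6 and 5 hold the channel-map and any-audio
 * flags and do not affect the type. */
enum codec codec_type(uint8_t b)
{
    if (b & 0x80) return UNKNOWN;        /* every header starts with a 0 bit */
    if (!(b & 0x10)) return NARROWBAND;  /* discriminator 0 */
    if (!(b & 0x08)) return WIDEBAND;    /* discriminator 10 */
    switch (b & 0x07) {                  /* discriminator 11 + three bits */
    case 0x00: return NULL_CODEC;        /* 00011000 */
    case 0x01: return INDUSTRY_WB;       /* 11001 */
    default:   return UNKNOWN;
    }
}
```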
Video Data Formats:
  Video codec:
    4 bits: 0001: intra-frame
            0010: inter-frame
            0011: drop-able inter-frame
    4 bits: 0010
    ? bits: codec data
The foregoing discussion of the invention has been presented for purposes of illustration and description. Further, the description is not intended to limit the invention to the form disclosed herein. Consequently, further variations and modifications commensurate with the above teachings, within the skill and knowledge of the relevant art, are within the scope of the present invention. The embodiments described hereinabove are further intended to explain the best modes presently known of practicing the invention and to enable others skilled in the art to utilize the same as such, or in other embodiments, and with the various modifications required by their particular application or uses of the invention. By way of example only, one or both of voice and video communications based on at least certain of the technological features provided herein could be implemented in a casino environment, such as among players of slot machines. Each slot machine could have video and audio input and output devices associated therewith for use by the player thereof. It may be that, instead of a p2p system being established among a determined number of such slot machines and their users, a central terminal or server might be utilized through which all communications are directed before passing them to the desired slot machine(s). It is also intended that the claims be construed to include alternative embodiments to the extent permitted by the prior art.
Ridges, John C., Wisler, James M.
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Feb 14 2011 | RIDGES, JOHN C | MASQUE PUBLISHING, INC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 025819 | 0791 | |
Feb 14 2011 | WISLER, JAMES M | MASQUE PUBLISHING, INC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 025819 | 0791 | |
Feb 16 2011 | Masque Publishing, Inc. | (assignment on the face of the patent) |