There is disclosed a method for replicating media contents in a P2P VoD system comprising a plurality of peers. The method may comprise: determining that a candidate media unit shall be replicated into a local storage of one of the peers; checking whether said local storage has enough space to store the candidate media unit; if not, selecting one media unit previously stored in said local storage to be replaced; and replacing the selected media unit by the candidate media unit.
1. A method for replicating media contents in a P2P VoD system comprising a plurality of peers, comprising:
determining that a candidate media unit shall be replicated into a local storage of one of the peers;
checking whether said local storage has enough space to store the candidate media unit;
if not, selecting one media unit previously stored in said local storage to be replaced; and
replacing the selected media unit by the candidate media unit;
wherein the selecting further comprises:
selecting a media unit with a highest priority to be replaced from said one of the peers; and
wherein the media unit r with the highest priority is determined by r = arg max_{k∈Qi} ηk(ASRk−PBRk),
where arg max means the one in Qi that has the maximum value of ηk(ASRk−PBRk); and ASRk = Σ_{i∈Sk} Ui/λi,
where Sk is a set of peers from the plurality of peers, each of which has a replication of the media unit k in its local storage,
Ui is an upload capacity for a peer i in Sk, and
λi is a Poisson approximation for a number of peers who request the media unit k from the peer i, and
PBRk means a required playback rate for the media unit k, and
ηk represents a request rate/popularity for the media unit k.
19. A P2P VoD system, comprising:
a plurality of peers; and
an information collector communicating with each of the peers and configured to determine that a candidate media unit shall be replicated into a local storage of one of the peers;
wherein said one of the peers operates to check whether said local storage has enough space to store the determined candidate media unit; if not, further to select one media unit previously stored in said local storage, and replace the selected media unit by the determined candidate media unit;
wherein, if said one of the peers determines said local storage has not enough space to store the determined candidate media unit, it selects one media unit with a highest priority previously stored in its local storage and replaces the selected media unit by the determined candidate media unit; and
wherein the media unit r with the highest priority is determined by r = arg max_{k∈Qi} ηk(ASRk−PBRk),
where arg max means the one in Qi that has the maximum value of ηk(ASRk−PBRk); and ASRk = Σ_{i∈Sk} Ui/λi,
where Sk is a set of peers from the plurality of peers, each of which has a replication of the media unit k in its local storage,
Ui is an upload capacity for a peer i in the set of peers Sk, and
λi is a Poisson approximation for a number of peers who request the media unit k from the peer i, and
PBRk means a required playback rate for the media unit k, and
ηk represents a request rate/popularity for said media unit k.
2. A method of
storing directly the candidate media unit into said local storage.
3. A method of
4. A method of
5. A method of
determining an expected aggregate service rate ASRj of the candidate media unit;
determining a required playback rate of the candidate media unit;
checking whether the expected aggregate service rate is not larger than the required playback rate; and
if yes, determining that the candidate media unit shall be replicated into the local storage.
7. A method of
9. A method of
determining an expected bandwidth gap before replacement, and an expected bandwidth gap after replacement, respectively;
determining a value of the expected bandwidth gap after the replacement is less than a value of the expected bandwidth gap before replacement; and
replacing the selected media unit by said candidate media unit.
11. A method of
12. A method of
wherein the determining is carried out by the information collector, and the checking, selecting, and replacing are carried out by at least one of the peers.
14. A method of
15. A method of
Gapb = Σ_{k∈Qi} ηk(PBRk−ASRk)+ + ηj(PBRj−ASRj)+, and Gapa = Σ_{k∈Qi} ηk(PBRk−ASRka)+ + ηj(PBRj−ASRja)+, where
(x)+ = Max{x, 0};
Gapb represents the expected bandwidth gap before replacement;
Gapa represents the expected bandwidth gap after replacement;
ηj represents a request rate/popularity for said candidate media unit j;
PBRj represents a required playback rate of the candidate media unit j;
ASRj represents an expected aggregate service rate for said candidate media unit j;
ηk, kεQi represents a request rate/popularity for the media unit k, and the media unit k belongs to the movie set Qi including all media units stored in peer i's local storage;
ηj represents a request rate/popularity for the candidate media unit j;
ASRka represents an expected aggregate service rate after replacement for the media unit k;
ASRja represents the expected aggregate service rate after replacement for said candidate media unit j.
16. A method of
removing the selected media unit from said local stored media unit set; and
adding said candidate media unit into said local stored media unit set.
17. A method of
18. A method of
updating the information maintained in the information collector upon the replacing.
20. A system of
21. A system of
a candidate media determining module configured to determine or receive the candidate media unit.
22. A system of
a maintaining module configured to maintain information for a media unit set locally stored in each of the peers.
24. A system of
26. A system of
determining an expected bandwidth gap before replacement, and an expected bandwidth gap after replacement, respectively;
determining a value of the expected bandwidth gap after the replacement is less than a value of the expected bandwidth gap before replacement; and
replacing the selected media unit by said candidate media unit.
28. A system of
29. A system of
30. A system of
31. A system of
32. A system of
a total number of the peers,
a local stored media unit set for each of the peers,
an access rate/popularity for each media unit from the local stored media unit set,
an expected aggregate service rate for said each media unit, and
a required playback rate for said each media unit.
34. A system of
35. A system of
36. A system of
Gapb = Σ_{k∈Qi} ηk(PBRk−ASRk)+ + ηj(PBRj−ASRj)+, and Gapa = Σ_{k∈Qi} ηk(PBRk−ASRka)+ + ηj(PBRj−ASRja)+, where
(x)+ = Max{x, 0};
Gapb represents the expected bandwidth gap before replacement;
Gapa represents the expected bandwidth gap after replacement;
ηj represents a request rate/popularity for said candidate media unit j;
PBRj represents a required playback rate of the candidate media unit j;
ASRj represents an expected aggregate service rate for said candidate media unit j;
ηk, kεQi represents a request rate/popularity for the media unit k, and the media unit k belongs to the movie set Qi including all media units stored in peer i's local storage;
ηj represents a request rate/popularity for the candidate media unit;
ASRka represents an expected aggregate service rate after replacement for the media unit k;
ASRja represents the expected aggregate service rate after replacement for said candidate media unit j.
The present application relates to a method for replicating media contents in a Peer-to-Peer (P2P) video-on-demand (VoD) system and a P2P VoD system. The present application further relates to a tracker server and a client terminal that may be deployed in the P2P VoD system.
Traditional VoD is based on a client-server approach, which is expensive and not scalable. In recent years, the P2P approach was first demonstrated to work for live content streaming, and later for VoD streaming as well. Various efforts are under way to build a P2P-based VoD platform, for example using set-top boxes (STBs).
VoD streaming is definitely harder to accomplish than live content streaming, since peers are less likely to have the same contents to share with each other. The lack of synchrony (in VoD) is compensated for by the following two remedial features in P2P VoD systems: a) each peer contributes more storage to replicate the VoD contents; and b) each peer is capable of uploading contents different from what is currently consumed (downloaded) locally. The effectiveness of this remedy depends on whether the right mix of the media contents is placed at different peers, which is the P2P replication problem at hand. P2P replication is a central design issue in P2P VoD systems.
The problem formulation is based on a requirement of more contemporary P2P VoD systems, where the majority of the peers expect to stream the contents for immediate viewing. In such systems, video servers must still be deployed to ensure the service quality. A P2P network may be viewed as a mechanism used to off-load the video servers. The objective is to minimize the server bandwidth while the peers' viewing quality is ensured. The streaming requirement means there needs to be a balance between the total supply of uplink bandwidth (that is, the sum of the server(s)' and peers' uplink bandwidth) and the total demand (that is, the number of viewing peers multiplied by the video playback rate). In practice, the operating regime of particular interest is when the total peer uplink bandwidth is comparable to the demand (of viewing bandwidth). In this regime, ideally the server bandwidth may be zero, if the viewing demand is deterministically known and all the peers are replicated with the right mix of the contents so as to make full use of their upload capacities. In reality, the unpredictability of user demand, and thus imperfect content replication and service load balancing, will always result in some server load.
In addition, traditional VoD systems purely rely on servers to stream video content to clients, which does not scale. In recent years, P2P VoD has been proven to be practical and effective. In a P2P VoD system, each peer (or an STB) contributes some storage to store video (or segments of video) content to help the video server. The present application relates to such content replication for P2P VoD applications, helping the peers to make an optimal replication decision, i.e. to decide what contents should be replicated locally, so that their upload capacity can be utilized as much as possible under dynamic movie access rate/popularity conditions.
This invention is about a content replication algorithm for supporting peer-to-peer assisted VoD systems and it is designed to work in a distributed way.
One of the goals of the content replication claimed in this application is to balance the load among all the existing peers. For a media unit, such as a movie with a certain access rate/popularity, if there are too many replication copies of this media unit in the peers' local storages, the aggregate service rate for this media unit contributed by the peers may exceed the total demand (required service rate), hence wasting the peers' upload bandwidth. On the other hand, if the number of replication copies is too small, then the total required service rate cannot be satisfied by the peers' uploading alone, hence consuming the content server's upload bandwidth and computing resources. Therefore, how to achieve the optimal number of replication copies for each media unit so as to maximize the utilization of the peers' upload bandwidth (load balance among all the peers) is an essential issue.
To this end, in one aspect, there is provided a method for replicating media contents in a P2P VoD system comprising a plurality of peers, comprising:
determining that a candidate media unit shall be replicated into a local storage of one of the peers;
checking whether said local storage has enough space to store the candidate media unit;
if not, selecting one media unit previously stored in said local storage to be replaced; and
replacing the selected media unit by the candidate media unit.
According to another aspect of the application, there is provided a P2P VoD system comprising:
a plurality of peers; and
an information collector communicating with each of the peers and configured to determine that a candidate media unit shall be replicated into a local storage of one of the peers;
wherein said one of the peers operates to check whether said local storage has enough space to store the determined candidate media unit; if not, further to select one media unit previously stored in said local storage, and replace the selected media unit by the determined candidate media unit.
In addition, there is provided a tracker server communicating with a plurality of client terminals. The tracker server may comprise:
a candidate media determining module configured to determine a candidate media unit to be replicated into one or more of the client terminals;
a maintaining module configured to maintain information for a media unit set locally stored in each of the peers, an expected aggregate service rate for the candidate media unit, and a required playback rate for the candidate media unit; and
an updating module configured to update the information for said media unit set and the candidate media unit.
There is further provided a client peer, which may comprise:
a retrieving unit configured to at least retrieve an expected aggregate service rate and a required playback rate for a candidate media unit to be replicated into the client peer;
a first comparing unit configured to determine whether the expected aggregate service rate is not larger than the retrieved required playback rate;
a storage volume determining unit configured to determine whether said peer has enough storage for storing the candidate media unit if the retrieved expected aggregate service rate is not larger than the retrieved required playback rate; and
a replacing unit configured to store the candidate media unit in said peer if said peer has enough storage for storing the candidate media unit.
The application further provides a method for replicating media contents in a P2P VoD system comprising a plurality of peers. The method may comprise:
determining an expected aggregate service rate of a candidate media unit to be replicated into a local storage of one of the peers;
determining a required playback rate of the candidate media unit;
determining that the expected aggregate service rate is not larger than the required playback rate; and
storing the candidate media unit into said local storage.
Embodiments of the present application will be described with reference to the accompanying drawings. It should be appreciated that the drawings are present only for purpose of illustration and are not intended to limit the present invention.
1. Information Collector 100
As shown in
The information collector 100 is configured to communicate with each of the peers 200-1, 200-2, . . . , 200-i . . . 200-n, and maintain useful information of related media units (for example, movies), i.e. media unit j and all the media units in a media unit set Qi (i=1, . . . , N, where N represents the total number of peers in the system) locally stored in each of the peers.
The candidate media determining module 101 is configured to determine the candidate media unit j to be replicated into one or more of the peers 200-1, 200-2, . . . , 200-i . . . 200-n. The media unit used in this application may comprise any minimum logical unit for the replacement and replication of the media contents. In one embodiment, the media unit may be a movie as the minimum logical unit. In a real implementation, the minimum logical unit may be of a smaller size, i.e. block/chunk based. Usually a movie can be composed of tens to hundreds of chunks (depending on the chunk size used).
The candidate media unit j to be replicated into peer's local storage may be determined by one of the following ways.
1) Depending on peers' viewing behavior.
In this case, all the media units (for example, movies) to be replicated or already replicated are fully related to the peers' viewing behaviors. A media unit, to be a candidate for replication by a peer, must have been requested by this peer before (i.e., it is being or has been watched by this peer). In other words, media units that the peer is not interested in or has never requested cannot become a replication candidate for this peer.
2) Assigned by service provider or administrator.
In this case, all the media units to be replicated or already replicated in the peers' local storages are completely unrelated to what the peers are watching or have watched. Under certain situations, this mode needs to be coordinated with the method of manually assigning a request rate/popularity ηj for the candidate media unit. For example, the service provider or administrator may predict that media unit k will become very hot in a short time while the currently expected aggregate service rate ASRk for the media unit k is not large enough. By manually assigning a larger ηk and assigning the media unit k as the replication candidate to a certain number of peers, the replicas of the media unit k, as well as ASRk, can be quickly increased. The request rate/popularity ηj for the candidate media unit and the expected aggregate service rate ASRk will be discussed later.
3) Mixture of methods 1) and 2)
This mode is the most practical and preferred one because it combines the advantages of both methods.
In one embodiment, the capacity determining module 103 operates to determine an upload capacity Ui of each of the peers 200-1, 200-2, . . . , 200-i . . . 200-n, and the access rate/popularity determining module 104 operates to determine an access rate/popularity ηk of each of the media units. Alternatively, each of the peers 200-1, 200-2, . . . , 200-i . . . 200-n may determine the upload capacity for the respective peer and the access rate/popularity for each of the media units, and then send the determined information to the information collector 100. In this case, the modules 103 and 104 are not necessary and may be removed.
The maintaining module 105 is utilized to maintain the information for a media unit set Qi (i=1, . . . , N, where N represents the total number of peers in the system) locally stored in each of the peers, and the information for the candidate media unit j, such as the above-mentioned ASRj and PBRj. The updating module 106 operates to update the information maintained in the maintaining module 105.
In summary, the information that is maintained in the maintaining module 105 is further discussed as below.
1) Total number of peers in the system (N)
In a dynamic scenario, such as a P2P live/VoD streaming system, peer churn will cause N to change rapidly with time. To mitigate this fast-changing factor, measurement methods like periodic updating, time-based averaging, etc., can be applied; whereas in a static scenario, such as IPTV (set-top boxes), peer churn will be very small.
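As one sketch of such time-based averaging, an exponential moving average over periodic peer-count measurements could look like the following; the smoothing factor alpha and the sample counts are hypothetical choices, not values given in the text:

```python
def smooth_peer_count(samples, alpha=0.5):
    """Smooth a sequence of periodically measured peer counts N.

    An exponential moving average damps rapid changes caused by peer churn;
    alpha in (0, 1] weights the newest measurement (illustrative choice).
    """
    est = float(samples[0])
    for n in samples[1:]:
        est = alpha * n + (1 - alpha) * est
    return est

# Hypothetical measurements of N over three periods.
n_est = smooth_peer_count([100, 120, 80], alpha=0.5)
```

With alpha=0.5 the estimate moves only halfway toward each new sample, so a one-period dip in N does not whipsaw the replication decisions that depend on it.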
2) The local stored media unit set of each peer (Qi, i=1, . . . , N)
This information Qi (media ID, media size, etc.) will be updated only when peers send an update message, for example, after a replacement or a local storage change. According to one embodiment, each peer i may obtain its own Qi locally and then report the same to the information collector 100. Alternatively, Qi may be determined by the information collector 100. In this case, the information collector 100 may further comprise a corresponding module (not shown) to determine Qi.
3) The access rate/popularity for each media unit (ηk, k∈K, the total media set in each peer)
ηk is a very important parameter when making the replacement decision, and therefore affects the system performance (server load or burden). There are at least three ways to determine ηk in a real implementation.
In one implementation, ηk is determined by directly applying the historical data, i.e., the measurement results of the access rate of the media unit k in the past. Here we illustrate an example of measuring the access rate of the media unit k during the past one day.
We denote Vk(T) to be the number of times that the media unit k has been viewed by peers in the past time duration T (here T=1 day), and the total number of viewing behaviors V(T)=Σ_{k∈K} Vk(T). Therefore, the predicted ηk(T+1)=Vk(T)/V(T), a real number between 0 and 1.
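This historical estimator can be sketched as follows; the function name and the view counts are made-up illustrations of ηk(T+1)=Vk(T)/V(T):

```python
def predict_popularity(view_counts):
    """Estimate next-period popularity eta_k from last period's view counts.

    view_counts maps a media unit id to Vk(T), its number of views in the
    past duration T (e.g. one day). Returns eta_k(T+1) = Vk(T) / V(T).
    """
    total = sum(view_counts.values())  # V(T) = sum over k of Vk(T)
    if total == 0:
        return {k: 0.0 for k in view_counts}
    return {k: v / total for k, v in view_counts.items()}

# Hypothetical one-day view counts for three movies.
views = {"movie_a": 120, "movie_b": 60, "movie_c": 20}
eta = predict_popularity(views)
```

The resulting ηk values lie between 0 and 1 and sum to 1, matching the text's description.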
This method is very simple and easy to implement. In addition, it costs minimal computing and memory resources. However, media unit popularity changes, sometimes rapidly: some media units like movies become hot (popular) in a short time while others gradually become cold (unpopular). The drawback of this method is that it cannot catch the changing trend of movie popularity, especially for those movies that are continuously becoming hotter or colder.
In another implementation, ηk may be determined by applying a prediction to the historical data. This method is designed to catch the changing trend of media unit popularity; the historical measurement results are still important. For example, given Vk(1), Vk(2), Vk(3), three days' access counts of the media unit k, Vk(4) can be predicted by a regression over Vk(1), Vk(2) and Vk(3), either linear or 2nd-order polynomial, and V(4) can be predicted in a similar way. Hence, the predicted ηk(4)=Vk(4)/V(4).
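A minimal sketch of this trend-tracking variant, using an ordinary least-squares line over the past measurements (the linear-vs-polynomial choice is the text's; the helper name is ours):

```python
def linear_predict(history):
    """Fit y = a*t + b to history at t = 1..n by least squares,
    then extrapolate one step ahead to t = n + 1.

    For Vk(1), Vk(2), Vk(3) this yields the predicted Vk(4)."""
    n = len(history)
    ts = list(range(1, n + 1))
    mean_t = sum(ts) / n
    mean_y = sum(history) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, history))
    var = sum((t - mean_t) ** 2 for t in ts)
    a = cov / var                # slope: the popularity trend
    b = mean_y - a * mean_t      # intercept
    return a * (n + 1) + b

# A movie whose daily views grow linearly keeps growing in the forecast.
vk4 = linear_predict([10, 20, 30])
```

Unlike the pure historical estimator, a movie that is steadily gaining views gets a higher predicted Vk(4) than its last observed count.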
In addition, ηk may be manually assigned by a service provider or administrator. In order to make ηk more accurate, manually assigning the value of ηk is also a useful way. For example, when the service provider wants to stop providing viewing service for the media unit k, it can easily achieve this purpose by assigning ηk=0. According to our algorithm, the replicas of the media unit k in the peers' local storages will automatically get removed whenever there appears a candidate media unit to be replicated.
4) The upload capacity of each peer, (Ui, i=1, . . . , N)
This information is provided by the peers. Since network conditions vary with time, peers can slowly update their measured upload capacities. However, this parameter can also be assigned manually by the service provider or administrator when the whole system is in a controlled environment, like some IPTV systems.
5) The expected aggregate service rate for each media unit (ASRk, kεK)
As long as there are updates of the peers' locally stored media unit sets, the information collector needs to re-calculate ASRk accordingly. In one embodiment, ASRk may be determined by each of the peers and then be reported to the information collector.
6) The required playback rate for each media unit (PBRk, kεK)
PBRk is media unit dependent and decided mainly by the audio/video codec, but it can be easily obtained from the service provider (directly provided by a content server).
Based on the previous description, the simplest way to deploy the information collector 100 is by setting up dedicated/centralized server(s) (deployed, managed and maintained by the service provider or administrator) and running the information collector 100 on it (them). It can also be implemented as a module and run on some streaming content servers.
Although setting up dedicated/centralized servers is a simple and much preferred way to implement the information collector, it is still possible to realize the information collector module without the help of dedicated/centralized server(s). As one example, some of the peers (with long online duration, more powerful CPUs, etc.) can be selected as Super Nodes, which run the information collector module in parallel with the original VoD streaming module. Peers can learn of new peers or neighbors either from Super Nodes or through a gossip-based mechanism.
2. The peers 200-1, 200-2, . . . , 200-i . . . 200-n
It is worth noting that throughout this document, “peer” is used as a generalized term, which represents all types of nodes in the overlay networks that the system 1000 focuses on, including client peers in P2P-assisted VoD systems, set-top boxes (STBs) in IPTV systems, and so on.
ASRj and PBRj as mentioned above are retrieved by the retrieving unit 201. The first comparing unit 202 is configured to determine whether ASRj is not larger than PBRj. The storage volume determining unit 203 then operates to determine whether the peer i has enough storage for storing the candidate media unit j if ASRj is not larger than PBRj. The replacing unit 204 is configured to store the candidate media unit j in the peer i if the peer i has enough storage for storing the candidate media unit.
According to one embodiment, the peer may further comprise: a selecting unit 205 configured to select a media unit r with the highest priority to be replaced from a locally stored media unit set Qi in peer i; a relative bandwidth gap determining unit 206 configured to determine an expected bandwidth gap before replacement, Gapb, and an expected bandwidth gap after the replacement, Gapa, for the candidate media unit j, respectively; and a second comparing unit 207 configured to determine whether the value of Gapb is larger than the value of Gapa. If yes, the replacing unit 204 operates to remove the selected media unit r from said locally stored media unit set Qi and add said candidate media unit j into said set Qi.
Hereinafter, a process for a content replication according to one embodiment of the application will be discussed in reference to
As shown in
As mentioned above, the media unit may be a movie. Hereinafter, for the purpose of the description, the term “movie” is used in place of the media unit.
At step S1010, useful information is retrieved from (for example) the information collector 100. The useful information may be also retrieved by the retrieving unit 201 according to one embodiment of the application. According to the local information obtained in step S1005, it is necessary to retrieve useful information of the related movies, i.e. movie j and all the movies in set Qi. In one embodiment, the useful information may contain: a required movie playback rate (PBR), an expected aggregate service rate (ASR), a movie request rate/movie popularity (η), and a total number of peers in the system (N).
Note that the derivation for PBR and ASR of related movies will be described in detail later.
At step S1015, the first comparing unit 202 operates to determine whether ASRj is not larger than PBRj. If ASRj>PBRj, which means the expected aggregate service rate (upload bandwidth) of movie j provided by the peers is larger than movie j's required playback rate, then there is no need to increase the number of replication copies of movie j. In this case, everything remains unchanged (as shown in step S1040) and the process ends. Otherwise, it is necessary to make more replication copies of movie j among the peers (movie j becomes the replication candidate for peer i). However, whether movie j should be stored into peer i's local storage (or has a higher priority than the existing movies in set Qi) needs more calculation and investigation.
At step S1020, based on the local storage information of peer i, the storage volume determining unit 203 operates to determine whether there is adequate storage space to store movie j. If the available storage size in the peer i is large enough, then the copy of movie j is directly stored and no existing movie in set Qi needs to be removed, as is shown in step S1050. Then, the peer i should inform the updating module 106 of the information collector 100 to update peer i's latest local storage information to the information collector 100 (step S1055), so that the information collector 100 could help other peers when they run this process 4000 later and separately.
Otherwise, further analysis needs to be made to see whether it is necessary to replace the existing movie in set Qi with movie j.
Specifically, at step S1025, the selecting unit 205, based on the retrieved movie information of the locally stored movies in set Qi, selects a movie r from Qi which has the highest priority among all the movies in set Qi to be removed, i.e., the candidate movie to be replaced by movie j. In this step, we propose a metric called the weighted redundant service rate. For example, if the movie r has the largest value of this metric among all the movies in set Qi, i.e., r = arg max_{k∈Qi} ηk(ASRk−PBRk), we consider it the candidate movie to be replaced.
At step S1030, two key metrics, called the relative expected bandwidth gaps (or server burdens), Gapb (b stands for before, i.e. with no replacement) and Gapa (a stands for after, i.e. after doing the replacement), are calculated, for example, by the relative bandwidth gap determining unit 206 as:
Gapb = Σ_{k∈Qi} ηk(PBRk−ASRk)+ + ηj(PBRj−ASRj)+,
Gapa = Σ_{k∈Qi} ηk(PBRk−ASRka)+ + ηj(PBRj−ASRja)+, where (x)+ = Max{x, 0}.
The intuition behind this formula is that Gapb represents the expected total server burden caused by the viewing behaviors of the peers who are watching movie j and all the movies in set Qi when movie r is not replaced by movie j (before replacement). Gapa has a similar meaning when movie r is replaced by movie j (after replacement). How to determine Gapb and Gapa will be further discussed later.
At step S1035, it is determined whether the movie r should be replaced by movie j. For example, a second comparing unit 207 may be utilized to determine whether Gapa<Gapb. If Gapa<Gapb, which implies that the replacement will help to decrease expected server burden, then the replacement may be carried out at step S1045; otherwise, the movie r should not be replaced and Qi remains unchanged (step S1040). Specifically, at the step S1045, the movie r will be removed from set Qi and the copy of movie j will be stored locally by adding the movie j into the set Qi through the replacing unit 204.
Finally, at step S1055, the updating unit 106 will update the information. As long as the local stored movie set Qi has been changed (either new movies added or existing movie replaced by other movies), peer i should update its latest local storage information to the information collector 100 so that the information collector 100 could help other peers when they run this process later and separately.
Hereinafter, the more details for steps S1025-S1055 will be discussed in reference to the following pseudo code.
1: r = arg max_{k∈Qi} ηk(ASRk − PBRk);
2: λi = N Σ_{k∈Qi} ηk;
3: λia = λi + (ηj − ηr)N;
4: for all k such that k ∈ Qi and k ≠ r do
5:   ASRka = ASRk − Ui/λi + Ui/λia;
6: end for
7: ASRra = ASRr − Ui/λi;
8: ASRja = ASRj + Ui/λia;
9: Gapb = Σ_{k∈Qi} ηk(PBRk − ASRk)+ + ηj(PBRj − ASRj)+;
10: Gapa = Σ_{k∈Qi} ηk(PBRk − ASRka)+ + ηj(PBRj − ASRja)+;
11: if Gapa < Gapb then
12:   replace movie r with movie j
13:   delete r from Qi and add j into Qi
14: end if
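Assuming the inputs (η, ASR, PBR, the upload capacity Ui, and the peer count N) have already been retrieved from the information collector, the replacement decision of Lines 1-14 can be sketched in Python as follows; the function name and the example numbers are illustrative, not part of the original system:

```python
def should_replace(Qi, j, eta, ASR, PBR, U_i, N):
    """Decide whether candidate movie j should replace some movie r in Qi.

    Returns (r, replace) following Lines 1-14 of the pseudocode."""
    # Line 1: r = arg max over k in Qi of eta_k * (ASR_k - PBR_k),
    # the movie with the largest weighted redundant service rate.
    r = max(Qi, key=lambda k: eta[k] * (ASR[k] - PBR[k]))
    # Lines 2-3: Poisson-approximated expected load before/after replacement.
    lam = N * sum(eta[k] for k in Qi)
    lam_a = lam + (eta[j] - eta[r]) * N
    pos = lambda x: max(x, 0.0)  # (x)+ = Max{x, 0}
    # Lines 4-8: adjust the expected aggregate service rates.
    ASR_a = {k: ASR[k] - U_i / lam + U_i / lam_a for k in Qi if k != r}
    ASR_a[r] = ASR[r] - U_i / lam     # r leaves peer i's storage
    ASR_a[j] = ASR[j] + U_i / lam_a   # j enters peer i's storage
    # Lines 9-10: expected bandwidth gaps (server burden) before/after.
    gap_b = (sum(eta[k] * pos(PBR[k] - ASR[k]) for k in Qi)
             + eta[j] * pos(PBR[j] - ASR[j]))
    gap_a = (sum(eta[k] * pos(PBR[k] - ASR_a[k]) for k in Qi)
             + eta[j] * pos(PBR[j] - ASR_a[j]))
    # Lines 11-14: replace only if the expected server burden decreases.
    return r, gap_a < gap_b

# Illustrative numbers only: a hot candidate j with no surplus ASR yet.
Qi = ["movie_r", "movie_k"]
eta = {"movie_r": 0.2, "movie_k": 0.1, "movie_j": 0.5}
ASR = {"movie_r": 10.0, "movie_k": 5.0, "movie_j": 0.0}
PBR = {"movie_r": 1.0, "movie_k": 1.0, "movie_j": 2.0}
r, do_replace = should_replace(Qi, "movie_j", eta, ASR, PBR, U_i=1.0, N=100)
```

In this example the over-replicated movie_r (ASR well above PBR) is chosen as r, and replacing it with the popular, under-served movie_j lowers the expected server burden, so the replacement goes ahead.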
As explained in step S1025, Line 1 is the code that selects the candidate movie r, with the highest priority, from set Qi to be replaced.
Line 2 to Line 8 calculate the expected aggregate service rate ASR contributed by the peers. Peer i can help another peer (watching movie k) only if peer i has replicated movie k in its local storage, i.e. k∈Qi. ηk, the movie request rate/popularity of movie k, can be approximated as the probability that, among all the possible choices, a peer selects movie k to watch. Hence, the number of peers that peer i can help follows a binomial distribution with parameter pi = Σ_{k∈Qi} ηk, and the expectation of such number can be easily derived by Poisson approximation: λi ≈ Npi = N Σ_{k∈Qi} ηk, where N is the total number of peers in the system (Line 2). In Line 3, the calculations of λi and λia are similar:
λi = N·Σ_{k∈Qi} ηk, before the replacement, where movie r is still in the set Qi but movie j is not;
λia = N·Σ_{k∈Qi} ηk, after the replacement, where movie j is in the set Qi but movie r no longer is.
Therefore, mathematically, the relationship between λi and λia is just what Line 3 expresses:
λia = N(Σ_{k∈Qi} ηk − ηr + ηj) = N Σ_{k∈Qi} ηk − Nηr + Nηj = λi − Nηr + Nηj.
In the current stage, it is reasonable to make the assumption that each peer will equally share its service rate among all the peers that it can help. In a real implementation, this assumption can be easily realized by deploying a Round-Robin strategy at each peer when it is helping other peers. So, the expected aggregate service rate for movie k can be derived: ASRk = Σ_{i∈Sk} (Ui/λi), where Sk is the set of peers which have replicated movie k in their local storage and Ui is the upload capacity of peer i. Since calculating ASRk needs information (U and λ) of other peers (peers in set Sk), which is not locally available for peer i, ASRk should be retrieved from the information collector 100, as stated in step S1010.
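On the information collector side, this aggregation ASRk = Σ_{i∈Sk} Ui/λi reduces to a one-line sum; the helper below and its peer ids, U and λ values are hypothetical illustrations:

```python
def expected_aggregate_service_rate(S_k, U, lam):
    """ASR_k = sum over replica-holding peers i in S_k of U_i / lambda_i.

    Under the equal-sharing (Round-Robin) assumption, each holder splits
    its upload capacity U_i among the lambda_i peers it is expected to serve."""
    return sum(U[i] / lam[i] for i in S_k)

# Two replica holders: peer 1 (U=10, serves ~5 peers), peer 2 (U=20, serves ~4).
asr_k = expected_aggregate_service_rate([1, 2], {1: 10.0, 2: 20.0}, {1: 5.0, 2: 4.0})
```

The collector re-runs this sum for movie k whenever some peer's stored set or measured capacity changes.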
After ASRk is obtained, the remaining parameters can be derived with some simple calculation. For example, in Line 5, ASRka is derived by substituting Ui/λia for Ui/λi in ASRk. Since Line 5 is in a loop (Lines 4 to 6), all the ASRka, kεQi, are adjusted except for movie r. Movies r and j are special cases, so their ASR values are updated separately. In Line 7, ASRra is set to ASRr minus Ui/λi because movie r no longer belongs to the set Qia after the replacement. In the same way, ASRja is updated by adding Ui/λia to ASRj (Line 8).
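The adjustment of Lines 4 to 8 can be sketched as below. The function name and the example values are illustrative assumptions, not taken from the patent text:

```python
# Sketch of Lines 4-8: adjust each ASR_k for peer i's new load lambda_i^a,
# then handle movies r (evicted) and j (added) as the two special cases.

def adjusted_asr(ASR, Q_i, U_i, lam_i, lam_ia, r, j):
    ASR_a = {}
    for k in Q_i:                     # Lines 4-6: every replicated movie but r
        if k != r:
            ASR_a[k] = ASR[k] - U_i / lam_i + U_i / lam_ia
    ASR_a[r] = ASR[r] - U_i / lam_i   # Line 7: r loses peer i's contribution
    ASR_a[j] = ASR[j] + U_i / lam_ia  # Line 8: j gains it at the new load
    return ASR_a

ASR = {"r": 300.0, "j": 250.0, "m": 500.0}   # hypothetical current rates
ASR_a = adjusted_asr(ASR, Q_i={"r", "m"}, U_i=600.0,
                     lam_i=3.0, lam_ia=2.0, r="r", j="j")
# ASR_a["m"] = 500 - 200 + 300 = 600.0
# ASR_a["r"] = 300 - 200       = 100.0
# ASR_a["j"] = 250 + 300       = 550.0
```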
Lines 9 and 10 have been discussed in step S1030; the discussion here focuses on the key points of these two lines. Since PBRk is the minimum required download rate for watching movie k, if ASRk<PBRk, the aggregate service rate contributed by the peers is not sufficient, and in order to maintain the peers' viewing quality the content server must provide additional upload bandwidth to support movie k. On the other hand, if ASRk>PBRk, then ideally the server does not need to provide any bandwidth for movie k. The term (PBRk−ASRk)+ captures both cases in one formula.
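The gap comparison of Lines 9 to 14 can be sketched as below, where (x)+ is implemented as max(x, 0). The summation set used here (the movies affected by the swap) and all numeric values are assumptions for illustration:

```python
# Sketch of Lines 9-14: the server bandwidth gap uses (x)+ = max(x, 0), so a
# movie whose aggregate service rate already exceeds its required playback
# rate contributes nothing. The replacement is committed only if it shrinks
# the gap.

def server_gap(movies, PBR, ASR):
    """Total extra server bandwidth needed across the given movies."""
    return sum(max(PBR[k] - ASR[k], 0.0) for k in movies)

PBR = {"r": 400.0, "j": 500.0}           # required playback rates
ASR_before = {"r": 450.0, "j": 300.0}    # before: j is short by 200
ASR_after = {"r": 250.0, "j": 600.0}     # after:  r is short by 150

movies = ["r", "j"]
gap_b = server_gap(movies, PBR, ASR_before)   # 0 + 200 = 200.0
gap_a = server_gap(movies, PBR, ASR_after)    # 150 + 0 = 150.0
do_replace = gap_a < gap_b                    # True: the swap reduces the gap
```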
Line 11 refers to step S1035, and Lines 12 and 13 refer to step S1045. As these have been discussed above, their description is omitted here.
Although throughout this description the term "movie" or "media unit" is used as the minimum logical unit for replacement and replication, in a real implementation the minimum logical unit can be of smaller size, i.e. block/chunk based. A movie is usually composed of tens to hundreds of chunks (depending on the chunk size used). The embodiments of the present application may adapt to such a smaller logical unit without any difficulty.
Features, integers, characteristics, compounds, compositions, or combinations described in conjunction with a particular aspect, embodiment, implementation or example disclosed herein are to be understood to be applicable to any other aspect, embodiment, implementation or example described herein unless incompatible therewith. All of the features disclosed in this application (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. The invention is not restricted to the details of any foregoing embodiments and extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed.
Inventors: Chiu, Dah Ming; Zhou, Yipeng; Fu, Zhengjia
Assignment: Zhou, Yipeng; Fu, Zhengjia; and Chiu, Dah Ming assigned their interest to The Chinese University of Hong Kong (application filed Sep 28 2010; assignments executed Oct 06 2010; Reel/Frame 025162/0313).