
Procedia Computer Science 9 (2012) 1354–1362

International Conference on Computational Science, ICCS 2012

Fibonacci Ring Overlay Networks with Distributed Chunk Storage

for P2P VoD Streaming

Pingshan Liu (a,b,c,d), Guimin Huang (d,*), Jiefeng Cheng (b), Shengzhong Feng (b), Jianping Fan (b)

(a) Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
(b) Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
(c) Graduate School of the Chinese Academy of Sciences, Beijing, China
(d) School of Information and Communication, Guilin University of Electronic Technology, Guilin, China

Abstract

In peer-to-peer video-on-demand (P2P VoD) streaming applications, providing VCR-like operations is important but challenging. In this paper, we propose a novel P2P scheme, Fibonacci ring overlay networks with distributed chunk storage (FiRiNet), to reduce the jump latency caused by VCR-like operations and to avoid their adverse impact. In FiRiNet, video data is divided into chunks and stored in peers' local storage in a distributed manner. A peer achieves fast neighbor discovery, and thus short jump latency, by maintaining neighbors in a set of concentric rings with Fibonacci-sequence radii. Moreover, FiRiNet constructs the overlay networks and distributes video data based on the stored chunks, which allows FiRiNet to avoid the adverse impact of VCR-like operations and makes it resilient to the peer churn they cause. Through simulations, we demonstrate that FiRiNet is an efficient and resilient scheme with low control overhead, short jump latency, and high streaming quality.

Keywords: Peer-to-Peer, Video-on-Demand, Overlay, Ring, VCR, Fibonacci sequence

1. Introduction

With the widespread penetration and easy access of broadband Internet, video-on-demand (VoD) services over the Internet have become increasingly popular. The traditional client-server architecture can hardly scale to a large number of users due to the limited service capacity of the servers. In recent years, peer-to-peer (P2P) networks have been shown to be a good approach to providing highly scalable VoD streaming services[1, 2, 3, 4, 5].

In P2P VoD streaming applications, supporting VCR-like operations, such as jump forward/backward and random seeking, is important but challenging. First, VCR-like operations result in frequent random seeking, which causes jump latency. We define jump latency as the interval from the time a video segment is requested to the time the segment is ready for playback. To achieve a high-quality viewing experience, the jump latency should be kept as short as possible. Second, VCR-like operations impose an adverse impact

* Corresponding author

Email addresses: ps.liu@siat.ac.cn (Pingshan Liu), gmhuang@guet.edu.cn (Guimin Huang)


on P2P schemes that use the "cache-and-relay" mechanism. The "cache-and-relay" mechanism uses a sliding buffer window to cache video data at a peer and relay the data to other peers (children). If a peer jumps to a new playback position in the video, it starts to receive video data in which its children are not interested. Consequently, all the children of this peer must be abandoned, because the peer can no longer provide useful data to them. Therefore, the jump forward/backward operations of a peer can prevent its children from continuing to receive its cached data, which in turn may degrade the children's streaming quality. For ease of presentation, we refer to this adverse impact of VCR-like operations as the adverse impact of children abandoning.

Reducing jump latency and avoiding the adverse impact of children abandoning are two key objectives. To achieve these two objectives, we propose a novel P2P scheme, Fibonacci ring overlay networks with distributed chunk storage (FiRiNet), to support VCR-like operations. In FiRiNet, video data is divided into many equal-size chunks and stored in peers' local storage in a distributed manner. Each peer has a unique index chunk. According to the distances between the index chunks of peers, each peer organizes its neighbors in a set of concentric rings with Fibonacci-sequence radii and places all its neighbors on these rings; we call this the Fibonacci ring structure. A ring structure for P2P VoD streaming was first proposed by RINDY[6], in which each peer maintains a set of concentric rings with power-law radii[7], called the power-law ring structure in this paper. RINDY achieves fast neighbor discovery and short jump latency, as shown in [6]. In this study, however, we show that the jump latency of FiRiNet is shorter than that of RINDY, owing to the use of the Fibonacci ring structure instead of the power-law ring structure.

On the other hand, to avoid the adverse impact of children abandoning, FiRiNet abandons the "cache-and-relay" mechanism. Instead, FiRiNet uses a peer's storage to store video data and then relays the stored data to other peers (children). When a peer jumps to a new playback position, its children keep receiving video data from it. For ease of presentation, we refer to this storage-based mechanism as "store-and-relay". The "store-and-relay" mechanism was first used in VMesh[8, 9], where video segments are stored at nodes over a distributed hash table (DHT) overlay network. However, high peer dynamics cause significant overhead for DHTs[10]. In contrast, we use ring-assisted overlays, which have good reliability as shown in [6], instead of a DHT overlay network.

In addition, FiRiNet consists of two modules: an overlay module and a playback module. The overlay module organizes the overlay networks and is constructed and maintained based on the index chunks of peers rather than their playback positions. The playback module organizes the streaming parents of peers and is constructed and maintained based on the overlay module and the playback positions of peers. Thus, the peer churn caused by VCR-like operations has no impact on the overlay module and influences only the playback module, which makes FiRiNet resilient to such churn.

The main contributions of this paper are as follows:

(1) FiRiNet utilizes the Fibonacci ring structure to achieve fast neighbor discovery and thereby reduce jump latency; its jump latency is shorter than that of RINDY, a previous efficient scheme.

(2) FiRiNet utilizes the "store-and-relay" mechanism with distributed chunk storage, which can avoid the adverse impact of children abandoning.

(3) FiRiNet consists of two modules: an overlay module and a playback module. The peer churn caused by VCR-like operations influences only the playback module, which makes FiRiNet resilient to such churn.

The remainder of this paper is organized as follows. Section 2 briefly reviews related work. Section 3 presents FiRiNet. Section 4 evaluates the performance of FiRiNet in comparison with RINDY[6]. Section 5 concludes the paper.

2. Related Work

Many P2P schemes for VoD streaming applications have been proposed to support VCR-like operations. The proposed schemes can be broadly classified into two categories: tree-based and mesh-based. The tree-based schemes have well-organized overlay structures and typically distribute video data by actively pushing data from a peer to its

child peers. In contrast, peers in mesh-based P2P systems are not confined to a static topology; a peer dynamically connects to a subset of random peers in the system.

Tree-based schemes were the first proposed for P2P VoD streaming. P2Cast[11] and P2VoD[12] are tree-based schemes; although they support VoD functions, they cannot support random seeking efficiently under VCR-like operations. DTStream[13] is a tree-based scheme that supports VCR-like operations efficiently, but high peer churn can introduce significant maintenance cost.

Mesh-based schemes became popular after DONet[14]. VMesh[8, 9] uses a DHT overlay with a popularity-based segment storage scheme, while BulletMedia[15] uses a DHT overlay to control video block prefetching, guaranteeing that every block has at least several replicas. BBTU[16] proposes a balanced-binary-tree-based prefetching strategy to search for new neighbors when VCR-like operations occur. RINDY[6] organizes peers into a ring-assisted overlay according to the playback-position distances between a peer and its neighbors, where neighbors are kept in a set of concentric rings with power-law radii. DSL[17] organizes peers into a dynamic skip list, which tries to maintain a global multi-layer skip list. In LSO[18], the streaming layer is a mesh overlay, and the indexing layer selects relatively stable peers to form a backbone overlay.

In this paper, we propose FiRiNet, an efficient and resilient P2P scheme aimed at reducing jump latency and avoiding the adverse impact of children abandoning. First, FiRiNet uses the Fibonacci ring structure instead of the power-law ring structure to reduce jump latency. Second, FiRiNet uses the "store-and-relay" mechanism and constructs the overlay networks based on the index chunks of peers, which avoids the adverse impact of children abandoning and makes FiRiNet more resilient to the high peer churn caused by VCR-like operations.

3. Scheme Description

3.1. FiRiNet Overview

FiRiNet assumes a P2P VoD streaming framework similar to that of most state-of-the-art mesh-pull P2P streaming systems (e.g., PPLive[3], PPStream[4], UUSee[5]), which at minimum comprises media source servers, trackers, and peers.

In FiRiNet, each video is divided into equal-size chunks, except possibly the last one. Each chunk is identified by its video ID and chunk ID. For example, if a video is 7200 seconds long and one chunk is 300 seconds long, the video is divided into 24 chunks with chunk IDs 1, 2, 3, ..., 24. Each peer has a unique index chunk and stores its index chunk as well as the chunks it has played. If a peer jumps before finishing a whole chunk, it uses its residual bandwidth to download the missing part of the chunk, provided the missing part is below a threshold (set to 30% in FiRiNet). When a whole chunk is stored at a peer, the peer's chunk map is updated. A chunk map uses one bit per chunk: 1 indicates the chunk is available, 0 otherwise. A peer and its neighbors exchange chunk maps through periodic gossip messages.
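To make the chunk-map bookkeeping concrete, the following is a minimal sketch in Python; the class and method names (ChunkMap, mark_stored, has_chunk) are illustrative, not part of FiRiNet's specification.

class ChunkMap:
    def __init__(self, num_chunks: int):
        self.bits = [0] * num_chunks  # one bit per chunk; 1 = stored locally

    def mark_stored(self, chunk_id: int) -> None:
        # chunk IDs are 1-based (1..num_chunks), as in the paper's example
        self.bits[chunk_id - 1] = 1

    def has_chunk(self, chunk_id: int) -> bool:
        return self.bits[chunk_id - 1] == 1

# A 7200-second video with 300-second chunks yields 24 chunks (IDs 1..24).
cmap = ChunkMap(7200 // 300)
cmap.mark_stored(3)  # the peer finished playing (and stored) chunk 3
assert cmap.has_chunk(3) and not cmap.has_chunk(4)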

FiRiNet consists of two modules: overlay module and playback module. The overlay module is in charge of constructing and maintaining the overlay networks. The playback module is in charge of constructing and maintaining the streaming parents.

In the overlay module, each peer is identified by its index chunk. The distances between the index chunks of peers are computed from the index chunk IDs. Based on these distances, each peer organizes its neighbors in a set of concentric rings whose radii form a Fibonacci sequence: 1, 2, 3, 5, 8, 13, .... According to the radii, we label the rings Fibonacci ring 1, 2, 3, .... Some sampled neighbors are kept in an index chunk list and a Fibonacci neighbor list. The index chunk list contains the index neighbors, which have the same index chunk ID and are placed on the innermost ring. The Fibonacci neighbor list contains the Fibonacci neighbors, which are at a nonzero distance from the peer and are placed on the outer rings. Together, these two lists are called the neighbor lists.

The Fibonacci neighbors of a peer comprise the forward neighbors, whose index chunk IDs are larger than its own, and the backward neighbors, whose index chunk IDs are smaller than its own. Fig. 1 presents an example of neighbor organization over rings for a peer P. Suppose the index chunk ID of P is x. Peers A, B, and C on the innermost ring have the same index chunk ID as P and are kept in the index chunk list of P. D and E on Fibonacci ring 1 have index chunk ID (x + 1). H and I on Fibonacci ring 2 have index chunk ID (x + 2). J and K on Fibonacci ring 3 have index chunk ID (x + 3). L and M on Fibonacci ring 5 have an index chunk with chunk ID (x + 4) or (x + 5). D, E, H, I, J, K, L, and M are the forward neighbors.

Figure 1: Neighbor organization over rings for peer P

F and G on Fibonacci ring −1 have index chunk ID (x − 1); they are the backward neighbors.
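As an illustration of how a neighbor's ring is determined, here is a small Python sketch under our reading of the example above: ring radii follow the Fibonacci sequence 1, 2, 3, 5, 8, and a neighbor at index-chunk distance d is placed on the innermost ring whose radius is at least d. The helper names are ours, not the authors'.

def fibonacci_radii(num_rings: int) -> list[int]:
    # radii 1, 2, 3, 5, 8, ... used to label the rings in Fig. 1
    radii, a, b = [], 1, 2
    for _ in range(num_rings):
        radii.append(a)
        a, b = b, a + b
    return radii

def ring_for_distance(d: int, num_rings: int = 5) -> int:
    # 0 denotes the innermost ring of index neighbors (distance 0);
    # otherwise return the radius labeling the ring that holds distance d
    if d == 0:
        return 0
    for r in fibonacci_radii(num_rings):
        if d <= r:
            return r
    raise ValueError("distance beyond the outermost ring")

assert ring_for_distance(1) == 1  # D, E with index chunk ID x + 1
assert ring_for_distance(4) == 5  # L, M with index chunk ID x + 4 or x + 5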

In the playback module, each peer keeps information about its parent peers. To this end, each peer in FiRiNet constructs and maintains two lists: a current parent list and a next parent list. The current parent list mainly keeps the peers whose index chunk is currently providing video data, along with some peers whose stored ordinary chunks are currently providing video data. The next parent list keeps the peers whose index chunk ID is next to the chunk ID of the chunk currently providing video data. These two lists are constructed and maintained with the help of the overlay module and the tracker, and together they are called the streaming parent lists.

3.2. Peer Join

In FiRiNet, there is a well-known rendezvous point, called the tracker, which helps peers connect to other peers sharing the same video data. The tracker keeps information about the active peers and periodically checks its records to eliminate inactive peers.

When a new peer p joins, it first contacts the tracker, which chooses some current parent peers and some next parent peers for it according to its playback position. Then p contacts these parent peers to retrieve their index chunk lists. Using these parent peers and the retrieved lists, p constructs its current parent list and its next parent list. After constructing its streaming parent lists, p begins to retrieve video data from its current parent peers.

Meanwhile, the tracker assigns an index chunk ID to p. For better load balancing, we adopt a round-robin strategy rather than random assignment: the tracker rotates the index chunk ID from 1 to the maximum chunk ID of the video. After assigning an index chunk ID to p, the tracker chooses some peers with the same index chunk ID as p. Peer p contacts the chosen peers to retrieve their index chunk lists and Fibonacci neighbor lists, from which it constructs its own index chunk list and Fibonacci neighbor list. Then p uses its residual bandwidth to download its index chunk from some index neighbors in its index chunk list.

After the new peer p constructs its streaming parent lists and its neighbor lists, it joins FiRiNet.
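To make the round-robin assignment above concrete, here is a minimal sketch; Tracker and assign_index_chunk are hypothetical names, and the per-video counter is our assumption.

class Tracker:
    def __init__(self, max_chunk_id: int):
        self.max_chunk_id = max_chunk_id
        self.next_id = 1  # rotates from 1 to max_chunk_id, then wraps around

    def assign_index_chunk(self) -> int:
        # round-robin assignment spreads index chunks evenly across joiners
        assigned = self.next_id
        self.next_id = self.next_id % self.max_chunk_id + 1
        return assigned

tracker = Tracker(max_chunk_id=24)  # the 24-chunk video from Sec. 3.1
assert [tracker.assign_index_chunk() for _ in range(3)] == [1, 2, 3]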

3.3. Peer Departure

When a peer leaves FiRiNet gracefully, it notifies all the neighbors in its neighbor lists and the tracker, so that they remove the information about it.

When a peer leaves FiRiNet without notifying any peer or the tracker, for example due to a network disruption or a machine crash, the tracker and the peers that hold information about it detect its departure through periodic checks and remove that information. In FiRiNet, each peer gossips keep-alive messages to all the neighbors in its neighbor lists and to the tracker to update its timestamp. When the tracker and the neighbors stop receiving its keep-alive messages, they detect its departure by periodically checking its timestamp. After detecting a peer's departure, the tracker and the neighbors remove the information about the departed peer.
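The timestamp-based detection can be sketched as follows; NeighborTable and the 180-second timeout are illustrative assumptions (the paper fixes only the 60-second gossip period, in Sec. 4).

import time

class NeighborTable:
    # tracks the last keep-alive time per neighbor and purges neighbors
    # whose keep-alives have lapsed beyond a timeout
    def __init__(self, timeout: float = 180.0):  # e.g. three 60 s gossip periods
        self.timeout = timeout
        self.last_seen: dict[str, float] = {}    # neighbor GUID -> last keep-alive

    def on_keep_alive(self, guid: str) -> None:
        self.last_seen[guid] = time.time()

    def purge_departed(self) -> list[str]:
        # called periodically by a peer (or the tracker) to detect departures
        now = time.time()
        departed = [g for g, t in self.last_seen.items() if now - t > self.timeout]
        for g in departed:
            del self.last_seen[g]
        return departed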

3.4. Gossip Maintenance

In FiRiNet, each peer periodically sends gossip messages to the tracker and to the neighbors in its neighbor lists. A peer's gossip messages declare its existence and carry its chunk map.


Algorithm 1 Seek new neighbors when a peer jumps

1: procedure seekPosition(prevPosition, destPosition)
     // prevPosition: the playback position before the peer jumps
     // destPosition: the destination playback position
2:   Calculate the destination chunk ID k according to destPosition;
3:   Calculate the previous chunk ID m according to prevPosition;
4:   if the peers with chunk ID k are found in the Fibonacci neighbor list then
5:     return the found peers as parents
6:   else
7:     d_jump = |k - m|;   // compute the jump distance
8:     d_index = |indexID - k|;   // compute the index distance (indexID is the index chunk ID)
9:     if d_jump < d_index then
10:      Send a search request to one of the previous parent peers;
11:    else
12:      Search the Fibonacci neighbor list for the neighbor whose chunk ID is closest to k;
13:      Send a search request to that closest neighbor;
14:    end if
15:  end if
16: end procedure

A peer updates its neighbor lists from the received gossip messages, and the tracker updates its peer list in the same way. In FiRiNet, we design a gossip message, ALIVE, with the following fields: GUID (global identifier), peerAddr (IP address and port), playPoint (playback position), numIdxNeighs (number of index neighbors), chunkMap (chunk map), TTL (remaining hop count of the message), and timestamp (creation time of the message). Moreover, we design a scoped gossip protocol similar to that of RINDY. In our algorithm, a peer periodically sends ALIVE messages to the neighbors in its neighbor lists and to the tracker. When ALIVE messages are sent to index neighbors in the index chunk list, their TTL is 5 or larger; when sent to Fibonacci neighbors in the Fibonacci neighbor list, their TTL is 1. Index neighbors forward received ALIVE messages until the TTL reaches 0, while Fibonacci neighbors do not forward them. After an index neighbor receives a gossip message, it randomly selects an index neighbor from its own index chunk list, decreases the TTL, and forwards the message to the selected neighbor. This procedure repeats until the TTL reaches zero or the selected neighbor is the source of the message.
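The scoped forwarding rule for index neighbors can be sketched as follows; this is a minimal illustration under the stated rules (TTL, random next hop, stop at the source), with Alive, forward_alive, and send as hypothetical names rather than the authors' implementation.

import random
from dataclasses import dataclass

@dataclass
class Alive:
    guid: str        # global identifier of the source peer
    ttl: int         # remaining hop count of this message
    chunk_map: list  # bitmap of chunks stored by the source

def send(peer: str, msg: Alive) -> None:
    pass  # placeholder for the actual network transport

def forward_alive(msg: Alive, index_chunk_list: list[str]) -> None:
    # executed by an index neighbor upon receiving an ALIVE message
    if msg.ttl <= 1 or not index_chunk_list:
        return                # TTL exhausted: stop forwarding
    next_hop = random.choice(index_chunk_list)
    if next_hop == msg.guid:
        return                # reached the message's source: stop
    msg.ttl -= 1
    send(next_hop, msg)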

In previous P2P VoD schemes, the gossip protocol is usually used with the "cache-and-relay" mechanism and a buffer map[6, 14]. In FiRiNet, the gossip protocol is used with the "store-and-relay" mechanism and a chunk map. A buffer map changes much faster than a chunk map, so schemes using buffer maps must set a small gossip period to propagate changes in time. In FiRiNet, since the chunk map changes much more slowly, the gossip period can be set larger, compared with [6, 14].

3.5. Peer Normal Playback

When a peer p watches the video without any interactive action, the peers in its current parent list provide it with video data. When the playback position is about to move from one chunk into the next, the next parent list becomes the current parent list. Meanwhile, p constructs its new next parent list by sending requests to some peers in the original next parent list; these peers return some new next parent peers to p according to their Fibonacci neighbor lists, as sketched below.
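A minimal sketch of this rotation, with PlaybackState and request_next_parents as illustrative stand-ins for the peer state and the query to the old next parents:

class PlaybackState:
    def __init__(self, current_parents: list[str], next_parents: list[str]):
        self.current_parents = current_parents
        self.next_parents = next_parents

    def on_chunk_boundary(self, request_next_parents) -> None:
        # the next parent list becomes the current parent list, and a new
        # next parent list is obtained from peers in the old next parent list
        self.current_parents = self.next_parents
        self.next_parents = request_next_parents(self.current_parents)

state = PlaybackState(current_parents=["A", "B"], next_parents=["C"])
state.on_chunk_boundary(lambda peers: ["D", "E"])  # stub query for brevity
assert state.current_parents == ["C"] and state.next_parents == ["D", "E"]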

3.6. Peer Jump

When a peer p jumps to a new playback position within the scope of its streaming parent lists, p retrieves video data from those lists directly. When p jumps to a new playback position beyond the scope of its streaming parent lists, p needs to search its neighbor lists to construct new streaming parent lists.

Figure 2: Simulation framework and prototype implementation

To construct the new streaming parent lists, p executes the procedure seekPosition in Algorithm 1. In this procedure, p first searches its Fibonacci neighbor list for peers with the destination chunk. If such peers are found, they are returned as the parent peers of p. Otherwise, p needs to send a search request to another peer.

To decide which peer to query, p first computes two distances: the index distance, between its index chunk ID and the destination chunk ID, and the jump distance, between the previous playback chunk ID and the destination chunk ID. Based on these two distances, p decides whether to query its own Fibonacci neighbor list or to send a search request to one of its previous parents, i.e., the peers that provided video data to it before the jump. If the jump distance is larger than the index distance, p searches its own Fibonacci neighbor list for the neighbor whose index chunk ID is closest to the destination chunk ID and sends a search request to that neighbor. Otherwise, p sends a search request to one of its previous parents.

When the search request is received by another peer q, q executes the same seekPosition procedure, with the parameter prevPosition set to q's current playback position. First, q searches its Fibonacci neighbor list. If the desired peers are not found there, q sends the search request on to the next chosen peer. This process continues recursively until peers with the destination chunk are found in the Fibonacci neighbor list of some peer, at which point information about the found peers is returned to p in a response message. Upon receiving the response, p reconstructs its current parent list and its next parent list, and then requests video data from its new current parent peers.
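For concreteness, here is a minimal runnable rendering of Algorithm 1 in Python. The representation of neighbors as (peer_id, index_chunk_id) pairs, the 300-second chunk length, and the reduction of network steps to return values are our assumptions, not the authors' implementation.

CHUNK_LEN = 300  # seconds per chunk, matching the 24-chunk example in Sec. 3.1

def chunk_id(position: float) -> int:
    return int(position // CHUNK_LEN) + 1  # chunk IDs are 1-based

def seek_position(prev_position, dest_position, index_id, fib_neighbors):
    k = chunk_id(dest_position)  # destination chunk ID
    m = chunk_id(prev_position)  # previous chunk ID
    hits = [peer for peer, cid in fib_neighbors if cid == k]
    if hits:
        return ("parents", hits)           # found: return them as parents
    d_jump = abs(k - m)                    # jump distance
    d_index = abs(index_id - k)            # index distance
    if d_jump < d_index:
        return ("ask", "previous parent")  # query one of the previous parents
    # otherwise query the Fibonacci neighbor whose chunk ID is closest to k
    closest = min(fib_neighbors, key=lambda pc: abs(pc[1] - k))
    return ("ask", closest[0])

# A peer with index chunk 10 jumps from 100 s to 4000 s (chunk 14):
print(seek_position(100, 4000, 10, [("D", 12), ("H", 13), ("L", 16)]))
# -> ('ask', 'H'), since H's chunk ID 13 is closest to the destination 14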

4. Evaluation

The simulation platform consists of OMNeT++[19] with its extension modules INET[20] and OverSim[21]. OMNeT++ is a component-based, open-architecture discrete event simulator. INET is a framework built on OMNeT++ that contains IPv4, IPv6, TCP, UDP, and many other protocol implementations, as well as several application models. OverSim is an open-source overlay network simulation framework residing on top of the INET framework. The simulation suite provides an integrated P2P framework, as shown in Fig. 2(a). Fig. 2(b) depicts the prototype implementation of FiRiNet based on Fig. 2(a). The simPlayer module simulates the behavior of the streaming player, and the FiRiNet module implements our scheme. We implement the prototype of RINDY in the same way.

In the simulation, we choose a movie with a 512 kbps streaming rate and 120 minutes of content as our testing stream. Each chunk is 5 minutes long, so the whole video is divided into 24 chunks. The bandwidth of backbone links is 10 Gbps, and the bandwidth of access links is 10 Mbps. The peers' bandwidth and delay follow uniform distributions over the intervals [512 Kbps, 2 Mbps] and [10 ms, 50 ms], respectively. The peer join/departure pattern is based on the churn generator of OverSim; we choose the LifetimeChurn generator. The gossip period is 60 seconds. The maximum number of neighbors in the index chunk list is 15. The maximum number of layers in the ring structure is 5, and the maximum number of Fibonacci neighbors on each Fibonacci ring is 2; thus, the maximum number of Fibonacci neighbors is 10.

Figure 3: Control overhead without VCR

Figure 4: Control overhead with VCR

Each peer issues a jump operation with probability δ; if δ is zero, no jump occurs. The maximum number of jump operations of a peer follows a uniform distribution over the interval [2, 5]. The jump distance follows a uniform distribution over the interval [600 seconds, 6000 seconds]. For each experiment, we run the simulation five times and compute the average value to reduce the impact of randomization.

4.1. Control Overhead

In the simulation, we measure the control overhead of a peer by the number of messages it processes. First, we set δ = 0 and evaluate the case without VCR-like operations. Then, we set δ = 50% and evaluate the case with VCR-like operations. Fig. 3 presents the average number of messages processed by each peer in a gossip period without VCR-like operations, and Fig. 4 presents the same measure with VCR-like operations.

From Fig. 3, we can see that the control overhead of FiRiNet per gossip period is almost the same as that of RINDY. However, since the gossip period of FiRiNet can be much larger than that of RINDY, the per-second control overhead of FiRiNet is lower. Moreover, the control overhead of FiRiNet remains almost stable once the number of peers reaches 800. This indicates that the control overhead is independent of the network size once the network reaches a certain scale. The reason is that the control overhead depends on the number of neighbors in the neighbor lists, which is bounded; once the network is large enough, the number of neighbors reaches its maximum and no longer increases. Therefore, the control overhead without VCR-like operations does not depend on the network size beyond a certain scale.

From Fig. 4, we can see that the control overhead increases slightly compared with Fig. 3 under the same network size, because VCR-like operations lead to peer churn, which produces additional messages. The number of additional messages depends on the jump operations and the neighbor discovery efficiency, not on the network size. So, once the network reaches a certain scale, the control overhead with VCR-like operations does not depend on the network size either.

From both Fig. 3 and Fig. 4, we can see that the maximum number of messages processed by a peer is about 160. Since the gossip period is 60 seconds, each peer only needs to process about 2 or 3 messages per second. Compared with the high-bandwidth streaming traffic, the control overhead is negligible.

In conclusion, the control overhead of FiRiNet does not increase once the network reaches a certain scale, so FiRiNet can support large-scale VoD streaming applications. Furthermore, the control overhead of FiRiNet is very small and negligible compared with the high-bandwidth streaming traffic.

4.2. Jump Latency

We consider the jump latency of random seeking in terms of the average number of message hops. We compare the jump latency of FiRiNet and RINDY under different scales and different video lengths. In the experiments, we set δ = 30%. When a peer jumps, the number of message hops is recorded, as is the number of jumping peers. We then compute the average number of message hops per peer. Fig. 5 presents the results under δ = 30% with different scales. First, we can see that the jump latency of FiRiNet is shorter than that of RINDY.

Figure 5: Jump latency under different scales

Figure 6: Jump latency under different video lengths

Figure 7: Continuity index under different scales

Figure 8: Continuity index under different probabilities

Second, when the scale increases, the jump latencies of FiRiNet and RINDY do not increase; the jump latency is independent of the scale and is determined by the video length. Fig. 6 presents the results under δ = 30% with 3000 peers and different video lengths. First, the jump latency of FiRiNet is shorter than that of RINDY across the varied video lengths. Second, the jump latencies of both FiRiNet and RINDY increase as the video length increases.

In conclusion, the jump latency of FiRiNet is shorter than that of RINDY. Therefore, the Fibonacci ring structure can achieve a higher neighbor discovery efficiency than the power law ring structure.

4.3. Streaming Quality

We evaluate the streaming quality using the continuity index, which is the percentage of video data received at the client side by its playback deadline. The higher the continuity index, the higher the received streaming quality.
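As a concrete reading of this metric, the following minimal Python sketch computes the continuity index from per-piece arrival and deadline timestamps; logging these timestamps at the client is our assumption, not a detail given in the paper.

def continuity_index(arrivals: dict[int, float], deadlines: dict[int, float]) -> float:
    # fraction of pieces that arrived no later than their playback deadline
    on_time = sum(1 for pid, dl in deadlines.items()
                  if pid in arrivals and arrivals[pid] <= dl)
    return on_time / len(deadlines)

# Piece 1 arrives on time; piece 2 arrives late; piece 3 never arrives.
assert continuity_index({1: 0.5, 2: 3.0}, {1: 1.0, 2: 2.0, 3: 3.0}) == 1 / 3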

We compare the continuity index of FiRiNet and RINDY under different network scales and different probabilities. Fig. 7 presents the results under probability δ = 30% and different network scales. From Fig. 7, we can see that as the network scale grows, the continuity indices of both FiRiNet and RINDY increase slightly. The reason is that a larger network gives a peer more streaming parents, which helps it obtain the desired video data in time. Moreover, the continuity index of FiRiNet is higher than that of RINDY under the same conditions. Fig. 8 presents the results under different probabilities with a fixed network scale (3000 peers). From Fig. 8, we can see that as the probability δ increases, the continuity index of RINDY drops noticeably, while that of FiRiNet drops only slightly. Therefore, FiRiNet is more resilient to the peer churn caused by VCR-like operations.

In conclusion, the streaming quality of FiRiNet is higher than that of RINDY under the same conditions. Furthermore, under the high peer churn caused by VCR-like operations, the streaming quality of FiRiNet drops only slightly while that of RINDY drops noticeably, which shows that FiRiNet is more resilient to such churn than RINDY.

5. Conclusion

In this paper, we have proposed FiRiNet, a novel P2P scheme based on Fibonacci ring overlay networks with distributed chunk storage for P2P VoD streaming applications. FiRiNet has the following highlights. First, the Fibonacci ring structure achieves higher neighbor discovery efficiency than the power-law ring structure, which makes the jump latency of FiRiNet shorter than that of RINDY. Second, FiRiNet uses the "store-and-relay" mechanism with distributed chunk storage to avoid the adverse impact of children abandoning. Third, when VCR-like operations occur, FiRiNet adjusts only the playback module to re-establish the streaming service and keeps the overlay module unchanged, which makes FiRiNet resilient to the high peer churn caused by frequent VCR-like operations. On a common simulation platform, we demonstrate the performance of FiRiNet by simulations. The results show that FiRiNet is an efficient and resilient scheme that is superior to RINDY, a previous efficient scheme, in terms of control overhead, jump latency, and streaming quality.

Acknowledgements

This work is supported by the Natural Science Foundation of China under grant No. 61063038.

References

[1] C. Huang, J. Li, K. W. Ross, Can internet video-on-demand be profitable?, in: Proceedings of the 2007 conference on Applications, technologies, architectures, and protocols for computer communications, SIGCOMM '07, ACM, New York, NY, USA, 2007, pp. 133-144.

[2] Y. Huang, T. Z. Fu, D.-M. Chiu, J. C. Lui, C. Huang, Challenges, design and analysis of a large-scale p2p-vod system, in: Proceedings of the ACM SIGCOMM 2008 conference on Data communication, SIGCOMM '08, ACM, New York, NY, USA, 2008, pp. 375-388.

[3] PPLive, http://www.pplive.com/.

[4] PPStream, http://www.ppstream.com/.

[5] UUsee, http://www.uusee.com/.

[6] B. Cheng, H. Jin, X. Liao, Supporting vcr functions in p2p vod services using ring-assisted overlays, in: Communications, 2007. ICC '07. IEEE International Conference on, 2007, pp. 1698 -1703.

[7] B. Wong, A. Slivkins, E. G. Sirer, Meridian: a lightweight network location service without virtual coordinates, in: Proceedings of the 2005 conference on Applications, technologies, architectures, and protocols for computer communications, SIGCOMM '05, 2005, pp. 85-96.

[8] W.-P. Ken Yiu, X. Jin, S.-H. Gary Chan, Distributed storage to support user interactivity in peer-to-peer video streaming, in: Communications, 2006. ICC '06. IEEE International Conference on, Vol. 1, 2006, pp. 55 -60.

[9] W.-P. Yiu, X. Jin, S.-H. Chan, Vmesh: Distributed segment storage for peer-to-peer interactive video streaming, Selected Areas in Communications, IEEE Journal on 25 (9) (2007) 1717 -1731.

[10] Y. Chawathe, S. Ratnasamy, L. Breslau, N. Lanham, S. Shenker, Making gnutella-like p2p systems scalable, in: Proceedings of the 2003 conference on Applications, technologies, architectures, and protocols for computer communications, SIGCOMM '03, ACM, New York, NY, USA, 2003, pp. 407-418.

[11] Y. Guo, K. Suh, J. Kurose, D. Towsley, P2cast: peer-to-peer patching scheme for vod service, in: Proceedings of the 12th international conference on World Wide Web, WWW '03, ACM, New York, NY, USA, 2003, pp. 301-309.

[12] T. Do, K. Hua, M. Tantaoui, P2vod: providing fault tolerant video-on-demand streaming in peer-to-peer environment, in: Communications, 2004 IEEE International Conference on, Vol. 3, 2004, pp. 1467 - 1472 Vol.3.

[13] T. Xu, J. Chen, W. Li, S. Lu, Y. Guo, M. Hamdi, Supporting vcr-like operations in derivative tree-based p2p streaming systems, in: Communications, 2009. ICC '09. IEEE International Conference on, 2009, pp. 1 -5.

[14] X. Zhang, J. Liu, B. Li, Y.-S. Yum, Coolstreaming/donet: a data-driven overlay network for peer-to-peer live media streaming, in: INFOCOM 2005. 24th Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings IEEE, Vol. 3, 2005, pp. 2102-2111 vol. 3.

[15] N. Vratonjic, P. Gupta, N. Knezevic, D. Kostic, A. Rowstron, Enabling dvd-like features in p2p video-on-demand systems, in: Proceedings of the 2007 workshop on Peer-to-peer streaming and IP-TV, P2P-TV '07, ACM, New York, NY, USA, 2007, pp. 329-334.

[16] C. Xu, G.-M. Muntean, E. Fallon, A. Hanley, A balanced tree-based strategy for unstructured media distribution in p2p networks, in: Communications, 2008. ICC '08. IEEE International Conference on, 2008, pp. 1797-1801.

[17] D. Wang, J. C. Liu, A dynamic skip list-based overlay for on-demand media streaming with vcr interactions, IEEE Transactions on Parallel and Distributed Systems 19 (4) (2008) 503-514.

[18] J. Chen, X. Du, A layered semi-structured overlay for supporting vcr-like interactions in p2p vod system, in: Communications (ICC), 2010 IEEE International Conference on, 2010, pp. 1 -5.

[19] A. Varga, The omnet++ discrete event simulation system, Proceedings of the European Simulation Multiconference (ESM'2001).

[20] INET, http://inet.omnetpp.org/index.php?n=main.homepage.

[21] B. Ingmar, H. Bernhard, K. Stephan, Oversim: A flexible overlay network simulation framework, in: Proceedings of 10th IEEE Global Internet Symposium (GI '07) in conjunction with IEEE INFOCOM 2007, 2007, pp. 79-84.