
Article

Coded Caching for Broadcast Networks with User Cooperation †

1 School of Information Science and Technology, ShanghaiTech University, No. 393 Huaxia Middle Road, Pudong, Shanghai 201210, China
2 Information Processing and Communications Laboratory, Telecom Paris, IP Paris, 91120 Palaiseau, France
* Author to whom correspondence should be addressed.
† This paper was presented in part at the IEEE Information Theory Workshop, Visby, Gotland, Sweden, 2019, and at the 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 24–27 September 2019.
Entropy 2022, 24(8), 1034; https://doi.org/10.3390/e24081034
Submission received: 22 June 2022 / Revised: 24 July 2022 / Accepted: 25 July 2022 / Published: 27 July 2022
(This article belongs to the Special Issue Information Theoretic Methods for Future Communication Systems)
Figure 1. Caching system considered in this paper. A server connects with $K$ cache-enabled users, and the users can cooperate through a flexible network.
Figure 2. Centralized cooperation gain and parallel gain when $N=40$, $K=20$ and $\alpha_{\max}=5$.
Figure 3. Transmission delay when $N=40$, $K=20$ and $\alpha_{\max}=5$. The upper bounds are achieved under the centralized caching scenario.
Figure 4. Decentralized cooperation gain and parallel gain when $N=20$ and $K=10$.
Figure 5. Transmission delay when $N=100$, $K=20$ and $\alpha_{\max}=3$. The upper bounds are achieved under the decentralized random caching scenario.

Abstract

Caching is a promising approach to reduce the heavy traffic load and improve the user latency experience in the Internet of Things (IoT). In this paper, by exploiting edge cache resources and communication opportunities in device-to-device (D2D) networks and broadcast networks, two novel coded caching schemes are proposed that greatly reduce transmission latency for the centralized and decentralized caching settings, respectively. In addition to the multicast gain, both schemes obtain an additional cooperation gain offered by user cooperation and an additional parallel gain offered by parallel transmission among the server and users. With a newly established lower bound on the transmission delay, we prove that the centralized coded caching scheme is order-optimal, i.e., it achieves the minimum transmission delay within a constant multiplicative gap. The decentralized coded caching scheme is also order-optimal if each user's cache size is larger than a threshold that approaches zero as the total number of users tends to infinity. Moreover, theoretical analysis shows that, to reduce the transmission delay, the number of users sending signals simultaneously should be chosen appropriately according to the users' cache size; always letting more users send information in parallel can cause high transmission delay.

1. Introduction

With the rapid development of Internet of Things (IoT) technologies, IoT data traffic, such as live streaming and on-demand video streaming, has grown dramatically over the past few years. To reduce the traffic load and improve the user latency experience, the caching technique has been viewed as a promising approach that shifts the network traffic to low congestion periods. In the seminal paper [1], Maddah-Ali and Niesen proposed a coded caching scheme based on centralized file placement and coded multicast delivery that achieves a significantly larger global multicast gain compared to the conventional uncoded caching scheme.
The coded caching scheme has attracted wide and significant interest. It was extended to a setup with decentralized file placement, where no coordination is required during the placement phase [2]. For the cache-aided broadcast network, ref. [3] showed that the rate–memory tradeoff of the above caching system can be characterized within a factor of 2.00884. For the setting with uncoded file placement, where each user stores uncoded content from the library, refs. [4,5] proved that Maddah-Ali and Niesen's scheme is optimal. In [6], both the placement and delivery phases of coded caching are described by a placement delivery array (PDA), and an upper bound for all possible regular PDAs was established. In [7], the authors studied a cache-aided network with a heterogeneous setting in which the user cache memories are unequal. More asymmetric network settings have been discussed, such as coded caching with heterogeneous user profiles [8], with distinct file sizes [9], with asymmetric cache sizes [10,11,12] and with distinct link qualities [13]. Settings with varying file popularities have been discussed in [14,15,16]. Coded caching that jointly considers various heterogeneous aspects was studied in [17]. Other works on coded caching include, e.g., cache-aided noiseless multi-server networks [18], cache-aided wireless/noisy broadcast networks [19,20,21,22], cache-aided relay networks [23,24,25], cache-aided interference management [26,27], coded caching with random demands [28], caching in combination networks [29], coded caching under secrecy constraints [30], coded caching with reduced subpacketization [31,32], the coded caching problem where each user requests multiple files [33], and a cache-aided broadcast network for correlated content [34].
A different line of work studies cache-aided networks without the presence of a server, e.g., the device-to-device (D2D) cache-aided network. In [35], the authors investigated coded caching for a wireless D2D network in which users are located in a fixed mesh topology. A D2D system with selfish users, who do not participate in delivering the missing subfiles to all users, was studied in [36]. Wang et al. applied the PDA framework to characterize cache-aided D2D wireless networks in [37]. In [38], the authors studied spatial D2D networks in which the user locations are modeled by a Poisson point process. For heterogeneous cache-aided D2D networks where users are equipped with cache memories of distinct sizes, ref. [39] minimized the delivery load by optimizing the file partition during the placement phase and the size and structure of the D2D groups during the delivery phase. A highly dense wireless network with device mobility was investigated in [40].
In fact, combining the cache-aided broadcast network with the cache-aided D2D network can potentially reduce the transmission latency. This hybrid network is common in many practical distributed systems such as cloud networks [41], where a central cloud server broadcasts messages to multiple users through the cellular network while the users communicate with each other through a fiber local area network (LAN). A potential scenario is that users in a moderately dense area, such as a university, want to download files, such as movies, from a data library, such as a video service provider. Note that the user demands are highly redundant, and the files are not only stored by a central server but can also be partially cached by other users. A user can obtain the desired content by communicating both with the central server and with other users, so that the communication and storage resources are used efficiently. Unfortunately, there is very little research investigating the coded caching problem for this hybrid network. In this paper, we consider such a hybrid cache-aided network where a server holding $N \in \mathbb{Z}^+$ files connects with $K \in \mathbb{Z}^+$ users through a broadcast network, while the users can exchange information via a D2D network. Unlike the settings of [35,38], in which each user can only communicate with its neighboring users via spatial multiplexing, we consider the D2D network to be either an error-free shared link or a flexible routing network [18]. In particular, in the case of the shared link, all users exchange information via one shared link. In the flexible routing network, there exists a routing strategy that adaptively partitions all users into multiple groups, in each of which one user sends data packets error-free to the remaining users of the group. Let $\alpha \in \mathbb{Z}^+$ be the number of groups that send signals at the same time; the following fundamental questions then arise for this hybrid cache-aided network:
  • How does α affect the system performance?
  • What is the (approximately) optimal value of α to minimize the transmission latency?
  • How can communication loads be allocated between the server and users to achieve the minimum transmission latency?
In this paper, we try to address these questions, and our main contributions are summarized as follows:
  • We propose novel coded caching schemes for this hybrid network under centralized and decentralized data placement. Both schemes efficiently exploit communication opportunities in the D2D and broadcast networks, and appropriately allocate the communication loads between the server and users. In addition to the multicast gain, our schemes achieve much smaller transmission latency than both Maddah-Ali and Niesen's scheme for the broadcast network [1,2] and the D2D coded caching scheme [35]. We characterize a cooperation gain and a parallel gain achieved by our schemes, where the cooperation gain is obtained through cooperation among users in the D2D network, and the parallel gain is obtained through the parallel transmission between the server and users.
  • We prove that the centralized scheme is order-optimal, i.e., it achieves the optimal transmission delay within a constant multiplicative gap in all regimes. Moreover, the decentralized scheme is also order-optimal when the cache size of each user $M$ is larger than the threshold $N\bigl(1-(\tfrac{1}{K+1})^{\frac{1}{K-1}}\bigr)$, which approaches zero as $K \to \infty$.
  • For the centralized data placement case, theoretical analysis shows that $\alpha$ should decrease as the user cache size increases. In particular, when each user's cache is sufficiently large, only one user should be allowed to send information, indicating that the D2D network can simply be a shared link connecting all users. For the decentralized data placement case, $\alpha$ should change dynamically according to the sizes of the subfiles created in the placement phase. In other words, always letting more users send information in parallel can cause a high transmission delay.
Please note that the decentralized scenario is much more complicated than the centralized one, since each subfile can be stored by $s = 1, 2, \ldots, K$ users, leading to a dynamic file-splitting and communication strategy in the D2D network. Our schemes, in particular the decentralized coded caching scheme, differ greatly from the D2D coded caching scheme in [35]. Specifically, ref. [35] considered a fixed network topology where each user connects with a fixed set of users, and the total user cache size must be large enough to store all files in the library. In our schemes, however, the user group partition changes dynamically, and each user can communicate with any set of users via network routing. Moreover, our model lets the server share communication loads with the users, resulting in an allocation problem for the communication loads between the broadcast network and the D2D network. Finally, our schemes achieve a tradeoff between the cooperation gain, parallel gain and multicast gain, while the schemes in [1,2,35] achieve only the multicast gain.
The remainder of this paper is organized as follows. Section 2 presents the system model and defines the main problem studied in this paper. We summarize the main results in Section 3. Section 4 gives a detailed description of the centralized coded caching scheme with user cooperation. Section 5 extends the techniques we developed for the centralized caching problem to the setting of decentralized random caching. Section 6 concludes this paper.

2. System Model and Problem Definition

Consider a cache-aided network consisting of a single server and $K$ users, as depicted in Figure 1. The server has a library of $N$ independent files $W_1, \ldots, W_N$. Each file $W_n$, $n = 1, \ldots, N$, is uniformly distributed over
$$[2^F] \triangleq \{1, 2, \ldots, 2^F\},$$
for some positive integer $F$. The server connects with the $K$ users through a noise-free shared link that is rate-limited to a network speed of $C_1$ bits per second (bits/s). Each user $k \in [K]$ is equipped with a cache memory of size $MF$ bits, for some $M \in [0, N]$, and the users can communicate with each other via a D2D network.
We mainly focus on two types of D2D networks: a shared link as in [1,2] and a flexible routing network introduced in [18]. In the case of a shared link, all users connect with each other through a shared error-free link that is rate-limited to $C_2$ bits/s. In the flexible routing network, the $K$ users can arbitrarily form multiple groups via network routing; in each group, at most one user can send error-free data packets at a network speed of $C_2$ bits/s to the remaining users within the group. To unify these two types of D2D networks, we introduce an integer $\alpha_{\max} \in \{1, \lfloor K/2 \rfloor\}$, which denotes the maximum number of groups allowed to send data in parallel in the D2D network. When $\alpha_{\max} = 1$, the D2D network degenerates into a shared link, and when $\alpha_{\max} = \lfloor K/2 \rfloor$, it becomes the flexible network.
The system works in two phases: a placement phase and a delivery phase. In the placement phase, all users access the entire library $W_1, \ldots, W_N$ and fill their cache memories. More specifically, each user $k$, for $k \in [K]$, maps $W_1, \ldots, W_N$ to its cache content
$$Z_k \triangleq \phi_k(W_1, \ldots, W_N),$$
for some caching function
$$\phi_k : [2^F]^N \to [2^{\lfloor MF \rfloor}].$$
In the delivery phase, each user requests one of the $N$ files from the library. We denote the demand of user $k$ by $d_k \in [N]$ and its desired file by $W_{d_k}$. Let $\mathbf{d} \triangleq (d_1, \ldots, d_K)$ denote the request vector. In this paper, we investigate the worst-case request in which each user makes a unique request.
Once the request vector $\mathbf{d}$ is known to the server and all users, the server produces the symbol
$$X \triangleq f_{\mathbf{d}}(W_1, \ldots, W_N),$$
and broadcasts it to all users through the broadcast network. Meanwhile, each user $k \in \{1, \ldots, K\}$ produces the symbol
$$X_k \triangleq f_{k,\mathbf{d}}(Z_k),$$
and sends it to a set of intended users $\mathcal{D}_k \subseteq [K]$ through the D2D network. (Each user $k$ could produce $X_k$ as a function of $Z_k$ and the signals sent by the server, but since all users can access the server's signal, this is equivalent to generating $X_k$ as a function of $Z_k$ alone.) Here, $\mathcal{D}_k$ represents the set of destination users served by node $k$, and $f_{\mathbf{d}}$ and $f_{k,\mathbf{d}}$ are encoding functions
$$f_{\mathbf{d}} : [2^F]^N \to [2^{\lfloor R_1 F \rfloor}], \qquad f_{k,\mathbf{d}} : [2^{\lfloor MF \rfloor}] \to [2^{\lfloor R_2 F \rfloor}],$$
where $R_1$ and $R_2$ denote the transmission rates of the server in the broadcast network and of each user in the D2D network, respectively. We focus on the symmetric case where all users have the same transmission rate. Due to the constraint of $\alpha_{\max}$, at most $\alpha_{\max}$ users can send signals in parallel in each channel use. The set of at most $\alpha_{\max}$ users who send signals in parallel can change adaptively during the delivery phase.
At the end of the delivery phase, thanks to the error-free transmission in the broadcast and D2D networks, user $k$ observes the symbols sent to it, i.e., $(X_j : j \in [K], k \in \mathcal{D}_j)$, and decodes its desired file as $\hat{W}_{d_k} = \psi_{k,\mathbf{d}}\bigl(X, (X_j : j \in [K], k \in \mathcal{D}_j), Z_k\bigr)$, where $\psi_{k,\mathbf{d}}$ is a decoding function.
We define the worst-case probability of error as
$$P_e \triangleq \max_{\mathbf{d} \in [N]^K} \max_{k \in [K]} \Pr\bigl(\hat{W}_{d_k} \neq W_{d_k}\bigr).$$
A coded caching scheme $(M, R_1, R_2)$ consists of caching functions $\{\phi_k\}$, encoding functions $\{f_{\mathbf{d}}, f_{k,\mathbf{d}}\}$ and decoding functions $\{\psi_{k,\mathbf{d}}\}$. We say that the rate tuple $(M, R_1, R_2)$ is achievable if for every $\epsilon > 0$ and every sufficiently large file size $F$, there exists a coded caching scheme such that $P_e$ is less than $\epsilon$.
Since the server and the users send signals in parallel, the total transmission delay, denoted by $T$, is defined as
$$T \triangleq \max\left\{\frac{R_1 F}{C_1}, \frac{R_2 F}{C_2}\right\}.$$
The optimal transmission delay is $T^* \triangleq \inf\{T : T \text{ is achievable}\}$. For simplicity, we assume that $C_1 = C_2 = F$; then from (7) we have
$$T = \max\{R_1, R_2\}.$$
When $C_1 \neq C_2$, e.g., $C_1 : C_2 = 1/k$, one small adjustment that allows our scheme to continue to work is to multiply $\lambda$ by $1/(k(1-\lambda)+\lambda)$, where $\lambda$ is a design parameter introduced later.
Our goal is to design a coded caching scheme that minimizes the transmission delay. Finally, in this paper we assume $K \leq N$ and $M \leq N$. Extending the results to other scenarios is straightforward, as mentioned in [1].

3. Main Results

We first establish a general lower bound on the transmission delay for the system model described in Section 2, then present two upper bounds on the optimal transmission delay achieved by our centralized and decentralized coded caching schemes, respectively. Finally, we present the order-optimality results for these two schemes.
Theorem 1
(Lower Bound). For memory size $0 \leq M \leq N$, the optimal transmission delay is lower bounded by
$$T^* \geq \max\left\{\frac{1}{2}\left(1-\frac{M}{N}\right),\ \max_{s \in [K]}\left(s-\frac{KM}{\lfloor N/s\rfloor}\right),\ \max_{s \in [K]}\left(s-\frac{sM}{\lfloor N/s\rfloor}\right)\frac{1}{1+\alpha_{\max}}\right\}.$$
Proof. 
See the proof in Appendix A. □
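The lower bound is straightforward to evaluate numerically. Below is a minimal sketch (the helper name and its signature are ours, not from the paper), assuming $K \leq N$ so that $\lfloor N/s \rfloor \geq 1$ for all $s \in [K]$:

```python
def lower_bound(N, K, M, alpha_max):
    """Evaluate the three branches of the lower bound in Theorem 1."""
    b1 = 0.5 * (1 - M / N)
    b2 = max(s - K * M / (N // s) for s in range(1, K + 1))           # cut-set branch
    b3 = max((s - s * M / (N // s)) / (1 + alpha_max)                 # D2D-limited branch
             for s in range(1, K + 1))
    return max(b1, b2, b3)
```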

3.1. Centralized Coded Caching

In the following theorem, we present an upper bound on the transmission delay for the centralized caching setup.
Theorem 2
(Upper Bound for the Centralized Scenario). Let $t \triangleq KM/N \in \mathbb{Z}^+$ and $\alpha \in \mathbb{Z}^+$. For memory size $M \in \{0, \frac{N}{K}, \frac{2N}{K}, \ldots, N\}$, the optimal transmission delay $T^*$ is upper bounded by $T^* \leq T_{\text{central}}$, where
$$T_{\text{central}} \triangleq \min_{\alpha \in [\alpha_{\max}]} \frac{K\left(1-\frac{M}{N}\right)}{1+t+\alpha\min\left\{\left\lceil\frac{K}{\alpha}\right\rceil-1,\ t\right\}}.$$
For general $0 \leq M \leq N$, the lower convex envelope of these points is achievable.
Proof. 
See scheme in Section 4. □
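For concreteness, here is a small numeric sketch of (10) that searches over $\alpha \in [\alpha_{\max}]$; the function name is ours, and $t = KM/N$ is assumed to be an integer (i.e., $M$ is one of the corner points):

```python
import math

def t_central(N, K, M, alpha_max):
    """Upper bound of Theorem 2: minimize over the number of parallel groups alpha."""
    t = K * M // N  # integer at the corner points M in {0, N/K, 2N/K, ..., N}
    return min(K * (1 - M / N) / (1 + t + a * min(math.ceil(K / a) - 1, t))
               for a in range(1, alpha_max + 1))

print(t_central(40, 20, 10, 5))  # t = 5; the minimum is attained at alpha = 4
```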
The following simple example shows that the proposed upper bound can greatly reduce the transmission delay.
Example 1.
Consider a network described in Section 2 with $t = KM/N = K-1$. The coded caching scheme without D2D communication [1] has the server multicast an XOR message useful for all $K$ users, achieving transmission delay $K\bigl(1-\frac{M}{N}\bigr)\frac{1}{1+t} = \frac{1}{K}$. The D2D coded caching scheme [35] achieves transmission delay $\frac{N}{M}\bigl(1-\frac{M}{N}\bigr) = \frac{1}{K-1}$. The achievable transmission delay in Theorem 2 equals $\frac{1}{2K-1}$ by letting $\alpha = 1$, almost half the transmission delay of the previous schemes when $K$ is sufficiently large.
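A quick numeric check of Example 1 (plain arithmetic; the variable names are ours):

```python
K = 20
t = K - 1                                  # K*M/N = K - 1, so M/N = (K-1)/K
MN = (K - 1) / K
server_only = K * (1 - MN) / (1 + t)       # Maddah-Ali & Niesen [1]: 1/K
d2d_only = (1 - MN) / MN                   # Ji et al. [35]: 1/(K-1)
hybrid = K * (1 - MN) / (1 + t + (K - 1))  # Theorem 2 with alpha = 1: 1/(2K-1)
print(server_only, d2d_only, hybrid)       # 0.05, 0.0526..., 0.0256...
```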
From (10), we obtain that the optimal value of $\alpha$, denoted by $\alpha^*$, equals 1 if $t \geq K-1$ and equals $\alpha_{\max}$ if $t \leq K/\alpha_{\max}-1$. Ignoring all integer constraints, we obtain $\alpha^* = K/(t+1)$ in the intermediate regime. We write this choice as
$$\alpha^* = \begin{cases} 1, & t \geq K-1, \\ \lceil K/(t+1)\rceil, & K/\alpha_{\max}-1 < t < K-1, \\ \alpha_{\max}, & t \leq K/\alpha_{\max}-1. \end{cases}$$
Remark 1.
From (11), we observe that when $M$ is small enough that $t \leq K/\alpha_{\max}-1$, we have $\alpha^* = \alpha_{\max}$. As $M$ increases, $\alpha^*$ becomes $\lceil K/(t+1)\rceil$, smaller than $\alpha_{\max}$. When $M$ is sufficiently large that $M \geq (K-1)N/K$, only one user should be allowed to send information, i.e., $\alpha^* = 1$. This indicates that letting more users send information in parallel can be harmful. The main reason for this phenomenon is a tradeoff between the multicast gain, cooperation gain and parallel gain, which is introduced below in this section.
Comparing $T_{\text{central}}$ with the transmission delay achieved by Maddah-Ali and Niesen's scheme for the broadcast network [1], i.e., $K\bigl(1-\frac{M}{N}\bigr)\frac{1}{1+t}$, $T_{\text{central}}$ contains an additional factor
$$G_{\text{central,c}} \triangleq \frac{1}{1+\frac{\alpha}{1+t}\min\left\{\lceil K/\alpha\rceil-1,\ t\right\}},$$
referred to as the centralized cooperation gain, as it arises from user cooperation. Comparing $T_{\text{central}}$ with the transmission delay achieved by the D2D coded caching scheme [35], i.e., $\frac{N}{M}\bigl(1-\frac{M}{N}\bigr)$, $T_{\text{central}}$ contains an additional factor
$$G_{\text{central,p}} \triangleq \frac{1}{1+\frac{1}{t}+\frac{\alpha}{t}\min\left\{\lceil K/\alpha\rceil-1,\ t\right\}},$$
referred to as the centralized parallel gain, as it arises from the parallel transmission among the server and users. Both gains depend on $K$, $M/N$ and $\alpha_{\max}$.
Substituting the optimal $\alpha^*$ into (12), we have
$$G_{\text{central,c}} = \begin{cases} \dfrac{1+t}{K+t}, & t \geq K-1, \\[4pt] \dfrac{1+t}{1+t+t\lceil K/(t+1)\rceil}, & K/\alpha_{\max}-1 < t < K-1, \\[4pt] \dfrac{1+t}{1+t+\alpha_{\max} t}, & t \leq K/\alpha_{\max}-1. \end{cases}$$
When $(K, N, \alpha_{\max})$ is fixed, $G_{\text{central,c}}$ is in general not a monotonic function of $M$. More specifically, when $M$ is small enough that $t < K/\alpha_{\max}-1$, the function $G_{\text{central,c}}$ is monotonically decreasing, reflecting the improvement brought by introducing D2D communication. This is mainly because a relatively larger $M$ allows users to share more common data with each other, providing more opportunities for user cooperation. However, when $M$ grows large enough that $t \geq K/\alpha_{\max}-1$, the local and global caching gains become dominant and less improvement can be obtained from user cooperation, turning $G_{\text{central,c}}$ into a monotonically increasing function of $M$.
Similarly, substituting the optimal $\alpha^*$ into (13), we obtain
$$G_{\text{central,p}} = \begin{cases} \dfrac{t}{K+t}, & t \geq K-1, \\[4pt] \dfrac{t}{1+t+t\lceil K/(t+1)\rceil}, & K/\alpha_{\max}-1 < t < K-1, \\[4pt] \dfrac{t}{1+t+\alpha_{\max} t}, & t \leq K/\alpha_{\max}-1. \end{cases}$$
Equation (15) shows that $G_{\text{central,p}}$ is monotonically increasing in $t$, mainly because as $M$ increases, more content can be sent through the D2D network without the help of the central server, reducing the improvement from parallel transmission between the server and users.
The centralized cooperation gain (12) and parallel gain (13) are plotted in Figure 2 for $N = 40$, $K = 20$ and $\alpha_{\max} = 5$.
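The curves of Figure 2 can be reproduced with the following sketch, which substitutes the $\alpha^*$ of (11) into (12) and (13); the helper name is ours:

```python
import math

def central_gains(N, K, alpha_max, M):
    t = K * M / N
    if t >= K - 1:
        a = 1
    elif t <= K / alpha_max - 1:
        a = alpha_max
    else:
        a = math.ceil(K / (t + 1))
    m = min(math.ceil(K / a) - 1, t)
    g_coop = (1 + t) / (1 + t + a * m)   # Eq. (12), cooperation gain
    g_par = t / (1 + t + a * m)          # Eq. (13), parallel gain
    return g_coop, g_par

for M in range(2, 40, 2):                # N = 40, K = 20, alpha_max = 5 as in Figure 2
    print(M, central_gains(40, 20, 5, M))
```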
Remark 2.
A larger $\alpha$ can lead to better parallel and cooperation gains (more users can concurrently multicast signals to other users), but results in a worse multicast gain (signals are multicast to fewer users in each group). The choice of $\alpha$ in (11) is in fact a tradeoff between the multicast gain, parallel gain and cooperation gain.
The proposed scheme achieving the upper bound in Theorem 2 is order-optimal.
Theorem 3.
For memory size $0 \leq M \leq N$,
$$\frac{T_{\text{central}}}{T^*} \leq 31.$$
Proof. 
See the proof in Appendix B. □
The exact gap $T_{\text{central}}/T^*$ could be much smaller. One could apply the method proposed in [3] to obtain a tighter lower bound and shrink the gap. In this paper, we only prove the order optimality of the proposed scheme and leave finding a smaller gap as future work.
Figure 3 plots the lower bound (9) and the upper bounds achieved by various schemes: the proposed scheme, the scheme Maddah-Ali 2014 in [1], which considers the broadcast network without D2D communication, and the scheme Ji 2016 in [35], which considers the D2D network without a server. Our scheme outperforms the previous schemes and closely approaches the lower bound.

3.2. Decentralized Coded Caching

We exploit the multicast gain from coded caching, D2D communication, and parallel transmission between the server and users, leading to the following upper bound.
Theorem 4
(Upper Bound for the Decentralized Scenario). Define $p \triangleq M/N$. For memory size $0 \leq M \leq N$, the optimal transmission delay $T^*$ is upper bounded by
$$T^* \leq T_{\text{decentral}} \triangleq \max\left\{R_{\emptyset},\ \frac{R_s R_u}{R_s+R_u-R_{\emptyset}}\right\},$$
where
$$R_{\emptyset} \triangleq K(1-p)^K,$$
$$R_s \triangleq \frac{1-p}{p}\left(1-(1-p)^K\right),$$
$$R_u \triangleq \frac{1}{\alpha_{\max}}\sum_{s=2}^{\lceil K/\alpha_{\max}\rceil-1} \frac{s\binom{K}{s}}{s-1}\, p^{s-1}(1-p)^{K-s+1} + \sum_{s=\lceil K/\alpha_{\max}\rceil}^{K} \frac{K\binom{K-1}{s-1}}{f(K,s)}\, p^{s-1}(1-p)^{K-s+1},$$
with
$$f(K,s) \triangleq \begin{cases} \lfloor K/s\rfloor(s-1), & (K \bmod s) < 2, \\ K-1-\lfloor K/s\rfloor, & (K \bmod s) \geq 2. \end{cases}$$
Proof. 
Here, $R_{\emptyset}$ represents the rate for sending the content that is not cached by any user, while $R_s$ and $R_u$ represent the transmission rate sent by the server via the broadcast network and the transmission rate sent by each user via the D2D network, respectively. Equation (17) balances the communication loads assigned to the server and the users. See the detailed proof in Section 5. □
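A numeric sketch of Theorem 4 follows (helper names ours); it evaluates $R_{\emptyset}$, $R_s$, $R_u$ and $f(K,s)$ literally and applies the maximum in (17):

```python
import math
from math import comb

def f_Ks(K, s):
    return (K // s) * (s - 1) if K % s < 2 else K - 1 - K // s

def t_decentral(N, K, M, alpha_max):
    p = M / N
    q = 1 - p
    R0 = K * q**K                        # rate of content cached by no user
    s_sw = math.ceil(K / alpha_max)      # switch between Case 1 and Cases 2/3
    Ru = sum(s * comb(K, s) / (s - 1) * p**(s - 1) * q**(K - s + 1)
             for s in range(2, s_sw)) / alpha_max
    Ru += sum(K * comb(K - 1, s - 1) / f_Ks(K, s) * p**(s - 1) * q**(K - s + 1)
              for s in range(s_sw, K + 1))
    if p == 0 or Ru < R0:
        return R0
    Rs = (q / p) * (1 - q**K)
    return Rs * Ru / (Rs + Ru - R0)

print(t_decentral(20, 10, 4, 5))         # N = 20, K = 10, flexible network
```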
The key idea of the scheme achieving (17) is to partition the $K$ users into $\lceil K/s\rceil$ groups for each communication round $s$, and let each group perform the D2D coded caching scheme [35] to exchange information. The main challenge is that among all $\lceil K/s\rceil$ groups there are $\lfloor K/s\rfloor$ groups of the same size $s$ and an abnormal group of size $(K \bmod s)$ if $(K \bmod s) \neq 0$, leading to an asymmetric caching setup. One may use the scheme of [35] for the groups of size $s$ and for the abnormal group when $(K \bmod s) \geq 2$, but how to exploit the caching resources and communication capabilities of all groups while balancing the communication loads between the two types of groups so as to minimize the transmission delay remains elusive and needs to be carefully designed. Moreover, this challenge creates complexity both in establishing the upper bound and in the optimality proof.
Remark 3.
The upper bound in Theorem 4 is achieved by setting the number of users that send signals in parallel to
$$\alpha_D = \begin{cases} \alpha_{\max}, & \text{case 1}, \\ \lfloor K/s\rfloor, & \text{case 2}, \\ \lfloor K/s\rfloor+1, & \text{case 3}, \end{cases}$$
where the three cases are defined in Section 5.2.3.
If $\lceil K/s\rceil \leq \alpha_{\max}$, the number of users who send data in parallel can be smaller than $\alpha_{\max}$, indicating that always letting more users send messages in parallel could cause higher transmission delay. For example, when $K \geq 4$, $s = K-1$ and $\alpha_{\max} = \lfloor K/2\rfloor$, we have $\alpha_D = \lfloor K/s\rfloor = 1 < \alpha_{\max}$.
Remark 4.
From the definitions of $T_{\text{decentral}}$, $R_s$, $R_u$ and $R_{\emptyset}$, it is easy to obtain that $R_{\emptyset} \leq T_{\text{decentral}} \leq R_s$, that
$$T_{\text{decentral}} = \begin{cases} \dfrac{R_s R_u}{R_s+R_u-R_{\emptyset}}, & R_u \geq R_{\emptyset}, \\[4pt] R_{\emptyset}, & R_u < R_{\emptyset}, \end{cases}$$
that $T_{\text{decentral}}$ decreases as $\alpha_{\max}$ increases, and that $T_{\text{decentral}}$ increases as $R_u$ increases if $R_u \geq R_{\emptyset}$.
Due to the complicated term $R_u$, $T_{\text{decentral}}$ in Theorem 4 is hard to evaluate. Since $T_{\text{decentral}}$ is increasing in $R_u$ (see Remark 4), substituting the following upper bound on $R_u$ into (17) provides an efficient way to evaluate $T_{\text{decentral}}$.
Corollary 1.
For memory size $0 \leq p \leq 1$, $R_u$ is upper bounded as follows:
  • $\alpha_{\max} = 1$ (a shared link):
$$R_u \leq \frac{1-p}{p}\left(1-\frac{5}{2}Kp(1-p)^{K-1}-4(1-p)^K+\frac{3\left(1-(1-p)^{K+1}\right)}{(K+1)p}\right);$$
  • $\alpha_{\max} = \lfloor K/2\rfloor$ (a flexible network):
$$R_u \leq \frac{K(1-p)}{K-1}\left(1-(1-p)^{K-1}\right)+\frac{2K(1-p)}{(K-1)(K-2)p}\left(1-(1-p)^K-Kp(1-p)^{K-1}\right).$$
Proof. 
See the proof in Appendix C. □
Recall that the transmission delay achieved by the decentralized scheme without D2D communication [2] equals $R_s$ given in (19). We define the ratio between $T_{\text{decentral}}$ and $R_s$ as the decentralized cooperation gain:
$$G_{\text{decentral,c}} \triangleq \max\left\{\frac{R_{\emptyset}}{R_s},\ \frac{R_u}{R_s+R_u-R_{\emptyset}}\right\},$$
with $G_{\text{decentral,c}} \in [0,1]$ because $R_{\emptyset} \leq R_s$. As in the centralized scenario, this gain arises from the coordination among users in the D2D network. Moreover, we also compare $T_{\text{decentral}}$ with the transmission delay $(1-p)/p$ achieved by the decentralized D2D coded caching scheme [35], and define the ratio between $T_{\text{decentral}}$ and $(1-p)/p$ as the decentralized parallel gain:
$$G_{\text{decentral,p}} \triangleq G_{\text{decentral,c}} \cdot \left(1-(1-p)^K\right),$$
where $G_{\text{decentral,p}} \in [0,1]$ arises from the parallel transmission between the server and the users.
We plot the decentralized cooperation gain and parallel gain for the two types of D2D networks in Figure 4, for $N = 20$ and $K = 10$. It can be seen that $G_{\text{decentral,c}}$ and $G_{\text{decentral,p}}$ are in general not monotonic functions of $M$. Here $G_{\text{decentral,c}}$ behaves similarly to $G_{\text{central,c}}$: when $M$ is small, it decreases monotonically from 1 until reaching its minimum; for larger $M$, it increases monotonically with $M$. The reason for this phenomenon is that in the decentralized scenario, as $M$ increases, the proportion of subfiles that are not cached by any user and must be sent by the server decreases, so more subfiles can be sent in parallel via the D2D network. Meanwhile, the decentralized scheme in [2] offers an additional multicast gain. Therefore, we need to balance these two gains to reduce the transmission delay.
The function $G_{\text{decentral,p}}$ behaves differently: it increases monotonically when $M$ is small. After reaching its maximum, it decreases monotonically until meeting a local minimum (the unusual bend in the parallel gain when $\alpha_{\max} = \lfloor K/2\rfloor$ comes from a balancing effect between $G_{\text{decentral,c}}$ and $1-(1-p)^K$ in (27)); then $G_{\text{decentral,p}}$ becomes a monotonically increasing function for large $M$. As in the centralized case, the impact of parallel transmission among the server and users becomes smaller as $M$ increases, since more data can be transmitted by the users.
Theorem 5.
Define $p \triangleq M/N$ and $p_{\text{th}} \triangleq 1-\left(\frac{1}{K+1}\right)^{\frac{1}{K-1}}$, which tends to 0 as $K$ tends to infinity. For memory size $0 \leq M \leq N$,
  • if $\alpha_{\max} = 1$ (shared link), then
$$\frac{T_{\text{decentral}}}{T^*} \leq 24;$$
  • if $\alpha_{\max} = \lfloor K/2\rfloor$, then
$$\frac{T_{\text{decentral}}}{T^*} \leq \begin{cases} \max\left\{6,\ \dfrac{2K^2}{(K-2)(K+1)^{\frac{1}{K-1}}}\right\}, & p < p_{\text{th}}, \\[4pt] 6, & p \geq p_{\text{th}}. \end{cases}$$
Proof. 
See the proof in Appendix D. □
Figure 5 plots the lower bound (9) and the upper bounds achieved by various decentralized coded caching schemes: our scheme, the scheme Maddah-Ali 2015 in [2], which considers the case without D2D communication, and the scheme Ji 2016 in [35], which considers the case without a server.

4. Coding Scheme under Centralized Data Placement

In this section, we describe a novel centralized coded caching scheme for arbitrary $K$, $N$ and $M$ such that $t = KM/N$ is a positive integer. The scheme can be extended to the general case $1 \leq t \leq K$ by following the same approach as in [1].
We first use an illustrative example to show how we form D2D communication groups, split files and deliver data, and then present our generalized centralized coding caching scheme.

4.1. An Illustrative Example

Consider a network consisting of $K = 6$ users with cache size $M = 4$ and a library of $N = 6$ files. Thus, $t = KM/N = 4$. Divide all six users into two groups of equal size, and choose an integer $L_1 = 2$ that guarantees $\frac{K\binom{K-1}{t}L_1}{\alpha\min\{\lceil K/\alpha\rceil-1,\ t\}}$ to be an integer. (According to (11) and (29), one optimal choice could be $(\alpha = 1, L_1 = 4, \lambda = 5/9)$; here we choose $(\alpha = 2, L_1 = 2, \lambda = 1/3)$ for simplicity, and also to demonstrate that even with a suboptimal choice, our scheme still outperforms those in [1,35].) Split each file $W_n$, for $n = 1, \ldots, N$, into $3\binom{6}{4} = 45$ subfiles:
$$W_n = \bigl(W_{n,\mathcal{T}}^{l} : l \in [3], \mathcal{T} \subseteq [6], |\mathcal{T}| = 4\bigr).$$
We list all the requested subfiles not cached by their requesting users as follows: for $l = 1, 2, 3$,
$$\begin{array}{l}
W_{d_1,\{2,3,4,5\}}^{l},\ W_{d_1,\{2,3,4,6\}}^{l},\ W_{d_1,\{2,3,5,6\}}^{l},\ W_{d_1,\{2,4,5,6\}}^{l},\ W_{d_1,\{3,4,5,6\}}^{l};\\
W_{d_2,\{1,3,4,5\}}^{l},\ W_{d_2,\{1,3,4,6\}}^{l},\ W_{d_2,\{1,3,5,6\}}^{l},\ W_{d_2,\{1,4,5,6\}}^{l},\ W_{d_2,\{3,4,5,6\}}^{l};\\
W_{d_3,\{1,2,4,5\}}^{l},\ W_{d_3,\{1,2,4,6\}}^{l},\ W_{d_3,\{1,2,5,6\}}^{l},\ W_{d_3,\{1,4,5,6\}}^{l},\ W_{d_3,\{2,4,5,6\}}^{l};\\
W_{d_4,\{1,2,3,5\}}^{l},\ W_{d_4,\{1,2,3,6\}}^{l},\ W_{d_4,\{1,2,5,6\}}^{l},\ W_{d_4,\{1,3,5,6\}}^{l},\ W_{d_4,\{2,3,5,6\}}^{l};\\
W_{d_5,\{1,2,3,4\}}^{l},\ W_{d_5,\{1,2,3,6\}}^{l},\ W_{d_5,\{1,2,4,6\}}^{l},\ W_{d_5,\{1,3,4,6\}}^{l},\ W_{d_5,\{2,3,4,6\}}^{l};\\
W_{d_6,\{1,2,3,4\}}^{l},\ W_{d_6,\{1,2,3,5\}}^{l},\ W_{d_6,\{1,2,4,5\}}^{l},\ W_{d_6,\{1,3,4,5\}}^{l},\ W_{d_6,\{2,3,4,5\}}^{l}.
\end{array}$$
The users can complete the transmission under different group partitions. Table 1 shows the transmission under four different partitions of the D2D network.
In Table 1, all users first send XOR symbols with superscript $l = 1$. Please note that the subfiles $W_{d_2,\{1,3,4,5\}}^{1}$ and $W_{d_5,\{1,2,4,6\}}^{1}$ are not delivered at the beginning, since $\frac{K\binom{K-1}{t}}{\alpha(\lceil K/\alpha\rceil-1)}$ is not an integer. Similarly, for the subfiles with $l = 2$, $W_{d_3,\{1,2,5,6\}}^{2}$ and $W_{d_4,\{2,3,5,6\}}^{2}$ remain to be sent to users 3 and 4. In the last transmission, user 1 delivers the XOR message $W_{d_3,\{1,2,5,6\}}^{2} \oplus W_{d_2,\{1,3,4,5\}}^{1}$ to users 2 and 3, and user 6 multicasts $W_{d_5,\{1,2,4,6\}}^{1} \oplus W_{d_4,\{2,3,5,6\}}^{2}$ to users 4 and 5. The transmission rate in the D2D network is $R_2 = \frac{1}{3}$.
For the remaining subfiles with superscript $l = 3$, the server delivers them in the same way as in [1]. Specifically, it sends the symbols $\oplus_{k \in \mathcal{S}} W_{d_k,\mathcal{S}\setminus\{k\}}^{3}$ for all $\mathcal{S} \subseteq [K] : |\mathcal{S}| = 5$. Thus, the rate sent by the server is $R_1 = \frac{2}{15}$, and the transmission delay is $T_{\text{central}} = \max\{R_1, R_2\} = \frac{1}{3}$, which is less than the delays achieved by the coded caching schemes for the broadcast network [1] and for D2D communication [35].
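The bookkeeping in this example can be checked mechanically; the following is a small sanity script (variable names ours):

```python
from math import ceil, comb

K, N, M = 6, 6, 4
t = K * M // N                                   # t = 4
alpha, L1, lam = 2, 2, 1 / 3
m = min(ceil(K / alpha) - 1, t)                  # 2 users served per XOR in each group
pieces = 3 * comb(K, t)                          # 45 subfiles per file (1 server + L1 = 2 user parts)
n_xor = K * comb(K - 1, t) * L1 // (alpha * m)   # 15 XOR symbols over the D2D network
R2 = n_xor * (1 - lam) / (L1 * comb(K, t))       # 15 symbols of size F/45: 1/3
R1 = comb(K, t + 1) * lam / comb(K, t)           # 6 symbols of size F/45: 2/15
print(pieces, n_xor, R2, R1)                     # 45 15 0.333... 0.133...
```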

4.2. The Generalized Centralized Coding Caching Scheme

In the placement phase, each file is first split into $\binom{K}{t}$ subfiles of equal size. More specifically, split $W_n$ into subfiles as $W_n = \bigl(W_{n,\mathcal{T}} : \mathcal{T} \subseteq [K], |\mathcal{T}| = t\bigr)$. User $k$ caches all the subfiles with $k \in \mathcal{T}$, for all $n = 1, \ldots, N$, occupying cache memory of $MF$ bits. Then split each subfile $W_{n,\mathcal{T}}$ into two mini-files $W_{n,\mathcal{T}} = \bigl(W_{n,\mathcal{T}}^{s}, W_{n,\mathcal{T}}^{u}\bigr)$, where
$$|W_{n,\mathcal{T}}^{s}| = \lambda\cdot|W_{n,\mathcal{T}}| = \lambda\cdot\frac{F}{\binom{K}{t}}, \qquad |W_{n,\mathcal{T}}^{u}| = (1-\lambda)\cdot|W_{n,\mathcal{T}}| = (1-\lambda)\cdot\frac{F}{\binom{K}{t}},$$
with
$$\lambda = \frac{1+t}{\alpha\min\{\lceil K/\alpha\rceil-1,\ t\}+1+t}.$$
Here, the mini-files $W_{n,\mathcal{T}}^{s}$ and $W_{n,\mathcal{T}}^{u}$ will be sent by the server and the users, respectively. Split each mini-file $W_{n,\mathcal{T}}^{u}$ into $L_1$ pico-files of equal size $\frac{(1-\lambda)F}{L_1\binom{K}{t}}$, i.e., $W_{n,\mathcal{T}}^{u} = \bigl(W_{n,\mathcal{T}}^{u,1}, \ldots, W_{n,\mathcal{T}}^{u,L_1}\bigr)$, where $L_1$ satisfies
$$\frac{K\binom{K-1}{t}L_1}{\alpha\min\{\lceil K/\alpha\rceil-1,\ t\}} \in \mathbb{Z}^+.$$
As we will see later, condition (29) ensures that the communication loads can be optimally allocated between the server and the users, and (30) ensures that the number of subfiles is large enough to maximize the multicast gain for the transmission in the D2D network.
In the delivery phase, each user $k$ requests file $W_{d_k}$. The request vector $\mathbf{d} = (d_1, d_2, \ldots, d_K)$ is revealed to the server and all users. Please note that different parts of file $W_{d_k}$ are already stored in the user cache memories, so the uncached parts of $W_{d_k}$ can be sent both by the server and by the users. The subfiles
$$\bigl(W_{d_k,\mathcal{T}}^{u,1}, \ldots, W_{d_k,\mathcal{T}}^{u,L_1} : \mathcal{T} \subseteq [K], |\mathcal{T}| = t, k \notin \mathcal{T}\bigr)$$
are requested by user $k$ and will be sent by the users via the D2D network. The subfiles
$$\bigl(W_{d_k,\mathcal{T}}^{s} : \mathcal{T} \subseteq [K], |\mathcal{T}| = t, k \notin \mathcal{T}\bigr)$$
are requested by user $k$ and will be sent by the server via the broadcast network.
First consider the subfiles sent by the users. Partition the $K$ users into $\alpha$ groups of equal size
$$\mathcal{G}_1, \ldots, \mathcal{G}_\alpha,$$
where $\mathcal{G}_i \subseteq [K]$, $|\mathcal{G}_i| = K/\alpha$, and $\mathcal{G}_i \cap \mathcal{G}_j = \emptyset$ if $i \neq j$, for $i, j = 1, \ldots, \alpha$. In each group $\mathcal{G}_i$, one of the $K/\alpha$ users plays the role of server and sends symbols based on its cached content to the remaining $(K/\alpha-1)$ users within the group.
Focus on a group $\mathcal{G}_i$ and a set $\mathcal{S} \subseteq [K] : |\mathcal{S}| = t+1$. If $\mathcal{G}_i \subseteq \mathcal{S}$, then all nodes in $\mathcal{G}_i$ share the subfiles
$$\bigl(W_{n,\mathcal{T}}^{u,l} : l \in [L_1], n \in [N], \mathcal{G}_i \subseteq \mathcal{T}, |\mathcal{T}| = t\bigr).$$
In this case, user $k \in \mathcal{G}_i$ sends XOR symbols containing the requested subfiles useful to all remaining $K/\alpha-1$ users in $\mathcal{G}_i$, i.e., $\oplus_{j \in \mathcal{G}_i\setminus\{k\}} W_{d_j,\mathcal{S}\setminus\{j\}}^{u,\,l(k,\mathcal{G}_i,\mathcal{S})}$, where $l(k,\mathcal{G}_i,\mathcal{S}) \in [L_1]$ is a function of $(k,\mathcal{G}_i,\mathcal{S})$ that avoids redundant transmission of any fragment.
If $\mathcal{S} \subseteq \mathcal{G}_i$, then the nodes in $\mathcal{S}$ share the subfiles
$$\bigl(W_{n,\mathcal{T}}^{u,l} : l \in [L_1], n \in [N], \mathcal{T} \subset \mathcal{S}, |\mathcal{T}| = t\bigr).$$
In this case, user $k \in \mathcal{S}$ sends an XOR symbol containing the requested subfiles for the remaining $t$ users in $\mathcal{S}$, i.e., $\oplus_{j \in \mathcal{S}\setminus\{k\}} W_{d_j,\mathcal{S}\setminus\{j\}}^{u,\,l(k,\mathcal{G}_i,\mathcal{S})}$. The other groups perform similar steps and concurrently deliver the remaining requested subfiles to their users.
By changing the group partition and performing the delivery strategy described above, we can send all the requested subfiles
$$\bigl(W_{d_k,\mathcal{T}}^{u,1}, \ldots, W_{d_k,\mathcal{T}}^{u,L_1} : \mathcal{T} \subseteq [K], |\mathcal{T}| = t, k \notin \mathcal{T}\bigr)_{k=1}^{K}$$
to the users.
Since $\alpha$ groups send signals in parallel ($\alpha$ users can concurrently deliver content), and each sending user in a group delivers a symbol containing $\min\{K/\alpha-1, t\}$ non-repeated pico-files requested by other users, in order to send all requested subfiles in (31) we need in total
$$\frac{K\binom{K-1}{t}L_1}{\alpha\min\{\lceil K/\alpha\rceil-1,\ t\}}$$
XOR symbols, each of size $\frac{(1-\lambda)F}{L_1\binom{K}{t}}$ bits. Notice that $L_1$ is chosen according to (30), ensuring that (32) is an integer. Thus, we obtain $R_2$ as
$$R_2 = \frac{K\binom{K-1}{t}L_1}{\alpha\min\{\lceil K/\alpha\rceil-1,\ t\}}\cdot\frac{1-\lambda}{L_1\binom{K}{t}} = \frac{K\left(1-\frac{M}{N}\right)}{1+t+\alpha\min\{\lceil K/\alpha\rceil-1,\ t\}},$$
where the last equality holds by (29).
Now consider the delivery of the subfiles sent by the server. Apply the delivery strategy of [1], i.e., the server broadcasts
$$\oplus_{k \in \mathcal{S}} W_{d_k,\mathcal{S}\setminus\{k\}}^{s}$$
to all users, for all $\mathcal{S} \subseteq [K] : |\mathcal{S}| = t+1$. We obtain the transmission rate of the server
$$R_1 = \lambda\cdot K\left(1-\frac{M}{N}\right)\cdot\frac{1}{1+t} = \frac{K\left(1-\frac{M}{N}\right)}{1+t+\alpha\min\{\lceil K/\alpha\rceil-1,\ t\}}.$$
From (33) and (34), we can see that the choice of $\lambda$ in (29) guarantees equal communication loads at the server and users. Since the server and users transmit their signals simultaneously, the transmission delay of the whole network is the maximum of $R_1$ and $R_2$, i.e., $T_{\text{central}} = \max\{R_1, R_2\} = \frac{K(1-M/N)}{1+t+\alpha\min\{\lceil K/\alpha\rceil-1,\ t\}}$, for some $\alpha \in [\alpha_{\max}]$.
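As a sanity check, the following sketch (function name ours) confirms that the choice of $\lambda$ in (29) balances the two rates, so that $R_1 = R_2 = T_{\text{central}}$:

```python
from math import ceil

def centralized_rates(N, K, M, alpha):
    t = K * M // N                                   # assumed integer
    m = min(ceil(K / alpha) - 1, t)
    lam = (1 + t) / (alpha * m + 1 + t)              # Eq. (29)
    R1 = lam * K * (1 - M / N) / (1 + t)             # Eq. (34), server rate
    R2 = K * (1 - M / N) * (1 - lam) / (alpha * m)   # Eq. (33), per-user D2D rate
    return R1, R2

print(centralized_rates(40, 20, 10, 4))              # both equal 0.6818...
```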

5. Coding Scheme under Decentralized Data Placement

In this section, we present a novel decentralized coded caching scheme for the joint broadcast and D2D network. The decentralized scenario is much more complicated than the centralized one, since each subfile can be stored by $s = 1, 2, \ldots, K$ users, leading to a dynamic file-splitting and communication strategy in the D2D network. We first use an illustrative example to demonstrate how we form D2D communication groups, split data and deliver data, and then present our generalized coded caching scheme.

5.1. An Illustrative Example

Consider a joint broadcast and D2D network consisting of $K = 7$ users. Under the decentralized data placement strategy, the subfiles cached by user $k$ can be written as
$$\bigl(W_{n,\mathcal{T}} : n \in [N], k \in \mathcal{T}, \mathcal{T} \subseteq [7]\bigr).$$
We focus on the delivery of the subfiles $\bigl(W_{n,\mathcal{T}} : n \in [N], k \in \mathcal{T}, |\mathcal{T}| = s = 4\bigr)$, i.e., each subfile is stored by $s = 4$ users. A similar process can be applied to deliver the other subfiles with respect to $s \in [K]\setminus\{4\}$.
To allocate the communication loads between the server and users, we divide each subfile into two mini-files $W_{n,\mathcal{T}} = \bigl(W_{n,\mathcal{T}}^{s}, W_{n,\mathcal{T}}^{u}\bigr)$, where the mini-files $\{W_{n,\mathcal{T}}^{s}\}$ and $\{W_{n,\mathcal{T}}^{u}\}$ will be sent by the server and users, respectively. To reduce the transmission delay, the sizes of $W_{n,\mathcal{T}}^{s}$ and $W_{n,\mathcal{T}}^{u}$ need to be chosen properly such that $R_1 = R_2$, i.e., the transmission rates of the server and users are equal; see (37) and (39) ahead.
Divide all the users into two non-intersecting groups $(\mathcal{G}_1^r, \mathcal{G}_2^r)$, for $r \in [35]$, satisfying
$$\mathcal{G}_1^r \subseteq [K], \quad \mathcal{G}_2^r \subseteq [K], \quad |\mathcal{G}_1^r| = 4, \quad |\mathcal{G}_2^r| = 3, \quad \mathcal{G}_1^r \cap \mathcal{G}_2^r = \emptyset.$$
There are $\binom{7}{4} = 35$ such partitions in total, hence $r \in [35]$. Please note that for any user $k \in \mathcal{G}_i^r$, $|\mathcal{G}_i^r|-1$ of its requested mini-files are already cached by the remaining users in $\mathcal{G}_i^r$, for $i = 1, 2$.
To avoid repetitive transmission of any mini-file, each mini-file in
$$\bigl(W_{d_k,\mathcal{T}\setminus\{k\}}^{u} : \mathcal{T} \subseteq [7], k \in [7]\bigr)$$
is divided into two non-overlapping pico-files $W_{d_k,\mathcal{T}\setminus\{k\}}^{u1}$ and $W_{d_k,\mathcal{T}\setminus\{k\}}^{u2}$, i.e.,
$$W_{d_k,\mathcal{T}\setminus\{k\}}^{u} = \bigl(W_{d_k,\mathcal{T}\setminus\{k\}}^{u1}, W_{d_k,\mathcal{T}\setminus\{k\}}^{u2}\bigr).$$
The sizes of $W_{n,\mathcal{T}}^{u1}$ and $W_{n,\mathcal{T}}^{u2}$ need to be chosen properly so that the groups $\mathcal{G}_1^r$ and $\mathcal{G}_2^r$ have equal transmission rates; see (51) and (52) ahead.
To allocate the communication loads between the two different types of groups, split each $W_{d_k,\mathcal{T}\setminus\{k\}}^{u1}$ and $W_{d_k,\mathcal{T}\setminus\{k\}}^{u2}$ into three and two equal fragments, respectively, e.g.,
$$W_{d_2,\{1,3,4\}}^{u1} = \bigl(W_{d_2,\{1,3,4\}}^{u1,1}, W_{d_2,\{1,3,4\}}^{u1,2}, W_{d_2,\{1,3,4\}}^{u1,3}\bigr), \qquad W_{d_2,\{1,3,4\}}^{u2} = \bigl(W_{d_2,\{1,3,4\}}^{u2,1}, W_{d_2,\{1,3,4\}}^{u2,2}\bigr).$$
During the delivery phase, in each round, one user in each group produces and multicasts an XOR symbol to all other users in the same group, as shown in Table 2.
Please note that in this example each group appears only once among all partitions. However, for other values of $s$, a group can appear multiple times in different partitions. For example, when $s = 2$, group $\{1,2\}$ appears in both partitions $\{\{1,2\},\{3,4\},\{5,6,7\}\}$ and $\{\{1,2\},\{3,5\},\{4,6,7\}\}$. To reduce the transmission delay, one should balance the communication loads between all groups, and between the server and users as well.

5.2. The Generalized Decentralized Coded Caching Scheme

In the placement phase, each user $k$ applies the caching function to map a subset of $\frac{MF}{N}$ bits of each file $W_n$, $n = 1, \ldots, N$, into its cache memory at random. This induces a partition of each file into subfiles, $W_n = \bigl(W_{n,\mathcal{T}} : \mathcal{T} \subseteq [K]\bigr)$, where $W_{n,\mathcal{T}}$ denotes the bits of $W_n$ cached exclusively by the users in $\mathcal{T}$; the subfiles cached by user $k$ are thus $\bigl(W_{n,\mathcal{T}} : n \in [N], k \in \mathcal{T}, \mathcal{T} \subseteq [K]\bigr)$. When the file size $F$ is sufficiently large, by the law of large numbers, the subfile size with high probability satisfies
$$|W_{n,\mathcal{T}}| \approx p^{|\mathcal{T}|}(1-p)^{K-|\mathcal{T}|} F.$$
The delivery procedure operates at three different levels: allocating the communication loads between the server and users, inner-group coding (i.e., transmission within each group), and parallel delivery among groups.

5.2.1. Allocating Communication Loads between the Server and Users

To allocate the communication loads between the server and users, split each subfile $W_{n,\mathcal{T}}$, for $\mathcal{T} \subseteq [K] : \mathcal{T} \neq \emptyset$, into two non-overlapping mini-files
$$W_{n,\mathcal{T}} = \bigl(W_{n,\mathcal{T}}^{s}, W_{n,\mathcal{T}}^{u}\bigr),$$
where
$$|W_{n,\mathcal{T}}^{s}| = \lambda\cdot|W_{n,\mathcal{T}}|, \qquad |W_{n,\mathcal{T}}^{u}| = (1-\lambda)\cdot|W_{n,\mathcal{T}}|,$$
and $\lambda$ is a design parameter whose value is determined in Remark 5.
The mini-files $\bigl(W_{d_k,\mathcal{T}\setminus\{k\}}^{s} : k \in [K]\bigr)$ will be sent by the server using the decentralized coded caching scheme for the broadcast network [2], leading to the transmission delay
$$\lambda R_s = \lambda\,\frac{1-M/N}{M/N}\left(1-\left(1-\frac{M}{N}\right)^K\right),$$
where $R_s$ is defined in (19).
The mini-files $\bigl(W_{d_k,\mathcal{T}\setminus\{k\}}^{u} : k \in [K]\bigr)$ will be sent by the users using the parallel delivery described in Section 5.2.3. The corresponding transmission rate is
$$R_2 = (1-\lambda)R_u,$$
where $R_u$ represents the number of bits sent by each user, normalized by $F$.
Since the subfiles $W_{d_k,\emptyset}$ are not cached by any user and must be sent exclusively by the server, the corresponding transmission delay for sending $\bigl(W_{d_k,\emptyset} : k \in [K]\bigr)$ is
$$R_{\emptyset} = K\left(1-\frac{M}{N}\right)^K,$$
which coincides with the definition in (18).
By (38)–(40), we have
$$R_1 = R_{\emptyset} + \lambda R_s, \qquad R_2 = (1-\lambda)R_u.$$
According to (8), we have T decentral = max { R 1 , R 2 } .
Remark 5
(Choice of $\lambda$). The parameter $\lambda$ is chosen such that $T_{\text{decentral}}$ is minimized. If $R_u < R_{\emptyset}$, then the inequality $R_2 \leq R_1$ always holds, and $T_{\text{decentral}}$ reaches the minimum $T_{\text{decentral}} = R_{\emptyset}$ with $\lambda = 0$. If $R_u \geq R_{\emptyset}$, solving $R_1 = R_2$ yields $\lambda = \frac{R_u - R_{\emptyset}}{R_s + R_u}$ and $T_{\text{decentral}} = \frac{R_s R_u}{R_s + R_u - R_{\emptyset}}$.

5.2.2. Inner-Group Coding

Given parameters $(s, \mathcal{G}, i, \gamma)$, where $s \in \{2, \ldots, K\}$, $\mathcal{G} \subseteq [K]$, $i \in \{u, u1, u2\}$ with the indicators $u$, $u1$, $u2$ described in (37) and (51), and $\gamma \in \mathbb{Z}^+$, we present how to deliver
$$\bigl(W_{d_k,\mathcal{S}\setminus\{k\}}^{i} : \mathcal{S} \subseteq [K], |\mathcal{S}| = s, \mathcal{G} \subseteq \mathcal{S}\bigr)$$
to every user $k \in \mathcal{G}$ via D2D communication.
Split each $W_{d_k,\mathcal{S}\setminus\{k\}}^{i}$ into $(|\mathcal{G}|-1)\gamma$ non-overlapping fragments of equal size, i.e.,
$$W_{d_k,\mathcal{S}\setminus\{k\}}^{i} = \bigl(W_{d_k,\mathcal{S}\setminus\{k\}}^{i,l} : l \in [(|\mathcal{G}|-1)\gamma]\bigr),$$
and let each user $k \in \mathcal{G}$ take turns to broadcast the XOR symbol
$$X_{k,\mathcal{G},s}^{i} \triangleq \oplus_{j \in \mathcal{G}\setminus\{k\}} W_{d_j,\mathcal{S}\setminus\{j\}}^{i,\,l(j,\mathcal{G},\mathcal{S})},$$
where $l(j,\mathcal{G},\mathcal{S}) \in [(|\mathcal{G}|-1)\gamma]$ is a function of $(j,\mathcal{G},\mathcal{S})$ that avoids redundant transmission of any fragment. The XOR symbol $X_{k,\mathcal{G},s}^{i}$ is received and decoded by the remaining users in $\mathcal{G}$.
For each group $\mathcal{G}$, inner-group coding encodes in total $\binom{K-|\mathcal{G}|}{s-|\mathcal{G}|}$ of the mini-files $W_{d_k,\mathcal{S}\setminus\{k\}}^{i}$, and each XOR symbol $X_{k,\mathcal{G},s}^{i}$ in (43) contains fragments required by $|\mathcal{G}|-1$ users in $\mathcal{G}$.
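A toy instance of the XOR mechanics (user labels and fragment sizes are illustrative, not from the paper): in a group of three users, one sender XORs the two fragments requested by its peers, and each peer cancels the fragment it already caches.

```python
import secrets

frag_for_2 = secrets.token_bytes(8)  # fragment requested by user 2, cached by users 1 and 3
frag_for_3 = secrets.token_bytes(8)  # fragment requested by user 3, cached by users 1 and 2
x = bytes(a ^ b for a, b in zip(frag_for_2, frag_for_3))          # user 1 multicasts X
assert bytes(a ^ b for a, b in zip(x, frag_for_3)) == frag_for_2  # user 2 decodes
assert bytes(a ^ b for a, b in zip(x, frag_for_2)) == frag_for_3  # user 3 decodes
```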

5.2.3. Parallel Delivery among Groups

The parallel user delivery consists of $K-1$ rounds indexed by $s = 2, \ldots, K$. In round $s$, the mini-files
$$\bigl(W_{d_k,\mathcal{T}\setminus\{k\}}^{u} : \mathcal{T} \subseteq [K], |\mathcal{T}| = s, k \in \mathcal{T}\bigr)$$
are recovered through D2D communication.
The key idea is to partition the $K$ users into $\lceil K/s\rceil$ groups for each communication round $s \in \{2, \ldots, K\}$, and let each group perform the D2D coded caching scheme [35] to exchange information. If $(K \bmod s) \neq 0$, there are $\lfloor K/s\rfloor$ groups of the same size $s$ and an abnormal group of size $(K \bmod s)$, leading to an asymmetric caching setup. We optimally allocate the communication loads between the two types of groups, and between the broadcast network and the D2D network as well.
Based on $K$, $s$ and $\alpha_{\max}$, the delivery strategy in the D2D network is divided into three cases:
  • Case 1: $\lceil K/s\rceil > \alpha_{\max}$. In this case, $\alpha_{\max}$ users are allowed to send data simultaneously. Select $s\cdot\alpha_{\max}$ users from all users and divide them into $\alpha_{\max}$ groups of equal size $s$. The total number of such partitions is
$$\beta_1 \triangleq \frac{\binom{K}{s}\binom{K-s}{s}\cdots\binom{K-(\alpha_{\max}-1)s}{s}}{\alpha_{\max}!}.$$
In each partition, $\alpha_{\max}$ users, one selected from each of the $\alpha_{\max}$ groups, send data in parallel via the D2D network.
  • Case 2: $\lceil K/s\rceil \leq \alpha_{\max}$ and $(K \bmod s) < 2$. In this case, choose $(\lceil K/s\rceil-1)s$ users from all users and partition them into $(\lceil K/s\rceil-1)$ groups of equal size $s$; the remaining users form an abnormal group of size $K-(\lceil K/s\rceil-1)s$. The total number of such partitions is
$$\beta_2 \triangleq \frac{\binom{K}{s}\binom{K-s}{s}\cdots\binom{K-(\lfloor K/s\rfloor-1)s}{s}}{\lfloor K/s\rfloor!}.$$
In each partition, one user selected from each of the $(\lceil K/s\rceil-1)$ groups of size $s$, together with an extra user selected from the abnormal group when that group also has size $s$, send data in parallel via the D2D network.
  • Case 3: $\lceil K/s\rceil \leq \alpha_{\max}$ and $(K \bmod s) \geq 2$. In this case, every $s$ users form a group, resulting in $\lfloor K/s\rfloor$ groups containing $s\lfloor K/s\rfloor$ users; the remaining $(K \bmod s)$ users form an abnormal group. The total number of such partitions is
$$\beta_3 = \beta_2.$$
In each partition, $\lfloor K/s\rfloor$ users selected from the $\lfloor K/s\rfloor$ groups of equal size $s$, respectively, together with an extra user selected from the abnormal group of size $(K \bmod s)$, send data in parallel via the D2D network.
Thus, the exact number of users who send signals in parallel can be written as
$$\alpha_D = \begin{cases} \alpha_{\max}, & \text{case 1}, \\ \lfloor K/s\rfloor, & \text{case 2}, \\ \lfloor K/s\rfloor+1, & \text{case 3}. \end{cases}$$
Please note that each group $\mathcal{G}$ re-appears
$$N_{\mathcal{G}} \triangleq \frac{\binom{K-s}{s}\binom{K-2s}{s}\cdots\binom{K-(\alpha_D-1)s}{s}}{(\alpha_D-1)!}$$
times among the $[\beta_c]$ partitions; a brute-force check of these counts for a small instance is given below.
Now we present the decentralized scheme for these three cases as follows.
Case 1 ($\lceil K/s\rceil > \alpha_{\max}$): Consider a partition $r \in [\beta_1]$, denoted by
$$\mathcal{G}_1^r, \ldots, \mathcal{G}_{\alpha_D}^r,$$
where $|\mathcal{G}_i^r| = s$ and $\mathcal{G}_i^r \cap \mathcal{G}_j^r = \emptyset$ for all $i, j \in [\alpha_D]$ with $i \neq j$.
Since each group $\mathcal{G}_i^r$ re-appears $N_{\mathcal{G}_i^r}$ times among the $[\beta_1]$ partitions, and $(|\mathcal{G}_i^r|-1)$ users take turns to broadcast the XOR symbols (43) in each group $\mathcal{G}_i^r$, in order to guarantee that each group can send a unique fragment without repetition, we split each mini-file $W_{d_k,\mathcal{S}\setminus\{k\}}^{u}$ into $(|\mathcal{G}_i^r|-1)N_{\mathcal{G}_i^r}$ fragments of equal size.
Each group $\mathcal{G}_i^r$, for $r \in [\beta_1]$ and $i \in [\alpha_D]$, performs inner-group coding (see Section 5.2.2) with parameters
$$(s, \mathcal{G}_i^r, u, N_{\mathcal{G}_i^r}),$$
for all $s$ satisfying $\lceil K/s\rceil > \alpha_{\max}$. For each round $r$, all groups $\mathcal{G}_1^r, \ldots, \mathcal{G}_{\alpha_D}^r$ send XOR symbols in parallel, each containing $|\mathcal{G}_i^r|-1$ fragments required by the other users of the corresponding group. By the fact that the partitioned groups traverse every set $\mathcal{T}$, i.e.,
$$\mathcal{T} \in \bigl\{\mathcal{G}_1^r, \ldots, \mathcal{G}_{\alpha_D}^r\bigr\}_{r=1}^{\beta_1} \quad \text{for all } \mathcal{T} \subseteq [K] : |\mathcal{T}| = s,$$
and since inner-group coding enables each group $\mathcal{G}_i^r$ to recover
$$\bigl(W_{d_k,\mathcal{S}\setminus\{k\}}^{u} : \mathcal{S} \subseteq [K], |\mathcal{S}| = s, \mathcal{G}_i^r \subseteq \mathcal{S}, k \in \mathcal{G}_i^r\bigr),$$
we can recover all the required mini-files
$$\bigl(W_{d_k,\mathcal{T}\setminus\{k\}}^{u} : \mathcal{T} \subseteq [K], |\mathcal{T}| = s, k \in \mathcal{T}\bigr).$$
The transmission delay of Case 1 in round $s$ is thus
$$R_{\text{case 1}}^{u}(s) \triangleq \sum_{r \in [\beta_1]} \sum_{k \in \mathcal{G}_i^r} \bigl|X_{k,\mathcal{G}_i^r,s}^{u}\bigr| \overset{(a)}{=} \frac{K\binom{K-1}{s-1}}{\alpha_D(s-1)}\bigl|W_{d_k,\mathcal{T}\setminus\{k\}}^{u}\bigr| = \frac{K\binom{K-1}{s-1}}{\alpha_{\max}(s-1)}(1-\lambda)p^{s-1}(1-p)^{K-s+1}$$
(for any fixed $i$, as all groups transmit in parallel), where $(a)$ follows from (44) and (48).
Case 2 ($\lceil K/s\rceil \leq \alpha_{\max}$ and $(K \bmod s) < 2$): We apply the same delivery procedure as in Case 1, except that $\beta_1$ is replaced by $\beta_2$ and $\alpha_D = \lfloor K/s\rfloor$. Thus, the transmission delay in round $s$ is
$$R_{\text{case 2}}^{u}(s) = \frac{K\binom{K-1}{s-1}}{\alpha_D(s-1)}\bigl|W_{d_k,\mathcal{T}\setminus\{k\}}^{u}\bigr| = \frac{K\binom{K-1}{s-1}}{\lfloor K/s\rfloor(s-1)}(1-\lambda)p^{s-1}(1-p)^{K-s+1}.$$
Case 3 ($\lceil K/s\rceil \leq \alpha_{\max}$ and $(K \bmod s) \geq 2$): Consider a partition $r \in [\beta_3]$, denoted by
$$\mathcal{G}_1^r, \ldots, \mathcal{G}_{\alpha_D}^r,$$
where $\mathcal{G}_i^r \subseteq [K]$, $\mathcal{G}_i^r \cap \mathcal{G}_j^r = \emptyset$ for $i, j \in [\alpha_D-1]$ with $i \neq j$, and $\mathcal{G}_{\alpha_D}^r = [K]\setminus\bigl(\cup_{i=1}^{\alpha_D-1}\mathcal{G}_i^r\bigr)$, with $|\mathcal{G}_i^r| = s$ for $i \in [\alpha_D-1]$ and $|\mathcal{G}_{\alpha_D}^r| = (K \bmod s)$.
Since the groups $\mathcal{G}_i^r : i \in [\alpha_D-1]$ and $\mathcal{G}_{\alpha_D}^r$ have different sizes, we further split each mini-file $W_{d_k,\mathcal{T}\setminus\{k\}}^{u}$ into two non-overlapping fragments such that
$$\bigl|W_{d_k,\mathcal{T}\setminus\{k\}}^{u1}\bigr| = \lambda_2\bigl|W_{d_k,\mathcal{T}\setminus\{k\}}^{u}\bigr|, \qquad \bigl|W_{d_k,\mathcal{T}\setminus\{k\}}^{u2}\bigr| = (1-\lambda_2)\bigl|W_{d_k,\mathcal{T}\setminus\{k\}}^{u}\bigr|,$$
where $\lambda_2 \in [0,1]$ is a design parameter satisfying (52).
Split each mini-file $W_{d_k,\mathcal{S}\setminus\{k\}}^{u1}$ and $W_{d_k,\mathcal{S}\setminus\{k\}}^{u2}$ into fragments of equal size:
$$W_{d_k,\mathcal{S}\setminus\{k\}}^{u1} = \bigl(W_{d_k,\mathcal{S}\setminus\{k\}}^{u1,l} : l \in [(s-1)N_{\mathcal{G}_i^r}]\bigr), \qquad W_{d_k,\mathcal{S}\setminus\{k\}}^{u2} = \Bigl(W_{d_k,\mathcal{S}\setminus\{k\}}^{u2,l} : l \in \Bigl[\bigl(|\mathcal{G}_{\alpha_D}^r|-1\bigr)\Bigl\lceil\tfrac{s-1}{|\mathcal{G}_{\alpha_D}^r|-1}\Bigr\rceil N_{\mathcal{G}_i^r}\Bigr]\Bigr).$$
Following an encoding operation similar to (43), the groups $\mathcal{G}_i^r : i \in [\alpha_D-1]$ and the group $\mathcal{G}_{\alpha_D}^r$ send the following XOR symbols, respectively:
$$\bigl(X_{k,\mathcal{G}_i^r,s}^{u1} : k \in \mathcal{G}_i^r\bigr)_{i=1}^{\alpha_D-1}, \qquad \bigl(X_{k,\mathcal{G}_{\alpha_D}^r,s}^{u2} : k \in \mathcal{G}_{\alpha_D}^r\bigr).$$
For each $s \in \{2, \ldots, K\}$, the transmission delays for sending the XOR symbols above by the groups $\mathcal{G}_i^r : i \in [\alpha_D-1]$ and by the group $\mathcal{G}_{\alpha_D}^r$ can be written as
$$R_{\text{case 3}}^{u1}(s) = \frac{\lambda_2 K\binom{K-1}{s-1}}{(\alpha_D-1)(s-1)}\bigl|W_{d_k,\mathcal{T}\setminus\{k\}}^{u}\bigr|, \qquad R_{\text{case 3}}^{u2}(s) = \frac{(1-\lambda_2)K\binom{K-1}{s-1}}{(K \bmod s)-1}\bigl|W_{d_k,\mathcal{T}\setminus\{k\}}^{u}\bigr|,$$
respectively. Since the groups $\mathcal{G}_i^r : i \in [\alpha_D-1]$ and the group $\mathcal{G}_{\alpha_D}^r$ can send signals in parallel, by letting
$$R_{\text{case 3}}^{u1}(s) = R_{\text{case 3}}^{u2}(s),$$
we eliminate the parameter $\lambda_2$ and obtain the balanced transmission delay at the users for Case 3:
$$R_{\text{case 3}}^{u}(s) \triangleq \frac{K\binom{K-1}{s-1}}{K-1-\lfloor K/s\rfloor}(1-\lambda)p^{s-1}(1-p)^{K-s+1}.$$
Remark 6.
The condition $\lceil K/s\rceil > \alpha_{\max}$ in Case 1 implies that $s \leq \lceil K/\alpha_{\max}\rceil-1$. In this regime, the transmission delay is given in (49). If $s \geq \lceil K/\alpha_{\max}\rceil$ and $(K \bmod s) < 2$, the scheme for Case 2 applies and the transmission delay is given in (50); if $s \geq \lceil K/\alpha_{\max}\rceil$ and $(K \bmod s) \geq 2$, the scheme for Case 3 applies and the transmission delay is given in (53).
In each round $s \in \{2, \ldots, K\}$, all requested mini-files can be recovered by the delivery strategies above. By Remark 6, the transmission delay in the D2D network is
$$R_2 = (1-\lambda)\left[\frac{1}{\alpha_{\max}}\sum_{s=2}^{\lceil K/\alpha_{\max}\rceil-1}\frac{s\binom{K}{s}}{s-1}p^{s-1}(1-p)^{K-s+1} + \sum_{s=\lceil K/\alpha_{\max}\rceil}^{K}\frac{K\binom{K-1}{s-1}}{f(K,s)}p^{s-1}(1-p)^{K-s+1}\right] = (1-\lambda)R_u,$$
where $R_u$ is defined in (20) and
$$f(K,s) \triangleq \begin{cases} \lfloor K/s\rfloor(s-1), & (K \bmod s) < 2, \\ K-1-\lfloor K/s\rfloor, & (K \bmod s) \geq 2. \end{cases}$$

6. Conclusions

In this paper, we considered cache-aided communication over a joint broadcast and D2D network. Two novel coded caching schemes were proposed for the centralized and decentralized data placement settings, respectively. Both schemes achieve a parallel gain and a cooperation gain by efficiently exploiting communication opportunities in the broadcast and D2D networks and by optimally allocating the communication loads between the server and users. Furthermore, we showed that in the centralized case, letting too many users send information in parallel can be harmful. Information-theoretic converse bounds were established, with which we proved that the centralized scheme achieves the optimal transmission delay within a constant multiplicative gap in all regimes, and that the decentralized scheme is also order-optimal when the cache size of each user is larger than a small threshold that tends to zero as the number of users tends to infinity. Our work indicates that combining the cache-aided broadcast network with the cache-aided D2D network can greatly reduce the transmission latency.

Author Contributions

Project administration, Y.W.; Writing—original draft, Z.H., J.C. and X.Y.; Writing—review & editing, S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China grant number 61901267.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of the Converse

Let $T_1^*$ and $T_2^*$ denote the optimal rates sent by the server and by each user, respectively. We first consider an enhanced system in which every user is served by an exclusive server and an exclusive helper user, both of which store all files in the library; it is then easy to obtain the following lower bound:
$$T^* \geq \frac{1}{2}\left(1-\frac{M}{N}\right).$$
The other lower bounds follow ideas similar to [1]. However, due to the flexibility of the D2D network, the connection and partitioning status among users can change during the delivery phase, prohibiting a direct application of the proof in [1] to the hybrid network considered in this paper. Moreover, the parallel transmission of the server and multiple users creates an abundance of different signals in the network, making the scenario more sophisticated.
Consider the first $s$ users with cache contents $Z_1, \ldots, Z_s$. Define $X_{1,0}$ as the signal sent by the server, and $X_{1,1}, \ldots, X_{1,\alpha_{\max}}$ as the signals sent by the $\alpha_{\max}$ transmitting users, respectively, where $X_{j,i} \in [2^{T_2^* F}]$ for $j \in [\lfloor N/s\rfloor]$ and $i \in [\alpha_{\max}]$. Assume that $W_1, \ldots, W_s$ are determined by $X_{1,0}, X_{1,1}, \ldots, X_{1,\alpha_{\max}}$ and $Z_1, \ldots, Z_s$. Additionally, define $X_{2,0}, X_{2,1}, \ldots, X_{2,\alpha_{\max}}$ as the signals that enable the users to decode $W_{s+1}, \ldots, W_{2s}$. Continue the same process such that $X_{\lfloor N/s\rfloor,0}, X_{\lfloor N/s\rfloor,1}, \ldots, X_{\lfloor N/s\rfloor,\alpha_{\max}}$ are the signals that enable the users to decode $W_{s\lfloor N/s\rfloor-s+1}, \ldots, W_{s\lfloor N/s\rfloor}$. We then have that $Z_1, \ldots, Z_s$, $X_{1,0}, \ldots, X_{\lfloor N/s\rfloor,0}$, and
$$X_{1,1}, \ldots, X_{1,\alpha_{\max}}, \ldots, X_{\lfloor N/s\rfloor,1}, \ldots, X_{\lfloor N/s\rfloor,\alpha_{\max}}$$
determine $W_1, \ldots, W_{s\lfloor N/s\rfloor}$. Let
$$X_{1:\alpha_{\max}} \triangleq \bigl(X_{1,1}, \ldots, X_{1,\alpha_{\max}}, \ldots, X_{\lfloor N/s\rfloor,1}, \ldots, X_{\lfloor N/s\rfloor,\alpha_{\max}}\bigr).$$
By the definitions of $T_1^*$ and $T_2^*$ and the encoding functions (5), we have
$$H\bigl(X_{1,0}, \ldots, X_{\lfloor N/s\rfloor,0}\bigr) \leq \lfloor N/s\rfloor T_1^* F,$$
$$H\bigl(X_{1:\alpha_{\max}}\bigr) \leq \lfloor N/s\rfloor\,\alpha_{\max} T_2^* F,$$
$$H\bigl(X_{1:\alpha_{\max}}, Z_1, \ldots, Z_s\bigr) \leq KMF.$$
Consider the cut separating $X_{1,0}, \ldots, X_{\lfloor N/s\rfloor,0}$, $X_{1:\alpha_{\max}}$ and $Z_1, \ldots, Z_s$ from the corresponding $s$ users. By the cut-set bound and (A2), we have
$$\lfloor N/s\rfloor\, s F \leq \lfloor N/s\rfloor T_1^* F + KMF,$$
$$\lfloor N/s\rfloor\, s F \leq \lfloor N/s\rfloor T_1^* F + sMF + \lfloor N/s\rfloor\,\alpha_{\max} T_2^* F.$$
Since $T^* \geq T_1^*$ and $T^* \geq \max\{T_1^*, T_2^*\}$ by definition, we obtain
$$T^* \geq \max_{s \in [K]}\left(s-\frac{KM}{\lfloor N/s\rfloor}\right),$$
$$T^* \geq \max_{s \in [K]}\left(s-\frac{sM}{\lfloor N/s\rfloor}\right)\frac{1}{1+\alpha_{\max}}.$$

Appendix B

We prove that $T_{\text{central}}$ is within a constant multiplicative gap of the minimum transmission delay $T^*$ for all values of $M$. To prove the result, we compare them in the following regimes; a numeric spot-check of the resulting gap is given after the list.
  • If $0.6393 < t < K/\alpha_{\max}-1$, from Theorem 1 we have
$$T^* \geq \left(s-\frac{sM}{\lfloor N/s\rfloor}\right)\frac{1}{1+\alpha_{\max}} \overset{(a)}{\geq} \frac{1}{12}\cdot K\left(1-\frac{M}{N}\right)\frac{1}{1+t}\cdot\frac{1}{1+\alpha_{\max}},$$
where $(a)$ follows from [1] (Theorem 3). Then we have
$$\frac{T_{\text{central}}}{T^*} \leq \frac{12(1+\alpha_{\max})(1+t)}{1+t+\alpha t} = \frac{12(1+\alpha_{\max})}{1+\alpha t/(1+t)} \leq \frac{12(1+\alpha_{\max})}{1+\alpha_{\max}\cdot 0.6393/(1+0.6393)} \leq 31,$$
where the second inequality holds by setting $\alpha = \alpha_{\max}$.
  • If $t > K/\alpha-1$, we have
$$\frac{T_{\text{central}}}{T^*} \leq \frac{K\left(1-\frac{M}{N}\right)\frac{1}{1+t+\alpha(K/\alpha-1)}}{\frac{1}{2}\left(1-\frac{M}{N}\right)} = \frac{2K}{1+t+\alpha(K/\alpha-1)} \overset{(a)}{=} \frac{2K}{K+KM/N} \leq 2,$$
where $(a)$ holds by choosing $\alpha = 1$.
  • If $t \leq 0.6393$, setting $s = \lfloor 0.275N\rfloor$, we have
$$T^* \geq s-\frac{KM}{\lfloor N/s\rfloor} \overset{(a)}{\geq} s-\frac{KM}{N/s-1} \geq 0.275N - t\cdot 0.3793N \geq 0.0325N > \frac{1}{31}N,$$
where $(a)$ holds since $\lfloor x\rfloor \geq x-1$ for any $x \geq 1$. Please note that for all values of $M$, the transmission delay satisfies
$$T_{\text{central}} \leq \min\{K, N\}.$$
Combining this with (A12) and (A13), we have $\frac{T_{\text{central}}}{T^*} \leq 31$.
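The numeric spot-check mentioned above (the helper functions are repeated here so the sketch is self-contained; names ours):

```python
import math

def t_central(N, K, M, alpha_max):
    t = K * M / N
    return min(K * (1 - M / N) / (1 + t + a * min(math.ceil(K / a) - 1, t))
               for a in range(1, alpha_max + 1))

def lower_bound(N, K, M, alpha_max):
    branches = [0.5 * (1 - M / N)]
    for s in range(1, K + 1):
        branches.append(s - K * M / (N // s))
        branches.append((s - s * M / (N // s)) / (1 + alpha_max))
    return max(branches)

N = K = 24
worst = max(t_central(N, K, i * N / K, am) / lower_bound(N, K, i * N / K, am)
            for am in (1, K // 2) for i in range(1, K))
print(worst)   # stays below the analytical gap of 31 established above
```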

Appendix C

Appendix C.1. Case $\alpha_{\max} = \lfloor K/2\rfloor$

When $\alpha_{\max} = \lfloor K/2\rfloor$, we have
$$R_u = R_{\text{u-f}} \triangleq \sum_{s=2}^{K}\frac{K\binom{K-1}{s-1}}{f(K,s)}\,p^{s-1}q^{K-s+1},$$
where $q \triangleq 1-p$ and $R_{\text{u-f}}$ denotes the user transmission rate for the flexible D2D network with $\alpha_{\max} = \lfloor K/2\rfloor$. In the flexible D2D network, at most $\lfloor K/2\rfloor$ users are allowed to transmit messages simultaneously, in which case each user transmission becomes a unicast.
Please note that each term of the summation satisfies
$$ K\binom{K-1}{s-1} f(K,s) \le \frac{K}{K-1}\binom{K-1}{s-1} + \frac{2K}{(K-1)(K-2)}\binom{K}{s}, $$
where the bound uses $s \ge \frac{K}{K-1} + \frac{K-2}{K-1} = 2$ and $\binom{K}{s} = \frac{K}{s}\binom{K-1}{s-1}$.
Therefore, by (A15), $R_{\text{u-f}}$ can be upper bounded as
$$ R_{\text{u-f}} \le \frac{K}{K-1}\sum_{s=2}^{K}\binom{K-1}{s-1}p^{s-1}q^{K-s+1} + \frac{2K}{(K-1)(K-2)}\sum_{s=2}^{K}\binom{K}{s}p^{s-1}q^{K-s+1} $$
$$ = \frac{Kq}{K-1}\sum_{i=1}^{K-1}\binom{K-1}{i}p^{i}q^{K-1-i} + \frac{2Kq/p}{(K-1)(K-2)}\sum_{s=2}^{K}\binom{K}{s}p^{s}q^{K-s} $$
$$ = \frac{Kq}{K-1}\left(1-q^{K-1}\right) + \frac{2Kq/p}{(K-1)(K-2)}\left(1-q^{K}-Kpq^{K-1}\right). $$
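The two binomial-sum evaluations in the last step can be confirmed numerically; here is a small sketch of ours ($K$ and $p$ are arbitrary illustrative values):

```python
# Sketch (ours): verify the binomial-sum identities used in the last equality.
from math import comb

K, p = 12, 0.3
q = 1 - p
s1 = sum(comb(K - 1, s - 1) * p ** (s - 1) * q ** (K - s + 1)
         for s in range(2, K + 1))
s2 = sum(comb(K, s) * p ** (s - 1) * q ** (K - s + 1)
         for s in range(2, K + 1))
assert abs(s1 - q * (1 - q ** (K - 1))) < 1e-12
assert abs(s2 - (q / p) * (1 - q ** K - K * p * q ** (K - 1))) < 1e-12
print(s1, s2)
```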

Appendix C.2. Case $\alpha_{\max} = 1$

When $\alpha_{\max} = 1$, the cooperation network degenerates into a shared link on which a single user acts as the server and broadcasts messages to the remaining $K-1$ users. A similar derivation is given in [35]. In this case, $R_u$ can be bounded as
$$ R_u = \sum_{s=2}^{K}\frac{s}{s-1}\binom{K}{s}p^{s-1}q^{K-s+1} \le \sum_{s=2}^{K}\left(1+\frac{3}{s+1}\right)\binom{K}{s}p^{s-1}q^{K-s+1} $$
$$ = \sum_{s=2}^{K}\binom{K}{s}p^{s-1}q^{K-s+1} + \frac{3}{K+1}\sum_{s=2}^{K}\binom{K+1}{s+1}p^{s-1}q^{K-s+1} $$
$$ = \frac{q}{p}\left(1-q^{K}-Kpq^{K-1}\right) + \frac{3q/p^{2}}{K+1}\left(1-q^{K+1}-(K+1)pq^{K}-\frac{K(K+1)}{2}p^{2}q^{K-1}\right) $$
$$ = \frac{q}{p}\left(1-\frac{5}{2}Kpq^{K-1}-4q^{K}+\frac{3\left(1-q^{K+1}\right)}{(K+1)p}\right), $$
where the inequality holds by the fact that $s \ge 2$.
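Since this chain starts from a fully explicit sum, the final closed form can be tested directly; a sketch of ours, with an illustrative $K$ and a grid of $p$ values:

```python
# Sketch (ours): compare the exact sum defining R_u (alpha_max = 1)
# with the closed-form upper bound derived above.
from math import comb

K = 10
for i in range(1, 100):
    p = i / 100
    q = 1 - p
    r_u = sum(s / (s - 1) * comb(K, s) * p ** (s - 1) * q ** (K - s + 1)
              for s in range(2, K + 1))
    bound = (q / p) * (1 - 2.5 * K * p * q ** (K - 1) - 4 * q ** K
                       + 3 * (1 - q ** (K + 1)) / ((K + 1) * p))
    assert r_u <= bound + 1e-12, (p, r_u, bound)
print("closed-form bound dominates the sum on the grid")
```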

Appendix D

Appendix D.1. When $\alpha_{\max} = \lfloor K/2\rfloor$

Recall that $p_{\text{th}} \triangleq 1 - \left(\frac{1}{K+1}\right)^{\frac{1}{K-1}}$, which tends to zero as $K$ goes to infinity. We first introduce the following three lemmas.
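Before stating the lemmas, the vanishing of $p_{\text{th}}$ is easy to see numerically; a quick sketch of ours (the $K$ values are illustrative):

```python
# Sketch (ours): p_th = 1 - (1/(K+1))**(1/(K-1)) tends to zero as K grows.
for K in (5, 10, 50, 100, 1000):
    p_th = 1 - (1 / (K + 1)) ** (1 / (K - 1))
    print(K, round(p_th, 6))
```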
Lemma A1.
Given an arbitrary convex function $g_1(p)$ and an arbitrary concave function $g_2(p)$, if they intersect at two points $p_1 < p_2$, then $g_1(p) \le g_2(p)$ for all $p\in[p_1,p_2]$.
We omit the proof of Lemma A1 as it is straightforward.
Lemma A2.
For memory size $0 \le p \le 1$ and $\alpha_{\max} = \lfloor K/2\rfloor$, we have
$$ R_u \ge R \quad\text{and}\quad T_{\text{decentral}} = \frac{R_s R_u}{R_s + R_u - R}, \qquad \text{for all } p\in[p_{\text{th}}, 1]. $$
Proof. 
When $\alpha_{\max} = \lfloor K/2\rfloor$, from Equation (20), we have
$$ R_u = \sum_{s=2}^{K}K\binom{K-1}{s-1}f(K,s)\,p^{s-1}(1-p)^{K-s+1} \ge \sum_{x=1}^{K-1}\binom{K-1}{x}p^{x}(1-p)^{K-x} = (1-p)\left(1-(1-p)^{K-1}\right), $$
where the first inequality holds by letting $x = s-1$ and using $K f(K,s) \ge 1$. It is easy to show that $(1-p)\left(1-(1-p)^{K-1}\right)$ is a concave function of $p$ by verifying that $\frac{\partial^{2}\,(1-p)\left(1-(1-p)^{K-1}\right)}{\partial p^{2}} \le 0$.
On the other hand, one can easily show that
$$ R = K(1-p)^{K} $$
is a convex function of $p$ by showing that $\frac{\partial^{2} R(p)}{\partial p^{2}} \ge 0$. Since the two functions $(1-p)\left(1-(1-p)^{K-1}\right)$ and $R$ intersect at $p_{\text{th}} = 1-\left(\frac{1}{K+1}\right)^{\frac{1}{K-1}}$ and $p_{2} = 1$ with $p_{\text{th}} < p_{2}$, from Lemma A1 and (A16), we have
$$ R_u \ge (1-p)\left(1-(1-p)^{K-1}\right) \ge R, $$
for all $p\in[p_{\text{th}},1]$. From Remark 4, we know that $T_{\text{decentral}} = \frac{R_s R_u}{R_s+R_u-R}$ if $R_u \ge R$. □
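The convex/concave comparison at the heart of this proof is easy to spot-check numerically; the following sketch (ours, with an illustrative $K$) scans a grid over $[p_{\text{th}}, 1]$:

```python
# Sketch (ours): R = K*(1-p)**K stays below the concave function
# (1-p)*(1-(1-p)**(K-1)) on [p_th, 1], as Lemma A1 predicts.
K = 15
p_th = 1 - (1 / (K + 1)) ** (1 / (K - 1))
for i in range(1001):
    p = p_th + (1 - p_th) * i / 1000
    convex = K * (1 - p) ** K                       # R(p)
    concave = (1 - p) * (1 - (1 - p) ** (K - 1))    # lower bound on R_u
    assert convex <= concave + 1e-12, (p, convex, concave)
print("R <= (1-p)(1-(1-p)^(K-1)) holds on [p_th, 1]")
```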
Lemma A3.
For memory size $0 \le p \le 1$ and $\alpha_{\max} = \lfloor K/2\rfloor$, we have
$$ \frac{R_s R_u}{R_s + R_u - R} \le 6\, T^*. $$
Proof. 
From (25) and (19), we have
$$ R_u \le \frac{K}{K-1}\left(q-q^{K}\right) + \frac{2K}{(K-1)(K-2)}\cdot\frac{q}{p}\left(1-q^{K}-Kpq^{K-1}\right) \overset{(a)}{\le} \frac{K}{K-1}\left(q-q^{K}\right) + \frac{2K}{(K-1)(K-2)}\cdot\frac{q}{p}\left(1-(1-Kp)-Kpq^{K-1}\right) = \frac{K(3K-2)}{(K-1)(K-2)}\left(q-q^{K}\right), $$
$$ R_s = \frac{q}{p}\left(1-q^{K}\right) \overset{(b)}{\le} \frac{q}{p}\left(1-(1-Kp)\right) = Kq, $$
where (a) and (b) both follow from the inequality
$$ (1-p)^{K} \ge 1-Kp. $$
Then, by Remark 4, (A17) and (A18), and the definition of $R$ in (18), if $\alpha_{\max} = \lfloor K/2\rfloor$, then
$$ \frac{R_s R_u}{R_s+R_u-R} \le \frac{Kq\cdot\frac{K(3K-2)}{(K-1)(K-2)}\left(q-q^{K}\right)}{Kq+\frac{K(3K-2)}{(K-1)(K-2)}\left(q-q^{K}\right)-Kq^{K}} \le \left(3-\frac{2}{K}\right)q. $$
From Theorem 1, we have $T^* \ge \frac{1}{2}q$. Thus, we obtain
$$ \frac{R_s R_u}{R_s+R_u-R}\cdot\frac{1}{T^*} \le \frac{\left(3-\frac{2}{K}\right)q}{q/2} = 6-\frac{4}{K} < 6. \qquad\square $$
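Both steps (a) and (b) rest on the scalar inequality $(1-p)^{K} \ge 1-Kp$; a quick grid check of ours ($K$ is illustrative):

```python
# Sketch (ours): verify (1-p)**K >= 1 - K*p and the bounds (a), (b) above.
K = 10
for i in range(1, 100):
    p = i / 100
    q = 1 - p
    assert q ** K >= 1 - K * p - 1e-12
    # (b): R_s = (q/p)(1 - q^K) <= K*q
    assert (q / p) * (1 - q ** K) <= K * q + 1e-12
    # (a): (q/p)(1 - q^K - K*p*q^(K-1)) <= K*(q - q^K)
    assert (q / p) * (1 - q ** K - K * p * q ** (K - 1)) \
           <= K * (q - q ** K) + 1e-12
print("inequalities (a) and (b) hold on the grid")
```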
Next, we use Lemmas A2 and A3 to prove that, when $\alpha_{\max} = \lfloor K/2\rfloor$,
$$ \frac{T_{\text{decentral}}}{T^*} \le \begin{cases} \max\left\{6,\; 2K\left(\frac{2K}{2K+1}\right)^{K-1}\right\}, & p < p_{\text{th}},\\ 6, & p \ge p_{\text{th}}. \end{cases} $$

Appendix D.1.1. Case $\alpha_{\max} = \lfloor K/2\rfloor$ and $p \ge p_{\text{th}}$

In this case, from Lemma A2, we have
$$ T_{\text{decentral}} = \frac{R_s R_u}{R_s+R_u-R}. $$
Thus, from Lemma A3,
$$ T_{\text{decentral}} = \frac{R_s R_u}{R_s+R_u-R} \le 6\,T^*. $$

Appendix D.1.2. Case $\alpha_{\max} = \lfloor K/2\rfloor$ and $p < p_{\text{th}}$

From the definition of $T_{\text{decentral}}$ in (17), we have
$$ \frac{T_{\text{decentral}}}{T^*} = \max\left\{\frac{R}{T^*},\; \frac{R_s R_u}{R_s+R_u-R}\cdot\frac{1}{T^*}\right\}. $$
From Lemma A3, we know that
$$ \frac{R_s R_u}{R_s+R_u-R}\cdot\frac{1}{T^*} \le 6, $$
and thus we only need to upper bound $R/T^*$.
According to Theorem 1, $T^*$ has the following two lower bounds: $T^* \ge \frac{1-p}{2}$, and
$$ T^* \ge \max_{s\in[K]}\left(s-\frac{KM}{\lfloor N/s\rfloor}\right) \ge \max_{s\in[K]}\left(s-\frac{KM}{N/(2s)}\right). $$
Let $R_1^*(p) \triangleq \frac{1}{2}(1-p)$ and $R_2^*(p) \triangleq K-2K^{2}p$ (the latter obtained by choosing $s=K$); then we have
$$ T^* \ge \max\left\{R_1^*(p),\; R_2^*(p)\right\}. $$
Here, $R/R_1^*(p)$ and $R/R_2^*(p)$ are both monotonic functions of $p$, according to the following properties:
$$ \frac{\partial\left(R/R_1^*(p)\right)}{\partial p} = \frac{\partial\, 2K(1-p)^{K-1}}{\partial p} \le 0, \qquad \frac{\partial\left(R/R_2^*(p)\right)}{\partial p} = \frac{\partial\, q^{K}/(1-2Kp)}{\partial p} = \frac{Kq^{K-1}\left(1+2(K-1)p\right)}{(1-2Kp)^{2}} \ge 0. $$
Additionally, notice that if $p = 0$, then $\frac{R}{R_2^*(p)} = 1 < \frac{R}{R_1^*(p)}$, while as $p$ approaches $\frac{1}{2K}$ (where $R_2^*(p)\to 0$), we have $\frac{R}{R_2^*(p)} > \frac{R}{R_1^*(p)}$. Therefore, the maximum value of $R/\max\{R_1^*(p),R_2^*(p)\}$ is attained at $p = \frac{1}{2K+1}$, which satisfies $R_1^*\left(\frac{1}{2K+1}\right) = R_2^*\left(\frac{1}{2K+1}\right)$, implying that
$$ \frac{R}{T^*} \le \frac{R\left(\frac{1}{2K+1}\right)}{R_1^*\left(\frac{1}{2K+1}\right)} = 2K\left(\frac{2K}{2K+1}\right)^{K-1}. $$
From (A21)–(A23), we obtain the following inequality:
$$ \frac{T_{\text{decentral}}}{T^*} \le \max\left\{2K\left(\frac{2K}{2K+1}\right)^{K-1},\; 6\right\}. $$
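The crossing point $p = \frac{1}{2K+1}$ and the resulting bound can be verified numerically; a sketch of ours with an illustrative $K$:

```python
# Sketch (ours): R_1*(p) = (1-p)/2 and R_2*(p) = K - 2*K**2*p intersect at
# p = 1/(2K+1), where R/R_1* equals 2K*(2K/(2K+1))**(K-1).
K = 20
p = 1 / (2 * K + 1)
r1, r2 = (1 - p) / 2, K - 2 * K ** 2 * p
assert abs(r1 - r2) < 1e-9
ratio = K * (1 - p) ** K / r1                     # R(p) / R_1*(p)
closed = 2 * K * (2 * K / (2 * K + 1)) ** (K - 1)
assert abs(ratio - closed) < 1e-9
print(max(closed, 6))                             # gap for this K
```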

Appendix D.2. When $\alpha_{\max} = 1$

From Equation (24), we obtain
$$ R_u \le \frac{q}{p}\left(1-\frac{5}{2}Kpq^{K-1}-4q^{K}+\frac{3\left(1-q^{K+1}\right)}{(K+1)p}\right) \le \frac{q}{p}\left(1-\frac{5}{2}Kpq^{K-1}-4q^{K}+\frac{3(K+1)p}{(K+1)p}\right) = \frac{q}{p}\left(4\left(1-q^{K}\right)-\frac{5}{2}Kpq^{K-1}\right) \le \frac{q}{p}\cdot 4\left(1-q^{K}\right) = 4R_s, $$
where the second inequality holds by (A19) and the last equality holds by the definition $R_s \triangleq \frac{q}{p}\left(1-q^{K}\right)$ in (19). On the other hand, rewrite the second lower bound on $T^*$:
$$ T^* \ge \max_{s\in[K]}\left(s-\frac{sM}{\lfloor N/s\rfloor}\right)\frac{1}{1+\alpha_{\max}}. $$
From the result in [2] (Appendix B), we have
$$ R_s \le 12\,\max_{s\in[K]}\left(s-\frac{sM}{\lfloor N/s\rfloor}\right). $$
Combining (A24)–(A26), we have
$$ \frac{R_s}{T^*} \le 12\left(1+\alpha_{\max}\right), \qquad \frac{R_u}{T^*} \le 48\left(1+\alpha_{\max}\right). $$
If $p < p_{\text{th}}$, then by (A27) and since $R \le T_{\text{decentral}} \le R_s$ (see Remark 4), we have
$$ \frac{T_{\text{decentral}}}{T^*} \le \frac{R_s}{T^*} \le 12\left(1+\alpha_{\max}\right) = 24, $$
where the last equality holds by the fact that $\alpha_{\max} = 1$.
If $p \ge p_{\text{th}}$, from Lemma A2, we have $R_u \ge R$ and
$$ \frac{T_{\text{decentral}}}{T^*} = \frac{R_s R_u}{R_s+R_u-R}\cdot\frac{1}{T^*} \le \frac{\min\{R_u, R_s\}}{T^*} \le \min\left\{12\left(1+\alpha_{\max}\right),\, 48\left(1+\alpha_{\max}\right)\right\} = 24, $$
where the second inequality holds by (A27) and the last equality follows from the fact that $\alpha_{\max} = 1$ in this case.
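As with the other bounds, $R_u \le 4R_s$ can be spot-checked from the exact sum in Appendix C.2; a sketch of ours with an illustrative $K$ and a grid of $p$ values:

```python
# Sketch (ours): for alpha_max = 1, the exact R_u sum is at most 4*R_s.
from math import comb

K = 10
for i in range(1, 100):
    p = i / 100
    q = 1 - p
    r_u = sum(s / (s - 1) * comb(K, s) * p ** (s - 1) * q ** (K - s + 1)
              for s in range(2, K + 1))
    r_s = (q / p) * (1 - q ** K)
    assert r_u <= 4 * r_s + 1e-12, (p, r_u, r_s)
print("R_u <= 4*R_s on the grid")
```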

References

  1. Maddah-Ali, M.A.; Niesen, U. Fundamental limits of caching. IEEE Trans. Inf. Theory 2014, 60, 2856–2867.
  2. Maddah-Ali, M.A.; Niesen, U. Decentralized coded caching attains order-optimal memory-rate tradeoff. IEEE/ACM Trans. Netw. 2015, 23, 1029–1040.
  3. Yu, Q.; Maddah-Ali, M.A.; Avestimehr, A.S. Characterizing the rate-memory tradeoff in cache networks within a factor of 2. IEEE Trans. Inf. Theory 2019, 65, 647–663.
  4. Wan, K.; Tuninetti, D.; Piantanida, P. On the optimality of uncoded cache placement. In Proceedings of the IEEE Information Theory Workshop (ITW), Cambridge, UK, 11–14 September 2016; pp. 161–165.
  5. Yu, Q.; Maddah-Ali, M.A.; Avestimehr, A.S. The exact rate-memory tradeoff for caching with uncoded prefetching. IEEE Trans. Inf. Theory 2018, 64, 1281–1296.
  6. Yan, Q.; Cheng, M.; Tang, X.; Chen, Q. On the placement delivery array design for centralized coded caching scheme. IEEE Trans. Inf. Theory 2017, 63, 5821–5833.
  7. Zhang, D.; Liu, N. Coded cache placement for heterogeneous cache sizes. In Proceedings of the IEEE Information Theory Workshop (ITW), Guangzhou, China, 25–29 November 2018; pp. 1–5.
  8. Wang, S.; Peleato, B. Coded caching with heterogeneous user profiles. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 2619–2623.
  9. Zhang, J.; Lin, X.; Wang, C.C. Coded caching for files with distinct file sizes. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Hong Kong, China, 14–19 June 2015; pp. 1686–1690.
  10. Ibrahim, A.M.; Zewail, A.A.; Yener, A. Centralized coded caching with heterogeneous cache sizes. In Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC), San Francisco, CA, USA, 19–22 March 2017; pp. 1–6.
  11. Ibrahim, A.M.; Zewail, A.A.; Yener, A. Coded caching for heterogeneous systems: An optimization perspective. IEEE Trans. Commun. 2019, 67, 5321–5335.
  12. Amiri, M.M.; Yang, Q.; Gündüz, D. Decentralized caching and coded delivery with distinct cache capacities. IEEE Trans. Commun. 2017, 65, 4657–4669.
  13. Cao, D.; Zhang, D.; Chen, P.; Liu, N.; Kang, W.; Gündüz, D. Coded caching with asymmetric cache sizes and link qualities: The two-user case. IEEE Trans. Commun. 2019, 67, 6112–6126.
  14. Niesen, U.; Maddah-Ali, M.A. Coded caching with nonuniform demands. IEEE Trans. Inf. Theory 2017, 63, 1146–1158.
  15. Zhang, J.; Lin, X.; Wang, X. Coded caching under arbitrary popularity distributions. IEEE Trans. Inf. Theory 2018, 64, 349–366.
  16. Pedarsani, R.; Maddah-Ali, M.A.; Niesen, U. Online coded caching. IEEE/ACM Trans. Netw. 2016, 24, 836–845.
  17. Daniel, A.M.; Yu, W. Optimization of heterogeneous coded caching. IEEE Trans. Inf. Theory 2020, 66, 1893–1919.
  18. Shariatpanahi, S.P.; Motahari, S.A.; Khalaj, B.H. Multi-server coded caching. IEEE Trans. Inf. Theory 2016, 62, 7253–7271.
  19. Zhang, J.; Elia, P. Fundamental limits of cache-aided wireless BC: Interplay of coded-caching and CSIT feedback. IEEE Trans. Inf. Theory 2017, 63, 3142–3160.
  20. Bidokhti, S.S.; Wigger, M.; Timo, R. Noisy broadcast networks with receiver caching. IEEE Trans. Inf. Theory 2018, 64, 6996–7016.
  21. Sengupta, A.; Tandon, R.; Simeone, O. Cache aided wireless networks: Tradeoffs between storage and latency. In Proceedings of the 2016 Annual Conference on Information Science and Systems (CISS), Princeton, NJ, USA, 15–18 March 2016; pp. 320–325.
  22. Tandon, R.; Simeone, O. Cloud-aided wireless networks with edge caching: Fundamental latency trade-offs in fog radio access networks. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 2029–2033.
  23. Karamchandani, N.; Niesen, U.; Maddah-Ali, M.A.; Diggavi, S.N. Hierarchical coded caching. IEEE Trans. Inf. Theory 2016, 62, 3212–3229.
  24. Wang, K.; Wu, Y.; Chen, J.; Yin, H. Reduce transmission delay for caching-aided two-layer networks. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 2019–2023.
  25. Wan, K.; Ji, M.; Piantanida, P.; Tuninetti, D. Caching in combination networks: Novel multicast message generation and delivery by leveraging the network topology. In Proceedings of the IEEE International Conference on Communications (ICC), Kansas City, MO, USA, 20–24 May 2018; pp. 1–6.
  26. Naderializadeh, N.; Maddah-Ali, M.A.; Avestimehr, A.S. Fundamental limits of cache-aided interference management. IEEE Trans. Inf. Theory 2017, 63, 3092–3107.
  27. Xu, F.; Tao, M.; Liu, K. Fundamental tradeoff between storage and latency in cache-aided wireless interference networks. IEEE Trans. Inf. Theory 2017, 63, 7464–7491.
  28. Ji, M.; Tulino, A.M.; Llorca, J.; Caire, G. Order-optimal rate of caching and coded multicasting with random demands. IEEE Trans. Inf. Theory 2017, 63, 3923–3949.
  29. Ji, M.; Tulino, A.M.; Llorca, J.; Caire, G. Caching in combination networks. In Proceedings of the 49th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 8–11 November 2015.
  30. Ravindrakumar, V.; Panda, P.; Karamchandani, N.; Prabhakaran, V. Fundamental limits of secretive coded caching. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 425–429.
  31. Tang, L.; Ramamoorthy, A. Coded caching schemes with reduced subpacketization from linear block codes. IEEE Trans. Inf. Theory 2018, 64, 3099–3120.
  32. Cheng, M.; Li, J.; Tang, X.; Wei, R. Linear coded caching scheme for centralized networks. IEEE Trans. Inf. Theory 2021, 67, 1732–1742.
  33. Wan, K.; Caire, G. On coded caching with private demands. IEEE Trans. Inf. Theory 2021, 67, 358–372.
  34. Hassanzadeh, P.; Tulino, A.M.; Llorca, J.; Erkip, E. Rate-memory trade-off for caching and delivery of correlated sources. IEEE Trans. Inf. Theory 2020, 66, 2219–2251.
  35. Ji, M.; Caire, G.; Molisch, A.F. Fundamental limits of caching in wireless D2D networks. IEEE Trans. Inf. Theory 2016, 62, 849–869.
  36. Tebbi, A.; Sung, C.W. Coded caching in partially cooperative D2D communication networks. In Proceedings of the 9th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), Munich, Germany, 6–8 November 2017; pp. 148–153.
  37. Wang, J.; Cheng, M.; Yan, Q.; Tang, X. Placement delivery array design for coded caching scheme in D2D networks. IEEE Trans. Commun. 2019, 67, 3388–3395.
  38. Malak, D.; Al-Shalash, M.; Andrews, J.G. Spatially correlated content caching for device-to-device communications. IEEE Trans. Wirel. Commun. 2018, 17, 56–70.
  39. Ibrahim, A.M.; Zewail, A.A.; Yener, A. Device-to-device coded caching with distinct cache sizes. arXiv 2019, arXiv:1903.08142.
  40. Pedersen, J.; Amat, A.G.; Andriyanova, I.; Brännström, F. Optimizing MDS coded caching in wireless networks with device-to-device communication. IEEE Trans. Wirel. Commun. 2019, 18, 286–295.
  41. Chiang, M.; Zhang, T. Fog and IoT: An overview of research opportunities. IEEE Internet Things J. 2016, 3, 854–864.
Figure 1. Caching system considered in this paper. A server connects with $K$ cache-enabled users and the users can cooperate through a flexible network.
Figure 2. Centralized cooperation gain and parallel gain when $N = 40$, $K = 20$ and $\alpha_{\max} = 5$.
Figure 3. Transmission delay when $N = 40$, $K = 20$ and $\alpha_{\max} = 5$. The upper bounds are achieved under the centralized caching scenario.
Figure 4. Decentralized cooperation gain and parallel gain when $N = 20$ and $K = 10$.
Figure 5. Transmission delay when $N = 100$, $K = 20$ and $\alpha_{\max} = 3$. The upper bounds are achieved under the decentralized random caching scenario.
Table 1. Subfiles sent by users in different partitions, $l = 1, 2$.

Partition $\{1,2,3\}$ | Partition $\{4,5,6\}$
user 2: $W_{d_1,\{2,3,4,5\}}^{1}\oplus W_{d_3,\{1,2,4,5\}}^{1}$ | user 5: $W_{d_4,\{2,3,5,6\}}^{1}\oplus W_{d_6,\{2,3,4,5\}}^{1}$
user 2: $W_{d_1,\{2,3,4,6\}}^{1}\oplus W_{d_3,\{1,2,4,6\}}^{1}$ | user 5: $W_{d_4,\{1,2,5,6\}}^{1}\oplus W_{d_6,\{1,2,4,5\}}^{1}$
user 1: $W_{d_2,\{1,3,4,6\}}^{1}\oplus W_{d_3,\{1,2,5,6\}}^{1}$ | user 4: $W_{d_5,\{2,3,4,6\}}^{1}\oplus W_{d_6,\{1,3,4,5\}}^{1}$
user 3: $W_{d_1,\{2,3,5,6\}}^{1}\oplus W_{d_2,\{1,3,5,6\}}^{1}$ | user 6: $W_{d_4,\{1,3,5,6\}}^{1}\oplus W_{d_5,\{1,3,4,6\}}^{1}$
Partition $\{1,2,4\}$ | Partition $\{3,5,6\}$
user 2: $W_{d_1,\{2,4,5,6\}}^{l}\oplus W_{d_4,\{1,2,3,5\}}^{l}$ | user 5: $W_{d_3,\{1,4,5,6\}}^{l}\oplus W_{d_6,\{1,2,3,5\}}^{l}$
Partition $\{1,4,6\}$ | Partition $\{2,3,5\}$
user 6: $W_{d_1,\{3,4,5,6\}}^{l}\oplus W_{d_4,\{1,2,3,6\}}^{l}$ | user 3: $W_{d_2,\{3,4,5,6\}}^{l}\oplus W_{d_5,\{1,2,3,4\}}^{l}$
Partition $\{1,2,5\}$ | Partition $\{3,4,6\}$
user 1: $W_{d_2,\{1,4,5,6\}}^{l}\oplus W_{d_5,\{1,2,3,6\}}^{l}$ | user 4: $W_{d_3,\{2,4,5,6\}}^{l}\oplus W_{d_6,\{1,2,3,4\}}^{l}$
Partition $\{1,2,3\}$ | Partition $\{4,5,6\}$
user 3: $W_{d_1,\{2,3,4,5\}}^{2}\oplus W_{d_2,\{1,3,4,5\}}^{2}$ | user 4: $W_{d_5,\{2,3,4,6\}}^{2}\oplus W_{d_6,\{2,3,4,5\}}^{2}$
user 3: $W_{d_1,\{2,3,4,6\}}^{2}\oplus W_{d_2,\{1,3,4,6\}}^{2}$ | user 4: $W_{d_5,\{1,2,4,6\}}^{2}\oplus W_{d_6,\{1,2,4,5\}}^{2}$
user 2: $W_{d_1,\{2,3,5,6\}}^{2}\oplus W_{d_3,\{1,2,4,5\}}^{2}$ | user 5: $W_{d_4,\{1,3,5,6\}}^{2}\oplus W_{d_6,\{1,3,4,5\}}^{2}$
user 1: $W_{d_3,\{1,2,4,6\}}^{2}\oplus W_{d_2,\{1,3,5,6\}}^{2}$ | user 6: $W_{d_4,\{1,2,5,6\}}^{2}\oplus W_{d_5,\{1,3,4,6\}}^{2}$
user 1: $W_{d_3,\{1,2,5,6\}}^{2}\oplus W_{d_2,\{1,3,4,5\}}^{1}$ | user 6: $W_{d_5,\{1,2,4,6\}}^{1}\oplus W_{d_4,\{2,3,5,6\}}^{2}$
Table 2. Parallel user delivery when $K = 7$, $s = 4$, $|\mathcal{G}_1^r| = 4$ and $|\mathcal{G}_2^r| = 3$, $r\in[35]$.

Group $\{1,2,3,4\}$ | Group $\{5,6,7\}$
user 1: $W_{d_2,\{1,3,4\}}^{u_{1,1}}\oplus W_{d_3,\{1,2,4\}}^{u_{1,1}}\oplus W_{d_4,\{1,2,3\}}^{u_{1,1}}$ | user 5: $\bigoplus_{x\in\{1,2,3,4\}}\left(W_{d_6,\{5,7,x\}}^{u_{2,1}}\oplus W_{d_7,\{5,6,x\}}^{u_{2,1}}\right)$
user 2: $W_{d_1,\{2,3,4\}}^{u_{1,1}}\oplus W_{d_3,\{1,2,4\}}^{u_{1,2}}\oplus W_{d_4,\{1,2,3\}}^{u_{1,2}}$ | user 6: $\bigoplus_{x\in\{1,2,3,4\}}\left(W_{d_5,\{6,7,x\}}^{u_{2,1}}\oplus W_{d_7,\{5,6,x\}}^{u_{2,2}}\right)$
user 3: $W_{d_2,\{1,3,4\}}^{u_{1,2}}\oplus W_{d_1,\{2,3,4\}}^{u_{1,2}}\oplus W_{d_4,\{1,2,3\}}^{u_{1,3}}$ | user 7: $\bigoplus_{x\in\{1,2,3,4\}}\left(W_{d_6,\{5,7,x\}}^{u_{2,2}}\oplus W_{d_5,\{6,7,x\}}^{u_{2,2}}\right)$
user 4: $W_{d_2,\{1,3,4\}}^{u_{1,3}}\oplus W_{d_3,\{1,2,4\}}^{u_{1,3}}\oplus W_{d_1,\{2,3,4\}}^{u_{1,3}}$
Group $\{1,2,3,5\}$ | Group $\{4,6,7\}$
user 1: $W_{d_2,\{1,3,5\}}^{u_{1,1}}\oplus W_{d_3,\{1,2,5\}}^{u_{1,1}}\oplus W_{d_5,\{1,2,3\}}^{u_{1,1}}$ | user 4: $\bigoplus_{x\in\{1,2,3,5\}}\left(W_{d_6,\{4,7,x\}}^{u_{2,y(..)}}\oplus W_{d_7,\{4,6,x\}}^{u_{2,y(..)}}\right)$
user 2: $W_{d_1,\{2,3,5\}}^{u_{1,1}}\oplus W_{d_3,\{1,2,5\}}^{u_{1,2}}\oplus W_{d_5,\{1,2,3\}}^{u_{1,2}}$ | user 6: $\bigoplus_{x\in\{1,2,3,5\}}\left(W_{d_4,\{6,7,x\}}^{u_{2,1}}\oplus W_{d_7,\{4,6,x\}}^{u_{2,y(..)}}\right)$
user 3: $W_{d_2,\{1,3,5\}}^{u_{1,2}}\oplus W_{d_1,\{2,3,5\}}^{u_{1,2}}\oplus W_{d_5,\{1,2,3\}}^{u_{1,3}}$ | user 7: $\bigoplus_{x\in\{1,2,3,5\}}\left(W_{d_6,\{4,7,x\}}^{u_{2,y(..)}}\oplus W_{d_4,\{6,7,x\}}^{u_{2,2}}\right)$
user 5: $W_{d_2,\{1,3,5\}}^{u_{1,3}}\oplus W_{d_3,\{1,2,5\}}^{u_{1,3}}\oplus W_{d_1,\{2,3,5\}}^{u_{1,3}}$
Group $\{1,2,3,6\}$ | Group $\{4,5,7\}$
user 1: $W_{d_2,\{1,3,6\}}^{u_{1,1}}\oplus W_{d_3,\{1,2,6\}}^{u_{1,1}}\oplus W_{d_6,\{1,2,3\}}^{u_{1,1}}$ | user 4: $\bigoplus_{x\in\{1,2,3,6\}}\left(W_{d_5,\{4,7,x\}}^{u_{2,y(..)}}\oplus W_{d_7,\{4,5,x\}}^{u_{2,y(..)}}\right)$
user 2: $W_{d_1,\{2,3,6\}}^{u_{1,1}}\oplus W_{d_3,\{1,2,6\}}^{u_{1,2}}\oplus W_{d_6,\{1,2,3\}}^{u_{1,2}}$ | user 5: $\bigoplus_{x\in\{1,2,3,6\}}\left(W_{d_4,\{5,7,x\}}^{u_{2,1}}\oplus W_{d_7,\{4,5,x\}}^{u_{2,y(..)}}\right)$
user 3: $W_{d_2,\{1,3,6\}}^{u_{1,2}}\oplus W_{d_1,\{2,3,6\}}^{u_{1,2}}\oplus W_{d_6,\{1,2,3\}}^{u_{1,3}}$ | user 7: $\bigoplus_{x\in\{1,2,3,6\}}\left(W_{d_5,\{4,7,x\}}^{u_{2,y(..)}}\oplus W_{d_4,\{5,7,x\}}^{u_{2,2}}\right)$
user 6: $W_{d_2,\{1,3,6\}}^{u_{1,3}}\oplus W_{d_3,\{1,2,6\}}^{u_{1,3}}\oplus W_{d_1,\{2,3,6\}}^{u_{1,3}}$
⋯
There are 35 partitions in total; the table shows only three of them.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
