Hierarchical Cache-Aided Networks for Linear Function Retrieval
Figure 1. The $(K_1, K_2; M_1, M_2; N)$ hierarchical caching system with $N = 4$, $K_1 = K_2 = 2$, $M_1 = 2$, and $M_2 = 1$.
Figure 2. $R_1$ for $N = 200$, $K_1 = 20$, $K_2 = 10$.
Abstract
1. Introduction
- We first propose a baseline scheme that combines the WSJT scheme and the KNMD scheme, in which the second-layer load achieves the optimal transmission load. However, this optimality comes at the cost of a larger first-layer load.
- Then, in order to reduce the first-layer load, we propose another scheme whose second-layer load also achieves optimality, at the expense of increased subpacketization. This scheme additionally reduces transmission redundancy for some special demand distributions.
2. Preliminaries
2.1. System Model
- Placement phase: Each mirror site $\mathcal{M}_{k_1}$ caches some parts of the files by using a cache function, where $M_1/N$ is the memory ratio of the mirrors in the first layer, $0 \le M_1 \le N$. We denote the cache content of mirror $\mathcal{M}_{k_1}$ by $Z_{k_1}$. Similarly, each user $\mathcal{U}_{(k_1,k_2)}$ caches some parts of the files by using a cache function, where $M_2/N$ is the memory ratio of the users in the second layer, $0 \le M_2 \le N$. The cache content of user $\mathcal{U}_{(k_1,k_2)}$ is denoted by $Z_{(k_1,k_2)}$.
- Delivery phase: Each user randomly requests a linear combination of the $N$ files. Two types of messages are then transmitted:
- The messages sent by the server: The server generates a signal $X$ by using an encoding function of the $N$ files and the users' demands.
- The messages sent by the mirrors: Based on its cache content $Z_{k_1}$, the received signal $X$, and the users' demands, each mirror $\mathcal{M}_{k_1}$ generates a signal $X_{k_1}$ by using an encoding function.
For the retrieval process, each user $\mathcal{U}_{(k_1,k_2)}$ must be able to decode its required linear combination of files from the mirror signal $X_{k_1}$ and its own cache content $Z_{(k_1,k_2)}$; that is, there exist decoding functions, one per user, such that every user recovers its demanded linear combination with zero error.
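To make the two-phase model concrete, the following Python sketch implements a single layer of Maddah-Ali–Niesen-style uncoded placement and checks the memory constraint. The function name mn_placement and the demo parameters are illustrative assumptions, not the placement used later in this paper.

```python
from itertools import combinations

def mn_placement(K, M, N):
    """Maddah-Ali--Niesen uncoded placement sketch.

    t = K*M/N must be an integer. Each file is split into C(K, t)
    equal subfiles, indexed by the t-subsets of {0, ..., K-1};
    node k caches every subfile whose index set contains k.
    """
    assert (K * M) % N == 0, "t = K*M/N must be an integer in this sketch"
    t = K * M // N
    subsets = list(combinations(range(K), t))
    cache = {k: [S for S in subsets if k in S] for k in range(K)}
    return t, subsets, cache

# First layer of Figure 1: N = 4 files, K1 = 2 mirrors, M1 = 2.
t1, subsets, cache = mn_placement(K=2, M=2, N=4)
# Each node stores N * C(K-1, t-1) / C(K, t) = N*t/K = M file units.
stored = 4 * len(cache[0]) / len(subsets)
print(f"t1 = {t1}, each mirror stores {stored} files (M1 = 2)")
```

The same routine applies to the user layer whenever $K_2 M_2 / N$ is an integer; the second layer of Figure 1 ($K_2 = 2$, $M_2 = 1$, $N = 4$) gives a non-integer $t$ and would require memory sharing between neighboring integer points.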
2.2. Existing Schemes
3. Main Results
4. Scheme for Theorem 2
4.1. An Example for Theorem 2
- Placement phase: Each file is divided into equal-size subfiles. For simplicity, we represent a set appearing as a subscript of a studied object by a string; for example, the set $\{1,2\}$ is represented by the string 12 (a short sketch of this labeling follows the example). The contents cached by the mirrors are as follows. The subfiles cached by the users are as follows.
- Delivery phase: In the delivery phase, the users reveal their demand vectors of length 12. Without loss of generality, we fix one such demand realization and denote a linear combination of the demanded subfiles as follows.
- The messages sent by the server: The server generates signals satisfying the mirrors' requirements as follows. In this example, the two index sets involved have no intersection, and by Lemma 2 the remaining signals can be generated from those already transmitted. Thus, we obtain the transmission load of the first layer.
- The messages sent by the mirrors: Here, we take one mirror as an example. From the signals received from the server and its cached content, the mirror forms and transmits the coded packets required by its attached users, i.e.,
Then, the mirror transmits 3 packets in total, and the transmission load of the second layer follows.
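The string notation for subscript sets used above can be made explicit with a few lines of Python. The split into 2-subsets of $\{1,2,3,4\}$ below is only an illustrative assumption, since the example's exact subpacketization is fixed by its own parameters.

```python
from itertools import combinations

def subset_label(S):
    """Represent an index set such as {1, 2} by the string '12' (Section 4.1)."""
    return "".join(str(i) for i in sorted(S))

# Hypothetical split: label the subfiles of a file W_n indexed by the
# 2-subsets of {1, 2, 3, 4}.
labels = [subset_label(S) for S in combinations(range(1, 5), 2)]
print(labels)  # ['12', '13', '14', '23', '24', '34']
```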
4.2. General Description of Scheme for Theorem 2
- Placement phase: Firstly, we divide each file into equal-size subfiles; then, we further divide each subfile into sub-subfiles. The index of each subfile consists of two parts, one associated with the mirrors and one associated with the users. Each mirror site caches subfiles according to a rule that depends mainly on the first subscript; similarly, each user caches subfiles according to a rule that depends mainly on the second subscript (a sketch of this two-part indexing is given after this list). Under this caching strategy, we can verify that the memory constraints stated in Theorem 2 are satisfied: counting the subfiles cached by each mirror and by each user shows that the memory ratios of the mirror and the user are $M_1/N$ and $M_2/N$, respectively. For any user demand vector of length $N$, we use the following notation to denote a linear combination of subfiles:
- Delivery phase: For the convenience of the subsequent discussion, we first define the two types of signals transmitted in the first layer and in the second layer. For any set of mirror sites and any set of users, we define the corresponding coded signals. After the demand matrix and its sub-matrices in (1) are revealed, we obtain the leader mirror set according to Definition 1 and, for each sub-matrix, the leader user set according to Definition 2. There are two types of messages, transmitted by the server and by the mirrors, respectively.
- The messages sent by the server: For each relevant index set, the server transmits the corresponding coded signal to the mirror sites.
- The messages sent by the mirrors: Each mirror site transmits via the following rules. (1) For each server signal that involves subfiles cached at the mirror, the mirror transmits the signal after subtracting the part it has cached. (2) For each remaining required signal, the mirror directly transmits to its connected users the coded message generated from its cached content.
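As referenced in the placement step above, the following sketch illustrates the two-part subfile indexing: every subfile carries a mirror-side subscript and a user-side subscript, and each node's cache is determined only by the corresponding part. The subset sizes t1 and t2 and the membership rule below are assumptions standing in for the exact rule behind Theorem 2.

```python
from itertools import combinations

def two_level_placement(K1, t1, K2, t2):
    """Index each subfile class by a pair (T1, T2): a t1-subset of the mirrors
    and a t2-subset of the users attached to each mirror. Mirror k1 caches the
    subfiles with k1 in T1; user (k1, k2) caches the subfiles with k2 in T2."""
    index = [(T1, T2)
             for T1 in combinations(range(K1), t1)
             for T2 in combinations(range(K2), t2)]
    mirror_cache = {k1: [i for i in index if k1 in i[0]] for k1 in range(K1)}
    user_cache = {(k1, k2): [i for i in index if k2 in i[1]]
                  for k1 in range(K1) for k2 in range(K2)}
    return index, mirror_cache, user_cache

index, mirror_cache, user_cache = two_level_placement(K1=2, t1=1, K2=2, t2=1)
print(len(index), len(mirror_cache[0]), len(user_cache[(0, 0)]))  # 4 2 2
```

Counting as in Section 4.2.3, each mirror caches a fraction $\binom{K_1-1}{t_1-1}/\binom{K_1}{t_1} = t_1/K_1$ of every file, which matches the memory ratio $M_1/N$ when $t_1 = K_1 M_1/N$; the analogous count holds for the users.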
4.2.1. Decodability at the Mirrors
4.2.2. Decodability at the Users
4.2.3. Performance
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Proof of Lemma 1
Appendix B. Proof of Lemma 2
References
- Maddah-Ali, M.A.; Niesen, U. Fundamental limits of caching. IEEE Trans. Inf. Theory 2014, 60, 2856–2867.
- Yu, Q.; Maddah-Ali, M.A.; Avestimehr, A.S. Characterizing the rate-memory tradeoff in cache networks within a factor of 2. IEEE Trans. Inf. Theory 2019, 65, 647–663.
- Wan, K.; Tuninetti, D.; Piantanida, P. An index coding approach to caching with uncoded cache placement. IEEE Trans. Inf. Theory 2020, 66, 1318–1332.
- Wan, K.; Ji, M.; Piantanida, P.; Tuninetti, D. Caching in combination networks: Novel multicast message generation and delivery by leveraging the network topology. In Proceedings of the 2018 IEEE International Conference on Communications, Kansas City, MO, USA, 20–24 May 2018; pp. 1–6.
- Ji, M.; Tulino, A.M.; Llorca, J.; Caire, G. Caching in combination networks. In Proceedings of the 2015 49th Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 8–11 November 2015; pp. 1269–1273.
- Ji, M.; Caire, G.; Molisch, A.F. Fundamental limits of caching in wireless D2D networks. IEEE Trans. Inf. Theory 2016, 62, 849–869.
- Zhou, H.; Jiang, K.; He, S.B.; Min, G.Y.; Wu, J. Distributed deep multi-agent reinforcement learning for cooperative edge caching in Internet-of-Vehicles. IEEE Trans. Wireless Commun. 2023, 22, 9595–9609.
- Zhou, H.; Wang, Z.; Zheng, H.T.; He, S.B.; Dong, M.X. Cost minimization-oriented computation offloading and service caching in mobile cloud-edge computing: An A3C-based approach. IEEE Trans. Netw. Sci. Eng. 2023, 10, 1326–1338.
- Karamchandani, N.; Niesen, U.; Maddah-Ali, M.A.; Diggavi, S.N. Hierarchical coded caching. IEEE Trans. Inf. Theory 2016, 62, 3212–3229.
- Wang, K.; Wu, Y.; Chen, J.; Yin, H. Reduce transmission delay for caching-aided two-layer networks. In Proceedings of the 2019 IEEE International Symposium on Information Theory, Paris, France, 7–12 July 2019; pp. 2019–2023.
- Kong, Y.; Wu, Y.; Cheng, M. Combinatorial designs for coded caching on hierarchical networks. In Proceedings of the 2023 IEEE Wireless Communications and Networking Conference, Glasgow, UK, 26–29 March 2023; pp. 1–6.
- Wan, K.; Sun, H.; Ji, M.; Tuninetti, D.; Caire, G. On the optimal load-memory tradeoff of cache-aided scalar linear function retrieval. IEEE Trans. Inf. Theory 2021, 67, 4001–4018.