TY - JOUR
T1 - Fog-Aided Wireless Networks for Content Delivery
T2 - Fundamental Latency Tradeoffs
AU - Sengupta, Avik
AU - Tandon, Ravi
AU - Simeone, Osvaldo
N1 - Funding Information:
Manuscript received May 5, 2016; revised March 11, 2017; accepted July 28, 2017. Date of publication August 4, 2017; date of current version September 13, 2017. O. Simeone was supported in part by the European Research Council under the European Union's Horizon 2020 Research and Innovation Programme under Grant 725731 and in part by the U.S. NSF under Grant CCF-1525629. This paper was presented in part at the 2016 50th Annual Conference on Information Sciences and Systems, at the 2016 IEEE International Symposium on Information Theory, at the 2016 IEEE International Workshop on Signal Processing Advances in Wireless Communications, and at the 2016 IEEE Globecom.
Publisher Copyright:
© 1963-2012 IEEE.
PY - 2017/10
Y1 - 2017/10
AB - A fog-aided wireless network architecture is studied in which edge nodes (ENs), such as base stations, are connected to a cloud processor via dedicated fronthaul links while also being endowed with caches. Cloud processing enables the centralized implementation of cooperative transmission strategies at the ENs, albeit at the cost of an increased latency due to fronthaul transfer. In contrast, the proactive caching of popular content at the ENs allows for the low-latency delivery of the cached files, but with generally limited opportunities for cooperative transmission among the ENs. The interplay between cloud processing and edge caching is addressed from an information-theoretic viewpoint by investigating the fundamental limits of a high signal-to-noise-ratio metric, termed normalized delivery time (NDT), which captures the worst case coding latency for delivering any requested content to the users. The NDT is defined under the assumptions of either serial or pipelined fronthaul-edge transmission, and is studied as a function of fronthaul and cache capacity constraints. Placement and delivery strategies across both fronthaul and wireless, or edge, segments are proposed with the aim of minimizing the NDT. Information-theoretic lower bounds on the NDT are also derived. Achievability arguments and lower bounds are leveraged to characterize the minimal NDT in a number of important special cases, including systems with no caching capabilities, as well as to prove that the proposed schemes achieve optimality within a constant multiplicative factor of 2 for all values of the problem parameters.
KW - 5G
KW - Caching
KW - cloud radio access network (C-RAN)
KW - degrees-of-freedom
KW - edge processing
KW - fog radio access network
KW - interference channel
KW - latency
KW - wireless networks
UR - http://www.scopus.com/inward/record.url?scp=85028962494&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85028962494&partnerID=8YFLogxK
U2 - 10.1109/TIT.2017.2735962
DO - 10.1109/TIT.2017.2735962
M3 - Article
AN - SCOPUS:85028962494
SN - 0018-9448
VL - 63
SP - 6650
EP - 6678
JO - IEEE Transactions on Information Theory
JF - IEEE Transactions on Information Theory
IS - 10
M1 - 8002603
ER -