Online reinforcement learning of X-haul content delivery mode in fog radio access networks

Jihwan Moon, Osvaldo Simeone, Seok Hwan Park, Inkyu Lee

Research output: Contribution to journal › Article › peer-review

Abstract

We consider a Fog Radio Access Network (F-RAN) with a Base Band Unit (BBU) in the cloud and multiple cache-enabled enhanced Remote Radio Heads (eRRHs). The system aims to deliver content on demand from a time-varying library of popular files with minimal average latency. Uncached requested files can be transferred from the cloud to the eRRHs in either backhaul or fronthaul mode. The backhaul mode transfers fractions of the requested files, while the fronthaul mode transmits quantized baseband samples as in Cloud-RAN (C-RAN). The backhaul mode allows the caches of the eRRHs to be updated, which may lower future delivery latencies. In contrast, the fronthaul mode enables cooperative C-RAN transmissions that may reduce the current delivery latency. Taking into account this trade-off between current and future delivery performance, this letter proposes an adaptive method that selects between the two delivery modes so as to minimize the long-term delivery latency. Assuming an unknown and time-varying popularity model, the method is based on model-free Reinforcement Learning (RL). Numerical results confirm the effectiveness of the proposed RL-based approach.
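To make the mode-selection idea concrete, the sketch below shows a toy tabular Q-learning agent choosing between a "backhaul" and a "fronthaul" action. This is not the authors' algorithm from the letter; the state (a discretized cache-hit ratio), the synthetic latency rewards, and all parameter values are illustrative assumptions, used only to show how a model-free RL agent can trade an immediate latency gain (fronthaul) against future gains from cache updates (backhaul).

```python
import random
from collections import defaultdict

# Toy illustration of model-free RL for X-haul mode selection.
# Assumptions (not from the paper): state = discretized cache-hit ratio (0..10),
# actions = {backhaul, fronthaul}, reward = negative per-slot latency
# drawn from a simple synthetic model.

ACTIONS = ("backhaul", "fronthaul")
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

q_table = defaultdict(float)  # (state, action) -> estimated long-term return


def choose_action(state):
    """Epsilon-greedy selection over the two delivery modes."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])


def step(state, action):
    """Synthetic environment: returns (next_state, reward).

    Fronthaul gives lower immediate latency; backhaul updates the cache,
    raising the hit ratio and lowering future latency. Purely illustrative.
    """
    hit_ratio = state / 10.0
    if action == "fronthaul":
        latency = 1.0 - 0.5 * hit_ratio                        # cooperative transmission helps now
        next_state = max(state - (random.random() < 0.3), 0)   # popularity drift erodes the cache
    else:
        latency = 1.5 - 0.5 * hit_ratio                        # slower now ...
        next_state = min(state + 1, 10)                        # ... but the cache improves
    return next_state, -latency                                # reward = negative latency


def train(episodes=2000, horizon=50):
    for _ in range(episodes):
        state = random.randint(0, 10)
        for _ in range(horizon):
            action = choose_action(state)
            next_state, reward = step(state, action)
            best_next = max(q_table[(next_state, a)] for a in ACTIONS)
            # Standard Q-learning update toward the bootstrapped target.
            q_table[(state, action)] += ALPHA * (
                reward + GAMMA * best_next - q_table[(state, action)]
            )
            state = next_state


if __name__ == "__main__":
    train()
    for s in range(11):
        best = max(ACTIONS, key=lambda a: q_table[(s, a)])
        print(f"cache-hit state {s}: prefer {best}")
```

Under these synthetic dynamics the learned policy tends to prefer the backhaul mode when the cache-hit ratio is low (investing in future latency) and the fronthaul mode when the cache is already well populated, mirroring the current-versus-future trade-off described in the abstract.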

Original language: English (US)
Article number: 2932193
Pages (from-to): 1451-1455
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 26
Issue number: 10
DOIs
State: Published - Oct 2019

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Electrical and Electronic Engineering
  • Applied Mathematics

Keywords

  • Caching
  • F-RAN (Fog RAN)
  • Machine learning
  • Reinforcement learning
  • X-haul
