TY - GEN
T1 - Scheduling memory access on a distributed cloud storage network
AU - Rojas-Cessa, Roberto
AU - Cai, Lin
AU - Kijkanjanarat, Taweesak
N1 - Copyright:
Copyright 2012 Elsevier B.V., All rights reserved.
PY - 2012
Y1 - 2012
N2 - Memory-access speed continues to fall behind the growing speeds of network transmission links. High-speed network links provide a means to connect memory placed in hosts located in different corners of the network. These hosts, where data can be stored, are called storage system units (SSUs). Cloud storage provided by a single server can offer large amounts of storage to a user, but only at low access speeds. A distributed approach to cloud storage is an attractive alternative. In a distributed cloud, small high-speed memories at SSUs can potentially increase the memory-access speed for data processing and transmission. However, the latency of each SSU may differ; therefore, the selection of SSUs impacts the overall memory-access speed. This paper proposes a latency-aware scheduling scheme to access data from SSUs. The scheme determines the minimum latency requirement for a given dataset and selects available SSUs that satisfy the required latencies. Furthermore, because the latencies of some selected SSUs may be large, the proposed scheme notifies SSUs in advance of the expected time to perform data access. Simulation results show that the proposed scheme achieves faster access speeds than a scheme that randomly selects SSUs and one that greedily selects SSUs with small latencies.
AB - Memory-access speed continues to fall behind the growing speeds of network transmission links. High-speed network links provide a means to connect memory placed in hosts located in different corners of the network. These hosts, where data can be stored, are called storage system units (SSUs). Cloud storage provided by a single server can offer large amounts of storage to a user, but only at low access speeds. A distributed approach to cloud storage is an attractive alternative. In a distributed cloud, small high-speed memories at SSUs can potentially increase the memory-access speed for data processing and transmission. However, the latency of each SSU may differ; therefore, the selection of SSUs impacts the overall memory-access speed. This paper proposes a latency-aware scheduling scheme to access data from SSUs. The scheme determines the minimum latency requirement for a given dataset and selects available SSUs that satisfy the required latencies. Furthermore, because the latencies of some selected SSUs may be large, the proposed scheme notifies SSUs in advance of the expected time to perform data access. Simulation results show that the proposed scheme achieves faster access speeds than a scheme that randomly selects SSUs and one that greedily selects SSUs with small latencies.
UR - http://www.scopus.com/inward/record.url?scp=84861427601&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84861427601&partnerID=8YFLogxK
U2 - 10.1109/WOCC.2012.6198152
DO - 10.1109/WOCC.2012.6198152
M3 - Conference contribution
AN - SCOPUS:84861427601
SN - 9781467309394
T3 - 2012 21st Annual Wireless and Optical Communications Conference, WOCC 2012
SP - 71
EP - 76
BT - 2012 21st Annual Wireless and Optical Communications Conference, WOCC 2012
T2 - 2012 21st Annual Wireless and Optical Communications Conference, WOCC 2012
Y2 - 19 April 2012 through 21 April 2012
ER -