## Abstract

Large matrix multiplications commonly take place in large-scale machine-learning applications. Often, the sheer size of these matrices prevents carrying out the multiplication at a single server. Therefore, these operations are typically offloaded to a distributed computing platform with a master server and a large number of workers in the cloud, operating in parallel. For such distributed platforms, it has recently been shown that coding over the input data matrices can reduce the computational delay by introducing a tolerance against straggling workers, i.e., workers whose execution time significantly lags behind the average. In addition to exact recovery, we impose a security constraint on both matrices to be multiplied. Specifically, we assume that workers can collude and eavesdrop on the content of these matrices. For this problem, we introduce a new class of polynomial codes with fewer non-zero coefficients than the degree plus one. We provide closed-form expressions for the recovery threshold and show that our construction improves the recovery threshold of existing schemes in the literature, in particular for larger matrix dimensions and a moderate to large number of colluding workers. In the absence of any security constraints, we show that our construction is optimal in terms of recovery threshold.
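To illustrate the general polynomial-coding idea the abstract builds on, the following is a minimal sketch of the classical (non-secure) polynomial code for straggler-tolerant distributed matrix multiplication: the master encodes row blocks of A and column blocks of B as evaluations of two polynomials, each worker multiplies its two encoded blocks, and the master interpolates the product once any mn workers respond. The block splits, evaluation points, and helper names here are illustrative assumptions; the paper's own construction (sparse polynomial codes with security against colluding workers) is not reproduced here.

```python
# Illustrative sketch of the classical polynomial-code idea (not the paper's scheme).
import numpy as np

def encode(A, B, m, n, eval_points):
    """Split A row-wise into m blocks and B column-wise into n blocks,
    then give each worker one evaluation of the two encoding polynomials."""
    A_blocks = np.split(A, m, axis=0)                             # A_0, ..., A_{m-1}
    B_blocks = np.split(B, n, axis=1)                             # B_0, ..., B_{n-1}
    shares = []
    for x in eval_points:
        A_x = sum(A_blocks[i] * x**i for i in range(m))           # A(x) = sum_i A_i x^i
        B_x = sum(B_blocks[j] * x**(j * m) for j in range(n))     # B(x) = sum_j B_j x^{jm}
        shares.append((x, A_x, B_x))
    return shares

def worker(share):
    x, A_x, B_x = share
    return x, A_x @ B_x        # each worker multiplies its two encoded blocks

def decode(results, m, n, rows, cols):
    """Interpolate the degree-(mn-1) product polynomial from any mn results;
    its coefficient of x^(i + j*m) is exactly the block product A_i B_j."""
    k = m * n                                                     # recovery threshold
    xs = np.array([x for x, _ in results[:k]])
    V = np.vander(xs, k, increasing=True)                         # Vandermonde system
    ys = np.stack([C.ravel() for _, C in results[:k]])
    coeffs = np.linalg.solve(V, ys)                               # recover all block products
    blocks = coeffs.reshape(k, rows // m, cols // n)
    return np.block([[blocks[i + j * m] for j in range(n)] for i in range(m)])

# Usage: with m = n = 2, any 4 of the 6 workers suffice (recovery threshold mn = 4).
rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 6)), rng.standard_normal((6, 4))
shares = encode(A, B, 2, 2, eval_points=[1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
results = [worker(s) for s in shares[:4]]                         # two stragglers never respond
assert np.allclose(decode(results, 2, 2, 4, 4), A @ B)
```

The recovery threshold mn in this sketch is what the paper's sparse, security-aware construction improves upon in the regimes described above.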

Original language | English (US)
---|---
Article number | 266
Journal | Entropy
Volume | 25
Issue number | 2
DOIs |
State | Published - Feb 2023

## All Science Journal Classification (ASJC) codes

- Information Systems
- Electrical and Electronic Engineering
- General Physics and Astronomy
- Mathematical Physics
- Physics and Astronomy (miscellaneous)

## Keywords

- distributed computation
- distributed learning
- information theoretic security
- matrix multiplication
- polynomial codes