## Abstract

In this paper, we study the Differentially Private Empirical Risk Minimization (DP-ERM) problem, considering both convex and non-convex loss functions. For DP-ERM with smooth (strongly) convex loss functions, with or without non-smooth regularization, we propose several novel methods that achieve (near) optimal expected excess risks (i.e., utility bounds) while requiring lower gradient complexity than existing approaches. For DP-ERM with smooth convex loss functions in high dimensions, we introduce an algorithm that achieves a superior upper bound with lower gradient complexity than previous solutions. In the second part of the paper, for DP-ERM with non-convex loss functions, we study both the low- and high-dimensional settings. In the low-dimensional case with a non-smooth regularizer, we extend an existing approach by measuring utility via the ℓ_{2} norm of the projected gradient. Furthermore, we introduce a novel error bound measurement, transitioning from empirical risk to population risk by employing the expected ℓ_{2} norm of the gradient. For the high-dimensional case, we show that by measuring utility with the Frank-Wolfe gap, we can bound the utility by the Gaussian width of the constraint set instead of the dimensionality (p) of the underlying space. We also show that the same advantages can be obtained by measuring the ℓ_{2} norm of the projected gradient. Finally, we show that the utility of certain special non-convex loss functions can be reduced to a level (depending only on log p) comparable to that of convex loss functions.
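As a point of reference for the setting studied above, a common baseline for DP-ERM is noisy (projected) gradient descent: each iteration clips per-example gradients to bound sensitivity and perturbs the averaged gradient with Gaussian noise calibrated to (ε, δ)-differential privacy. The sketch below is illustrative only and is not one of the paper's algorithms; the step size, clipping threshold, and the simple composition-based noise scale are hypothetical choices.

```python
# Illustrative DP-ERM baseline: noisy gradient descent on a least-squares loss.
# Not the paper's method; parameter choices (lr, clip, T) are assumptions.
import numpy as np

def dp_gradient_descent(X, y, epsilon=1.0, delta=1e-5, T=100, lr=0.1,
                        clip=1.0, seed=0):
    """Noisy GD for least squares under (epsilon, delta)-DP (sketch)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)
    # Gaussian-mechanism noise scale for T adaptive gradient releases,
    # via a simple (non-tight) composition bound.
    sigma = clip * np.sqrt(2 * T * np.log(1.25 / delta)) / (n * epsilon)
    for _ in range(T):
        residuals = X @ w - y
        grads = residuals[:, None] * X                   # per-example grads, (n, p)
        norms = np.maximum(np.linalg.norm(grads, axis=1), 1e-12)
        grads *= np.minimum(1.0, clip / norms)[:, None]  # clip to bound sensitivity
        noisy_grad = grads.mean(axis=0) + rng.normal(0.0, sigma, p)
        w -= lr * noisy_grad
    return w
```

The gradient complexity of this baseline is n gradient evaluations per iteration (nT in total), which is exactly the quantity the methods summarized above aim to reduce while preserving the optimal utility bounds.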

| Original language | English (US) |
|---|---|
| Article number | 114259 |
| Journal | Theoretical Computer Science |
| Volume | 982 |
| DOIs | |
| State | Published - Jan 8 2024 |

## Keywords

- Convex optimization
- Differential privacy
- Empirical risk minimization
- Non-convex optimization
- Supervised learning

## ASJC Scopus subject areas

- Theoretical Computer Science
- General Computer Science