Optimization problems are ubiquitous across the quantitative sciences, from computer science and engineering to operations research and economics. Developing algorithms for solving optimization problems has long been a focus of mathematical research. In the last decade, the field has attracted even more attention due to its applications in the rapidly developing area of machine learning.
In this thesis, we discuss several fundamental and well-studied classes of optimization problems: decentralized distributed optimization (Chapters 2 to 4), distributed optimization under similarity (Chapter 5), affinely constrained optimization (Chapter 6), minimax optimization (Chapter 7), and high-order optimization (Chapter 8). For each problem class, we develop the first provably optimal algorithm, i.e., an algorithm whose complexity cannot be improved for the given problem class. The proposed algorithms show state-of-the-art performance in practical applications, which makes them attractive candidates for future generalizations and extensions.
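For readers unfamiliar with these problem classes, the sketch below gives standard textbook formulations for three of them (decentralized, affinely constrained, and minimax optimization); the notation is generic and is not taken from the thesis itself.

```latex
% Illustrative formulations of the problem classes named above.
% Notation is generic (f_i, A, b are placeholders), not taken from the thesis.
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% Decentralized distributed optimization: n nodes, node i holds a local
% objective f_i, and nodes exchange information only with their neighbors
% in a communication network:
\[
  \min_{x \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} f_i(x).
\]

% Affinely constrained optimization:
\[
  \min_{x \in \mathbb{R}^d} f(x) \quad \text{subject to} \quad Ax = b.
\]

% Minimax (saddle-point) optimization:
\[
  \min_{x \in \mathbb{R}^{d_x}} \max_{y \in \mathbb{R}^{d_y}} f(x, y).
\]

\end{document}
```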
| Date of Award | Sep 2022 |
| --- | --- |
| Original language | English (US) |
| Awarding Institution | Computer, Electrical and Mathematical Sciences and Engineering |
| Supervisor | Peter Richtarik |

Keywords:
- Optimization
- Convex Optimization
- Distributed Optimization
- High-Order Optimization
- Saddle-Point Problems
- Minimax Optimization
- Tensor Methods
- Optimal Algorithms