Publications

(2023). Improved Step-Size Schedules for Proximal Noisy Gradient Methods. IEEE Transactions on Signal Processing (accepted).

(2022). First-Order Algorithms for Communication Efficient Distributed Learning. PhD Thesis, Stockholm: KTH Royal Institute of Technology, p. 233.

(2022). Zeroth-Order Randomized Subspace Newton Methods. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

(2022). Eco-Fedsplit: Federated Learning with Error-Compensated Compression. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

(2021). A Flexible Framework for Communication-Efficient Machine Learning. Proceedings of the AAAI Conference on Artificial Intelligence.

(2020). Compressed Gradient Methods With Hessian-Aided Error Compensation. IEEE Transactions on Signal Processing, vol. 69, pp. 998–1011.

(2019). First-Order Algorithms for Communication Efficient Distributed Learning. Licentiate Thesis, Stockholm: KTH Royal Institute of Technology, p. 106.

(2019). Convergence Bounds for Compressed Gradient Methods with Memory Based Error Compensation. 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

(2018). The Convergence of Sparsified Gradient Methods. Advances in Neural Information Processing Systems.

(2017). Mini-Batch Gradient Descent: Faster Convergence Under Data Sparsity. 2017 IEEE 56th Annual Conference on Decision and Control (CDC).