Experience

Summer Intern

Yokogawa, Thailand, Ltd.

Jun 2012 – Jul 2012 | Bangkok, Thailand
Distributed control and automation systems for large-scale chemical processes

  • Programmed with CENTUM VP, PLC, SCADA and AutoCAD
  • Installed communication networks and power lines for distributed control systems
  • Inspected electrical and field-instrument panels and their blueprints

Recent Publications

Noisy gradient algorithms have emerged as some of the most popular algorithms for distributed optimization with massive data. Choosing …

Innovations in numerical optimization, statistics and high performance computing have enabled tremendous advances in machine learning …

Federated learning is an emerging framework for collaborative machine learning on devices that do not want to share local data. …

Zeroth-order methods have become important tools for solving problems where we have access only to function evaluations. However, the …

With the increasing scale of machine learning tasks, it has become essential to reduce the communication between computing nodes. Early …

Recent Posts

Our article titled “Improved Step-Size Schedules for Proximal Noisy Methods” has been accepted to the IEEE Transactions on Signal Processing (TSP) in 2023. Abstract: Noisy gradient algorithms have emerged as some of the most popular algorithms for distributed optimization with massive data. Choosing proper step-size schedules is an important task for tuning the algorithms to attain good performance. For the algorithms to attain fast convergence and high accuracy, it is intuitive to use large step-sizes in the initial iterations, when the gradient noise is typically small compared to the algorithm steps, and to reduce the step-sizes as the algorithm progresses.
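
To make the intuition concrete, below is a minimal sketch of a proximal stochastic gradient method with a large-then-decaying step-size schedule. The schedule (constant for the first t0 iterations, then O(1/t) decay), the function names and the toy problem are all illustrative assumptions, not the schedule derived in the paper.

    import numpy as np

    def step_size(t, a0, t0=50):
        # Illustrative schedule (not the paper's): keep a large constant
        # step-size early on, then decay like O(1/t).
        return a0 if t < t0 else a0 * t0 / t

    def proximal_sgd(grad, prox, x0, a0, iters=200):
        # grad(x): noisy gradient of the smooth term;
        # prox(x, a): proximal operator of the nonsmooth term with step a.
        x = x0
        for t in range(1, iters + 1):
            a = step_size(t, a0)
            x = prox(x - a * grad(x), a)
        return x

    # Example: noisy least squares with an l1 regularizer (soft-thresholding prox).
    rng = np.random.default_rng(0)
    A, b, lam = rng.normal(size=(20, 5)), rng.normal(size=20), 0.1
    grad = lambda x: A.T @ (A @ x - b) + 0.01 * rng.normal(size=5)
    prox = lambda x, a: np.sign(x) * np.maximum(np.abs(x) - a * lam, 0.0)
    x_hat = proximal_sgd(grad, prox, np.zeros(5), a0=1.0 / np.linalg.norm(A, 2) ** 2)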

On 9th March 2022, I defended my PhD thesis titled “First-Order Algorithms for Communication Efficient Distributed Learning” with Tong Zhang (HKUST) as the opponent, and Anders Hansson (Linköping University), Martin Jaggi (EPFL) and Clarice Poon (University of Bath) as the evaluation committee.

Our article titled “Improved Step-size Schedules for Noisy Gradient Methods” has been accepted to the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) in 2021. Abstract: Noise is inherent in many optimization methods such as stochastic gradient methods, zeroth-order methods and compressed gradient methods. For such methods to converge toward a global optimum, it is intuitive to use large step-sizes in the initial iterations, when the noise is typically small compared to the algorithm steps, and to reduce the step-sizes as the algorithm progresses.
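
As a small illustration of one such noise source, here is a sketch of a two-point zeroth-order gradient estimator plugged into gradient descent with the same large-then-decaying step-size pattern. The estimator, the schedule and the constants are generic textbook choices, not the specific methods analyzed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def zo_gradient(f, x, mu=1e-4):
        # Two-point (forward-difference) estimator along a random Gaussian
        # direction: uses only function evaluations, never true gradients.
        u = rng.normal(size=x.shape)
        return (f(x + mu * u) - f(x)) / mu * u

    def zo_descent(f, x0, iters=500, a0=0.05, t0=100):
        x = x0
        for t in range(1, iters + 1):
            a = a0 if t < t0 else a0 * t0 / t  # large early steps, then decay
            x = x - a * zo_gradient(f, x)
        return x

    # Example: minimize a simple quadratic from function values alone.
    x_hat = zo_descent(lambda x: np.sum(x ** 2), np.ones(5))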

Our article titled “A Flexible Framework for Communication-Efficient Machine Learning” has been accepted as a poster to the 35th AAAI Conference in 2021. Abstract: With the increasing scale of machine learning tasks, it has become essential to reduce the communication between computing nodes. Early work on gradient compression focused on the bottleneck between CPUs and GPUs, but communication-efficiency is now needed in a variety of different system architectures, from high-performance clusters to energy-constrained IoT devices.
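
As a toy illustration of gradient compression, here is a sketch of a top-k sparsifier combined with error feedback, a standard companion technique in this literature. The compressor, the error-feedback memory and the constants are generic choices for illustration and not the specific framework proposed in the paper.

    import numpy as np

    def top_k(g, k):
        # Transmit only the k largest-magnitude entries; zero out the rest.
        out = np.zeros_like(g)
        idx = np.argpartition(np.abs(g), -k)[-k:]
        out[idx] = g[idx]
        return out

    def compressed_gd(grad, x0, lr=0.1, k=2, iters=300):
        # Error feedback: the residual dropped by the compressor is stored
        # in `memory` and re-added to the next gradient before compressing.
        x, memory = x0, np.zeros_like(x0)
        for _ in range(iters):
            g = grad(x) + memory
            c = top_k(g, k)    # what a worker would actually send
            memory = g - c     # what compression threw away
            x = x - lr * c
        return x

    # Example: a separable quadratic with 10 coordinates, sending only 2 per round.
    target = np.arange(10.0)
    x_hat = compressed_gd(lambda x: x - target, np.zeros(10), k=2)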

On 6th December 2019, I presented my work at the licentiate seminar titled “First-Order Algorithms for Communication Efficient Distributed Learning”, with Professor Martin Jaggi from EPFL, Lausanne, Switzerland as my thesis opponent.

Contact

  • Sarit.Khirirat@mbzuai.ac.ae
  • Mohamed bin Zayed University of Artificial Intelligence, Building 1B, Masdar City, Abu Dhabi, United Arab Emirates