Summer Intern

Yokogawa, Thailand, Ltd.

Jun 2012 – Jul 2012 Bangkok, Thailand
Distributed control and automation systems for large-scale chemical processes

  • Programmed with CENTUM VP, PLC, SCADA, and AutoCAD
  • Installed communication networks and power lines for distributed control systems
  • Inspected electrical and field instrument panels, and their blueprints

Recent Publications

The emergence of big data has caused a dramatic shift in the operating regime for optimization algorithms. The performance bottleneck, …

Technological developments in devices and storage have made large volumes of data collections more accessible than ever. This …

The veritable scale of modern data necessitates information compression in parallel/distributed big-data optimization. Compression …

Asynchronous computation and gradient compression have emerged as two key techniques for achieving scalability in distributed …

Data-rich applications in machine-learning and control have motivated an intense research on large-scale optimization. Novel algorithms …

Recent Posts

Our article titled “Improved Step-size Schedules for Noisy Gradient Methods” has been accepted to the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) in 2021. Abstract: Noise is inherent in many optimization methods such as stochastic gradient methods, zeroth-order methods and compressed gradient methods. For such methods to converge toward a global optimum, it is intuitive to use large step-sizes in the initial iterations when the noise is typically small compared to the algorithm-steps, and reduce the step-sizes as the algorithm progresses.

Our article titled “A Flexible Framework for Communication-Efficient Machine Learning” has been accepted (in the poster session) to the 35th AAAI conference in 2021. Abstract: With the increasing scale of machine learning tasks, it has become essential to reduce the communication between computing nodes. Early work on gradient compression focused on the bottleneck between CPUs and GPUs, but communication-efficiency is now needed in a variety of different system architectures, from high-performance clusters to energy-constrained IoT devices.

On 6th December 2019, I presented my work at the licentiate seminar titled “First-Order Algorithms for Communication-Efficient Distributed Learning”. I defended with Professor Martin Jaggi from EPFL, Lausanne, Switzerland as my thesis opponent.

Our article titled “Convergence Bounds for Compressed Gradient Methods with Memory Based Error Compensation” received one of the best student paper awards at the IEEE ICASSP 2019. The award was sponsored by Hitachi.


  • sarit@kth.se
  • Division of Decision and Control Systems, KTH School of Electrical Engineering and Computer Science, Malvinas väg 10 (previously Osquldas väg 10), floor 6, SE-100 44 STOCKHOLM, SWEDEN