Abstract:
In this paper, we analyse the effects of noise on
gradient methods for solving a convex
unconstrained optimization problem. Assuming
that the objective function has Lipschitz
continuous gradients, we analyse the
convergence properties of the gradient method
when the noise is deterministic and bounded.
Our theoretical results show that the gradient
algorithm converges to a neighbourhood of the
optimal value, where the size of this
neighbourhood (the tolerance) depends on the
noise bound, the step size, and the Lipschitz
constant of the gradient of the objective
function. Moreover, we consider an application
to distributed optimization, where the objective
function is the sum of two strongly convex
functions. The corresponding convergence
properties are analysed using dual decomposition
together with gradient methods, where the noise
arises from quantization errors. Finally, the
theoretical results are verified by numerical
experiments.
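
As an illustration of the setting summarized above, a minimal sketch of the standard noisy-gradient model (the exact iteration and constants used in the paper may differ) is

% perturbed gradient iteration for an L-smooth objective f,
% with step size \alpha > 0 and deterministic, bounded noise e_k
x_{k+1} = x_k - \alpha \bigl( \nabla f(x_k) + e_k \bigr), \qquad \| e_k \| \le \varepsilon,

where, under these assumptions, the iterates can only be guaranteed to reach a neighbourhood of the optimum whose radius scales with the noise level \varepsilon, the step size \alpha, and the gradient Lipschitz constant L.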