
This is the home page of the Open Journal of Mathematical Optimization, an electronic journal of computer science and mathematics owned by its Editorial Board.

The Open Journal of Mathematical Optimization (OJMO) publishes original and high-quality articles dealing with every aspect of mathematical optimization, ranging from numerical and computational aspects to the theoretical questions related to mathematical optimization problems. The topics covered by the journal are classified into four areas:

  1. Continuous Optimization
  2. Discrete Optimization
  3. Optimization under Uncertainty
  4. Computational aspects and applications

The journal publishes high-quality articles in open access free of charge, meaning that authors pay no publication fees and readers pay nothing to access the published papers, in keeping with the principles of Diamond Open Access. The journal requires the numerical results published in its papers to be reproducible by others, ideally by publishing code and data sets along with the manuscripts.

As detailed under the Policy tab, the journal also publishes:

  • Short papers, which benefit from a fast review process.
  • Significant extensions of conference proceedings.


Indexing

SCImago Journal & Country Rank

e-ISSN: 2777-5860

New articles

Short Paper - Quadratic minimization: from conjugate gradient to an adaptive Polyak’s momentum method with Polyak step-sizes

In this work, we propose an adaptive variation on the classical Heavy-ball method for convex quadratic minimization. The adaptivity crucially relies on so-called “Polyak step-sizes”, which consist of using the knowledge of the optimal value of the optimization problem at hand instead of problem parameters such as a few eigenvalues of its Hessian. This method also happens to be equivalent to a variation of the classical conjugate gradient method, and thereby inherits many of its attractive features, including finite-time convergence, instance optimality, and worst-case convergence rates.

The classical gradient method with Polyak step-sizes is known to behave very well in situations in which it can be used, and the question of whether momentum can be incorporated into this method and improve it appeared to be open. We provide a definitive answer to this question for minimizing convex quadratic functions, an arguably necessary first step for developing such methods in more general setups.

Available online: PDF
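
The paper's adaptive Heavy-ball update is not reproduced here, but the classical gradient method with Polyak step-sizes that the abstract builds on is easy to sketch. The snippet below is a minimal illustration in JAX, assuming the optimal value of a synthetic convex quadratic is known; the problem data, iteration count, and variable names are illustrative choices, not taken from the paper.

```python
# Minimal sketch (not the paper's method): the classical gradient method with
# Polyak step-sizes on a synthetic convex quadratic f(x) = 0.5 x^T A x - b^T x.
# The step-size uses the known optimal value f_star instead of eigenvalue
# information about the Hessian A.
import jax
import jax.numpy as jnp

n = 20
M = jax.random.normal(jax.random.PRNGKey(0), (n, n))
A = M @ M.T + jnp.eye(n)              # symmetric positive definite Hessian
b = jax.random.normal(jax.random.PRNGKey(1), (n,))

f = lambda x: 0.5 * x @ A @ x - b @ x
x_star = jnp.linalg.solve(A, b)       # minimizer of the quadratic
f_star = f(x_star)                     # optimal value, assumed to be known

grad_f = jax.grad(f)
x = jnp.zeros(n)
for _ in range(200):
    g = grad_f(x)
    gamma = (f(x) - f_star) / (g @ g)  # Polyak step-size
    x = x - gamma * g

print("final optimality gap:", f(x) - f_star)
```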

Iterative Linear Quadratic Optimization for Nonlinear Control: Differentiable Programming Algorithmic Templates

Iterative optimization algorithms depend on access to information about the objective function. In a differentiable programming framework, this information, such as gradients, can be automatically derived from the computational graph. We explore how nonlinear control algorithms, often employing linear and/or quadratic approximations, can be effectively cast within this framework. Our approach illuminates shared components and differences between gradient descent, Gauss–Newton, Newton, and differential dynamic programming methods in the context of discrete time nonlinear control. Furthermore, we present line-search strategies and regularized variants of these algorithms, along with a comprehensive analysis of their computational complexities. We study the performance of the aforementioned algorithms on various nonlinear control benchmarks, including autonomous car racing simulations using a simplified car model. All implementations are publicly available in a package coded in a differentiable programming language.

Available online: PDF
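
The paper's algorithmic templates and package are not reproduced here. As a rough illustration of the differentiable-programming idea the abstract describes, the hypothetical sketch below forms a Gauss–Newton step for a toy nonlinear residual by obtaining the Jacobian through automatic differentiation; JAX is used only as an example framework, and the residual and all names are made up for illustration.

```python
# Sketch of the differentiable-programming idea, not the paper's package:
# for a nonlinear least-squares objective 0.5 * ||r(x)||^2, the Jacobian
# needed by a Gauss-Newton step is obtained by automatic differentiation
# rather than derived by hand.
import jax
import jax.numpy as jnp

def residual(x):
    # Toy nonlinear residual (hypothetical, for illustration only).
    return jnp.array([x[0] ** 2 + x[1] - 1.0,
                      x[0] - jnp.sin(x[1])])

def gauss_newton_step(x):
    r = residual(x)
    J = jax.jacobian(residual)(x)              # Jacobian via autodiff
    # Solve the linearized subproblem (J^T J) dx = -J^T r for the step.
    dx = jnp.linalg.solve(J.T @ J, -J.T @ r)
    return x + dx

x = jnp.array([0.5, 0.5])
for _ in range(10):
    x = gauss_newton_step(x)

print("residual norm:", jnp.linalg.norm(residual(x)))
```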