^{1}, Fabrice Poirion^{1} and Jean-Antoine Désidéri^{2}

^{1}Onera (DMAS), 29 Avenue de la Division Leclerc, 92320 Châtillon, France.

^{2}Inria, 2004 Route des Lucioles, 06902 Valbonne, France.

We consider a new method for solving multiobjective optimization problems whose objectives are written as expectations of random functions. To ensure that a design is a Pareto equilibrium for such problems without estimating the expectations, we propose an extension of the classical stochastic gradient algorithm to the multiobjective case. This extension is based on the existence of a common descent vector built from either the objective gradients or their subdifferentials. Under the classical hypotheses of the stochastic gradient algorithm, the mean-square and almost-sure convergence of this new Stochastic Multiple Objective Descent Algorithm (SMODA) can be proven. We then propose to replace the classical formulation of reliability-based design optimization (RBDO) problems with a multiobjective problem in which the probability constraint is transformed into an objective and written as the expectation of an indicator function. The indicator function is regularized so that this multiobjective problem can be solved with SMODA. The Pareto front resulting from the multiobjective optimization describes the best possible reliability/performance trade-off. A classical yet simple reliability problem is then analyzed and discussed using this new methodology.
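To make the idea of a common descent vector concrete, the following is a minimal sketch, not the authors' implementation: for two objectives, the minimal-norm element of the convex hull of the two sampled gradients is a common descent direction (it has a closed form in this case), and stepping along it with a decreasing step size mimics the stochastic multiobjective descent described above. The quadratic test objectives J1(x) = E[(x - 1 + ξ)²] and J2(x) = E[(x + 1 + ξ)²] are a hypothetical example whose Pareto set is the segment [-1, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

def stoch_grads(x, xi):
    """Sampled gradients of the two random objectives at design x, sample xi."""
    g1 = np.atleast_1d(2.0 * (x - 1.0 + xi))  # gradient sample of J1
    g2 = np.atleast_1d(2.0 * (x + 1.0 + xi))  # gradient sample of J2
    return g1, g2

def common_descent(g1, g2):
    """Minimal-norm element of the convex hull of {g1, g2}.

    Minimizing ||t*g1 + (1-t)*g2||^2 over t in [0, 1] gives the
    closed-form optimum t = g2.(g2 - g1) / ||g2 - g1||^2, clipped to [0, 1].
    """
    diff = g2 - g1
    denom = float(diff @ diff)
    if denom == 0.0:
        return g1
    t = float(np.clip((g2 @ diff) / denom, 0.0, 1.0))
    return t * g1 + (1.0 - t) * g2

x = np.array([3.0])
for k in range(1, 2001):
    xi = rng.normal(scale=0.1)          # one random sample per iteration
    g1, g2 = stoch_grads(x, xi)
    x = x - (1.0 / k) * common_descent(g1, g2)  # decreasing step, as in SGD

print(float(x[0]))  # the iterate settles inside (or near) the Pareto set [-1, 1]
```

When both sampled gradients point the same way, the update follows the weaker of the two; once the iterate reaches the Pareto set, the convex hull of the gradients contains (nearly) zero and the common descent vector vanishes, which is the Pareto-stationarity condition exploited by the method.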