\documentstyle[12pt]{article}
\textheight 210 mm
\textwidth 150 mm
\begin{document}
\large
\begin{center}
{\bf Iterative methods for unstable weakly nonlinear problems}
\end{center}
\begin{center}
{\bf V.~V.~Vasin}
\end{center}
The purpose of the talk is to give a brief survey of some classes of iterative methods of gradient and Gauss-Newton type for solving problems that can be written in the form of a nonlinear operator equation
$$
A(x)=y,\quad y\in R(A),\eqno(1)
$$
where the operator $A$ acts between Hilbert spaces $X$ and $Y$. We assume that, in the general case, the inverse operator $A^{-1}$ is many-valued and discontinuous; thus, we deal with a typical ill-posed (unstable) problem.

The class of iterative methods of the form
$$
x^{k+1}=x^k-\beta_k A'(x^k)^*(A(x^k)-y),\quad x^0\in X, \eqno(2)
$$
is considered. If $\beta_k$ is constant, or does not depend on the iterates $x^k$, we obtain the so-called Landweber method; if $\beta_k=\|S(x^k)\|^2/\|A'(x^k)S(x^k)\|^2$, where $S(x)=A'(x)^*(A(x)-y)$, we obtain a nonlinear variant of the steepest descent method; and if $\beta_k=\|A(x^k)-y\|^2/\|S(x^k)\|^2$, the scheme (2) generates a nonlinear variant of the minimum error method. When $\beta_k\equiv\beta_k(A)=[A'(x^k)^* A'(x^k)+\alpha_k I]^{-1}$, we deal with the regularized Gauss-Newton method.

Under some assumptions on the operator $A$, convergence theorems are formulated, error estimates are considered, and, for noisy data, a stopping rule in the form of the discrepancy principle is justified. Applications of the iterative processes (2) and their modifications to the solution of some nonlinear integral equations and inverse problems are also discussed.
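As a simple illustration of scheme (2), note that in the linear case, when $A$ is a bounded linear operator, one has $A'(x)\equiv A$, and (2) with a constant step $\beta_k\equiv\beta\in(0,2/\|A\|^2)$ reduces to the classical Landweber iteration
$$
x^{k+1}=x^k-\beta A^*(Ax^k-y),
$$
which converges to a solution of (1) when $y\in R(A)$.

Supported by the Russian Foundation for Basic Research, Grant 97-01-00520.
\end{document}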