The document presents single-call stochastic extra-gradient methods for solving variational inequalities and optimization problems. It surveys the relevant algorithms and convergence metrics, establishes results on global and local convergence, and emphasizes the practical benefit of requiring only one oracle call per iteration (versus two for the classical extra-gradient method), which is significant in deep learning applications. The authors conclude with insights on the localization of stochastic guarantees and on last-iterate convergence in non-monotone settings.
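As a rough illustration of the single-call idea, the sketch below implements one standard variant (the "past" extra-gradient, in the style of Popov's method) on a toy bilinear problem; the operator, step size, and noise level are illustrative assumptions, not taken from the document. The key point is that the extrapolation step reuses the previous oracle answer, so each iteration queries the stochastic oracle only once.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monotone operator of the bilinear saddle problem min_u max_v u*v:
# F([u, v]) = [v, -u]; the unique solution is the origin. (Illustrative choice.)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

def F(x, noise=0.0):
    """Stochastic oracle: operator value plus optional Gaussian noise."""
    return A @ x + noise * rng.standard_normal(x.shape)

x = np.array([1.0, 1.0])   # current iterate
g_prev = F(x)              # last oracle answer, reused instead of re-queried
step = 0.1

for _ in range(2000):
    x_half = x - step * g_prev        # extrapolate with the *past* gradient
    g_prev = F(x_half, noise=0.01)    # the single oracle call this iteration
    x = x - step * g_prev             # update step

print(np.linalg.norm(x))  # distance to the solution at the origin
```

A classical extra-gradient iteration would instead call `F` at both `x` and `x_half` every iteration; the single-call variant halves the oracle cost at the price of using a slightly stale gradient in the extrapolation.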