Jens Schulz’s Post

What is the next big leap in #mathematicaloptimization: quantum approaches, AI/ML/RL, GPUs...? So many trends to follow and evaluate. So, which to bet on? Since early this year, it has become clear that for some large (especially gigantically large) linear optimization problems, GPU acceleration can help tremendously -- built on a proper mathematical foundation (first-order methods) combined with the strengths of GPUs over CPUs in memory bandwidth and parallelism. With the October 2025 release, #Xpress 9.8 now incorporates GPU acceleration for PDHG that makes your large-scale linear programming solutions fly! 🔥

What's Got Us Excited:
• 30x speedups in single precision and 25x in double precision!
• A full GPU implementation of the algorithm -- not just the matrix operations
• Great for problems with over 100,000 nonzeros; even better for problems with over 10,000,000 nonzeros

Thanks to our partners who submitted instances for testing and evaluation. 💡 Read more here: https://lnkd.in/eUq69x8G

Don't get addicted purely to GPUs just yet! There are still many instances on which barrier or dual simplex outperforms current GPU implementations, and those methods are less sensitive to numerical tolerances. Let's research more, and start enjoying!
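For the curious: PDHG (primal-dual hybrid gradient) alternates a projected gradient step on the primal variables with a gradient ascent step on the duals, and each iteration needs little more than matrix-vector products -- exactly the workload GPUs excel at. Below is a minimal, illustrative sketch in JAX for an LP in the standard form min cᵀx s.t. Ax = b, x ≥ 0. This is not the Xpress implementation; the function name and step-size choices are assumptions for illustration, and production solvers add preconditioning, restarts, and adaptive step sizes on top of this skeleton.

```python
import jax
import jax.numpy as jnp

def pdhg_lp(c, A, b, num_iters=10_000):
    """Illustrative PDHG sketch for  min c^T x  s.t.  Ax = b, x >= 0.

    Not the Xpress algorithm -- just the textbook Chambolle-Pock
    iteration applied to the LP saddle point
        min_{x>=0} max_y  c^T x + y^T (b - Ax).
    Real large-scale codes use sparse matrices on the GPU.
    """
    m, n = A.shape
    # Conservative step sizes: convergence requires tau * sigma * ||A||_2^2 < 1.
    op_norm = jnp.linalg.norm(A, 2)
    tau = sigma = 0.9 / op_norm

    def step(carry, _):
        x, y = carry
        # Primal: gradient step on c - A^T y, then project onto x >= 0.
        x_new = jnp.maximum(x - tau * (c - A.T @ y), 0.0)
        # Dual: ascent step on the residual at the extrapolated point 2*x_new - x.
        y_new = y + sigma * (b - A @ (2.0 * x_new - x))
        return (x_new, y_new), None

    init = (jnp.zeros(n), jnp.zeros(m))
    (x, y), _ = jax.lax.scan(step, init, None, length=num_iters)
    return x, y

# Tiny example:  min x0 + 2*x1  s.t.  x0 + x1 = 1, x >= 0  (optimum: x = [1, 0]).
A = jnp.array([[1.0, 1.0]])
b = jnp.array([1.0])
c = jnp.array([1.0, 2.0])
x, y = pdhg_lp(c, A, b)
print(x)  # approximately [1., 0.]
```

The point of the sketch: there are no factorizations and no pivoting, only matvecs, projections, and vector updates, which is why the whole loop can live on the GPU and why memory bandwidth and parallelism dominate the performance picture.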
