This study examines how undersampling affects posterior probability estimates in unbalanced classification tasks. It shows that undersampling biases the estimated posteriors away from the true class probabilities, and it presents a simple closed-form correction that recovers calibrated probabilities without sacrificing predictive performance. Experiments on real-world datasets demonstrate that the corrected estimates are better calibrated than the uncorrected ones while preserving ranking quality.
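The abstract does not state the formula itself, but the standard Bayes-rule correction for class-conditional undersampling has this form: if negatives (the majority class) are retained independently with probability beta, and p_s is the posterior estimated on the undersampled data, then the posterior under the original distribution is p = beta * p_s / (beta * p_s - p_s + 1). The sketch below is a minimal illustration under that assumption; the function name and the beta parameterization are ours, not the study's.

```python
import numpy as np

def correct_undersampled_probs(p_s, beta):
    """Map posteriors from a model trained on undersampled data back to
    the original class distribution (standard Bayes-rule correction;
    assumed, not quoted from the study).

    p_s  : array-like of estimates P(y=1 | x) from the model trained
           after undersampling the majority (negative) class.
    beta : probability that a negative example was retained, e.g.
           N_pos / N_neg when undersampling to a balanced set.
    """
    p_s = np.asarray(p_s, dtype=float)
    # Derived via Bayes' rule from p_s = p / (p + beta * (1 - p)):
    return beta * p_s / (beta * p_s - p_s + 1.0)

# Example: a score of 0.5 on data undersampled with beta = 0.1
# maps back to roughly 0.09 under the original class ratio.
print(correct_undersampled_probs([0.5], beta=0.1))  # [0.0909...]
```

Note that the correction is a strictly increasing transformation of p_s, which is consistent with the abstract's claim that calibration improves while ranking quality (and hence metrics such as AUC) is preserved.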