The document describes the development of a calibrated prediction methodology for assessing the safety of image-controlled autonomous systems at the University of Florida. It outlines the challenges and contributions of conformal confidence calibration and presents evaluation results comparing monolithic and composite predictors. The findings show that calibrated predictors outperform uncalibrated ones and that conformal calibration delivers reliable coverage.
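To make the notion of "reliable coverage" concrete, the following is a minimal, hypothetical sketch of split conformal calibration for a classifier. It is not the document's specific methodology; the data, score function, and coverage level (alpha = 0.1) are illustrative assumptions. The key step is choosing a score threshold from held-out calibration data so that prediction sets contain the true label with the target probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: softmax outputs from some classifier plus true labels.
n_cal, n_classes = 500, 10
cal_probs = rng.dirichlet(np.ones(n_classes), size=n_cal)
cal_labels = rng.integers(0, n_classes, size=n_cal)

alpha = 0.1  # target miscoverage rate, i.e. 90% coverage

# Nonconformity score: 1 minus the probability assigned to the true class.
cal_scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]

# Conformal quantile with the finite-sample correction.
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q_hat = np.quantile(cal_scores, q_level, method="higher")

# Prediction set for a new input: every class whose score is below the threshold.
# Under exchangeability, such sets contain the true label with probability >= 1 - alpha.
test_probs = rng.dirichlet(np.ones(n_classes))
prediction_set = np.where(1.0 - test_probs <= q_hat)[0]
print("Prediction set:", prediction_set)
```

In this sketch, "reliable coverage" means the guarantee that the constructed prediction sets capture the true label at least 90% of the time on average, regardless of how well the underlying classifier is calibrated.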