A decision-theoretic characterization of perfect calibration is that an agent seeking to minimize any proper loss in expectation cannot improve their outcome by post-processing a perfectly calibrated predictor. Hu and Wu (FOCS'24) use this to define an approximate calibration measure known as the calibration decision loss (CDL), which measures the maximal improvement achievable by any post-processing, over any proper loss. Unfortunately, CDL turns out to be intractable to even weakly approximate in the offline setting, given black-box access to the predictions and labels. We propose circumventing this by restricting attention to structured families of post-processing functions K. We define the calibration decision loss relative to K, denoted CDL_K, where we consider all proper losses but restrict post-processings to a structured family K. We develop a comprehensive theory of when CDL_K is information-theoretically and computationally tractable, and use it to prove both upper and lower bounds for natural classes K. In addition to introducing new definitions and algorithmic techniques to the theory of calibration for decision making, our results give rigorous guarantees for some widely used recalibration procedures in machine learning.
- † University of Texas at Austin
- ‡ Harvard University
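
To make the quantity CDL_K concrete, here is a minimal sketch of a plug-in estimate on a labeled sample (p_i, y_i). This is our own illustration, not the paper's construction or algorithms: the supremum over proper losses is approximated by a finite grid of V-shaped (cost-weighted) decision losses, and K is taken to be a toy family of constant-shift post-processings kappa_c(p) = clip(p + c, 0, 1). The function names, the shift family, and the grid sizes are all assumptions made purely for illustration.

```python
import numpy as np

def v_shaped_loss(t, q, y):
    """Cost-weighted decision loss with threshold t: act 'positive' iff q >= t.
    Each such loss is (weakly) proper for binary outcomes, and mixtures of them
    span the proper losses; here a finite grid of thresholds stands in for the
    sup over proper losses (an illustrative simplification)."""
    act = q >= t
    return np.where(act, t * (1.0 - y), (1.0 - t) * y)

def cdl_K_estimate(p, y,
                   shifts=np.linspace(-0.2, 0.2, 41),
                   thresholds=np.linspace(0.05, 0.95, 19)):
    """Plug-in estimate of CDL_K on a sample, where K is the toy family of
    constant shifts kappa_c(p) = clip(p + c, 0, 1): for each loss on the grid,
    compare the raw predictor to the best post-processing in K, and report the
    largest improvement found."""
    p = np.asarray(p, dtype=float)
    y = np.asarray(y, dtype=float)
    best_gap = 0.0
    for t in thresholds:
        base = v_shaped_loss(t, p, y).mean()              # loss of the raw predictions
        post = min(v_shaped_loss(t, np.clip(p + c, 0.0, 1.0), y).mean()
                   for c in shifts)                       # best post-processing in K
        best_gap = max(best_gap, base - post)             # improvement for this loss
    return best_gap

# Usage: a systematically over-confident predictor on synthetic binary labels.
rng = np.random.default_rng(0)
true_p = rng.uniform(0.2, 0.8, size=5000)
y = (rng.uniform(size=5000) < true_p).astype(float)
p = np.clip(true_p + 0.15, 0.0, 1.0)                      # miscalibrated predictions
print(f"estimated CDL_K: {cdl_K_estimate(p, y):.4f}")
```

In this toy example the downward shift c = -0.15 undoes the overconfidence, so the estimate is strictly positive, whereas feeding in perfectly calibrated predictions (p = true_p) drives it to roughly zero. Richer choices of K (e.g., monotone maps, as in isotonic-regression-style recalibration) can be substituted for the shift family in the same template.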






