Multiaccuracy and multicalibration are multigroup fairness notions for prediction which have found numerous applications in learning and computational complexity. They can be achieved from a single learning primitive: weak agnostic learning. Here we investigate the power of multiaccuracy as a learning primitive, both with and without the additional assumption of calibration. We find that multiaccuracy on its own is rather weak, but that the addition of global calibration (this notion is called calibrated multiaccuracy) boosts its power substantially, enough to recover implications that were previously known only assuming the stronger notion of multicalibration.
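For orientation, the notions at play can be stated as follows. This is a sketch in the notation standard in this literature, not taken from the abstract itself: a predictor $p\colon \mathcal{X} \to [0,1]$, a class $\mathcal{C}$ of tests $c\colon \mathcal{X} \to [-1,1]$, labels $y \in \{0,1\}$, and a tolerance $\alpha$ (conventions vary, e.g. conditioning versus indicator weighting in the multicalibration condition).

```latex
% Multiaccuracy: the predictor is unbiased against every test in C.
\forall c \in \mathcal{C}:\quad
  \bigl|\, \mathbb{E}\!\left[(y - p(x))\, c(x)\right] \bigr| \le \alpha .

% Multicalibration: the same holds conditioned on each prediction value v.
\forall c \in \mathcal{C},\ \forall v \in \operatorname{range}(p):\quad
  \bigl|\, \mathbb{E}\!\left[(y - p(x))\, c(x) \mid p(x) = v\right] \bigr| \le \alpha .

% Calibrated multiaccuracy: multiaccuracy plus global calibration,
% i.e. the multicalibration condition for the constant test c \equiv 1 only.
\forall v \in \operatorname{range}(p):\quad
  \bigl|\, \mathbb{E}\!\left[\,y - p(x) \mid p(x) = v\,\right] \bigr| \le \alpha .
```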
We give evidence that multiaccuracy may not be as powerful as standard weak agnostic learning, by showing that there is no way to post-process a multiaccurate predictor to obtain a weak learner, even assuming the best hypothesis has correlation 1/2. Rather, we show that it yields a restricted form of weak agnostic learning, which requires some concept in the class to have correlation greater than 1/2 with the labels. However, by additionally requiring the predictor to be calibrated, we recover not just weak, but strong agnostic learning.
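To make the correlation thresholds concrete, weak agnostic learning is commonly formulated as follows (a sketch with labels in $\{\pm 1\}$; the symbols $\rho, \rho', h$ are illustrative, not fixed by the abstract):

```latex
% Promise: some concept in the class correlates with the labels,
\exists\, c \in \mathcal{C}:\quad \mathbb{E}\!\left[c(x)\, y\right] \ge \rho .

% Guarantee: the learner outputs a hypothesis h (not necessarily in C) with
\mathbb{E}\!\left[h(x)\, y\right] \ge \rho' \quad \text{for some } \rho' > 0
  \text{ depending on } \rho .
```

In these terms, the restriction described above is that post-processing a multiaccurate predictor only delivers the guarantee under the stronger promise $\rho > 1/2$, whereas standard weak agnostic learning requires it for every $\rho > 0$.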
A similar picture emerges when we consider the derivation of hardcore measures from predictors satisfying multigroup fairness notions. Whereas multiaccuracy only yields hardcore measures of density half the optimum, we show that (a weighted version of) calibrated multiaccuracy achieves optimal density. Our results yield new insights into the complementary roles played by multiaccuracy and calibration in each setting. They shed light on why multiaccuracy and global calibration, while not particularly powerful on their own, together yield considerably stronger notions.
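For context, the density benchmark here is the one from Impagliazzo's hardcore lemma (the parameters below are the standard ones from that setting, sketched for orientation rather than quoted from the abstract):

```latex
% A measure M : \{0,1\}^n \to [0,1] has density
\mu(M) \;=\; \mathbb{E}_x\!\left[M(x)\right].

% Hardcore lemma (informal): if every small circuit errs on f with
% probability at least \delta, then there is a measure M with
\mu(M) \;\ge\; 2\delta
% on which f is nearly unpredictable; density 2\delta is optimal.
```

Against this benchmark, "half the optimum" corresponds to density $\delta$ rather than $2\delta$.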
- † University of Oxford
- ‡ Stanford University