Pretraining robust vision or multimodal foundation models (e.g., CLIP) relies on large-scale datasets that may be noisy, potentially misaligned, and have long-tail distributions. Prior works have shown promising results in augmenting datasets by generating synthetic samples. However, they only support domain-specific ad hoc use cases (e.g., either image or text only, but not both), and are limited in data diversity due to a lack of fine-grained control over the synthesis process. In this paper, we design a controllable image-text synthesis pipeline, CtrlSynth, for data-efficient and robust multimodal learning. The key idea is to decompose the visual semantics of an image into basic elements, apply user-specified control policies (e.g., remove, add, or replace operations), and recompose them to synthesize images or texts. This decompose-and-recompose design allows users to control data synthesis in a fine-grained manner by defining customized control policies that manipulate the basic elements. CtrlSynth leverages the capabilities of pretrained foundation models such as large language models and diffusion models to reason about and recompose the basic elements, so that synthetic samples are natural and composed in diverse ways. CtrlSynth is a closed-loop, training-free, and modular framework, making it easy to plug in different pretrained models. Through extensive experiments on 31 datasets spanning different vision and vision-language tasks, we show that CtrlSynth significantly improves the zero-shot classification, image-text retrieval, and compositional reasoning performance of CLIP models.
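To make the decompose/recompose idea concrete, below is a minimal Python sketch of the control flow described above; it is an illustration under assumptions, not the authors' implementation, and the model-facing functions (`describe_image`, `llm_rewrite`, `diffusion_generate`) are hypothetical placeholders standing in for the pretrained tagging, language, and diffusion models.

```python
# Illustrative sketch only: decompose an image into basic elements, apply
# user-defined control policies, then recompose into a new image-text pair.
# All external model functions are hypothetical placeholders, not CtrlSynth's API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class VisualElements:
    """Basic elements decomposed from an image (objects, attributes, relations)."""
    objects: List[str]
    attributes: List[str]
    relations: List[str]


# A control policy edits the decomposed elements (e.g., remove, add, or replace).
ControlPolicy = Callable[[VisualElements], VisualElements]


def replace_object(old: str, new: str) -> ControlPolicy:
    """Example fine-grained policy: swap one object for another before recomposition."""
    def policy(elems: VisualElements) -> VisualElements:
        return VisualElements(
            objects=[new if o == old else o for o in elems.objects],
            attributes=elems.attributes,
            relations=elems.relations,
        )
    return policy


def synthesize(image, policies: List[ControlPolicy],
               describe_image, llm_rewrite, diffusion_generate):
    """Decompose -> apply user policies -> recompose into a synthetic (image, text) pair."""
    elems = describe_image(image)            # decompose visual semantics into basic elements
    for policy in policies:
        elems = policy(elems)                # fine-grained, user-controlled edits
    caption = llm_rewrite(elems)             # a language model recomposes elements into natural text
    new_image = diffusion_generate(caption)  # a diffusion model renders a matching image
    return new_image, caption
```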
- † Work done while at Apple
- ‡ Meta