Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved With Text

Abstract

In-context vision and language models like Flamingo support arbitrarily interleaved sequences of images and text as input. This format not only enables few-shot learning via interleaving independent supervised (image, text) examples, but also more complex prompts involving interaction between images, e.g., "What do image A and image B have in common?" To support this interface, pretraining occurs over web corpora that similarly contain interleaved images+text. To date, however, large-scale data of this form have not been publicly available. We release Multimodal C4 (mmc4), an augmentation of the popular text-only c4 corpus with images interleaved. We use a linear assignment algorithm to place images into longer bodies of text using CLIP features, a process that we show outperforms alternatives. mmc4 spans everyday topics like cooking, travel, technology, etc. A manual inspection of a random sample of documents shows that a vast majority (90%) of images are topically relevant, and that linear assignment frequently selects individual sentences specifically well-aligned with each image (78%). After filtering NSFW images, ads, etc., the corpus contains 103M documents with 585M images interleaved with 43B English tokens.
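The image-placement step described above can be framed as a bipartite matching problem: given similarity scores between each candidate image and each sentence of a document, a linear assignment maximizes total image-sentence similarity. Below is a minimal sketch of that idea; the similarity matrix here is random stand-in data (the real pipeline scores pairs with CLIP embeddings), and all names and sizes are illustrative rather than the mmc4 implementation.

```python
# Sketch of image-to-sentence placement via linear assignment.
# Assumption: `similarity[i, j]` stands in for the CLIP cosine similarity
# between image i and sentence j; here it is random placeholder data.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
num_images, num_sentences = 3, 8  # e.g., a document with 8 sentences, 3 images
similarity = rng.random((num_images, num_sentences))  # stand-in for CLIP scores

# Bipartite matching: assign each image to a distinct sentence so that the
# total similarity across the document is maximized.
img_idx, sent_idx = linear_sum_assignment(similarity, maximize=True)
for i, s in zip(img_idx, sent_idx):
    print(f"image {i} -> sentence {s} (sim={similarity[i, s]:.2f})")
```

Because `linear_sum_assignment` handles rectangular cost matrices, each image receives exactly one sentence whenever a document has at least as many sentences as images, which matches the "place images into longer bodies of text" setting.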

Publication
In the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023) Datasets and Benchmarks Track
Wanrong Zhu
CS Ph.D. Candidate

My research interests include vision-and-language problems and text generation.
