{"id":14804,"date":"2026-05-15T20:01:37","date_gmt":"2026-05-15T20:01:37","guid":{"rendered":"https:\/\/techtrendfeed.com\/?p=14804"},"modified":"2026-05-15T20:01:38","modified_gmt":"2026-05-15T20:01:38","slug":"instructing-imaginative-and-prescient-language-fashions-to-communicate-cinema-machine-studying-weblog-mlcmu","status":"publish","type":"post","link":"https:\/\/techtrendfeed.com\/?p=14804","title":{"rendered":"Instructing Imaginative and prescient-Language Fashions to Communicate Cinema \u2013 Machine Studying Weblog | ML@CMU"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p><em>A 12 months of constructing a video caption pipeline with 100+ skilled creators, and what it taught us about scaling supervision as a substitute of fashions.<\/em><\/p>\n<p>By <strong>Zhiqiu Lin<\/strong> and <strong>Chancharik Mitra<\/strong>. Based mostly on our CVPR 2026 work, <em><a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2604.21718\">Constructing a Exact Video Language with Human-AI Oversight<\/a><\/em> (Spotlight, Prime 3%).<\/p>\n<h2>How shut is in the present day&#8217;s video generator to a Hollywood cinematographer?<\/h2>\n<p>Hollywood administrators attain for sure pictures as a result of they make a scene <em>land<\/em>. They cue a selected feeling within the viewer that flat protection can&#8217;t. Open your favourite video generator (Veo 3.1, Seedance 2, or any of the most recent open-source fashions) and ask it for a <strong>dolly zoom of a person standing in the course of a bustling road<\/strong>, the way in which Hitchcock used the shot to make the world really feel like it&#8217;s collapsing inward. Or a <strong>rack focus pulling from a espresso cup to the lady behind it<\/strong>, the type of focus pull that quietly tells the viewers the place to look. 
Or a <strong>Dutch-angle shot of a nervous person staring into the void<\/strong>, a tilted frame that puts the viewer on edge.<\/p>\n<p>Most generators will hand back something close to a generic dolly-in, or a slow-motion clip with the wrong focal subject. The output is usually visually competent, but it doesn&#8217;t <em>do the thing<\/em>. The model has clearly seen videos that contain these techniques. It just doesn&#8217;t know how to act on the words.<\/p>\n<p>We think this is symptomatic of a broader gap. Filmmakers communicate with a shared, precise vocabulary: shot size, frame position, focus type, lens distortion, camera height, video speed. Today&#8217;s vision-language models (VLMs), and the captioning datasets that feed them, mostly don&#8217;t.<\/p>\n<p>In this post we describe <strong>CHAI<\/strong>, a <em>captioning<\/em> pipeline (in our usage, a <em>caption<\/em> is a long, structured paragraph describing a video&#8217;s content, motion, and camera work \u2014 not a subtitle track) that we built over the past year with 100+ professional video creators. The acronym stands for <em><strong>C<\/strong>ritique-based <strong>H<\/strong>uman-<strong>AI<\/strong> Oversight<\/em>. Existing video caption datasets are typically written either by crowdworkers, who lack the cinematic vocabulary to describe a shot precisely, or by large vision-language models, whose captions read smoothly (fluent \u2014 no grammatical or stylistic errors) but routinely describe objects and motions that aren&#8217;t in the video (hallucinated). The central idea behind CHAI is to combine the two: the captioner model (e.g., a large video-language model such as Gemini-2.5-Pro) writes the draft, a trained human critiques it, and the model revises against that critique.<\/p>\n<p>This post works through four questions:<\/p>\n<p><strong>1. <\/strong>Why do VLMs struggle with cinematic prompts?<\/p>\n<p><strong>2. <\/strong>How should humans and models divide the captioning work?<\/p>\n<p><strong>3. <\/strong>Does the quality of human critique change what the model can learn?<\/p>\n<p><strong>4. <\/strong>Do better captions in the training data give us a better video generator?<\/p>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"1024\" height=\"417\" src=\"https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/teaser_draft-1024x417.png\" alt=\"\" class=\"wp-image-22450\" srcset=\"https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/teaser_draft-1024x417.png 1024w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/teaser_draft-300x122.png 300w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/teaser_draft-1536x625.png 1536w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/teaser_draft-2048x833.png 2048w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/teaser_draft-970x395.png 970w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/teaser_draft-320x130.png 320w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/teaser_draft-80x33.png 80w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/teaser_draft-300x122@2x.png 600w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\"\/><figcaption>Figure 1. 
Three failure modes of current video captioning pipelines (top, red), and the choices we make in response (bottom, blue): a precise specification, a human-AI oversight loop, and post-training on explicit preferences plus critiques rather than output-only comparisons.<\/figcaption><\/figure>\n<h2>Question 1: Why do VLMs struggle with cinematic prompts?<\/h2>\n<p>A natural first hypothesis is that this is a <em>capacity<\/em> problem \u2014 that the current generation of vision-language models is too small, has too little context, or has not been pretrained on enough video to handle cinematic prompts, and that the next generation will solve it. But after auditing eight popular video-text datasets from 2016 to 2025 (ActivityNet Captions, MSR-VTT, DREAM-1K, ShareGPT4Video, PerceptionLM, and others), we think the bottleneck is somewhere else. The visual content is in the videos these models train on, and modern VLMs perceive it well. <strong>What&#8217;s missing is the language: the captions paired with these videos don&#8217;t contain the precise vocabulary needed to describe cinematic technique.<\/strong> In our experiments, training larger models on more of the same data only marginally improved these issues. They appear to be problems of <em>annotation policy<\/em>, not of capacity.<\/p>\n<p>Three patterns showed up again and again:<\/p>\n<p><strong>\u2022 Vague terminology.<\/strong> Captions conflate dolly-in (the camera physically moves forward) with zoom-in (the focal length changes), or describe a fisheye distortion as &#8220;circular building.&#8221;<\/p>\n<p><strong>\u2022 Missing information.<\/strong> Captions describe what&#8217;s in the frame and skip everything else: motion, camera shake, focus changes, shot size. Anything temporal, anything about the camera, gets dropped.<\/p>\n<p><strong>\u2022 Subjective descriptions.<\/strong> &#8220;An atmospheric shot full of tension&#8221; tells a model nothing it can ground in pixels.<\/p>\n<p>A natural next idea: just hire crowdworkers to write more careful captions. We tried that. Crowdworkers still confused dolly-in with zoom-in, called wide shots &#8220;close-ups,&#8221; and described fisheye distortion as &#8220;a round building.&#8221; <strong>Seeing is not the same as knowing how to describe.<\/strong><\/p>\n<figure class=\"wp-block-video\"><video controls=\"\" src=\"https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/crowdsourced_compressed-2.mp4\"\/><figcaption><strong>Figure 2. <\/strong>Crowdworker vs. expert descriptions for the same clips. Crowdworkers see the aerial-view shot, the fisheye lens, and the dolly zoom. They just reach for everyday language (&#8220;bird&#8217;s-eye view,&#8221; &#8220;circular building,&#8221; &#8220;warping effect&#8221;) instead of the technical vocabulary the model would need to act on the description.<\/figcaption><\/figure>\n<p>What worked, in the end, was bringing in people whose <em>job requires<\/em> this vocabulary: cinematographers, directors of photography, motion graphics designers, VFX artists, game designers, camera operators. Over the past year, we built a structured caption specification with 100+ such collaborators. The specification has five aspects:<\/p>\n<p><strong>\u2022 Subject<\/strong> (type, attribute, relations)<\/p>\n<p><strong>\u2022 Scene<\/strong> (composition, dynamics, overlays, viewpoint)<\/p>\n<p><strong>\u2022 Motion<\/strong> (subject actions, interactions, group activity)<\/p>\n<p><strong>\u2022 Spatial<\/strong> (shot size, frame position, depth, spatial movement)<\/p>\n<p><strong>\u2022 Camera<\/strong> (focus type, depth of field, stability, movement, video speed, lens distortion, height, angle)<\/p>\n<p>All five aspects together involve roughly 200 low-level visual primitives, each one with a definition and a decision rule for when it applies. This prevents annotators from freelancing terminology, as all they have to do is tag against the spec.<\/p>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"1024\" height=\"365\" src=\"https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/specification-1024x365.png\" alt=\"\" class=\"wp-image-22457\" srcset=\"https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/specification-1024x365.png 1024w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/specification-300x107.png 300w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/specification-1536x548.png 1536w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/specification-2048x730.png 2048w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/specification-970x346.png 970w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/specification-320x114.png 320w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/specification-80x29.png 80w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/specification-300x107@2x.png 600w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\"\/><figcaption><strong>Figure 3. <\/strong>Typical issues with prior captioning work (left, red) and what we converged on (right, blue). The structured taxonomy was built collaboratively with cinematographers, directors of photography, VFX artists, motion graphics designers, and game designers, and is paired with an annotation policy and training tutorials so the vocabulary stays consistent across annotators.<\/figcaption><\/figure>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"1024\" height=\"507\" src=\"https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/overall-1-1024x507.png\" alt=\"\" class=\"wp-image-22456\" srcset=\"https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/overall-1-1024x507.png 1024w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/overall-1-300x149.png 300w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/overall-1-1536x761.png 1536w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/overall-1-2048x1014.png 2048w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/overall-1-970x480.png 970w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/overall-1-320x158.png 320w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/overall-1-80x40.png 80w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/overall-1-300x149@2x.png 600w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\"\/><figcaption><strong>Figure 4. <\/strong>The full taxonomy. Five aspects, each decomposed into sub-aspects, each grounded in a set of visual or motion primitives.<\/figcaption><\/figure>\n<p><strong><em>Takeaway: <\/em><\/strong><em>VLMs struggle with cinematic prompts because the captions they were trained on don&#8217;t contain the precise vocabulary professionals use. In our experiments, scaling models or data alone gave only marginal gains; specifying the language carefully made a much bigger difference.<\/em><\/p>\n<h2>Question 2: How should humans and models divide the captioning work?<\/h2>\n<p>Once we made the spec, we still had to decide <em>who<\/em> would write the long captions. The two obvious choices, humans or models, each come with well-known limitations.<\/p>\n<p><strong>Humans alone<\/strong> produce captions with typos, grammatical errors, and inconsistent event ordering. They also fatigue: 200 to 400 words of careful prose per video, while looking up the spec, is exhausting and expensive.<\/p>\n<p><strong>Models alone<\/strong> produce captions that read beautifully but that, on a depressing fraction of clips, confidently describe objects and motions that aren&#8217;t there. They also frequently mix up left and right.<\/p>\n<p>What we noticed in pilot studies is that the failure modes are <em>asymmetric<\/em> in a useful way. Today&#8217;s LLMs write better prose than most humans. But humans, especially trained ones, are much better than LLMs at noticing visual or motion errors in a draft, the kind where the caption says &#8220;moving left&#8221; but the subject is moving right. So we built the pipeline around that asymmetry. The model drafts, the human critiques, the model revises. This is conceptually similar to <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2206.05802\">Saunders et al. (2022)<\/a>&#8217;s self-critiquing models for summarization, but applied to long-form video captioning where the human still does the hard part: catching grounded errors against the actual video.<\/p>\n<p>Concretely, the loop:<\/p>\n<p><strong>1. 
Primitives.<\/strong> A trained annotator labels which visual and motion primitives are present in the clip.<\/p>\n<p><strong>2. Pre-caption.<\/strong> The model generates a long caption from these primitives, following the spec.<\/p>\n<p><strong>3. Critique.<\/strong> An annotator reads the pre-caption against the video and writes a critique pointing out what&#8217;s wrong and what should change. The critique should be accurate (the things it flags are wrong), complete (it doesn&#8217;t miss errors), and constructive (it tells the model what to do, not just that something is bad).<\/p>\n<p><strong>4. Post-caption.<\/strong> The model revises its draft using the critique.<\/p>\n<p><strong>5. Refinement.<\/strong> If the post-caption is still off, the human refines the critique rather than rewriting the caption.<\/p>\n<p>We tasked reviewers (top-performing annotators promoted to a quality-control role) with checking every critique and post-caption against the video. This way annotators were scored based on their accuracy, while reviewers earned rewards for catching the errors they found. 
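<\/p>
<p>The five-step loop above can be sketched in a few lines of Python. This is a minimal illustration with the model and human calls stubbed out; the function names (<code>draft_caption<\/code>, <code>write_critique<\/code>, <code>revise_caption<\/code>) are ours for this sketch, not from the released code.<\/p>

```python
# Minimal sketch of the draft -> critique -> revise loop. All three
# calls are stand-ins: in the real pipeline the drafts and revisions
# come from a large VLM and the critiques from a trained annotator.

def draft_caption(primitives):
    # Step 2: the model writes a pre-caption from the labeled primitives.
    return "A dolly-in toward a man on a street, moving left."

def write_critique(caption):
    # Step 3: a human checks the draft against the video.
    # An empty critique means the reviewer found no remaining errors.
    if "moving left" in caption:
        return "The subject moves right, not left; say 'moving right'."
    return ""

def revise_caption(caption, critique):
    # Step 4: the model revises its draft against the critique.
    return caption.replace("moving left", "moving right")

def chai_loop(primitives, max_rounds=3):
    caption = draft_caption(primitives)
    for _ in range(max_rounds):
        critique = write_critique(caption)
        if not critique:  # step 5: stop once the critique comes back clean
            break
        caption = revise_caption(caption, critique)
    return caption

print(chai_loop(["dolly_in", "medium_shot"]))
# -> A dolly-in toward a man on a street, moving right.
```

<p>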
Both <em>precision<\/em> (don&#8217;t flag things that aren&#8217;t wrong) and <em>recall<\/em> (don&#8217;t miss things that are wrong) were incentivized at the data level, before any modeling happened.<\/p>\n<figure class=\"wp-block-video\"><video controls=\"\" src=\"https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/oversight_compressed.mp4\"\/><\/figure>\n<p>Shifting the human&#8217;s job from <em>writing<\/em> to <em>proofreading<\/em> has a side benefit we underestimated: each video takes far less cognitive effort, and the resulting 200 to 400 word captions end up more accurate than what either humans or models produce alone.<\/p>\n<p><strong><em>Takeaway: <\/em><\/strong><em>LLMs and humans have asymmetric strengths in long-form video captioning. Designing the pipeline around that asymmetry, rather than trying to replace one with the other, gives both better captions and a more sustainable annotation process.<\/em><\/p>\n<h2>Question 3: Does the quality of human critique change what the model can learn?<\/h2>\n<p>The pipeline produces a triple for every video: <em>(pre-caption, critique, post-caption)<\/em>. That triple is more than just an annotated caption. It&#8217;s supervision for three different post-training tasks at once:<\/p>\n<p><strong>\u2022 Captioning.<\/strong> Train the model to produce long, faithful captions.<\/p>\n<p><strong>\u2022 Reward modeling.<\/strong> Treat (pre-caption, post-caption) as a (rejected, preferred) pair.<\/p>\n<p><strong>\u2022 Critique generation.<\/strong> Train the model to write the critique itself, given the video and the draft.<\/p>\n<p>We post-trained Qwen3-VL-8B on all three formats together using standard supervised fine-tuning (SFT). 
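<\/p>
<p>Concretely, one triple expands into three training examples, one per task. Here is a minimal sketch in Python; the field names are ours for illustration, not the released data format.<\/p>

```python
# Minimal sketch: one (pre-caption, critique, post-caption) triple
# becomes three supervised fine-tuning examples.

def triple_to_sft_examples(video, pre_caption, critique, post_caption):
    return [
        # Captioning: learn to produce the corrected caption.
        {"task": "caption", "input": video, "target": post_caption},
        # Reward modeling: (pre, post) is a (rejected, preferred) pair.
        {"task": "reward", "input": video,
         "rejected": pre_caption, "preferred": post_caption},
        # Critique generation: given the video and a draft, write the critique.
        {"task": "critique", "input": (video, pre_caption), "target": critique},
    ]

examples = triple_to_sft_examples(
    "clip.mp4",
    pre_caption="The camera zooms in on the subject.",
    critique="This is a dolly-in (the camera physically moves), not a zoom.",
    post_caption="The camera dollies in toward the subject.",
)
print([e["task"] for e in examples])
# -> ['caption', 'reward', 'critique']
```

<p>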
We also tried reinforcement learning (RL) methods like Direct Preference Optimization (DPO), but found that simple SFT on the full triplet data is the strongest. The detailed numbers are in the paper; the headline is that adding explicit preference and critique signals improves every method we tested.<\/p>\n<p>We were curious whether the <em>quality<\/em> of the critique mattered to downstream performance, or whether any &#8220;this is wrong&#8221; signal would do. So we ran an ablation: take a clean CHAI critique, deliberately degrade one property at a time (accuracy, recall, constructiveness), and see how the post-trained captioner performs on each task.<\/p>\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"1024\" height=\"365\" src=\"https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/critique_quality-1024x365.png\" alt=\"\" class=\"wp-image-22458\" srcset=\"https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/critique_quality-1024x365.png 1024w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/critique_quality-300x107.png 300w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/critique_quality-1536x548.png 1536w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/critique_quality-2048x730.png 2048w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/critique_quality-970x346.png 970w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/critique_quality-320x114.png 320w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/critique_quality-80x29.png 80w, https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/critique_quality-300x107@2x.png 600w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\"\/><figcaption><strong>Figure 6. <\/strong>A useful critique should be accurate (the things it flags are actually wrong), complete (it catches the errors that are there), and constructive (it says what should change, not just that something is bad). All three are needed; degrading any one hurts the downstream model.<\/figcaption><\/figure>\n<p>Results for an 8B <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2511.21631\">Qwen3-VL<\/a> post-trained on each variant are presented in Table 1. <strong>Caption<\/strong> and <strong>Critique<\/strong> are BLEU-4 scores (a standard text-generation metric measuring n-gram overlap with reference text on a 0\u2013100 scale; higher means closer to the human reference) against held-out reference captions and critiques. For the <strong>Reward<\/strong> task, we report binary accuracy on whether the captioner scores the post-caption higher than the pre-caption (chance = 50). Higher is better on all three.<\/p>\n<figure class=\"wp-block-table\">\n<table>\n<thead>\n<tr>\n<td><strong>Critique variant<\/strong><\/td>\n<td><strong>Acc.<\/strong><\/td>\n<td><strong>Rec.<\/strong><\/td>\n<td><strong>Constr.<\/strong><\/td>\n<td><strong>Caption<\/strong><\/td>\n<td><strong>Reward<\/strong><\/td>\n<td><strong>Critique<\/strong><\/td>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Blind Gemini-2.5<\/td>\n<td>\u2014<\/td>\n<td>\u2014<\/td>\n<td>\u2014<\/td>\n<td>10.9<\/td>\n<td>44.5<\/td>\n<td>21.1<\/td>\n<\/tr>\n<tr>\n<td>Gemini-2.5<\/td>\n<td>\u2014<\/td>\n<td>\u2014<\/td>\n<td>\u2014<\/td>\n<td>12.7<\/td>\n<td>62.0<\/td>\n<td>26.2<\/td>\n<\/tr>\n<tr>\n<td>Inaccurate critique<\/td>\n<td>\u2717<\/td>\n<td>\u2713<\/td>\n<td>\u2713<\/td>\n<td>12.1<\/td>\n<td>47.1<\/td>\n<td>21.9<\/td>\n<\/tr>\n<tr>\n<td>Incomplete critique<\/td>\n<td>\u2713<\/td>\n<td>\u2717<\/td>\n<td>\u2713<\/td>\n<td>12.5<\/td>\n<td>56.6<\/td>\n<td>28.7<\/td>\n<\/tr>\n<tr>\n<td>Non-constructive critique<\/td>\n<td>\u2713<\/td>\n<td>\u2713<\/td>\n<td>\u2717<\/td>\n<td>13.4<\/td>\n<td>67.2<\/td>\n<td>32.9<\/td>\n<\/tr>\n<tr>\n<td><strong>CHAI (with QC)<\/strong><\/td>\n<td><strong>\u2713<\/strong><\/td>\n<td><strong>\u2713<\/strong><\/td>\n<td><strong>\u2713<\/strong><\/td>\n<td><strong>18.2<\/strong><\/td>\n<td><strong>89.8<\/strong><\/td>\n<td><strong>41.7<\/strong><\/td>\n<\/tr>\n<\/tbody>\n<\/table><figcaption><strong>Table 1. <\/strong>Post-training results when the critique is artificially degraded along one property at a time. Higher is better. As additional reference points, we also tried having off-the-shelf models generate the critiques in place of our human-AI pipeline: (1) <em>Blind Gemini-2.5<\/em> uses Gemini-2.5 to critique from the caption text only, with no video access (a language-prior baseline); (2) <em>Gemini-2.5<\/em> uses the same model with full video input. <em>CHAI (with QC)<\/em> is our full pipeline including the peer-review quality-control step from Question 2 \u2014 i.e., the critiques are accurate, complete, and constructive.<\/figcaption><\/figure>\n<p>Three things stand out:<\/p>\n<p><strong>1. Quality is not optional.<\/strong> Dropping any one of the three properties materially hurts every downstream task. Non-constructive critiques (the cheapest to collect, since you don&#8217;t have to spell out the fix) hurt the least but still leave a large gap.<\/p>\n<p><strong>2. Existing data is mostly non-constructive.<\/strong> We checked the critiques in publicly released datasets like <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2206.05802\">Saunders et al.&#8217;s GDC release<\/a> and <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2502.10391\">MM-RLHF<\/a>. More than half are non-constructive in our sense (&#8220;this is wrong&#8221; with no suggested fix). That helps explain why training on these datasets leaves performance on the table.<\/p>\n<p><strong>3. An 8B model can be competitive with much larger closed models when the data is right.<\/strong> On the same captioning, reward, and critique benchmarks, the post-trained 8B Qwen3-VL matches or exceeds GPT-5 and Gemini-3.1-Pro on the metrics we report. The model size has not changed; the supervision signal has.<\/p>\n<p>A small bonus: the same reward model also helps at inference time. Best-of-N decoding with the trained reward model continues to improve performance with no extra human labels.<\/p>\n<p><strong><em>Takeaway: <\/em><\/strong><em>The form of the critique is not a stylistic detail. A model jointly post-trained on captions, preferences, and critiques performs materially better on all three tasks when the critiques it&#8217;s trained on are accurate, complete, and constructive \u2014 and materially worse when any one of those properties is missing.<\/em><\/p>\n<h2>Question 4: Do better captions in the training data give us a better video generator?<\/h2>\n<p>A skeptical reader might say: this is all very nice, but captioning is upstream of what most people actually want, which is generation. So we tested whether the improved captioner moves the needle on a downstream video generator. 
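<\/p>
<p>Before the generation experiment, a quick sketch of the inference-time bonus mentioned above, Best-of-N decoding with the learned reward model. Both calls here are stubbed stand-ins; in the real pipeline the candidates come from the captioner and the scores from the trained reward model.<\/p>

```python
# Minimal sketch of Best-of-N decoding with a learned reward model.
# sample_captions and reward_score are stand-ins for the captioner
# and the trained reward model, respectively.

def sample_captions(video, n):
    # Draw n candidate captions from the captioner (stubbed here).
    return [f"candidate caption {i} for {video}" for i in range(n)]

def reward_score(video, caption):
    # Score a candidate with the reward model (stubbed as a toy heuristic).
    return len(caption)

def best_of_n(video, n=8):
    candidates = sample_captions(video, n)
    # Keep the candidate the reward model scores highest.
    return max(candidates, key=lambda c: reward_score(video, c))

best = best_of_n("clip.mp4", n=4)
```

<p>No extra human labels are needed at this stage; the reward model trained from the (pre-caption, post-caption) pairs does the ranking.<\/p>
<p>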
We took a large corpus of professional video (films, ads, music videos, gameplay), <em>re-captioned<\/em> it with the post-trained 8B model, and used these new captions to fine-tune <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2503.20314\">Wan2.2<\/a>.<\/p>\n<p>The fine-tuned model can act on detailed prompts (up to roughly 400 words) for techniques that off-the-shelf generators reliably get wrong:<\/p>\n<figure class=\"wp-block-video\"><video controls=\"\" src=\"https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/videogeneration_1_compressed-1.mp4\"\/><\/figure>\n<figure class=\"wp-block-video\"><video controls=\"\" src=\"https:\/\/blog.ml.cmu.edu\/wp-content\/uploads\/2026\/04\/videogeneration_2_compressed-1.mp4\"\/><figcaption>Figure 7. Two long <em>generation prompts<\/em> (the text fed to the video generator at inference time) that originated as <em>captions<\/em> produced by our post-trained captioner on similar held-out clips. Right: zero-shot Wan2.2 follows the prompt loosely, with a dolly zoom becoming a standard dolly-back and an isometric (2.5D) game scene becoming a generic 3D arc. Left: after Wan2.2 is fine-tuned on training videos re-captioned by our model, it follows the same prompt faithfully.<\/figcaption><\/figure>\n<p>We didn&#8217;t change the generator architecture or training objective. The only thing that changed was the language used to describe the videos in the training set. That was enough to teach an existing generator a class of techniques it previously couldn&#8217;t articulate.<\/p>\n<p><strong><em>Takeaway: <\/em><\/strong><em>A more precise caption vocabulary upstream translates into more controllable generation downstream, with the same model architecture and training recipe. The bottleneck for cinematic control was in the supervision, not the model.<\/em><\/p>\n<h2>Discussion<\/h2>\n<p>We started this project assuming we were going to train a captioner model. We ended up spending most of the year on the <em>pipeline around it<\/em>: what to write captions about, who should write them, who should check them, and what the checks should look like. The model contributions feel almost downstream of those decisions.<\/p>\n<p>Three things we wish we had appreciated earlier:<\/p>\n<p><strong>\u2022 Specification before scale.<\/strong> Training larger models on noisier data gave only marginal gains. Once the spec was in place, smaller models started looking very competitive.<\/p>\n<p><strong>\u2022 &#8220;Crowdsource it&#8221; is not a baseline; it&#8217;s a different problem.<\/strong> Annotating cinematic technique correctly requires the same vocabulary the field already uses. Asking untrained workers to invent that vocabulary on the fly is not the cheap version of asking trained workers to apply it.<\/p>\n<p><strong>\u2022 Critiques are training data.<\/strong> The form of the critique we collect today decides how effectively models can be trained tomorrow. Datasets that record only thumbs-up \/ thumbs-down are leaving a lot of post-training signal on the table.<\/p>\n<p>CHAI is one piece of a longer effort on <em>precise video language<\/em>. The closest companion is <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2504.15376\">CameraBench<\/a> (NeurIPS&#8217;25 Spotlight), our earlier benchmark on camera motion, which seeded the camera-side primitives in the spec.<\/p>\n<h2>Resources<\/h2>\n<p>We&#8217;re releasing the specification, training tutorials, annotation platform, quality-control flow, data, code, and models. If you&#8217;re working on video understanding or generation and want to use any of these, please do.<\/p>\n<p>Project page: <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/linzhiqiu.github.io\/papers\/chai\/\">https:\/\/linzhiqiu.github.io\/papers\/chai\/<\/a><\/p>\n<p>Paper: <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2604.21718\">https:\/\/arxiv.org\/abs\/2604.21718<\/a><\/p>\n<p>Code: <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/github.com\/chancharikmitra\/CHAI\">https:\/\/github.com\/chancharikmitra\/CHAI<\/a><\/p>\n<h2>References<\/h2>\n<p><strong>Krishna et al., 2017.<\/strong> Dense-Captioning Events in Videos (ActivityNet Captions). ICCV. <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1705.00754\">arXiv:1705.00754<\/a>.<\/p>\n<p><strong>Xu et al., 2016.<\/strong> MSR-VTT: A Large Video Description Dataset for Bridging Video and Language. CVPR.<\/p>\n<p><strong>Wang et al., 2024.<\/strong> Tarsier: Recipes for Training and Evaluating Large Video Description Models (DREAM-1K). <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2407.00634\">arXiv:2407.00634<\/a>.<\/p>\n<p><strong>Chen et al., 2024.<\/strong> ShareGPT4Video: Improving Video Understanding and Generation with Better Captions. NeurIPS. <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2406.04325\">arXiv:2406.04325<\/a>.<\/p>\n<p><strong>Cho et al., 2025.<\/strong> PerceptionLM: Open-Access Data and Models for Detailed Visual Understanding. <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2504.13180\">arXiv:2504.13180<\/a>.<\/p>\n<p><strong>Saunders et al., 2022.<\/strong> Self-critiquing Models for Assisting Human Evaluators. <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2206.05802\">arXiv:2206.05802<\/a>.<\/p>\n<p><strong>Zhang et al., 2025.<\/strong> MM-RLHF: The Next Step Forward in Multimodal LLM Alignment. <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2502.10391\">arXiv:2502.10391<\/a>.<\/p>\n<p><strong>Lin et al., 2025.<\/strong> Towards Understanding Camera Motions in Any Video (CameraBench). NeurIPS Spotlight. <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2504.15376\">arXiv:2504.15376<\/a>.<\/p>\n<p><strong>Wan Team, 2025.<\/strong> Wan: Open and Advanced Large-Scale Video Generative Models (Wan2.2). <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2503.20314\">arXiv:2503.20314<\/a>.<\/p>\n<p><strong>Bai et al., 2025.<\/strong> Qwen3-VL Technical Report. <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2511.21631\">arXiv:2511.21631<\/a>.<\/p>\n<p><em>All opinions expressed in this post are those of the authors and do not represent the views of CMU.<\/em><\/p>\n<\/p><\/div>\n\n","protected":false},"excerpt":{"rendered":"<p>A year of building a video caption pipeline with 100+ professional creators, and what it taught us about scaling supervision instead of models. By Zhiqiu Lin and Chancharik Mitra. Based on our CVPR 2026 work, Building a Precise Video Language with Human-AI Oversight (Highlight, Top 3%). 
How close is in the [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":14806,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[55],"tags":[110,9081,136,113,442,266,945,631,8976],"class_list":["post-14804","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-machine-learning","tag-blog","tag-cinema","tag-learning","tag-machine","tag-mlcmu","tag-models","tag-speak","tag-teaching","tag-visionlanguage"],"_links":{"self":[{"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=\/wp\/v2\/posts\/14804","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14804"}],"version-history":[{"count":1,"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=\/wp\/v2\/posts\/14804\/revisions"}],"predecessor-version":[{"id":14805,"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=\/wp\/v2\/posts\/14804\/revisions\/14805"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=\/wp\/v2\/media\/14806"}],"wp:attachment":[{"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14804"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14804"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/techtrendfeed.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14804"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}