ThinkSound: Chain-of-Thought Reasoning in Multimodal Large Language Models for Audio Generation and Editing

Click on any video to play or pause it 🎥.

Abstract


While end-to-end video-to-audio generation has improved greatly, producing high-fidelity audio that authentically captures the nuances of visual content remains challenging. Like professionals in the creative industries, a generation system must reason about visual dynamics, acoustic environments, and temporal relationships. We present ThinkSound, a novel framework that leverages Chain-of-Thought (CoT) reasoning to enable stepwise, interactive audio generation and editing for videos. Our approach decomposes the process into three complementary stages: foundational foley generation that creates semantically coherent soundscapes, interactive object-centric refinement through precise user interactions, and targeted editing guided by natural language instructions. At each stage, a multimodal large language model generates contextually aligned CoT reasoning that guides a unified audio foundation model. Furthermore, we introduce AudioCoT, a comprehensive dataset with structured reasoning annotations that connects visual content, textual descriptions, and sound synthesis. Experiments demonstrate that ThinkSound achieves state-of-the-art video-to-audio generation performance on both audio metrics and CoT metrics, and excels on the out-of-distribution Movie Gen Audio benchmark.
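The three stages share one audio foundation model; only the conditioning produced by the multimodal LLM changes between stages. The sketch below illustrates this control flow only; all function names and signatures (`mllm_reason`, `audio_model`) are hypothetical placeholders, not the released API.

```python
# Control-flow sketch of the three ThinkSound stages. Everything below is a
# hypothetical placeholder illustrating how CoT reasoning drives one shared
# audio foundation model; it is not the released interface.

def mllm_reason(video, instruction, context=None):
    """Stand-in for the multimodal LLM: returns a chain-of-thought (CoT) string."""
    return f"CoT for: {instruction}"

def audio_model(video, cot, audio_in=None):
    """Stand-in for the unified audio foundation model: returns generated audio."""
    return b"audio-bytes"

video = "clip.mp4"

# Stage 1: foundational foley generation for the whole scene.
cot1 = mllm_reason(video, "Describe the full soundscape step by step.")
foley = audio_model(video, cot1)

# Stage 2: object-centric refinement driven by a user-selected region of interest.
cot2 = mllm_reason(video, "Focus on the clicked object only.", context=cot1)
refined = audio_model(video, cot2, audio_in=foley)

# Stage 3: targeted editing from a natural-language instruction.
cot3 = mllm_reason(video, "Add a single robin call as an occasional accent.", context=cot2)
edited = audio_model(video, cot3, audio_in=refined)
```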

Pairing with Video Generation Models


ThinkSound provides soundtracks for videos produced by video generation models. All videos below are generated by the corresponding video generation model, and all audio is generated by ThinkSound.

Veo3 + ThinkSound

Sora + ThinkSound

Movie Gen + ThinkSound

Video-to-Audio Generation Comparisons on VGGSound (In-Distribution)



Video-to-Audio Generation Comparisons on Movie Gen Audio Bench (Out-of-Distribution)



Interactive Step-by-Step Foley Creation


Video-to-Audio Generation → Object-Focused Audio Generation → Audio Inpainting

Step 1 (Video-to-Audio Generation): To generate a cheerful ukulele melody with light strumming, incorporating harmonious vocals resembling two young girls singing together, set against a simple, unobtrusive background. This includes subtle background music that complements the ukulele and vocals, evoking a joyful, relaxed atmosphere. The overall sound should be warm, lively, and authentic.
Generated audio (paired with silent video):
Step 2 (Object-Focused Audio Generation): To focus exclusively on the singing and corresponding hand movements, ensuring that the extracted audio reflects only the vocal expression and its physical accompaniment.
Generated audio (paired with silent video):
Step 3 (Audio Inpainting): To test the model's ability to repair audio, another generated result for this video-to-audio task is provided in which a segment is randomly masked with noise; the goal is to accurately restore the masked segment (a minimal sketch of this masking setup follows the example below).

Audio Spectrogram
Generated audio:

Audio Spectrogram
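For illustration, here is how the masking setup described in Step 3 above could look. The sample rate, mask length, and array shapes are arbitrary assumptions, and the real pipeline operates on the model's own audio representation.

```python
import numpy as np

# Toy version of the inpainting test setup: replace a random segment of a
# generated waveform with noise; the model must then restore the masked region.
sr = 44_100                        # sample rate (assumed)
audio = np.random.randn(10 * sr)   # stand-in for a generated 10-second waveform

mask_len = int(2.0 * sr)                            # mask a 2-second segment
start = np.random.randint(0, len(audio) - mask_len)
mask = np.zeros(len(audio), dtype=bool)
mask[start:start + mask_len] = True

corrupted = audio.copy()
corrupted[mask] = np.random.randn(mask_len)         # overwrite the segment with noise

# `corrupted` and `mask` would then be fed to the foundation model, which should
# reconstruct the masked segment consistently with the video and the CoT.
```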

Video-to-Audio Generation → Object-Focused Audio Generation → Audio Editing (Add Action)

Step 1 (Video-to-Audio Generation): To generate a continuous background of gentle wind sounds overlaid with consistent warbler chirping, including natural variations in pitch and rhythm to mimic real bird calls, while keeping other sounds minimal and focusing on the warbler's chirping as the main audio element.
Generated audio (paired with silent video):
Step 2 (Object-Focused Audio Generation): To extract the target audio given the region of interest (ROI), the audio, and the chain-of-thought (CoT), reducing the prominence of the wind by softening its low-frequency presence until it fades into the background. The warbler's chirping should remain crisp, clear, and rhythmically engaging as the main auditory focus.
Generated audio (paired with silent video):
Step 3 (Audio Editing, Add Action): To edit the audio given the video, the audio, and the CoT, beginning with a steady backdrop of warbler chirping and maintaining its melodic and rhythmic consistency, then introducing a single robin call placed intermittently to add contrast and depth without disrupting the flow. The warbler should remain the focal point, with the robin serving as a gentle, occasional accent.

Audio Spectrogram
Generated audio:

Audio Spectrogram

Experiments


Main Results

Table 1: Comparison of our ThinkSound foundation model with existing video-to-audio baselines on the VGGSound test set. ↓ indicates lower is better; ↑ indicates higher is better. For MOS, we report mean and variance. Columns are grouped into objective metrics (FD through CLAPCoT), subjective metrics (MOS-Q, MOS-A), and efficiency (Params, Time). † indicates that the method does not use text for inference.
Method FD ↓ KLPaSST ↓ KLPaNNs ↓ DeSync ↓ CLAPcap ↑ CLAPCoT ↑ MOS-Q ↑ MOS-A ↑ Params Time(s) ↓
GT - - - 0.55 0.28 0.45 4.37±0.21 4.56±0.19 - -
See&Hear 118.95 2.26 2.30 1.20 0.32 0.35 2.75±1.08 2.87±0.99 415M 19.42
V-AURA† 46.99 2.23 1.83 0.65 0.23 0.37 3.42±1.03 3.20±1.17 695M 14.00
FoleyCrafter 39.15 2.06 1.89 1.21 0.41 0.34 3.08±1.21 2.63±0.88 1.20B 3.84
Frieren† 74.96 2.55 2.64 1.00 0.37 0.34 3.27±1.11 2.95±1.09 159M -
V2A-Mapper† 48.10 2.50 2.34 1.23 0.38 0.32 3.31±1.02 3.16±1.04 229M -
MMAudio 43.26 1.65 1.40 0.44 0.31 0.40 3.84±0.89 3.97±0.82 1.03B 3.01
ThinkSound 34.56 1.52 1.32 0.46 0.33 0.46 4.02±0.73 4.18±0.79 1.30B 1.07
w/o CoT Reasoning 39.84 1.59 1.40 0.48 0.29 0.41 3.91±0.83 4.04±0.75 1.30B 0.98

ThinkSound outperforms all baselines on most objective metrics and on all subjective metrics. Compared to the strongest baseline (MMAudio), our model achieves substantial improvements in audio quality and semantic alignment while maintaining comparable performance on the objective temporal synchronization metric (DeSync).
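As context for the CLAPcap and CLAPCoT columns, the snippet below shows one way a CLAP-style alignment score between generated audio and a caption or CoT can be computed with an off-the-shelf CLAP checkpoint; the checkpoint, sampling rate, and preprocessing here are assumptions for illustration and may differ from the paper's evaluation setup.

```python
import numpy as np
import torch
from transformers import ClapModel, ClapProcessor

# Hedged sketch: cosine similarity between CLAP text and audio embeddings.
model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

waveform = np.random.randn(48_000 * 10)  # stand-in for 10 s of generated audio at 48 kHz
caption = "Gentle wind with crisp, rhythmic warbler chirping in the foreground."

inputs = processor(text=[caption], audios=[waveform],
                   sampling_rate=48_000, return_tensors="pt", padding=True)

with torch.no_grad():
    text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                       attention_mask=inputs["attention_mask"])
    audio_emb = model.get_audio_features(input_features=inputs["input_features"])

score = torch.nn.functional.cosine_similarity(text_emb, audio_emb).item()
print(f"CLAP alignment score: {score:.3f}")
```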

Ablation Studies

To better understand the contribution of each component in ThinkSound and to validate our design choices, we conduct comprehensive ablation studies on the VGGSound test set, focusing on (1) text encoding strategies, (2) multi-modal integration mechanisms, and (3) the impact of model size. For additional ablation and exploratory results, refer to Supplementary Materials D.

Text Encoding Strategies

We evaluate different text encoding strategies, with and without CoT reasoning; the results are shown in Table 2. First, CoT reasoning substantially improves audio fidelity: for example, FD improves from 39.84 to 37.65 when moving from CLIP-only text features to T5-encoded CoT. Second, integrating contrastive features from CLIP with contextual reasoning from T5 further improves performance, reducing both KLPaSST and KLPaNNs. A minimal sketch of this fusion appears after Table 2.

Table 2: Comparison of text encoder fusion strategies (CLAP = CLAPCoT)
Method FD ↓ KLPaSST ↓ KLPaNNs ↓ DeSync ↓ CLAP ↑
CLIP 39.84 1.59 1.40 0.48 0.41
T5 (CoT) 37.65 1.54 1.35 0.46 0.44
CLIP + T5 34.56 1.52 1.32 0.46 0.46
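To make the "CLIP + T5" row concrete, here is a minimal sketch of combining a global contrastive text embedding from a CLIP text encoder with per-token CoT embeddings from a T5 encoder. The checkpoints, projection width, and concatenation scheme are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
from transformers import AutoTokenizer, CLIPTextModel, T5EncoderModel

# Assumed checkpoints for illustration; the paper's encoders may differ.
clip_name, t5_name = "openai/clip-vit-large-patch14", "google/flan-t5-base"
clip_tok = AutoTokenizer.from_pretrained(clip_name)
clip_enc = CLIPTextModel.from_pretrained(clip_name)
t5_tok = AutoTokenizer.from_pretrained(t5_name)
t5_enc = T5EncoderModel.from_pretrained(t5_name)

caption = "A warbler chirping over gentle wind."
cot = ("The scene shows a small bird on a branch; the wind is soft and low-frequency, "
       "so the chirps should dominate with natural pitch variation.")

with torch.no_grad():
    clip_out = clip_enc(**clip_tok([caption], return_tensors="pt", truncation=True))
    t5_out = t5_enc(**t5_tok([cot], return_tensors="pt", truncation=True))

d_model = 1024                                                # conditioning width (assumed)
proj_clip = torch.nn.Linear(clip_out.pooler_output.shape[-1], d_model)
proj_t5 = torch.nn.Linear(t5_out.last_hidden_state.shape[-1], d_model)

global_tok = proj_clip(clip_out.pooler_output).unsqueeze(1)   # (B, 1, d_model) contrastive token
cot_tokens = proj_t5(t5_out.last_hidden_state)                # (B, L, d_model) contextual CoT tokens
text_cond = torch.cat([global_tok, cot_tokens], dim=1)        # combined text conditioning
print(text_cond.shape)
```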

Multi-Modal Integration Mechanisms

We investigate different ways of integrating video and audio features before feeding them into the single-stream transformer. As shown in Table 3, adding linearly projected video features to the audio features improves synchronization over audio-only input (DeSync reduced from 0.50 to 0.46). Moreover, the gated fusion mechanism outperforms both alternatives across all metrics. A PyTorch sketch of these fusion variants follows Table 3.

Table 3: Comparison of multi-modal integration mechanisms
Integration FD ↓ KLPaSST ↓ KLPaNNs ↓ DeSync ↓ CLAP ↑
audio only 37.13 1.58 1.37 0.50 0.43
linear video 38.96 1.58 1.38 0.46 0.45
gated video 34.56 1.52 1.32 0.46 0.46
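For reference, here is a minimal PyTorch sketch of the "linear video" and "gated video" variants compared above. Module names, feature widths, and where the fusion sits inside the single-stream transformer are assumptions; only the fusion logic itself is shown.

```python
import torch
import torch.nn as nn

class LinearVideoFusion(nn.Module):
    """Element-wise addition of linearly projected video features ('linear video')."""
    def __init__(self, d_audio: int, d_video: int):
        super().__init__()
        self.proj = nn.Linear(d_video, d_audio)

    def forward(self, audio_feat: torch.Tensor, video_feat: torch.Tensor) -> torch.Tensor:
        return audio_feat + self.proj(video_feat)

class GatedVideoFusion(nn.Module):
    """audio + sigmoid(gate(video)) * proj(video): video contributes only where it helps ('gated video')."""
    def __init__(self, d_audio: int, d_video: int):
        super().__init__()
        self.proj = nn.Linear(d_video, d_audio)
        self.gate = nn.Linear(d_video, d_audio)

    def forward(self, audio_feat: torch.Tensor, video_feat: torch.Tensor) -> torch.Tensor:
        return audio_feat + torch.sigmoid(self.gate(video_feat)) * self.proj(video_feat)

# Example with frame-aligned token sequences (batch 2, 64 tokens; widths assumed).
audio_feat = torch.randn(2, 64, 768)
video_feat = torch.randn(2, 64, 512)
fused = GatedVideoFusion(768, 512)(audio_feat, video_feat)
print(fused.shape)  # torch.Size([2, 64, 768])
```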

Impact of Model Size

We compare three model sizes of ThinkSound: Large (1.3B), Medium (724M), and Small (533M). The results are shown in Table 4. The Large model achieves the best scores on every metric (DeSync is tied across sizes), and performance degrades substantially as model size decreases, highlighting the need for adequate model capacity for effective audio generation.

Table 4: Impact of model size results.
Size FD ↓ KLPaSST ↓ KLPaNNs ↓ DeSync ↓ CLAPCoT ↑
Small 40.80 1.64 1.38 0.46 0.41
Medium 36.80 1.56 1.34 0.46 0.44
Large 34.56 1.52 1.32 0.46 0.46

Code and Dataset


The code and dataset will be released soon.