The surge of highly realistic synthetic videos produced by contemporary generative systems has significantly
increased the risk of malicious use, challenging both humans and existing detectors. Against this backdrop,
we take a generator-side view and observe that internal cross-attention mechanisms in these models encode
fine-grained speech–motion alignment, offering useful correspondence cues for forgery detection.
Building on this insight, we propose X-AVDT, a robust and generalizable deepfake detector
that probes generator-internal audio–visual signals, accessed via DDIM inversion, to expose these cues.
X-AVDT extracts two complementary signals: (i) a video composite capturing inversion-induced discrepancies,
and (ii) an audio–visual cross-attention feature reflecting the modality alignment enforced during generation.
To enable faithful, cross-generator evaluation, we further introduce MMDF,
a new multi-modal deepfake dataset spanning diverse manipulation types and rapidly evolving synthesis paradigms,
including GANs, diffusion models, and flow matching. Extensive experiments demonstrate that X-AVDT achieves leading
performance on MMDF and generalizes strongly to external benchmarks and unseen generators, outperforming existing
methods by +13.1% in accuracy. Our findings underscore the importance of leveraging internal
audio–visual consistency cues to keep deepfake detection robust to future generators.