Speech Driven Video Editing via an Audio-Conditioned Diffusion Model

Dan Bigioi1, Shubhajit Basak1, Michał Stypułkowski2, Maciej Zieba3,4, Hugh Jordan5, Rachel McDonnell5, Peter Corcoran1

1University of Galway

2University of Wrocław

3Wrocław University of Science and Technology

4Tooploox

5Trinity College Dublin

Sample video results generated by our multi-speaker model on unseen subjects

Abstract

Taking inspiration from recent developments in visual generative tasks using diffusion models, we propose a method for end-to-end speech-driven video editing using a denoising diffusion model. Given a video of a talking person and a separate auditory speech recording, the lip and jaw motions are re-synchronized without relying on intermediate structural representations such as facial landmarks or a 3D face model. We show this is possible by conditioning a denoising diffusion model on audio mel spectral features to generate synchronized facial motion. Proof-of-concept results are demonstrated on both single-speaker and multi-speaker video editing, providing a baseline model on the CREMA-D audiovisual dataset. To the best of our knowledge, this is the first work to demonstrate and validate the feasibility of applying end-to-end denoising diffusion models to the task of audio-driven video editing.
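To make the audio conditioning concrete, here is a minimal, illustrative sketch of a single reverse-diffusion step in PyTorch. It is not the paper's implementation: the U-Net call signature unet(x_t, t, cond=mel), the DDIM-style deterministic update, and the variable names are assumptions made for illustration, with x_t standing for the noisy video frames and mel for the mel-spectrogram features of the driving speech.

import torch

@torch.no_grad()
def denoise_step(unet, x_t, t, mel, alphas_cumprod):
    # One deterministic (DDIM-style) reverse step: the network predicts the noise
    # present in the current frames x_t, conditioned on the audio features mel.
    eps = unet(x_t, torch.tensor([t]), cond=mel)                # assumed call signature
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
    x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()       # estimate of the clean frames
    return a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps  # move one step towards x_0

In the method described above, iterating such a step from pure Gaussian noise down to t = 0 is what produces frames whose facial motion is synchronized with the conditioning audio.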

High-Level Overview of Network Architecture including the forward and backward diffusion processes.
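For reference, the forward process gradually corrupts the clean frames with Gaussian noise, while the learned backward process removes that noise conditioned on the audio. In standard denoising-diffusion notation (illustrative, not copied from the paper, with x_0 the clean frames and a the audio conditioning):

q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\right),
\qquad
p_\theta(x_{t-1} \mid x_t, a) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t, a),\ \Sigma_\theta(x_t, t, a)\right)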

Videos Generated By Our Multi-Speaker Model with Unseen Identities

Note that while these results look quite good, the models would still benefit from longer training, which we could not provide due to limitations in our available hardware. We encourage users with access to beefier GPUs to continue training from our pretrained checkpoints! More details on the training setup are available in the paper :) Don't forget to click the arrow icon to cycle through the videos!

Test Samples Generated by our Single-Speaker Model

The following videos were generated by our single-speaker model from the unseen test set. Notably, this model was trained without attention in the up/downsampling layers. To train your own model, you can fine-tune on top of the multi-speaker model or train from scratch using the code in our repo; a rough sketch of the fine-tuning workflow is shown below.
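The sketch below illustrates what fine-tuning from a pretrained checkpoint might look like in PyTorch. The checkpoint filename, state-dict key, model constructor, data loader, and loss helper are all placeholders rather than the repository's actual names; consult the repo for the real entry points.

import torch

model = build_unet()                                          # placeholder for the repo's model constructor
state = torch.load("multispeaker.pt", map_location="cpu")     # placeholder checkpoint filename
model.load_state_dict(state["model_state_dict"])              # placeholder state-dict key
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)     # reduced learning rate for fine-tuning

for frames, mel in single_speaker_loader:                     # placeholder loader yielding (frames, mel) batches
    loss = diffusion_training_loss(model, frames, cond=mel)   # placeholder noise-prediction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()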

Sample video results generated by our single-speaker model

Paper PDF

BibTeX

@misc{https://doi.org/10.48550/arxiv.2301.04474,
  doi       = {10.48550/ARXIV.2301.04474},
  url       = {https://arxiv.org/abs/2301.04474},
  author    = {Bigioi, Dan and Basak, Shubhajit and Stypułkowski, Michał and Zieba, Maciej and Jordan, Hugh and McDonnell, Rachel and Corcoran, Peter},
  title     = {Speech Driven Video Editing via an Audio-Conditioned Diffusion Model},
  publisher = {arXiv},
  year      = {2023},
  copyright = {Creative Commons Attribution 4.0 International}
}