Sam Motamed

I am an ELLIS Ph.D. student at INSAIT, where I am advised by Prof. Luc Van Gool, and a Student Researcher at Google DeepMind in Toronto.

Before that, I was a visiting researcher at CMU's Human Sensing Lab from 2021 to 2023, working with the amazing Fernando De la Torre. I also spent 7 wonderful years at the University of Toronto's Computer Science department, where I earned my HBSc and MS degrees.

Email  /  CV  /  Google Scholar  /  Twitter  /  GitHub

Research

I am broadly interested in generative vision models for content creation, with a current focus on video synthesis. My research aims to better understand how to enable user-intuitive control over generative models. I am also interested in bias mitigation and in harnessing the power of large vision and language models by adapting them to solve personalized tasks using limited data. Relevant work is highlighted here.

Publications
D3GU: Multi-Target Active Domain Adaptation via Enhancing Domain Alignment
Lin Zhang, Linghan Xu, Saman Motamed, Shayok Chakraborty, Fernando De la Torre
WACV, 2024
arXiv

A Multi-Target Active Domain Adaptation (MT-ADA) framework for image classification.

Personalized Face Inpainting With Diffusion Models by Parallel Visual Attention
Jianjin Xu, Saman Motamed, Praneetha Vaddamanu, Chen Henry Wu, Christian Haene, Jean-Charles Bazin, Fernando De la Torre
WACV, 2024
code /  arXiv

Fast, identity-preserving face inpainting with diffusion models.

Lego: Learning to Disentangle and Invert Concepts Beyond Object Appearance in Text-to-Image Diffusion Models
Saman Motamed, Danda Pani Paudel, Luc Van Gool
arXiv, 2023
code /  arXiv

A method for textual inversion of adjectives and verbs in text-to-image diffusion models.

PATMAT: Person Aware Tuning of Mask-Aware Transformer for Face Inpainting
Saman Motamed, Jianjin Xu, Chen Henry Wu, Fernando De la Torre
ICCV, 2023
ICCV /  code /  arXiv

A tuning method for personalizing face inpainting while preserving the identity of a subject.

Generative Visual Prompt: Unifying Distributional Control of Pre-Trained Generative Models
Chen Henry Wu, Saman Motamed, Shaunak Srivastava, Fernando De la Torre
NeurIPS, 2022
NeurIPS /  code /  arXiv

A framework for distributional control over pre-trained, latent-based generative models.

Happenings
  • Oct 2023: Two papers accepted to WACV 2024. Details will be posted soon.
  • Oct 2023: I served as a volunteer at ICCV 2023 and presented PATMAT.

Feel free to steal this website's source code