Lego: Learning to Disentangle and Invert Concepts Beyond Object Appearance in Text-to-Image Diffusion Models

INSAIT, ETH Zurich, KU Leuven


Diffusion models have revolutionized generative content creation, and text-to-image (T2I) diffusion models in particular have increased the creative freedom of users by allowing scene synthesis using natural language. T2I models excel at synthesizing concepts such as nouns, appearances, and styles. To enable customized content creation based on a few example images of a concept, methods such as Textual Inversion and DreamBooth invert the desired concept and enable synthesizing it in new scenes. However, inverting more general concepts that go beyond object appearance and style (adjectives and verbs) through natural language remains a challenge. Two key characteristics of these concepts contribute to the limitations of current inversion methods: 1) adjectives and verbs are entangled with nouns (subjects), which can hinder appearance-based inversion methods because the subject appearance leaks into the concept embedding, and 2) describing such concepts often requires more than a single word embedding (“being frozen in ice”, “walking on a tightrope”, etc.), which current methods do not handle. In this study, we introduce Lego, a textual inversion method designed to invert subject-entangled concepts from a few example images. Lego disentangles concepts from their associated subjects using a simple yet effective Subject Separation step and employs a Context Loss that guides the inversion of single/multi-embedding concepts. In a thorough user study, Lego-generated concepts were preferred over 70% of the time when compared to the baseline. Additionally, visual question answering using a large language model suggested that Lego-generated concepts are better aligned with the text description of the concept.

How it works


We identify two characteristics of adjectives and verbs that hinder previous text-based inversion methods and address them with Lego for a faithful inversion of these concepts. First, example images of an adjective or verb one would like to invert always show an associated subject (noun) that the concept is applied to. We call this entanglement: when inverting the concept with previous methods, the subject cannot be separated from the concept. To address this, we introduce a Subject Separation step that disentangles the subject from the concept by dedicating a separate embedding to learning the subject's appearance. Second, while previous inversion methods handle concepts expressed as a single word embedding, many adjectives and verbs require multiple word embeddings to be described. We enable optimizing such multi-word-embedding concepts by introducing a Context Loss that guides each embedding in the text embedding space. These two steps combined allow Lego to faithfully represent such concepts.
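The two steps can be illustrated with a toy optimization loop. This is only a minimal sketch of the idea, not the paper's implementation: the embedding dimension, the mean-squared "reconstruction" stand-in for the diffusion loss, the cosine-based context term, and all names (`description_anchor`, `diffusion_loss`, `context_loss`, the loss weight `lam`) are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy embedding dimension (hypothetical; real text embeddings are much larger)

# Stand-ins for frozen model components:
# an anchor embedding of the concept's description from a "text encoder",
# and features of the few example images.
description_anchor = rng.normal(size=DIM)   # e.g. encodes "frozen in ice"
image_target = rng.normal(size=(2, DIM))    # two example images

# Subject Separation: a dedicated embedding learns the subject's appearance,
# while separate embeddings learn the (possibly multi-word) concept.
subject_emb = rng.normal(size=DIM)
concept_embs = rng.normal(size=(2, DIM))    # e.g. two tokens for the concept

def diffusion_loss(subject, concepts):
    # Toy reconstruction loss: the combined prompt embedding
    # (subject + concept tokens) should explain the example images.
    prompt = subject + concepts.sum(axis=0)
    return float(((image_target - prompt) ** 2).mean())

def context_loss(concepts, anchor):
    # Context Loss sketch: pull each concept embedding toward the text
    # embedding of the concept's description (mean cosine distance).
    dists = [1.0 - c @ anchor / (np.linalg.norm(c) * np.linalg.norm(anchor))
             for c in concepts]
    return float(np.mean(dists))

def numeric_grad(f, x, eps=1e-5):
    # Central-difference gradient of scalar f() w.r.t. array x (edited in place).
    g = np.zeros_like(x)
    fx, fg = x.reshape(-1), g.reshape(-1)
    for i in range(fx.size):
        old = fx[i]
        fx[i] = old + eps; hi = f()
        fx[i] = old - eps; lo = f()
        fx[i] = old
        fg[i] = (hi - lo) / (2 * eps)
    return g

lam, lr = 0.1, 0.05  # illustrative loss weight and learning rate

def total():
    return diffusion_loss(subject_emb, concept_embs) + \
           lam * context_loss(concept_embs, description_anchor)

start = total()
for _ in range(200):
    subject_emb -= lr * numeric_grad(total, subject_emb)
    concept_embs -= lr * numeric_grad(total, concept_embs)
```

Because the subject's appearance has its own embedding, the reconstruction term no longer forces it into the concept tokens, and the context term keeps the concept tokens near their natural-language description.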

A few examples and baseline comparisons