<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>MCPL</title>
<link href="style.css" rel="stylesheet">
<script type="text/javascript" src="jscript.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js?config=TeX-AMS_HTML-full"></script>
</head>
<body>
<div class="content">
<h1>An Image is Worth Multiple Words: Discovering Object Level Concepts using Multi-Concept Prompt Learning</h1>
<h5>ICML 2024</h5>
<p id="authors">
<a href="https://chenjin.netlify.app/">Chen Jin<sup>1</sup></a>
<a href="https://rt416.github.io/">Ryutaro Tanno<sup>2</sup></a>
<a href="https://uk.linkedin.com/in/amrutha-saseendran">Amrutha Saseendran<sup>1</sup> </a>
<a href="https://tomdiethe.com/">Tom Diethe<sup>1</sup></a>
<a href="https://uk.linkedin.com/in/philteare">Philip Teare<sup>1</sup></a><br>
<span style="font-size: 20px"><sup>1</sup> AstraZeneca <sup>2</sup> Google DeepMind
</span></p>
<font size="+2">
<p style="text-align: center;">
<a href="https://arxiv.org/abs/2310.12274" target="_blank">Paper</a>
<a href="https://github.com/AstraZeneca/MCPL/tree/master" target="_blank">Code</a>
<a href="https://github.com/AstraZeneca/MCPL/tree/master/dataset" target="_blank">Dataset</a>
<a href="https://www.youtube.com/watch?v=EXnyT-JVG5U" target="_blank">Video</a>
</p>
</font>
<br>
<img src="./docs/teaser_tldr.jpg" class="teaser-gif" style="width:100%;"><br>
<!-- <h4 style="text-align:center"><em>Multi-Concept Prompt Learning (MCPL) pioneers the novel task of mask-free text-guided learning for multiple prompts from one scene. Our approach not only enhances current methodologies but also paves the way for novel applications, such as facilitating knowledge discovery through natural language-driven interactions between humans and machines. </em></h4> -->
<h4 style="text-align:center"><em>TL;DR: We propose a framework that allows us to discover and manipulate multiple concepts in a given image with partial text instructions. </em></h4>
</div>
<!-- <div class="content">
<img style="width: 100%;" src="./docs/teaser.gif" alt="teaser.">
</div> -->
<div class="content">
<h2 style="text-align:center;">Abstract</h2>
<p>Textual Inversion, a prompt learning method, learns a singular embedding for a new "word" to represent image style and appearance, allowing it to be integrated into natural language sentences to generate novel synthesised images. However, identifying and integrating multiple object-level concepts within one scene poses significant challenges even when embeddings for individual concepts are attainable; our empirical tests further confirm this. To address this challenge, we introduce a framework for <i>Multi-Concept Prompt Learning (MCPL)</i>, where multiple new "words" are simultaneously learned from a single sentence-image pair. To enhance the accuracy of word-concept correlation, we propose three regularization techniques: <i>Attention Masking (AttnMask)</i> to concentrate learning on relevant areas; <i>Prompts Contrastive Loss (PromptCL)</i> to separate the embeddings of different concepts; and <i>Bind adjective (Bind adj.)</i> to associate new "words" with known words. We evaluate via image generation, editing, and attention visualization with diverse images. Extensive quantitative comparisons demonstrate that our method can learn more semantically disentangled concepts with enhanced word-concept correlation. Additionally, we introduce a novel dataset and evaluation protocol tailored for this new task of learning object-level concepts.</p>
</div>
<div class="content">
<h2>Learning Multiple Concepts from Single Image and Editing</h2>
<h3>teddy bear and skateboard example</h3>
<div class="cat-hat-main">
<div class="intro-text"> Our method learns multiple new concepts and assures disentangled and precise prompt-concept correlation (click to view per-prompt attention maps). <br> </div>
<div class="intro-text"> We can then modify each concept by replacing the prompts/words to generate novel images (click words below to try editing).<br> </div>
</div>
<br>
<div class="cat-hat-main">
<img class="attn-img-tt" id="attn-img-tt" src="./docs/teddybear_timesquare/teddybear_timesquare_attention-mask.png">
<div class="edit-text"> <b>Visuslise</b> <br>
<span class="tt-button_a">attention-mask</span>
of <br> "a <span class="tt-button_a">photo</span> of brown * (<span class="tt-button_a">teddybear-attention</span>)
on a rolling @ (<span class="tt-button_a">skateboard-attention</span>) at times square".
</div>
<img class="edit-img-tt" id="edit-img-tt" src="./docs/teddybear_timesquare/teddybear_timesquare_*.png">
<div class="edit-text"> <b>Edit</b> <br> "brown <span class="tt-button_b">*</span>   (<span class="tt-button_b">panda</span> / <span class="tt-button_b">cat</span>)
on a rolling <span class="tt-button_b">@</span>   (<span class="tt-button_b">surfboard</span> / <span class="tt-button_b">blanket</span>)
at times square." <br>
</div>
</div>
<h3>banana and basket example</h3>
<div class="cat-hat-main">
<img class="attn-img-bb" id="attn-img-bb" src="./docs/banana_basket/banana_basket_attention-mask.png">
<div class="edit-text"> <b>Visuslise</b> <br>
<span class="bb-button_a">attention-mask</span>
of <br> "a <span class="bb-button_a">photo</span> of brown * (<span class="bb-button_a">basket-attention</span>)
with yellow @ (<span class="bb-button_a">banana-attention</span>)".
</div>
<img class="edit-img-bb" id="edit-img-bb" src="./docs/banana_basket/banana_basket_*.png">
<div class="edit-text"> <b>Edit</b> <br> "brown <span class="bb-bracket"><span class="bb-button_b">*</span>   (<span class="bb-button_b">stainless-pot</span> / <span class="bb-button_b">pottery</span>)</span>
with <span class="bb-bracket"><span class="bb-button_b">yellow-@</span>   (<span class="bb-button_b">yellow-pineapple</span> / <span class="bb-button_b">green-grapes</span>)</span>
at times square." <br>
</div>
</div>
</div>
<div class="content">
<h2>Discovering OOD Concepts from Medical Image and Disentangling</h2>
<div class="body-text"> Our method opens an avenue for discovering/introducing new concepts the model have not seen before, from abundantly available natural language annotations such as paired textbook figures and captions. <br> </div>
<br><br>
<h3>cardiac MRI example</h3>
<div class="cat-hat-main">
<div class="intro-text"> We learn out-of-distribution concepts using biomedical figures and their simplified captions. <br> </div>
<div class="intro-text"> We can then generate or remove each disentangled prompt to verify the learning of unfamiliar concepts.<br> </div>
</div>
<br>
<div class="cat-hat-main">
<img class="attn-img-lge" id="attn-img-lge" src="./docs/lge_cmr/lge_cmr_attention-mask.png">
<div class="edit-text"> <b>Visuslise</b> <br>
<span class="lge-button_a">attention-mask</span>
of <br> "a <span class="lge-button_a">photo</span> of
! (<span class="lge-button_a">cMRI-attention</span>) with
round * (<span class="lge-button_a">cavity-attention</span>)
and thin @ (<span class="lge-button_a">scar-attention</span>) on the side circled by
<span class="lge-button_a">yellow-lines-attention</span>".
</div>
<img class="edit-img-lge" id="edit-img-lge" src="./docs/lge_cmr/lge_cmr_round-*.png">
<div class="edit-text"> <b>Generate</b> <br> disentangled <br>
<span class="lge-bracket"><span class="lge-button_b">round-*</span>, <br>
<span class="lge-bracket"><span class="lge-button_b">thin-@</span>, <br>
<span class="lge-bracket"><span class="lge-button_b">yellow-lines</span>, <br>
<span class="lge-bracket"><span class="lge-button_b">!</span>, <br>
</div>
</div>
<h3>chest X-ray example</h3>
<div class="cat-hat-main">
<img class="attn-img-ms" id="attn-img-ms" src="./docs/ms_cxr/ms_cxr_attention-mask.png">
<div class="edit-text"> <b>Visuslise</b> <br>
<span class="ms-button_a">attention-mask</span>
of <br> "a <span class="ms-button_a">photo</span> of
white ! (<span class="ms-button_a">chest-X-ray-attention</span>) and
black @ (<span class="ms-button_a">lung-attention</span>)
which have smoky * (<span class="ms-button_a">consolidation-attention</span>)".
</div>
<img class="edit-img-ms" id="edit-img-ms" src="./docs/ms_cxr/ms_cxr_white-!.png">
<div class="edit-text"> <b>Generate</b> <br> disentangled <br>
<span class="ms-bracket"><span class="ms-button_b">white-!</span>, <br>
<span class="ms-bracket"><span class="ms-button_b">black-@</span>, <br>
<span class="ms-bracket"><span class="ms-button_b">smoky-*</span>, <br>
or <br>
<span class="ms-bracket"><span class="ms-button_b">remove white-!</span>, <br>
<span class="ms-bracket"><span class="ms-button_b">remove black-@</span>, <br>
<span class="ms-bracket"><span class="ms-button_b">remove smoky-*</span>, <br>
</div>
</div>
</div>
<div class="content">
<h2>Hypothesis Generation of Disease Progression</h2>
<div class="body-text"> Our method can also help experts/non-experts learn unfamiliar concepts from picture(s) and explore their impacts. <br> </div>
<div class="skin3d-main">
<img class="dev-img-demo" id="skin3d-img-ori" src="./docs/melanoma_skin_3d/ms_3d_demo_b.png">
<img class="dev-img" id="skin3d-img" src="./docs/melanoma_skin_3d/ms_3d_25.jpg">
<div class="dev-text">
<span style="display: block; text-align: center">Human: "how <span id ="skincancer" style="background-color: #e55608;">skin cancer</span> may develop?" </span><br>
<input type="range" min="1" max="48" value="26" class="slider" id="range_skin3d">
</div>
</div>
<br>
<div class="skinreal-main">
<img class="dev-img-demo" id="skinreal-img-ori" src="./docs/melanoma_skin_real/ms_real_demo_b.png">
<img class="dev-img" id="skinreal-img" src="./docs/melanoma_skin_real/ms_real_25.jpg">
<div class="dev-text">
<span style="display: block; text-align: center">Human: "how <span id ="skincancer_real" style="background-color: #e55608;">skin cancer</span> may develop?" </span><br>
<input type="range" min="0" max="47" value="25" class="slider2" id="range_skinreal">
</div>
</div>
<br>
<div class="cxray-main">
<img class="dev-img-demo" id="cxray-img-ori" src="./docs/chest_xray_cos/chest_xray_demo_b.png">
<img class="dev-img" id="cxray-img" src="./docs/chest_xray_cos/cxrayl_25.jpg">
<div class="dev-text">
<span style="display: block; text-align: center">Human: "how <span id ="cxray_real" style="background-color: #e55608;">lung consolidation</span> may develop?" </span><br>
<input type="range" min="0" max="46" value="25" class="slider2" id="range_cxray">
</div>
</div>
<br>
</div>
<div class="content">
<h2>Method Overview</h2>
<img class="summary-img" src="./docs/method.png" style="width:100%;">
<p>
Prior methods fail due to inaccurate word-concept correlation. We address this by contrasting the different concepts and by aligning cross-attention with semantically meaningful regions of known words. In detail, <i>MCPL</i> takes a sentence (top-left) and a sample image (top-right) as input and feeds them into a pre-trained text-guided diffusion model comprising a text encoder \(c_\phi\) and a denoising network \(\epsilon_\theta\). The string's multiple prompts are encoded into a sequence of embeddings that guide the network to generate images \(\tilde{X}_0\) close to the target image \(X_0\). MCPL focuses on learning multiple learnable prompts (coloured text), updating only their embeddings \(\{v^*\}\) and \(\{v^\&\}\) while keeping \(c_\phi\) and \(\epsilon_\theta\) frozen. We introduce <i>Prompts Contrastive Loss (PromptCL)</i> to help separate the multiple concepts within the learnable embeddings. We also apply <i>Attention Masking (AttnMask)</i>, using masks based on the average cross-attention of the learnable prompts, to focus learning on the relevant image regions. Optionally, we associate each learnable prompt with an adjective (e.g., "brown" and "rolling") to improve control over each learned concept, referred to as <i>Bind adj.</i>
</p>
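<p>
To make the two regularisers concrete, here is a minimal PyTorch-style sketch of how AttnMask and PromptCL could be computed. It is an illustrative approximation under assumed tensor shapes and hyper-parameters (the helper names, the thresholding scheme, and the grouping of embedding "views" are our assumptions here), not the released MCPL implementation; see the Code link above for the actual training script.
</p>
<pre>
# Minimal sketch of the two regularisers (assumed shapes, not the released MCPL code).
import torch
import torch.nn.functional as F

def attn_mask_loss(noise_pred, noise, attn_maps, learnable_ids, threshold=0.5):
    """AttnMask: average the cross-attention maps of the learnable prompts,
    threshold them into a binary mask, and restrict the denoising loss to it.
    attn_maps: (B, num_tokens, H, W); noise_pred, noise: (B, C, H, W)."""
    mask = attn_maps[:, learnable_ids].mean(dim=1, keepdim=True)
    mask = mask / mask.amax(dim=(2, 3), keepdim=True).clamp(min=1e-8)
    mask = mask.ge(threshold).float()
    return F.mse_loss(noise_pred * mask, noise * mask)

def prompt_cl_loss(prompt_embeddings, temperature=0.1):
    """PromptCL: an InfoNCE-style loss that pulls together different views of the
    same learnable concept and pushes apart the embeddings of different concepts.
    prompt_embeddings: (num_concepts, num_views, dim) with num_views of at least 2."""
    n, v, d = prompt_embeddings.shape
    z = F.normalize(prompt_embeddings.reshape(n * v, d), dim=-1)
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float("-inf"))  # ignore self-similarity
    labels = torch.arange(n, device=z.device).repeat_interleave(v)
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1))
    pos = pos.masked_fill(torch.eye(n * v, dtype=torch.bool, device=z.device), False)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -log_prob[pos].mean()
</pre>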
</div>
<div class="content">
<h2>Introducing MCPL-one and MCPL-diverse Training Strategies</h2>
<img class="summary-img" src="./docs/041_MCv1_MCv2_v2.png" style="width:100%;">
<p>
Learning and Composing “ball” and “box”. We learned the concepts of “ball” and “box” using different methods (top row) and composed them into unified scenes (bottom row). We compare three learning methods: Textual Inversion (Gal et al., 2022), which learns each concept separately from isolated images (left); MCPL-one, which jointly learns both concepts from uncropped examples using a single prompt string (middle); and MCPL-diverse, which advances this by learning both concepts with per-image specific relationships (right).
</p>
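<p>
As an illustration of how the three strategies differ in their supervision, the hypothetical prompt strings below sketch the training inputs for the "ball" and "box" scene; the file names and exact wording are placeholders, not the actual training data.
</p>
<pre>
# Illustrative training prompts only; file names and phrasing are hypothetical.

# Textual Inversion: each concept is learned separately from cropped/isolated images.
ti_prompts = {
    "ball_cropped.jpg": "a photo of *",
    "box_cropped.jpg": "a photo of @",
}

# MCPL-one: a single shared prompt string for every uncropped training image.
mcpl_one_prompts = {img: "a photo of * and @" for img in ("scene_01.jpg", "scene_02.jpg")}

# MCPL-diverse: per-image prompt strings that also describe the image-specific
# relationship between the two concepts.
mcpl_diverse_prompts = {
    "scene_01.jpg": "a photo of * on top of @",
    "scene_02.jpg": "a photo of * next to @",
}
</pre>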
</div>
<div class="content">
<h2>Ablation Studies</h2>
<h3>Comparing MCPL-diverse versus MCPL-one</h3>
<img class="experiment-img" src="./docs/ablation_per_img_diverse_vs_one_cat_hat.png"">
<p>
Visual comparison of MCPL-diverse versus MCPL-one on the task of learning per-image different concepts (cat with different hats example). As MCPL-diverse is specifically designed for such tasks, it outperforms MCPL-one, which fails to capture the different hat styles of each image.
</p>
<h3>Learning more than two concepts from a single image</h3>
<img class="experiment-img" src="./docs/ours_vs_bas_chicken.png"">
<p>
A qualitative comparison between our method (MCPL-diverse) and mask-based approaches. MCPL-diverse, which neither uses mask inputs nor updates model parameters, shows decent results, outperforming most mask-based approaches and coming close to the SoTA Break-A-Scene. Images modified from Break-A-Scene (Avrahami et al., 2023).
</p>
</div>
<div class="content">
<h2>Quantitative Evaluations Dataset and Results</h2>
<p>
We collect both in-distribution natural images and out-of-distribution biomedical images covering 16 object-level concepts, with all images containing multiple concepts and annotated with object-level masks.
</p>
<img class="summary-img" src="./docs/044v_visualise_bas_and_ground_truth.png" style="width:100%;">
<p>
Visualisation of the prepared ground-truth examples (top) and the images generated with Break-A-Scene (bottom). Note that BAS requires segmentation masks as input and employs separate segmentation models to produce masked objects, thus serving as a performance upper bound.
We compare three baseline methods: 1) Textual Inversion applied to each masked object, serving as our best estimate of the unknown disentangled “ground truth” embedding; 2) Break-A-Scene (BAS), the state-of-the-art (SoTA) mask-based multi-concept learning method, which serves as a performance upper bound though it is not directly comparable; and 3) MCPL-all, our naive adaptation of the Textual Inversion method to the multi-concept learning goal.
</p>
<h3>The t-SNE projection of the learned embeddings</h3>
<img class="experiment-img" src="./docs/0432_tsne_v3.png"">
<p>
Our method effectively distinguishes all learned concepts, compared with Textual Inversion (MCPL-all), the SoTA mask-based learning method Break-A-Scene, and the masked “ground truth”.
</p>
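<p>
The projection itself is standard; below is a minimal sketch (assumed array shapes and scikit-learn, not the paper's evaluation script) of reducing the learned concept embeddings to two dimensions for this kind of plot.
</p>
<pre>
# Minimal t-SNE sketch; inputs and perplexity are assumptions for illustration.
import numpy as np
from sklearn.manifold import TSNE

def project_embeddings(embeddings, perplexity=10, seed=0):
    """embeddings: (N, dim) learned prompt embeddings gathered across methods and runs.
    Returns (N, 2) coordinates for a scatter plot coloured by concept and method."""
    embeddings = np.asarray(embeddings, dtype=np.float32)
    return TSNE(n_components=2, perplexity=perplexity, random_state=seed).fit_transform(embeddings)
</pre>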
<h3>Embedding similarity in learned object-level concepts compared to masked “ground truth”</h3>
<div class="cat-hat-main">
<img class="result-img" src="./docs/0433_iclr_emb_similarity_natural_merged.png"">
<img class="result-img" src="./docs/0433_iclr_emb_similarity_medical_merged.png"">
</div>
<p>
We evaluate the embedding similarity of our multi-concept adaptation of Textual Inversion (MCPL-all) and the state-of-the-art (SoTA) mask-based learning method, Break-A-Scene (BAS) by Avrahami et al. (2023), against our regularised versions. The analysis is conducted in both a pre-trained text encoder space (BERT) and image encoder spaces (CLIP, DINOv1, and DINOv2), with each bar representing an average of 40,000 pairwise cosine similarities.
</p>
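<p>
For reference, here is a minimal sketch (assumed inputs, not the paper's evaluation code) of the underlying metric: features of the generated object-level images and of the masked "ground truth" objects are extracted with a frozen encoder (e.g. BERT, CLIP, DINOv1 or DINOv2), and all pairwise cosine similarities are averaged.
</p>
<pre>
# Minimal sketch of the average pairwise cosine similarity; shapes are assumptions.
import torch
import torch.nn.functional as F

def mean_pairwise_cosine(gen_feats, gt_feats):
    """gen_feats: (N, dim) encoder features of generated samples for one concept.
    gt_feats: (M, dim) encoder features of the masked ground-truth objects.
    Returns the mean over all N x M pairwise cosine similarities."""
    gen = F.normalize(gen_feats, dim=-1)
    gt = F.normalize(gt_feats, dim=-1)
    return (gen @ gt.t()).mean()
</pre>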
</div>
<div class="content">
<h4>BibTeX</h4>
<p> @inproceedings{<br>
anonymous2024an,<br>
title={An Image is Worth Multiple Words: Discovering Object Level Concepts using Multi-Concept Prompt Learning},<br>
author={Anonymous},<br>
booktitle={Forty-first International Conference on Machine Learning},<br>
year={2024},<br>
url={https://openreview.net/forum?id=F3x6uYILgL}<br>
} </p>
<br>
</div>
</body>
</html>