<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-87339102-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-87339102-1');
</script>
<title>Kristina Monakhova</title>
<meta name="description" content="Kristina Monakhova's Homepage">
<meta name="author" content="Kristina Monakhova">
<link href="minimal.css" rel="stylesheet">
</head>
<body>
<div class="container">
<header>
<a class="logo" href="index.html"> Kristina Monakhova</a>
<nav class="float-right">
<ul>
<li> <a href = "mailto:monakhova@berkeley.edu"> email </a> / <a href="resources/Monakhova_CV.pdf">CV</a> / <a href="https://scholar.google.com/citations?user=X71o1ykAAAAJ&hl">google scholar</a></li>
</ul>
</nav>
</header>
<div class="row">
<img class="float-left" src="resources/new_headshot1_circle.png" style="height:170px;">
<p> Hello! I am a postdoctoral fellow at <a href = "https://www.rle.mit.edu/">MIT RLE</a> working with <a href="https://www.rle.mit.edu/yougroup/">Sixian You</a> and <a href = "http://meche.mit.edu/people/faculty/gbarb@mit.edu">George Barbastathis</a>.
I obtained my PhD in Electrical Engineering and Computer Sciences from UC Berkeley in Laura Waller's <a href="http://www.laurawaller.com/">Computational Imaging Lab</a>.
I work on co-designing optics and algorithms to create better, smaller, and more capable cameras and microscopes. My work is at the intersection of signal processing,
optics, optimization, and machine learning.
</p>
<p> During my PhD, I was affiliated with the Berkeley Artificial Intelligence Research (<a href="https://bair.berkeley.edu/">BAIR</a>) Lab and was supported by an <a href="https://www.nsfgrfp.org/">NSF Graduate Research Fellowship</a>.
In 2021, I worked as a research intern with <a href="http://vladlen.info/">Vladlen Koltun</a> on machine learning for extreme low light imaging. My PhD dissertation was on
<a href = "https://www2.eecs.berkeley.edu/Pubs/TechRpts/2022/EECS-2022-177.html">Physics-Informed Machine Learning for Computational Imaging</a>.
</p>
<p style="color:IndianRed;"><b>I will be joining Cornell as an assistant professor in Computer Science in Fall 2024!</b></p>
<!--
<p> I completed my B.S. in Electrical Engineering from the <a href="http://www.buffalo.edu">State University of New York at Buffalo</a> in May 2016.
At Buffalo, I was involved in a nanosatellite mission and several other space-related research projects, which you can read more about on my old website <a href="http://monakhova.weebly.com">here.</a>
</p>-->
</div>
<hr>
<div class="row">
<!-- <div class="col-2"> -->
<h3>Selected Invited Talks:</h3>
EPFL School of Computer and Communication Sciences Seminar, Mar. 2023 <br>
Cornell CS Seminar, Mar. 2023 <br>
University of Virginia ECE Seminar, Feb. 2023 <br>
Boston University ECE Seminar, Feb. 2023 <br>
Carnegie Mellon University Mechanical Engineering Seminar, Jan. 2023 <br>
<a href = "https://www.cs.cornell.edu/content/physics-informed-machine-learning-computational-imaging-virtual-talk">Cornell Artificial Intelligence Seminar</a>, Dec. 2022 <br>
<a href = "https://rsvp.withgoogle.com/events/2022-computational-imaging-workshop">Google Computational Imaging Workshop</a>, Aug. 2022<br>
<a href = "https://sites.northwestern.edu/ccd2022/">CVPR Computational Cameras and Displays (CCD) workshop</a>, June 2022 <br>
<a href="https://visual.ee.ucla.edu/web_series/"> Warren Grundfest Lectures in Computational Imaging</a>, May 2022 <br>
</div>
<h3>News:</h3>
<div class="row">
<!-- </div> -->
<!-- <div class="col-8"> -->
<!-- Dec. 2022: I'm giving an invited talk at <a href = "https://www.cs.cornell.edu/content/physics-informed-machine-learning-computational-imaging">Cornell's Artificial Intelligence Seminar!</a>! <br>
Aug. 2022: I'm giving an invited talk at the <a href = "https://rsvp.withgoogle.com/events/2022-computational-imaging-workshop">Google Computational Imaging Workshop</a>!<br>
June 2022: I'm giving an invited talk at the CVPR <a href = "https://sites.northwestern.edu/ccd2022/">Computational Cameras and Displays (CCD) workshop</a>! <br>-->
Aug. 2022: I'll be starting a postdoc at MIT this fall through the support of the <a href = "https://engineering.mit.edu/the-mit-postdoctoral-fellowship-program-for-engineering-excellence/">MIT Postdoctoral Fellowship for Engineering Excellence</a>! <br>
<!--Apr. 2022: I'll be giving a lecture on May 6th as part of the <a href="https://visual.ee.ucla.edu/web_series/"> Warren Grundfest Lectures in Computational Imaging</a> series! <br> -->
July 2022: Congrats to my undergraduate mentee Christian on his paper at OSA's Imaging and Applied Optics Congress on <a href = "https://opg.optica.org/abstract.cfm?uri=cosi-2022-CF2C.1">compressive hyperspectral imaging</a>! <br>
Mar. 2022: Paper accepted to CVPR 2022 as an oral! <br>
June 2021:<br>
<p style="margin-left: 40px"> Congrats to my REU mentee <a href="https://www.linkedin.com/in/vi-tran-125230167">Vi Tran</a> for transferring from Orange Coast College to UC Berkeley! <br>
Congrats to my undergraduate mentee <a href = "https://ellinzhao.github.io/index.html">Ellin Zhao</a> on choosing UCLA for her PhD! <br>
Congrats to my undergraduate mentee <a href = "https://www.linkedin.com/in/nicolas-deshler/">Nico Deshler</a> on choosing the University of Arizona Optics for his PhD!</p>
Jan. 2021: Starting a research internship at <a href = "http://vladlen.info/lab/">Intel's Intelligent Systems Lab</a> <br>
Oct. 2020: Selected to participate in <a href="https://eecs.berkeley.edu/rising-stars-2020">EECS Rising Stars 2020 Workshop</a> <br>
July 2020:
<p style="margin-left: 40px"> Selected to participate in the <a href="https://nextprofnexus.engin.umich.edu/">NextProf Nexus 2020 Workshop</a><br>
Congrats to my undergraduate mentees Ellin and Nico on their paper at OSA's Imaging and Applied Optics Congress on <a href="https://www.osapublishing.org/abstract.cfm?uri=COSI-2020-CF2C.6">multi-sensor lensless imaging</a>!</p>
</div>
<!-- </div> -->
<!-- </div>-->
<hr>
<h2>Research Projects</h2>
<div class="row">
<h3>Dancing under the stars: video denoising in starlight</h3>
<video class="float-right" id="v0" width="300px" autoplay loop muted controls>
<source src="resources/starlight.mov" type="video/mp4" />
</video>
<p><b>K. Monakhova</b>, S. Richter, L. Waller, and V. Koltun<br>
<a href="https://kristinamonakhova.com/starlight_denoising/">Project Page</a> /
<a href = "https://arxiv.org/abs/2204.04210">Paper (CVPR Oral)</a> /
<!--<a href = "https://github.com/monakhova/starlight_denoising">Code</a> /-->
<a href = "https://kristinamonakhova.com/starlight_denoising/#dataset">Dataset</a> /
<a href = "https://www.youtube.com/watch?v=eFvQs2j9RMw">Video</a>
</p>
<!-- <img class="float-right" src="resources/ladmm.png" style="width:400px;"> -->
<p>Imaging in low light is extremely challenging due to low photon counts.
Using sensitive CMOS cameras, it is currently possible to take videos at
night under moonlight (0.05-0.3 lx illumination). In this paper, we
demonstrate photorealistic video under starlight (no moon present,
&lt;0.001 lx) for the first time. To enable this, we <b>learn a
physics-based noise model</b> to more accurately represent camera noise at
the lowest light levels. Using this noise model, we train a video
denoiser on a combination of simulated noisy video clips and real
noisy still images. We present a 5-10 fps video dataset with significant
motion, captured at 0.6-0.7 mlx with no active illumination.
Compared with alternative methods, we achieve improved video quality
at the lowest light levels, demonstrating <b>photorealistic video
denoising in starlight (submillilux)</b> for the first time.
</p>
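<p>For readers curious what a physics-inspired noise simulator looks like, here is a
minimal Python sketch covering shot, read, row, and quantization noise. The parameter
values and function names are illustrative placeholders, not the calibrated, learned
noise model from the paper.</p>
<pre><code>
# Illustrative physics-inspired noise simulator (not the paper's learned model).
import numpy as np

def simulate_noisy(clean, gain=100.0, read_std=2.0, row_std=1.0, bit_depth=12):
    """Add shot, read, row, and quantization noise to a clean image in [0, 1]."""
    electrons = clean * gain
    shot = np.random.poisson(electrons).astype(np.float64)      # photon shot noise
    read = np.random.normal(0.0, read_std, clean.shape)         # per-pixel read noise
    rows = np.random.normal(0.0, row_std, (clean.shape[0], 1))  # banded row noise
    noisy = (shot + read + rows) / gain
    levels = 2 ** bit_depth
    return np.clip(np.round(noisy * levels) / levels, 0.0, 1.0) # quantize and clip

noisy_frame = simulate_noisy(np.full((64, 64), 0.02))  # a very dark, flat test frame
</code></pre>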
</div>
<div class="row">
<h3>Deep learning for fast spatially-varying deconvolution</h3>
<img class="float-right" src="resources/spatially_varying.png" style="width:300px;">
<p>K. Yanny*, <b>K. Monakhova</b>*, R. Shuai, and L. Waller<br>
<a href="https://waller-lab.github.io/MultiWienerNet/">Project Page</a> /
<a href = "https://doi.org/10.1364/OPTICA.442438">Paper (Optica)</a>
</p>
<!-- <img class="float-right" src="resources/ladmm.png" style="width:400px;"> -->
<p>All optical systems blur images due to imperfections in the optics or
intentional design to encode additional information. Algorithms can be
used to undo some of this blur or recover 3D content, but most assume
that the blur is the same across the field of view, whereas in practice
the blur can vary considerably across the image. Our approach, called
MultiWienerNet, combines knowledge of the <b>field-varying blur
with deep learning</b> through multiple differentiable Wiener filters
to undo the blur quickly and efficiently. We show a 1,500× speedup
over classic methods as well as improved image quality.
</p>
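<p>As a point of reference, the classical shift-invariant Wiener filter that
MultiWienerNet builds on fits in a few lines. This is a sketch of the standard
textbook operation only; the learned, multi-filter, field-varying version in the
paper is more involved.</p>
<pre><code>
# Classical Wiener deconvolution with a single shift-invariant PSF.
import numpy as np

def wiener_deconvolve(measurement, psf, reg=1e-2):
    """Deconvolve a blurry image, assuming one PSF for the whole field of view."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=measurement.shape)  # PSF spectrum
    Y = np.fft.fft2(measurement)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + reg)  # regularized inverse filter
    return np.real(np.fft.ifft2(X))
</code></pre>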
</div>
<div class="row">
<h3>Untrained networks for compressive lensless photography</h3>
<img class="float-right" src="resources/rolling_shutter_combined_condensed.gif" style="width:300px;">
<p><b>K. Monakhova</b>*, V. Tran*, G. Kuo, L. Waller <br>
<a href="https://waller-lab.github.io/UDN/index.html">Project Page</a> /
<a href = "https://doi.org/10.1364/OE.424075">Paper (Optics Express)</a>
</p>
<!-- <img class="float-right" src="resources/ladmm.png" style="width:400px;"> -->
<p>Deep learning-based reconstruction methods can improve image quality for many inverse problems, but
for high-dimensional imaging (e.g., high-speed video or hyperspectral imaging), obtaining labeled
pairs to train deep networks is often impractical or impossible. In this work, we propose to use
<b>unsupervised learning</b> for compressive lensless photography. Our 'untrained network' is optimized
using only our measurement and physics model to recover a video or hyperspectral volume from a
2D measurement. We demonstrate improved image quality for <b>single-shot compressive video</b> and <b>single-shot
hyperspectral imaging</b> without needing any training data.
</p>
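<p>Conceptually, the untrained-network optimization mirrors the deep image prior:
fit a network's weights so that its output, pushed through the known physics model,
matches the single measurement. A minimal PyTorch-style sketch, with illustrative
names and shapes (net and forward_model are assumed to be supplied):</p>
<pre><code>
# Deep-image-prior-style fit; net and forward_model are assumed differentiable.
import torch

def fit_untrained(net, forward_model, measurement, steps=2000, lr=1e-3):
    z = torch.randn(1, 64, 32, 32)             # fixed random input code
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        estimate = net(z)                       # current video/hyperspectral estimate
        loss = torch.nn.functional.mse_loss(forward_model(estimate), measurement)
        loss.backward()                         # gradients flow through the physics
        opt.step()
    return net(z).detach()
</code></pre>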
</div>
<div class="row">
<h3>Spectral DiffuserCam: lensless snapshot hyperspectral imaging</h3>
<img class="float-right" src="resources/spectralDiffuser.jpg" style="width:300px;">
<p><b>K. Monakhova</b>*, K. Yanny*, N. Aggarwal, L. Waller <br>
<a href="https://waller-lab.github.io/SpectralDiffuserCam/">Project Page</a> /
<a href="https://www.youtube.com/watch?v=ReH0x_W3glM&feature=emb_title">Video</a> /
<a href="https://github.com/Waller-Lab/SpectralDiffuserCam">Code</a> /
<a href = "http://www.osapublishing.org/optica/abstract.cfm?URI=optica-7-10-1298">Paper (Optica)</a>
</p>
<!-- <img class="float-right" src="resources/ladmm.png" style="width:400px;"> -->
<p>In this work, we propose a novel, <b>compact, and inexpensive computational camera for snapshot hyperspectral imaging</b>. Our system consists of a repeated spectral filter array placed directly on the image sensor and a diffuser placed close to the sensor. Each point in the world maps to a unique pseudorandom pattern on the spectral filter array, which encodes multiplexed spatio-spectral information. A sparsity-constrained inverse problem solver then recovers the hyperspectral volume with good spatio-spectral resolution. By using a spectral filter array, our hyperspectral imaging framework is flexible and can be designed with contiguous or non-contiguous spectral filters that can be chosen for a given application. </p>
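<p>The flavor of solver referred to above can be sketched as proximal gradient descent
(ISTA) on an L1-regularized least-squares objective. The forward model A, its adjoint,
and the step and threshold values below are placeholders; the paper's solver and
regularizer are more refined.</p>
<pre><code>
# ISTA sketch for min_x 0.5*||A(x) - b||^2 + tau*||x||_1 (illustrative only).
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, A_adjoint, b, shape, step=0.1, tau=1e-3, iters=200):
    x = np.zeros(shape)
    for _ in range(iters):
        grad = A_adjoint(A(x) - b)                       # data-fidelity gradient
        x = soft_threshold(x - step * grad, step * tau)  # sparsity proximal step
    return x
</code></pre>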
</div>
<div class="row">
<h3>Miniscope3D: optimized single-shot miniature 3D fluorescence microscopy</h3>
<img class="float-right" src="resources/bear.gif" style="width:300px;">
<p>K. Yanny, N. Antipa, W. Liberti, S. Dehaeck, <b>K. Monakhova</b>, F. L. Liu, K. Shen, R. Ng, L. Waller <br>
<a href="https://waller-lab.github.io/Miniscope3D/">Project Page</a> /
<a href="https://github.com/Waller-Lab/Miniscope3D"> Code </a> /
<a href = "https://www.nature.com/articles/s41377-020-00403-7">Paper (Nature LS&A)</a>
</p>
<!-- <img class="float-right" src="resources/ladmm.png" style="width:400px;"> -->
<p>
In this work, we replace the tube lens of a <a href="http://miniscope.org/index.php/Main_Page">Miniscope</a> with an engineered
and optimized diffuser that's printed using a <a href="https://www.nanoscribe.com/en/">Nanoscribe</a> 3D printer. The resulting imager
is inexpensive, tiny (the size of a quarter), and can capture <b>3D fluorescent volumes from a single image</b>, achieving 3 micron lateral resolution
and 10 micron axial resolution at video rates with no moving parts. Check out more of our 3D water bear videos
<a href="https://www.nature.com/articles/s41377-020-00403-7#Sec20">here</a>.
</p>
</div>
<div class="row">
<h3>Physics-based learning for lensless imaging</h3>
<img class="float-right" src="resources/video_dataset_square.gif" style="width:300px;">
<p>
<b>K. Monakhova</b>, J. Yurtsever, G. Kuo, N. Antipa, K. Yanny, L. Waller <br>
<a href="https://waller-lab.github.io/LenslessLearning/index.html">Project Page</a> /
<a href="https://github.com/Waller-Lab/LenslessLearning">Code</a> /
<a href="https://waller-lab.github.io/LenslessLearning/dataset.html">Dataset</a> /
<a href = "https://www.osapublishing.org/oe/abstract.cfm?uri=oe-27-20-28075">Paper (Optics Express)</a>
</p>
<!-- <img class="float-right" src="resources/ladmm.png" style="width:400px;"> -->
<p>Mask-based lensless imagers, like <a href="https://waller-lab.github.io/DiffuserCam/">DiffuserCam</a>, can be small, compact, and capture higher-dimensional
information (3D, temporal), but the reconstruction time is slow and the
image quality is often degraded. In this work, we show that we can use
knowledge of optical system physics along with deep learning to form an unrolled
model-based network to solve the reconstruction problem, thereby using <b>physics
+ deep learning</b> together to speed up and improve image reconstructions.
As compared to traditional methods, our architecture achieves better perceptual
image quality and runs 20× faster, enabling interactive previewing of the scene.
</p>
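<p>To make "unrolled model-based network" concrete, here is a sketch of the general
pattern: a fixed number of physics-based update steps, each followed by a small
learned module, trained end to end. The paper unrolls ADMM with learned
hyperparameters; this gradient-descent variant is a simplified illustration.</p>
<pre><code>
# Generic unrolled reconstruction network (simplified illustration, not the
# paper's exact unrolled-ADMM architecture).
import torch

class UnrolledNet(torch.nn.Module):
    def __init__(self, forward_model, adjoint, n_iters=5):
        super().__init__()
        self.A, self.At, self.n_iters = forward_model, adjoint, n_iters
        self.steps = torch.nn.Parameter(torch.full((n_iters,), 0.1))  # learned step sizes
        self.denoisers = torch.nn.ModuleList(
            torch.nn.Conv2d(1, 1, 3, padding=1) for _ in range(n_iters))

    def forward(self, b):
        x = self.At(b)                                      # adjoint initialization
        for k in range(self.n_iters):
            x = x - self.steps[k] * self.At(self.A(x) - b)  # physics/data step
            x = self.denoisers[k](x)                        # learned denoising step
        return x
</code></pre>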
<!--
Papers:
<p> <b>Kristina Monakhova</b>, Joshua Yurtsever, Grace Kuo, Nick Antipa, Kyrollos
Yanny, and Laura Waller, "Learned reconstructions for practical mask-based
lensless imaging," Opt. Express 27, 28075-28090 (2019)
<a href = "https://www.osapublishing.org/oe/abstract.cfm?uri=oe-27-20-28075"> [pdf] </a>
</p>
<p>
<p> <b>Kristina Monakhova</b>, Nick Antipa, and Laura Waller, “Learning for lensless mask-based imaging,”
in Computational Optical Sensing and Imaging, pp. CTu3A–2, Optical Society of America, 2019 <a href = "https://www.osapublishing.org/abstract.cfm?uri=COSI-2019-CTu3A.2">[pdf] </a> </p>
-->
</div>
<hr>
<div class="row">
<h3>Awards and Recognition</h3>
<ul>
<li>MIT <a href = "https://engineering.mit.edu/the-mit-postdoctoral-fellowship-program-for-engineering-excellence/">Postdoctoral Fellowship for Engineering Excellence</a>, 2022</li>
<li>UC Berkeley EECS <a href = "https://www2.eecs.berkeley.edu/Students/Awards/1/">Demetri Angelakos Memorial Achievement Award</a>, 2021 </li>
<li> <a href = "https://www2.eecs.berkeley.edu/risingstars/2020/participants/monakhova.shtml">Rising Star in EECS</a>, 2020</li>
<li>UC Berkeley EECS Chairs’ Graduate Award, 2020 </li>
<li>NSF Graduate Research Fellowship, 2016</li>
<li>NDSEG Graduate Research Fellowship, 2016 (declined)</li>
<li>Barry M. Goldwater Scholarship, 2015 </li>
<li>University at Buffalo Presidential Scholar, 4 year full ride scholarship</li>
</ul>
</div>
<!--<div class="row"><div class="col-12"><img class="center" src="resources/img3.jpg" style="width:1000px;"></div></div>
-->
<footer>
© Kristina Monakhova <!-- Link not required, but appreciated. --><a class="float-right" href="http://minimalcss.com">Minimal</a>
</footer><!-- footer -->
</div>
</body>
</html>