<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<category term="MachineLearning" label="r/MachineLearning"/>
<updated>2018-12-27T05:18:17+00:00</updated>
<icon>https://www.redditstatic.com/icon.png/</icon>
<id>/r/MachineLearning.rss</id>
<link rel="self" href="https://www.reddit.com/r/MachineLearning.rss" type="application/atom+xml" />
<link rel="alternate" href="https://www.reddit.com/r/MachineLearning" type="text/html" />
<logo>https://b.thumbs.redditmedia.com/18a2I44a4l7fNrTWHDoJuWVy79_ptU7Y-a2sqWt4YKQ.png</logo>
<title>Machine Learning</title>
<entry>
<author>
<name>/u/omniscientclown</name>
<uri>https://www.reddit.com/user/omniscientclown</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>Enjoyed this thread last year, so I am making one for this year. </p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/omniscientclown"> /u/omniscientclown </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a6cbzm/d_what_is_the_best_ml_paper_you_read_in_2018_and/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a6cbzm/d_what_is_the_best_ml_paper_you_read_in_2018_and/">[comments]</a></span></content>
<id>t3_a6cbzm</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a6cbzm/d_what_is_the_best_ml_paper_you_read_in_2018_and/" />
<updated>2018-12-15T04:34:58+00:00</updated>
<title>[D] What is the best ML paper you read in 2018 and why?</title>
</entry>
<entry>
<author>
<name>/u/ML_WAYR_bot</name>
<uri>https://www.reddit.com/user/ML_WAYR_bot</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>This is a place to share machine learning research papers, journals, and articles that you&#39;re reading this week. If it relates to what you&#39;re researching, by all means elaborate and give us your insight, otherwise it could just be an interesting paper you&#39;ve read.</p> <p>Please try to provide some insight from your understanding and please don&#39;t post things which are present in wiki.</p> <p>Preferably you should link the arxiv page (not the PDF, you can easily access the PDF from the summary page but not the other way around) or any other pertinent links.</p> <p>Previous weeks :</p> <table><thead> <tr> <th>1-10</th> <th>11-20</th> <th>21-30</th> <th>31-40</th> <th>41-50</th> <th>51-60</th> </tr> </thead><tbody> <tr> <td><a href="https://www.reddit.com/4qyjiq">Week 1</a></td> <td><a href="https://www.reddit.com/57xw56">Week 11</a></td> <td><a href="https://www.reddit.com/60ildf">Week 21</a></td> <td><a href="https://www.reddit.com/6s0k1u">Week 31</a></td> <td><a href="https://www.reddit.com/7tn2ax">Week 41</a></td> <td><a href="https://reddit.com/9s9el5">Week 51</a></td> </tr> <tr> <td><a href="https://www.reddit.com/4s2xqm">Week 2</a></td> <td><a href="https://www.reddit.com/5acb1t">Week 12</a></td> <td><a href="https://www.reddit.com/64jwde">Week 22</a></td> <td><a href="https://www.reddit.com/72ab5y">Week 32</a></td> <td><a href="https://www.reddit.com/7wvjfk">Week 42</a></td> <td><a href="https://reddit.com/a4opot">Week 52</a></td> </tr> <tr> <td><a href="https://www.reddit.com/4t7mqm">Week 3</a></td> <td><a href="https://www.reddit.com/5cwfb6">Week 13</a></td> <td><a href="https://www.reddit.com/674331">Week 23</a></td> <td><a href="https://www.reddit.com/75405d">Week 33</a></td> <td><a href="https://www.reddit.com/807ex4">Week 43</a></td> <td></td> </tr> <tr> <td><a href="https://www.reddit.com/4ub2kw">Week 4</a></td> <td><a href="https://www.reddit.com/5fc5mh">Week 14</a></td> <td><a 
href="https://www.reddit.com/68hhhb">Week 24</a></td> <td><a href="https://www.reddit.com/782js9">Week 34</a></td> <td><a href="https://reddit.com/8aluhs">Week 44</a></td> <td></td> </tr> <tr> <td><a href="https://www.reddit.com/4xomf7">Week 5</a></td> <td><a href="https://www.reddit.com/5hy4ur">Week 15</a></td> <td><a href="https://www.reddit.com/69teiz">Week 25</a></td> <td><a href="https://www.reddit.com/7b0av0">Week 35</a></td> <td><a href="https://reddit.com/8tnnez">Week 45</a></td> <td></td> </tr> <tr> <td><a href="https://www.reddit.com/4zcyvk">Week 6</a></td> <td><a href="https://www.reddit.com/5kd6vd">Week 16</a></td> <td><a href="https://www.reddit.com/6d7nb1">Week 26</a></td> <td><a href="https://www.reddit.com/7e3fx6">Week 36</a></td> <td><a href="https://reddit.com/8x48oj">Week 46</a></td> <td></td> </tr> <tr> <td><a href="https://www.reddit.com/52t6mo">Week 7</a></td> <td><a href="https://www.reddit.com/5ob7dx">Week 17</a></td> <td><a href="https://www.reddit.com/6gngwc">Week 27</a></td> <td><a href="https://www.reddit.com/7hcc2c">Week 37</a></td> <td><a href="https://reddit.com/910jmh">Week 47</a></td> <td></td> </tr> <tr> <td><a href="https://www.reddit.com/53heol">Week 8</a></td> <td><a href="https://www.reddit.com/5r14yd">Week 18</a></td> <td><a href="https://www.reddit.com/6jgdva">Week 28</a></td> <td><a href="https://www.reddit.com/7kgcqr">Week 38</a></td> <td><a href="https://reddit.com/94up0g">Week 48</a></td> <td></td> </tr> <tr> <td><a href="https://www.reddit.com/54kvsu">Week 9</a></td> <td><a href="https://www.reddit.com/5tt9cz">Week 19</a></td> <td><a href="https://www.reddit.com/6m9l1v">Week 29</a></td> <td><a href="https://www.reddit.com/7nayri">Week 39</a></td> <td><a href="https://reddit.com/98n2rt">Week 49</a></td> <td></td> </tr> <tr> <td><a href="https://www.reddit.com/56s2oa">Week 10</a></td> <td><a href="https://www.reddit.com/5wh2wb">Week 20</a></td> <td><a href="https://www.reddit.com/6p3ha7">Week 30</a></td> <td><a 
href="https://www.reddit.com/7qel9p">Week 40</a></td> <td><a href="https://reddit.com/9cf158">Week 50</a></td> <td></td> </tr> </tbody></table> <p>Most upvoted papers two weeks ago:</p> <p><a href="/u/blackbearx3">/u/blackbearx3</a>: <a href="http://proceedings.mlr.press/v5/titsias09a.html">Variational Learning of Inducing Variables in Sparse Gaussian Processes</a></p> <p><a href="/u/wassname">/u/wassname</a>: <a href="https://papers.nips.cc/paper/8200-non-delusional-q-learning-and-value-iteration.pdf">Non-Delusional Q-Learning and Value-Iteration</a></p> <p>Besides that, there are no rules, have fun.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/ML_WAYR_bot"> /u/ML_WAYR_bot </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a8yaro/d_machine_learning_wayr_what_are_you_reading_week/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a8yaro/d_machine_learning_wayr_what_are_you_reading_week/">[comments]</a></span></content>
<id>t3_a8yaro</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a8yaro/d_machine_learning_wayr_what_are_you_reading_week/" />
<updated>2018-12-23T21:00:06+00:00</updated>
<title>[D] Machine Learning - WAYR (What Are You Reading) - Week 53</title>
</entry>
<entry>
<author>
<name>/u/tldrtldreverything</name>
<uri>https://www.reddit.com/user/tldrtldreverything</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>Hey, I published a summary of a new paper from FAIR called SlowFast. The paper details a cool technique that makes it possible to understand what&#39;s happening in videos by using two neural networks - a &#39;slow&#39; CNN that recognizes the fine content of the image and a &#39;fast&#39; CNN that recognizes swift changes in the content. Full summary here: <a href="https://lyrn.ai/2018/12/21/slowfast-dual-mode-cnn-for-video-understanding/">https://lyrn.ai/2018/12/21/slowfast-dual-mode-cnn-for-video-understanding/</a></p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/tldrtldreverything"> /u/tldrtldreverything </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9qkns/r_slowfast_dualmode_cnn_for_video_understanding/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9qkns/r_slowfast_dualmode_cnn_for_video_understanding/">[comments]</a></span></content>
<id>t3_a9qkns</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9qkns/r_slowfast_dualmode_cnn_for_video_understanding/" />
<updated>2018-12-26T17:44:07+00:00</updated>
<title>[R] SlowFast - Dual-mode CNN for Video Understanding</title>
</entry>
<entry>
<author>
<name>/u/Richard_wth</name>
<uri>https://www.reddit.com/user/Richard_wth</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p><strong>TL;DR</strong>: Rearranging the terms in maximum mean discrepancy yields a much better loss function for the discriminator of generative adversarial nets</p> <p><strong>Keywords</strong>:</p> <p>Spectral normalization, repulsive loss, bounded RBF kernel</p> <p><strong>Abstract</strong>:</p> <p>Generative adversarial nets (GANs) are widely used to learn the data sampling process and their performance may heavily depend on the loss functions, given a limited computational budget. This study revisits MMD-GAN that uses the maximum mean discrepancy (MMD) as the loss function for GAN and makes two contributions. First, we argue that the existing MMD loss function may discourage the learning of fine details in data as it attempts to contract the discriminator outputs of real data. To address this issue, we propose a repulsive loss function to actively learn the difference among the real data by simply rearranging the terms in MMD. Second, inspired by the hinge loss, we propose a bounded Gaussian kernel to stabilize the training of MMD-GAN with the repulsive loss function. The proposed methods are applied to the unsupervised image generation tasks on CIFAR-10, STL-10, CelebA, and LSUN bedroom datasets. Results show that the repulsive loss function significantly improves over the MMD loss at no additional computational cost and outperforms other representative loss functions. 
The proposed methods achieve an FID score of 16.21 on the CIFAR-10 dataset using a single DCGAN network and spectral normalization.</p> <p><strong>Links</strong></p> <p>OpenReview link: <a href="https://openreview.net/forum?id=HygjqjR9Km">https://openreview.net/forum?id=HygjqjR9Km</a></p> <p>arXiv link: <a href="https://arxiv.org/abs/1812.09916">https://arxiv.org/abs/1812.09916</a></p> <p>Code link: <a href="https://github.com/richardwth/MMD-GAN">https://github.com/richardwth/MMD-GAN</a></p> <p>---------------------------------------------</p> <p>Good day, mate!</p> <p>I am the first author of this GAN paper that proposes a new loss function, a slightly modified spectral normalization method, and a bounded RBF kernel. The methods work very well with the DCGAN architecture. </p> <p>Please take a few minutes to read the paper. Any comments are welcome and I am happy to answer any questions during this extremely hot Christmas (yes, I am in Australia and we stand upside down). </p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/Richard_wth"> /u/Richard_wth </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9vh36/r_iclr_poster_improving_mmdgan_training_with/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9vh36/r_iclr_poster_improving_mmdgan_training_with/">[comments]</a></span></content>
<id>t3_a9vh36</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9vh36/r_iclr_poster_improving_mmdgan_training_with/" />
<updated>2018-12-27T03:29:34+00:00</updated>
<title>[R] [ICLR poster] Improving MMD-GAN Training with Repulsive Loss Function</title>
</entry>
<entry>
<author>
<name>/u/ak96</name>
<uri>https://www.reddit.com/user/ak96</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>I am trying to build a model which can extract information from a pdf/html file and use it for answering questions which are asked later. So, for example, I have a page on a football club which has information about its captain, manager, owner etc. and I want to extract it and store it for later retrieval upon request. You guys have any suggestions on how to do it? Or any papers that you can point me to? </p> <p>(Similar to Question Answering model using RNNs with bi-directional LSTMs/GRUs)</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/ak96"> /u/ak96 </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9su90/p_a_knowledge_extractor_from_text/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9su90/p_a_knowledge_extractor_from_text/">[comments]</a></span></content>
<id>t3_a9su90</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9su90/p_a_knowledge_extractor_from_text/" />
<updated>2018-12-26T22:00:54+00:00</updated>
<title>[P] A knowledge extractor from text</title>
</entry>
<entry>
<author>
<name>/u/baylearn</name>
<uri>https://www.reddit.com/user/baylearn</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>A browser based <a href="https://mishig25.github.io/neuroevolution-robots/">environment</a> where humanoids are trained to walk through Neuroevolution of Augmenting Topologies (<a href="http://nn.cs.utexas.edu/downloads/papers/stanley.gecco02_1.pdf">NEAT</a>).</p> <p>Web Demo: <a href="https://mishig25.github.io/neuroevolution-robots/">https://mishig25.github.io/neuroevolution-robots/</a></p> <p>Tools used:</p> <ul> <li><a href="https://js.tensorflow.org">TensorFlow.js</a></li> <li><a href="https://github.com/wagenaartje/neataptic">Neataptic.js</a></li> <li><a href="http://piqnt.com/planck.js/">Planck.js</a> (a Box2D rewrite)</li> </ul> <p><a href="https://mishig25.github.io/neuroevolution-robots/">https://mishig25.github.io/neuroevolution-robots/</a></p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/baylearn"> /u/baylearn </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9nn5m/p_neuroevolutionbots/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9nn5m/p_neuroevolutionbots/">[comments]</a></span></content>
<id>t3_a9nn5m</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9nn5m/p_neuroevolutionbots/" />
<updated>2018-12-26T10:48:36+00:00</updated>
<title>[P] Neuroevolution-Bots</title>
</entry>
<entry>
<author>
<name>/u/Anogio94</name>
<uri>https://www.reddit.com/user/Anogio94</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>Hi! Since the GDPR laws in Europe came into effect, it is now possible to download your complete Facebook data as JSON files (messages, comments, connection IPs, friends...) Many of us have been using Facebook for years, so it&#39;s usually a large dataset with lots of opportunities for some cool data viz (or even NLP?) :) I am currently working on a script that would allow anyone to perform analytics on their own data and build nice graphs. I&#39;ve got most of the boilerplate down and I would really appreciate some input on what kind of analytics you guys would find interesting!</p> <p>A map showing connection locations? An activity graph? A friends network? Sentiment analysis? I know there is quite a lot of fun stuff to do and I would like to get my priorities right.</p> <p>It&#39;s far from ready for now, but I&#39;ll make sure to share the repo in the future if anyone is interested :)</p> <p>PS: I was sure that this would already exist, but I did some looking around and could not find anything. If you know about a similar project, do let me know! </p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/Anogio94"> /u/Anogio94 </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9oblj/d_what_would_you_like_to_know_about_your_own/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9oblj/d_what_would_you_like_to_know_about_your_own/">[comments]</a></span></content>
<id>t3_a9oblj</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9oblj/d_what_would_you_like_to_know_about_your_own/" />
<updated>2018-12-26T12:50:14+00:00</updated>
<title>[D] What would you like to know about your own Facebook data? (Data viz, NLP)</title>
</entry>
<entry>
<author>
<name>/u/downtownslim</name>
<uri>https://www.reddit.com/user/downtownslim</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><table> <tr><td> <a href="https://www.reddit.com/r/MachineLearning/comments/a9qlnx/riclr_oral_learning_unsupervised_learning_rules/"> <img src="https://a.thumbs.redditmedia.com/O5xSqPxXQan0eq4XIj_39G9lsyFLtyg3D_81hYNOIr4.jpg" alt="[R][ICLR Oral] Learning Unsupervised Learning Rules" title="[R][ICLR Oral] Learning Unsupervised Learning Rules" /> </a> </td><td> &#32; submitted by &#32; <a href="https://www.reddit.com/user/downtownslim"> /u/downtownslim </a> <br/> <span><a href="https://openreview.net/forum?id=HkNDsiC9KQ">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9qlnx/riclr_oral_learning_unsupervised_learning_rules/">[comments]</a></span> </td></tr></table></content>
<id>t3_a9qlnx</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9qlnx/riclr_oral_learning_unsupervised_learning_rules/" />
<updated>2018-12-26T17:47:22+00:00</updated>
<title>[R][ICLR Oral] Learning Unsupervised Learning Rules</title>
</entry>
<entry>
<author>
<name>/u/BatmantoshReturns</name>
<uri>https://www.reddit.com/user/BatmantoshReturns</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>Last year&#39;s thread </p> <p><a href="https://www.reddit.com/r/MachineLearning/comments/7nrzhn/d_results_from_best_of_machine_learning_2017/">https://www.reddit.com/r/MachineLearning/comments/7nrzhn/d_results_from_best_of_machine_learning_2017/</a></p> <p>Vote for:</p> <p>Best paper</p> <p>Best innovation </p> <p>Best application</p> <p>Best video</p> <p>Best youtube channel</p> <p>Best blog post</p> <p>Best blog overall</p> <p>Best course (released in 2018)</p> <p>Best book (released in 2018)</p> <p>Best reddit comment/post</p> <p>Best Cross Validated - Stack Exchange posts</p> <p>Best project</p> <p>Best new tool</p> <p>Anything else?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/BatmantoshReturns"> /u/BatmantoshReturns </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9u7t9/d_vote_for_best_of_machine_learning_for_2018/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9u7t9/d_vote_for_best_of_machine_learning_for_2018/">[comments]</a></span></content>
<id>t3_a9u7t9</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9u7t9/d_vote_for_best_of_machine_learning_for_2018/" />
<updated>2018-12-27T00:49:03+00:00</updated>
<title>[D] Vote for Best of Machine Learning for 2018 -Categories include papers, talks, videos, applications, projects, reddit comments, Cross Validated - Stack Exchange posts, innovations, tools, and projects.</title>
</entry>
<entry>
<author>
<name>/u/neuralPr0cess0r</name>
<uri>https://www.reddit.com/user/neuralPr0cess0r</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>Non-Adversarial Image Synthesis with Generative Latent Nearest Neighbors</p> <p><a href="https://arxiv.org/abs/1812.08985v1">https://arxiv.org/abs/1812.08985v1</a></p> <p>&#x200B;</p> <p>A very interesting paper by Facebook. In this paper they describe a method to create an image generator that has many of the same qualities as a &#39;standard&#39; GAN generator but without the hassle of actually training a GAN. This means training is more stable, has less chance of mode collapse, is faster to train, etc.</p> <p>They achieve this by first projecting their dataset into a low-dimensional space. This low-dimensional space is then used as input to a generator. The generator maps each projected low-dimensional point to its corresponding image. The image quality is then judged by a VGG perceptual metric (how similar the generated image is to the original in VGG feature space). Finally, they learn a layer that maps noise vectors into good spots within the previously projected low-D space.</p> <p>&#x200B;</p> <p>&#x200B;</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/neuralPr0cess0r"> /u/neuralPr0cess0r </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9hy55/p_nonadversarial_image_synthesis_with_generative/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9hy55/p_nonadversarial_image_synthesis_with_generative/">[comments]</a></span></content>
<id>t3_a9hy55</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9hy55/p_nonadversarial_image_synthesis_with_generative/" />
<updated>2018-12-25T20:19:13+00:00</updated>
<title>[P] Non-Adversarial Image Synthesis with Generative Latent Nearest Neighbors <- Make pictures, no GAN!</title>
</entry>
<entry>
<author>
<name>/u/whria78</name>
<uri>https://www.reddit.com/user/whria78</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>Hello, I am Han Seung Seog from I Dermatology clinic.</p> <p>&#x200B;</p> <p>The automatic screening system consists of a 1) <strong>blob-detector</strong>, 2) <strong>fine-image-selector</strong>, and 3) <strong>malignancy-classifier</strong>.</p> <p>&#x200B;</p> <ol> <li>The blob-detector was trained with py-faster-RCNN (model = VGG-16) and approximately 25,000 nodular disorder images (<a href="https://www.ncbi.nlm.nih.gov/pubmed/29428356">https://www.ncbi.nlm.nih.gov/pubmed/29428356</a>) + 100,000 ImageNet images.</li> <li>The fine-image-selector was trained with NVCaffe (model = an ensemble of the results of SE-ResNext-50 and SE-ResNet-50) and over 500,000 images (clinical images - 400,000; ImageNet images - 100,000). It checks both the focus and composition of the image.</li> <li>The malignancy-classifier (Model Dermatology; version 20181225; the old (20180623) version of Model Dermatology is currently available at <a href="http://modelderm.com">http://modelderm.com</a>) was trained with NVCaffe (model = an ensemble of the results of SENet and SE-ResNext-50) and over 230,000 images (220,000 - clinical diagnosis; over 10,000 - generated by RCNN technology, with the diagnoses tagged by dermatologists from image findings).</li> </ol> <p>&#x200B;</p> <p><strong>Screenshot:</strong> <a href="https://i.redd.it/0d1ittotwf621.jpg"><strong>https://i.redd.it/0d1ittotwf621.jpg</strong></a></p> <p>&#x200B;</p> <p>The screenshot was created by testing our algorithm with 8 high-resolution images which were downloaded via a search engine (image.google.com).</p> <ol> <li>Before</li> <li>After blob-detector</li> <li>After fine-image-selector</li> <li>After malignancy-classifier (malignancy scoring and Top-1 diagnosis are shown)</li> </ol> <p>&#x200B;</p> <p>Although artificial intelligence has shown dermatologist-level performance in several studies, the most difficult problem that must be solved in skin cancer screening is 
the false-positive problem. We are working to reduce false positives by generating a lot of skin blobs and training the algorithm with those images.</p> <p>&#x200B;</p> <p>&#x200B;</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/whria78"> /u/whria78 </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9fmzq/p_automatic_skin_cancer_screening_with/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9fmzq/p_automatic_skin_cancer_screening_with/">[comments]</a></span></content>
<id>t3_a9fmzq</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9fmzq/p_automatic_skin_cancer_screening_with/" />
<updated>2018-12-25T15:29:20+00:00</updated>
<title>[P] Automatic Skin Cancer Screening with Region-based CNN</title>
</entry>
<entry>
<author>
<name>/u/RKuurstra</name>
<uri>https://www.reddit.com/user/RKuurstra</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>Hello everyone,</p> <p>&#x200B;</p> <p>I&#39;m a long-time lurker of this subreddit. I have no background in this field ( no work experience or even relevant degrees ), but I really love it! I&#39;m a videogame programmer, and in my free time I always &quot;try stuff&quot; in this field to hone my skills.</p> <p>So I have several projects, and recently I opened a few of them to the public!</p> <p>&#x200B;</p> <p>In <a href="https://gitlab.com/kuurstra.renato/NeuralNetworksStatic/wikis/Genetic-algorithm-and-Neural-Networks-in-real-time">this project</a> I was trying to apply a genetic algorithm approach to train neural networks in real time. The link is to a small wiki page on the site of the library project. The main goal for me was trying to learn how GA works ( or doesn&#39;t!:p) with neural networks and learn how to exploit it ( or not!:p ) in a video game environment.</p> <p>&#x200B;</p> <p>The <a href="https://gitlab.com/kuurstra.renato/NNetworkTest/wikis/Neural-networks-using-Unreal-Engine-4.15">other project</a> uses this library in two small &quot;gyms&quot;. One of them is the classic &quot;drive the car on the circuit&quot; problem, which is the most successful one. The framework is Unreal Engine 4.15, and I made use of both C++ and Blueprints.</p> <p>&#x200B;</p> <p>Besides hoping that some of you big guys find this at least a bit interesting, I have a few questions:</p> <p>&#x200B;</p> <p>1) The only paper I found doing something similar ( very similar :D ) is from <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.728.5120&amp;rep=rep1&amp;type=pdf#page=182">Stanley</a>. It took me a LOT of effort to find. The only way seems to be reading all the papers and memorizing &quot;what&#39;s going on&quot;. But as an amateur doing this in my free time, this is extremely difficult! 
Any hints?</p> <p>2) I&#39;m currently unemployed; do you think these projects are too amateurish to be added to my resume, or could they make some sort of good impression on a possible employer ( most likely video game )?</p> <p>3) What should I focus on more in my future projects? Should the code be more readable? More optimized? I try to make the code as readable as possible, but since the scope of these projects is very small, I also tend to go for &quot;shortcuts&quot; as much as possible. I see this in a lot of research projects too; should I keep it like this?</p> <p>4) What would you add to the wikis?</p> <p>&#x200B;</p> <p>Thanks a lot if you have read all of this ^_^</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/RKuurstra"> /u/RKuurstra </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9exmg/p_real_time_genetic_algorithm_applied_to_neural/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9exmg/p_real_time_genetic_algorithm_applied_to_neural/">[comments]</a></span></content>
<id>t3_a9exmg</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9exmg/p_real_time_genetic_algorithm_applied_to_neural/" />
<updated>2018-12-25T13:40:15+00:00</updated>
<title>[P] Real Time Genetic Algorithm applied to Neural Networks</title>
</entry>
<entry>
<author>
<name>/u/alexmlamb</name>
<uri>https://www.reddit.com/user/alexmlamb</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>Hello, </p> <p>Sometimes, when I&#39;ve done multi-class classification, I&#39;ve used the binary cross entropy on all of the labels, but after the softmax. </p> <p>Putting aside the question of whether this is ideal - it seems to yield a different loss from doing categorical cross entropy after the softmax. </p> <p>I did some math on both of the losses and they seem different in a way that doesn&#39;t yield an obvious mathematical correspondence. Both push up the value on the &quot;true&quot; class, but they have different normalization terms (pushing down pre-softmax on non-selected classes). </p> <p>I ran this in Pytorch: </p> <pre><code>&gt;&gt;&gt; p = torch.Tensor([0.9, 0.05, 0.02, 0.03])
&gt;&gt;&gt; nll(p.unsqueeze(0),torch.LongTensor([0]))
tensor(-0.9000)
&gt;&gt;&gt; nll(p.unsqueeze(0),torch.LongTensor([1]))
tensor(-0.0500)
&gt;&gt;&gt; nll(p.unsqueeze(0),torch.LongTensor([2]))
tensor(-0.0200)
&gt;&gt;&gt; nll(p.unsqueeze(0),torch.LongTensor([3]))
tensor(-0.0300)
&gt;&gt;&gt; y = torch.Tensor([0.0, 1.0, 0.0, 0.0])
&gt;&gt;&gt; bce(p,torch.Tensor([1.0, 0.0, 0.0, 0.0]))
tensor(0.0518)
&gt;&gt;&gt; bce(p,torch.Tensor([0.0, 1.0, 0.0, 0.0]))
tensor(1.3372)
&gt;&gt;&gt; bce(p,torch.Tensor([0.0, 0.0, 1.0, 0.0]))
tensor(1.5741)
&gt;&gt;&gt; bce(p,torch.Tensor([0.0, 0.0, 0.0, 1.0]))
tensor(1.4702)
</code></pre> <p>Both give the same rank-order, but the values are rather different. </p> <p>Do you know if there are any derivations or math comparing these two losses? Or maybe a better intuition on why one is better than the other? 
</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/alexmlamb"> /u/alexmlamb </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9k4y4/d_using_binary_cross_entropy_loss_after_softmax/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9k4y4/d_using_binary_cross_entropy_loss_after_softmax/">[comments]</a></span></content>
<id>t3_a9k4y4</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9k4y4/d_using_binary_cross_entropy_loss_after_softmax/" />
<updated>2018-12-26T01:16:22+00:00</updated>
<title>[D] Using Binary Cross Entropy Loss after Softmax for Multi-class Classification</title>
</entry>
<entry>
<author>
<name>/u/xmxman</name>
<uri>https://www.reddit.com/user/xmxman</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>I&#39;m working on a couple of projects which require client-side computer vision on the web; essentially, object detection from a webcam stream. I found a couple of libraries, <a href="https://gammacv.com">https://gammacv.com</a> and <a href="https://inspirit.github.io/jsfeat">https://inspirit.github.io/jsfeat</a>, which can help to build things similar to OpenCV; TensorFlow.js is also now available for the browser, but it may be heavy to use because of model size and performance. This is very interesting to me - does anyone have experience using CV on the web in production?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/xmxman"> /u/xmxman </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9eim0/d_computer_vision_in_browser/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9eim0/d_computer_vision_in_browser/">[comments]</a></span></content>
<id>t3_a9eim0</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9eim0/d_computer_vision_in_browser/" />
<updated>2018-12-25T12:23:34+00:00</updated>
<title>[D] Computer Vision in browser</title>
</entry>
<entry>
<author>
<name>/u/finallyifoundvalidUN</name>
<uri>https://www.reddit.com/user/finallyifoundvalidUN</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>In the following paper, the author says &quot;Linear behavior in high-dimensional spaces is sufficient to cause adversarial examples.&quot;</p> <p><a href="https://arxiv.org/abs/1412.6572">https://arxiv.org/abs/1412.6572</a></p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/finallyifoundvalidUN"> /u/finallyifoundvalidUN </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9l52l/d_do_deep_learning_models_behave_linear_in_high/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9l52l/d_do_deep_learning_models_behave_linear_in_high/">[comments]</a></span></content>
<id>t3_a9l52l</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9l52l/d_do_deep_learning_models_behave_linear_in_high/" />
<updated>2018-12-26T03:43:04+00:00</updated>
<title>[D] Do deep learning models behave linearly in high-dimensional space?</title>
</entry>
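Editor's note on the entry above: Goodfellow et al.'s "linear behavior" claim can be demonstrated on a purely linear model. A fast-gradient-sign perturbation x' = x + eps * sign(w) shifts the logit by eps * ||w||_1, which grows with dimension even when eps is imperceptibly small per coordinate. A toy sketch (the weights and dimensions are invented for illustration, not taken from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logit(w, x, b=0.0):
    # Linear model: w . x + b
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Toy linear classifier in 1000 dimensions with small weights
dim, eps = 1000, 0.01
w = [0.05 if i % 2 == 0 else -0.05 for i in range(dim)]
x = [0.0] * dim  # logit 0 -> predicted probability exactly 0.5

# FGSM-style perturbation toward the positive class: step with sign(w)
x_adv = [xi + eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(sigmoid(logit(w, x)))      # 0.5 before the perturbation
print(sigmoid(logit(w, x_adv)))  # ~0.62: logit shift = eps * sum|w| = 0.01 * 50 = 0.5
```

Each coordinate moved by only 0.01, yet the 1000-dimensional dot product accumulates a logit shift of 0.5; this is exactly the high-dimensional linear effect the paper argues deep nets inherit.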
<entry>
<author>
<name>/u/ljvmiranda</name>
<uri>https://www.reddit.com/user/ljvmiranda</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>Hi all, </p> <p>For the holidays, we made a holiday card generator to give to our workmates and friends, thought I might share this here too! </p> <p><a href="https://stories.thinkingmachin.es/ai-art-holiday-cards/">https://stories.thinkingmachin.es/ai-art-holiday-cards/</a></p> <p>Given an input string, it looks for the nearest Quick, Draw! class and draws it using Sketch-RNN, then we applied style transfer to the output image.</p> <p>We also open-sourced our pipeline here:</p> <p><a href="https://github.com/thinkingmachines/christmAIs">https://github.com/thinkingmachines/christmAIs</a></p> <p>Feel free to check that out!</p> <p>Happy Holidays!</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/ljvmiranda"> /u/ljvmiranda </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9ddu8/p_holiday_card_generator_using_sketchrnn_and/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9ddu8/p_holiday_card_generator_using_sketchrnn_and/">[comments]</a></span></content>
<id>t3_a9ddu8</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9ddu8/p_holiday_card_generator_using_sketchrnn_and/" />
<updated>2018-12-25T08:31:48+00:00</updated>
<title>[P] Holiday card generator using Sketch-RNN and Style Transfer</title>
</entry>
<entry>
<author>
<name>/u/riot-nerf-red-buff</name>
<uri>https://www.reddit.com/user/riot-nerf-red-buff</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>P.s.: I did not expect so many amazing answers. Thanks, and Happy Christmas to everyone!</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/riot-nerf-red-buff"> /u/riot-nerf-red-buff </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9ajc6/d_how_often_do_you_implement_mls_algorithms_by/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9ajc6/d_how_often_do_you_implement_mls_algorithms_by/">[comments]</a></span></content>
<id>t3_a9ajc6</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9ajc6/d_how_often_do_you_implement_mls_algorithms_by/" />
<updated>2018-12-25T00:53:15+00:00</updated>
<title>[D] How often do you implement ML's algorithms "by hand" opposed to using a programming library?</title>
</entry>
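Editor's note on the entry above: for readers wondering what "by hand" means in practice, the classic exercise is batch gradient descent for one-variable linear regression in plain Python, versus a single library `fit` call. A minimal sketch (toy data and learning rate are invented for illustration):

```python
# Fit y = w*x + b "by hand" with batch gradient descent on mean squared error
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]  # generated by y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    # Gradients of MSE = mean((w*x + b - y)^2) with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 3), round(b, 3))  # converges to roughly 2.0 and 1.0
```

The common answer in the thread's spirit: implement once this way to understand the mechanics, then use a library in production for numerical robustness and speed.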
<entry>
<author>
<name>/u/hanyuqn</name>
<uri>https://www.reddit.com/user/hanyuqn</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>I am trying to find an open source project that can achieve similar results to Deep Video Portraits (<a href="https://www.youtube.com/watch?v=qc5P2bvfl44">https://www.youtube.com/watch?v=qc5P2bvfl44</a>) where facial expressions and body movements are transferred to a target actor. A few things I&#39;ve looked at:</p> <ul> <li>The AVATAR model in deepfacelab is able to transfer facial expressions (see <a href="https://www.youtube.com/watch?v=3M0E4QnWMqA">https://www.youtube.com/watch?v=3M0E4QnWMqA</a>) but it doesn&#39;t seem like the result could be put back into the original video so the body is still visible. There is this comment <a href="https://github.com/iperov/DeepFaceLab/is...-448825013">https://github.com/iperov/DeepFaceLab/is...-448825013</a> saying it could be done but I very much doubt it would work.</li> <li>Recycle-GAN (<a href="https://www.youtube.com/watch?v=EU4BvhtEuG0">https://www.youtube.com/watch?v=EU4BvhtEuG0</a>) in their example has major artefacts</li> <li>Face2Face (<a href="https://www.youtube.com/watch?v=ohmajJTcpNk">https://www.youtube.com/watch?v=ohmajJTcpNk</a>) is implemented in a third-party repo here <a href="https://github.com/datitran/face2face-demo">https://github.com/datitran/face2face-demo</a> but the example (<a href="https://github.com/datitran/face2face-de...xample.gif">https://github.com/datitran/face2face-de...xample.gif</a>) is of very poor quality</li> </ul> <p>Any suggestions welcome.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/hanyuqn"> /u/hanyuqn </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9fgz3/d_replicating_deep_video_portraits/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9fgz3/d_replicating_deep_video_portraits/">[comments]</a></span></content>
<id>t3_a9fgz3</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9fgz3/d_replicating_deep_video_portraits/" />
<updated>2018-12-25T15:05:36+00:00</updated>
<title>[D] Replicating deep video portraits</title>
</entry>
<entry>
<author>
<name>/u/madhavgoyal98</name>
<uri>https://www.reddit.com/user/madhavgoyal98</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>I&#39;m looking for some innovative project ideas that can be done using machine learning/deep learning.</p> <p>The project has to be completed in 6 months.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/madhavgoyal98"> /u/madhavgoyal98 </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9htnd/p_need_ideas_for_undergrad_final_year_project/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9htnd/p_need_ideas_for_undergrad_final_year_project/">[comments]</a></span></content>
<id>t3_a9htnd</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9htnd/p_need_ideas_for_undergrad_final_year_project/" />
<updated>2018-12-25T20:03:15+00:00</updated>
<title>[P] Need ideas for undergrad final year project</title>
</entry>
<entry>
<author>
<name>/u/b0red1337</name>
<uri>https://www.reddit.com/user/b0red1337</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>Hi all,</p> <p>This is my attempt to solve the Tetris environment using MCTS and TD learning, inspired by AlphaGo Zero.</p> <p>After 400 games of self-play, the agent surpassed vanilla MCTS (random policy rollouts), which averages 7 lines cleared per game.</p> <p>After 800 games of self-play, the agent was able to clear more than 1300 lines in a single game, which to the best of my knowledge is the best result without using any heuristics. (I couldn&#39;t find many papers on the topic, please let me know if I&#39;m wrong.)</p> <p>Here is a video showing the agent in action:</p> <p><a href="https://www.youtube.com/watch?v=EALo2GfZuYU">https://www.youtube.com/watch?v=EALo2GfZuYU</a></p> <p>Here is the git repo in case you are interested:</p> <p><a href="https://github.com/hrpan/tetris_mcts">https://github.com/hrpan/tetris_mcts</a></p> <p>&#x200B;</p> <p>Let me know what you think!</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/b0red1337"> /u/b0red1337 </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a97rl3/p_learning_to_play_tetris_with_mcts_and_td/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a97rl3/p_learning_to_play_tetris_with_mcts_and_td/">[comments]</a></span></content>
<id>t3_a97rl3</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a97rl3/p_learning_to_play_tetris_with_mcts_and_td/" />
<updated>2018-12-24T19:09:14+00:00</updated>
<title>[P] Learning to play Tetris with MCTS and TD learning</title>
</entry>
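Editor's note on the entry above: the core of any MCTS agent like this one is the selection rule that trades off exploitation against exploration (AlphaGo Zero uses the PUCT variant; plain MCTS commonly uses UCB1). A minimal sketch of UCB1 selection, with the node layout and constants invented for illustration rather than taken from the linked repo:

```python
import math

def ucb1(child_value, child_visits, parent_visits, c=1.414):
    # UCB1 score: average value (exploitation) + confidence bonus (exploration)
    if child_visits == 0:
        return float("inf")  # always expand unvisited children first
    return child_value / child_visits + c * math.sqrt(
        math.log(parent_visits) / child_visits
    )

def select(children, parent_visits):
    # children: list of (total_value, visit_count) tuples; pick the best score
    scores = [ucb1(v, n, parent_visits) for v, n in children]
    return scores.index(max(scores))

# A well-visited strong child vs. a barely-visited weaker one:
# the exploration bonus sends the search to the rarely tried child.
children = [(9.0, 10), (0.5, 1)]
print(select(children, 11))  # 1
```

AlphaGo Zero-style agents replace the log-based bonus with c * P * sqrt(N) / (1 + n), where P is the policy network's prior for the move, which is what lets learned policy guide the tree search.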
<entry>
<author>
<name>/u/TheShadow29</name>
<uri>https://www.reddit.com/user/TheShadow29</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>I am looking for papers in the field of forgery detection. For example, if a part of an image is replaced by a GAN-generated input, how can that part be detected? Or if the whole image is generated by a GAN, how can it be classified as synthetic?</p> <p>The papers I have found are: </p> <ol> <li><p>BusterNet for copy-move forgery detection: <a href="http://openaccess.thecvf.com/content_ECCV_2018/papers/Rex_Yue_Wu_BusterNet_Detecting_Copy-Move_ECCV_2018_paper.pdf">http://openaccess.thecvf.com/content_ECCV_2018/papers/Rex_Yue_Wu_BusterNet_Detecting_Copy-Move_ECCV_2018_paper.pdf</a></p></li> <li><p>Face Tamper Detection: <a href="https://arxiv.org/pdf/1803.11276.pdf">https://arxiv.org/pdf/1803.11276.pdf</a></p></li> </ol> <p>Wanted to know if anyone is aware of any other papers.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/TheShadow29"> /u/TheShadow29 </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9aes4/d_defenses_against_gan_generated_images_for_image/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9aes4/d_defenses_against_gan_generated_images_for_image/">[comments]</a></span></content>
<id>t3_a9aes4</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9aes4/d_defenses_against_gan_generated_images_for_image/" />
<updated>2018-12-25T00:35:11+00:00</updated>
<title>[D] Defenses against GAN generated images for Image Forensics and forgery detection</title>
</entry>
<entry>
<author>
<name>/u/OldManufacturer</name>
<uri>https://www.reddit.com/user/OldManufacturer</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>My question is this: is it possible to generate some metrics that would effectively explain why ML algorithm A (SVM for example) would perform better than B for a given feature space? I know that they each have their strengths and weaknesses. But what if we don’t know the exact nature of our data set? Is there some way to look at an unknown feature space and narrow down our list of competitive ML algorithms without actually testing each one (and thus dealing with a computationally expensive experiment)?</p> <p>I would guess that variance in each feature would be an important piece of information, but could it be used to assume that one algorithm would perform better than the rest?</p> <p>Edit: My title is misleading and should read “Predicting” not “Deciphering”</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/OldManufacturer"> /u/OldManufacturer </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9bqly/r_deciphering_the_performance_of_different_ml/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9bqly/r_deciphering_the_performance_of_different_ml/">[comments]</a></span></content>
<id>t3_a9bqly</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9bqly/r_deciphering_the_performance_of_different_ml/" />
<updated>2018-12-25T03:54:14+00:00</updated>
<title>[R] Deciphering the performance of different ML algorithms on a given feature space</title>
</entry>
<entry>
<author>
<name>/u/dcn20002</name>
<uri>https://www.reddit.com/user/dcn20002</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>I am currently doing a project writing a learning algorithm to predict user behavior based on several factors. Most of them are text, which I run through bag-of-words to get frequencies and recode into 1 or 0. The rest are continuous variables (age, etc.). </p> <p>&#x200B;</p> <p>I want to figure out which regression model is the right one to use. I am not an expert in statistics and would need to brush up on this topic. Do any of you have advice on this?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/dcn20002"> /u/dcn20002 </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9c4n4/p_running_regression_on_both_binary_and/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a9c4n4/p_running_regression_on_both_binary_and/">[comments]</a></span></content>
<id>t3_a9c4n4</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a9c4n4/p_running_regression_on_both_binary_and/" />
<updated>2018-12-25T04:54:22+00:00</updated>
<title>[P] Running regression on both binary and continuous variables?</title>
</entry>
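Editor's note on the entry above: mixing 0/1 bag-of-words indicators with continuous columns is routine; the usual preprocessing step is to standardize the continuous features so every column lives on a comparable scale before one regression model consumes them all. A minimal sketch of that step (the feature names are invented for illustration):

```python
# Standardize continuous columns to zero mean and unit variance so they sit
# on a comparable scale with 0/1 bag-of-words indicators.
def standardize(column):
    mean = sum(column) / len(column)
    var = sum((v - mean) ** 2 for v in column) / len(column)
    std = var ** 0.5 or 1.0  # guard: a constant column maps to all zeros
    return [(v - mean) / std for v in column]

ages = [18.0, 25.0, 40.0, 61.0]   # continuous feature
clicked_promo = [1, 0, 1, 1]      # bag-of-words style 0/1 feature

# Each row is now a mixed feature vector ready for one regression model
rows = list(zip(standardize(ages), clicked_promo))
print(rows[0])
```

After this step, the model choice depends mostly on the target: logistic regression for a binary behavior label, linear regression for a continuous one, with regularization advisable given how wide bag-of-words matrices get.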
<entry>
<author>
<name>/u/maroonedscientist</name>
<uri>https://www.reddit.com/user/maroonedscientist</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>As the title says, so many papers I read now seem to be demonstrating a result, with relatively little discussion about the network design decisions. I&#39;m interested to find papers more focused on the theory of network design, network operation, training methods, dataset requirements, provability, network pruning, etc... </p> <p>Any advice on specific papers or sources of papers that would meet these needs?</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/maroonedscientist"> /u/maroonedscientist </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a8xjh0/d_im_tired_of_reading_resultsoriented_papers_what/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a8xjh0/d_im_tired_of_reading_resultsoriented_papers_what/">[comments]</a></span></content>
<id>t3_a8xjh0</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a8xjh0/d_im_tired_of_reading_resultsoriented_papers_what/" />
<updated>2018-12-23T19:31:32+00:00</updated>
<title>[D] I'm tired of reading results-oriented papers. What are good papers or sources of papers more focused on the emerging theory of machine learning?</title>
</entry>
<entry>
<author>
<name>/u/UltraMarathonMan</name>
<uri>https://www.reddit.com/user/UltraMarathonMan</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>This is a conversation with Juergen Schmidhuber, co-creator of long short-term memory networks (LSTMs) which are used in billions of devices today for speech recognition, translation, and much more. Over 30 years, he has proposed a lot of interesting, out-of-the-box ideas in artificial intelligence including a formal theory of creativity.</p> <p>Podcast version: <a href="https://lexfridman.com/juergen-schmidhuber">https://lexfridman.com/juergen-schmidhuber</a></p> <p>YouTube video: <a href="https://www.youtube.com/watch?v=3FIo6evmweo">https://www.youtube.com/watch?v=3FIo6evmweo</a></p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/UltraMarathonMan"> /u/UltraMarathonMan </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a8whi9/d_conversation_with_juergen_schmidhuber_on_godel/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a8whi9/d_conversation_with_juergen_schmidhuber_on_godel/">[comments]</a></span></content>
<id>t3_a8whi9</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a8whi9/d_conversation_with_juergen_schmidhuber_on_godel/" />
<updated>2018-12-23T17:29:18+00:00</updated>
<title>[D] Conversation with Juergen Schmidhuber on Godel Machines, Meta-Learning, and LSTMs</title>
</entry>
<entry>
<author>
<name>/u/Kohomology</name>
<uri>https://www.reddit.com/user/Kohomology</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><!-- SC_OFF --><div class="md"><p>Subtitle: What causes the &quot;speaking into a fan&quot; warble in Text-To-Speech models? </p> <p>I&#39;m training a TTS model (A custom modification of <a href="https://github.com/Kyubyong/dc_tts">https://github.com/Kyubyong/dc_tts</a>) on a medium-quality custom dataset (speech from TV news), and a professionally-recorded dataset. </p> <p>The results from training on the TV news dataset have the distinctive &quot;speaking into a fan&quot; warble that you can hear in samples from many open-source models.</p> <p>Examples of the warbling I&#39;m talking about: <a href="https://soundcloud.com/kyubyong-park/sets/tacotron_lj_200k">https://soundcloud.com/kyubyong-park/sets/tacotron_lj_200k</a></p> <p>and to a lesser extent: <a href="https://keithito.github.io/audio-samples/">https://keithito.github.io/audio-samples/</a></p> <p>I can&#39;t seem to get rid of the warbling when training on the TV news data, but the warbling is unnoticeable when I train on the professionally-recorded data. So I&#39;m leaning towards assuming that this is a problem with the training data that seems to be present in many open datasets as well (the two samples posted above were trained on the LJ speech dataset). But just listening to the &quot;bad&quot; and &quot;good&quot; datasets by ear, I can&#39;t tell what the problem is.</p> <p>Hence my question:</p> <p>What subtle problems can there be in a text-to-speech dataset? 
I am particularly interested in the cause of the warbling mentioned above, but would be glad to learn about unrelated problems as well.</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/Kohomology"> /u/Kohomology </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a90u3t/d_what_makes_a_good_texttospeech_dataset/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a90u3t/d_what_makes_a_good_texttospeech_dataset/">[comments]</a></span></content>
<id>t3_a90u3t</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a90u3t/d_what_makes_a_good_texttospeech_dataset/" />
<updated>2018-12-24T02:21:36+00:00</updated>
<title>[D] What makes a good text-to-speech dataset?</title>
</entry>
<entry>
<author>
<name>/u/victorianoi</name>
<uri>https://www.reddit.com/user/victorianoi</uri>
</author>
<category term="MachineLearning" label="r/MachineLearning"/>
<content type="html"><table> <tr><td> <a href="https://www.reddit.com/r/MachineLearning/comments/a8v8ya/project_p_analyzing_rmachinelearning_2018_posts/"> <img src="https://b.thumbs.redditmedia.com/b75HruH5OObtt67ElaABaD5CeR4m5lv5WzUnIj5I37w.jpg" alt="[Project] [P] Analyzing r/MachineLearning 2018 posts with Graphext unsupervised NLP algorithms" title="[Project] [P] Analyzing r/MachineLearning 2018 posts with Graphext unsupervised NLP algorithms" /> </a> </td><td> <!-- SC_OFF --><div class="md"><p>&#x200B;</p> <p><a href="https://i.redd.it/94jirwc3j1621.jpg">2509 posts from r/MachineLearning in 2018 clustered with Graphext</a></p> <p>We at <strong>Graphext</strong> ( <a href="https://twitter.com/graphext">@graphext</a> ) use word2vec + dimensionality reduction + network algorithms to cluster all type of data, from text to images to numerical and categorical vectors to spot unsupervised patterns, among many other things in data science. I scraped all the posts in <a href="https://www.reddit.com/r/MachineLearning">r/MachineLearning</a> from <a href="https://twitter.com/slashml">@slashml</a> uploaded to Graphext and found all these different narratives in the community.</p> <p>Each node is a post, we connect similar posts talking about similar things, then we form a network, calculate clusters with Louvain algorithm, and find what terms characterize them more. 
The size of the nodes is the number of retweets they got.</p> <p>Another day I would like to compare the evolution of each topic over the years to see what things are becoming more trending lately :)</p> </div><!-- SC_ON --> &#32; submitted by &#32; <a href="https://www.reddit.com/user/victorianoi"> /u/victorianoi </a> <br/> <span><a href="https://www.reddit.com/r/MachineLearning/comments/a8v8ya/project_p_analyzing_rmachinelearning_2018_posts/">[link]</a></span> &#32; <span><a href="https://www.reddit.com/r/MachineLearning/comments/a8v8ya/project_p_analyzing_rmachinelearning_2018_posts/">[comments]</a></span> </td></tr></table></content>
<id>t3_a8v8ya</id>
<link href="https://www.reddit.com/r/MachineLearning/comments/a8v8ya/project_p_analyzing_rmachinelearning_2018_posts/" />
<updated>2018-12-23T14:58:27+00:00</updated>
<title>[Project] [P] Analyzing r/MachineLearning 2018 posts with Graphext unsupervised NLP algorithms</title>
</entry>
</feed>