<!DOCTYPE html>
<!--[if IEMobile 7 ]><html class="no-js iem7"><![endif]-->
<!--[if lt IE 9]><html class="no-js lte-ie8"><![endif]-->
<!--[if (gt IE 8)|(gt IEMobile 7)|!(IEMobile)|!(IE)]><!--><html class="no-js" lang="en"><!--<![endif]-->
<head>
<meta charset="utf-8">
<title>tjake.blog</title>
<meta name="author" content="T Jake Luciani">
<meta name="description" content="Tweet Resurgence in Neural Networks Feb 18th, 2013 If you’ve been paying attention, you’ll notice there has been a lot of news recently …">
<!-- http://t.co/dKP3o1e -->
<meta name="HandheldFriendly" content="True">
<meta name="MobileOptimized" content="320">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="canonical" href="http://tjake.github.com/">
<link href="/favicon.png" rel="icon">
<link href="/stylesheets/screen.css" media="screen, projection" rel="stylesheet" type="text/css">
<script src="/javascripts/modernizr-2.0.js"></script>
<script src="/javascripts/ender.js"></script>
<script src="/javascripts/octopress.js" type="text/javascript"></script>
<link href="/atom.xml" rel="alternate" title="tjake.blog" type="application/atom+xml">
<!--Fonts from Google's Web font directory at http://google.com/webfonts -->
<link href="http://fonts.googleapis.com/css?family=PT+Serif:regular,italic,bold,bolditalic" rel="stylesheet" type="text/css">
<link href="http://fonts.googleapis.com/css?family=PT+Sans:regular,italic,bold,bolditalic" rel="stylesheet" type="text/css">
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-38874048-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</head>
<body >
<header role="banner" id="sidebar">
<!-- Logo -->
<aside id="logo" class="clearfix">
<div class="clearfix">
<a href="/">tjake.blog</a>
</div>
</aside>
<ul id="menu">
<li class="title">
<h1 id="title"><a href="/">tjake.blog</a></h1>
</li>
<li class="subtitle">
<h2 id="subtitle">code, commentary, and commas</h2>
</li>
<li class="link">
<a href="http://twitter.com/tjake/">twitter</a>
</li>
<li class="link">
<a href="http://github.com/tjake/">github</a>
</li>
<li class="link">
<a href="/about/">about</a>
</li>
<li class="link">
<a href="/blog/archives/">archives</a>
</li>
<li class="link rss">
<a href="/atom.xml">rss feed</a>
</li>
</ul>
<!-- Octopress Love -->
<aside id="octopress_linkback">
<a href="http://octopress.org/">
<span class="unicode_square">
<span class="unicode_circle">
</span>
</span>
<span class="octopress">Powered by Octopress</span>
</a>
</aside>
</header>
<section id="main">
<article class="post">
<div class="sharing-box">
<a href="http://twitter.com/share" class="twitter-share-button" data-url="http://tjake.github.com/index.html" data-via="tjake" data-counturl="http://tjake.github.com/index.html" data-size="large">Tweet</a>
</div>
<header>
<h2 class="entry-title">
<a href="/blog/2013/02/18/resurgence-in-artificial-intelligence/">Resurgence in Neural Networks</a>
</h2>
<p class="meta">
<time datetime="2013-02-18T22:21:00-05:00" pubdate data-updated="true">Feb 18<span>th</span>, 2013</time>
</p>
</header>
<p>If you’ve been paying attention, you’ll notice there has been <a href="http://www.wired.com/wiredscience/2012/06/google-x-neural-network/">a lot</a> of <a href="http://arstechnica.com/science/2013/02/us-government-to-back-massive-effort-to-understand-the-brain/">news</a> recently about <a href="http://www.wired.com/wiredenterprise/2013/02/android-neural-network/">neural networks</a> and <a href="http://www.popsci.com/science/article/2013-02/how-simulate-human-brain-one-neuron-time-13-billion">the brain</a>.
A few years ago the idea of virtual brains seemed far from reality, especially to me, but there has since been a breakthrough that has turned neural networks from nifty little toys into actually useful things that keep
getting better at tasks computers are traditionally very bad at. In this post I’ll cover some background on neural networks and my experience with them, then go over the recent discoveries I’ve learned about. At the end of the post I’ll share a sweet little github project I wrote that implements this new neural network approach.</p>
<h3>Background</h3>
<p>When I was in <a href="http://lehigh.edu">college</a> I studied <a href="http://en.wikipedia.org/wiki/Cognitive_science">Cognitive Science</a>, an interdisciplinary study of the mind and brain that draws on:</p>
<ul>
<li>Philosophy</li>
<li>Psychology</li>
<li>Linguistics</li>
<li>Neuroscience</li>
<li>Artificial Intelligence</li>
</ul>
<p>I ended up focusing on A.I. and eventually majored in Computer Science because of it.
The prospect of using computers to simulate how our brains work was just fascinating to me.<br/>
And the possibility of applying those techniques would be nothing short of revolutionary. Now, when I say Artificial Intelligence I’m really only referring to Neural Networks. There are many other kinds of A.I. out there (e.g. <a href="http://en.wikipedia.org/wiki/Expert_system">Expert Systems</a>, <a href="http://en.wikipedia.org/wiki/Statistical_classification">Classifiers</a>, and the like) but none of those store information the way our brain does: in the connections between billions of neurons.</p>
<p>The problem with using computers to simulate a brain is that they work nothing alike. Computers have Central Processing Units; brains are fully distributed. You teach a computer what to do by literally coding every action and response. Neural networks, however, are a more generalized approach to computing: you teach the computer how to learn… or at least that is the goal.</p>
<p>So how can you make a computer more like a brain? Well, a classic <a href="http://en.wikipedia.org/wiki/Feedforward_neural_network">feed forward neural network</a> is depicted below. In a neural network you have very simple primitives: nodes and weighted connections. Nodes connect to each other, and the strength of each connection determines how information is passed between them. By combining many nodes and connections you can represent a variety of information in a way that is quite stable and can cope with failures and unknown inputs. A common academic application for this kind of neural net is <a href="http://yann.lecun.com/exdb/mnist/">recognizing handwritten digits</a>.</p>
<p><img src="/images/Classic%20Neural%20Network.png" alt="Classic Neural Network" /></p>
<p>The input layer is where your input signal goes. For example, pixel intensities of a picture, where each pixel maps to an input node. Each input node is connected to nodes in the middle layer, and each of those middle-layer nodes is connected to the output layer. In the case of the handwriting example, each output node would map to a digit 0-9. Data feeds through the network in only one direction (hence the name feed forward). Now, the way our brain and a neural network store information isn’t in the nodes themselves but in the connections between nodes. By using a training algorithm you can teach the network to adjust its weights so that the middle layer naturally extracts features from the underlying data. For example, a ‘7’ has a sharp corner in the upper right.</p>
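<p>To make that concrete, here is a minimal sketch (in Java, using plain arrays and made-up shapes, not code from any particular library) of a single forward step through one weighted layer with a sigmoid activation:</p>
<pre><code>// One layer of a feed-forward pass: weights[i][j] is the connection
// strength from input node i to node j of the next layer.
static double[] feedForward(double[] input, double[][] weights, double[] bias) {
    double[] next = new double[weights[0].length];
    for (int j = 0; j &lt; next.length; j++) {
        double sum = bias[j];
        for (int i = 0; i &lt; input.length; i++) {
            sum += input[i] * weights[i][j];
        }
        next[j] = 1.0 / (1.0 + Math.exp(-sum)); // sigmoid squashing
    }
    return next;
}</code></pre>
<p>Chaining two of these calls (input to middle layer, middle layer to output) gives you the whole classic network above.</p>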
<p>The training algorithm taught to me in college years ago was called <a href="http://en.wikipedia.org/wiki/Backpropagation">Backpropagation</a>. It is tedious and not very accurate, but it was state-of-the-art A.I. at the time. It works like this:</p>
<p>You initialize the weights between nodes with some Gaussian noise. You process a sample input through the network and look at the activation of the output nodes. Based on the difference between the network’s output and the desired output, you adjust the contributing weights in the previous layer’s connections to dampen the wrong output and promote the right output. Then you move to the middle layer and adjust the connection weights from the input layer to the middle layer based on their contributions to the final output. This is a non-linear process, so it is slow and requires A LOT of training cases. You can easily end up with weights stuck in <a href="http://en.wikipedia.org/wiki/Maxima_and_minima">local minima</a>.</p>
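<p>As a rough sketch of the idea (just the textbook output-layer update for a sigmoid network, not the exact recipe from any course), the adjustment to the last layer of weights looks something like this; the same error terms are then propagated back to adjust the earlier layer in the same way:</p>
<pre><code>// One backpropagation update for the hidden-to-output weights.
// 'hidden' and 'output' are activations from the forward pass,
// 'target' is the desired output, 'rate' is a small learning rate (e.g. 0.1).
static void updateOutputWeights(double[] hidden, double[] output, double[] target,
                                double[][] hiddenToOutput, double rate) {
    for (int k = 0; k &lt; output.length; k++) {
        // error term: (desired - actual) scaled by the sigmoid derivative
        double delta = (target[k] - output[k]) * output[k] * (1.0 - output[k]);
        for (int j = 0; j &lt; hidden.length; j++) {
            hiddenToOutput[j][k] += rate * delta * hidden[j];
        }
    }
}</code></pre>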
<p>My first job out of college was building a neural network model, trained with backpropagation, to predict the yield of fiber optic lasers that met particular specs. The trouble was that a linear regression model did a better job predicting the yield than my fancy neural network. The same thing happened to a lot of people who tried to use neural networks in the real world. Neural networks were replaced by things like <a href="http://en.wikipedia.org/wiki/Principle_Component_Analysis">PCA</a> and <a href="http://en.wikipedia.org/wiki/Baysian_classifier">Bayesian classifiers</a>. These proved to be powerful tools that have created a whole <a href="http://en.wikipedia.org/wiki/Data_scientist">new industry</a>, but they have little to do with how the brain works.</p>
<p>Over time I began to forget about neural networks and ended up getting interested in <a href="http://cassandra.apache.org">distributed databases</a>.
I was reminded, however, of how the brain works after having kids. Watching them grow and learn, you really
appreciate how little we come into this world with. Everything from using their hands to grasp objects to
recognizing your face is learned, mostly unsupervised, over months and months: a billion little neurons all working together to
build a model of the world around them and how to interact with it.</p>
<h3>A Breakthrough: Deep Learning with Restricted Boltzmann Machines</h3>
<p>A few months ago I took an <a href="https://www.coursera.org/course/neuralnets">online course with coursera on neural networks</a> and learned that while I had given up and moved on from neural networks, the original pioneers of neural nets hadn’t. In 2002 <a href="http://www.cs.toronto.edu/~hinton/">Geoffrey Hinton</a> (et al.) discovered a technique called <a href="http://www.cs.toronto.edu/~hinton/absps/nccd.pdf">Contrastive Divergence</a> that could quickly model inputs in a <a href="http://en.wikipedia.org/wiki/Restricted_Boltzmann_machine">Restricted Boltzmann Machine</a>. A Restricted Boltzmann Machine (RBM for short) is a two-layer network of interconnected nodes (a <a href="http://en.wikipedia.org/wiki/Bipartite_graph">bipartite graph</a>). Unlike the feed forward network above, these connections are two-way. This is important because, rather than using backpropagation and explicit output cases, you try to model the input by bouncing it through the network. See below:</p>
<p><img src="/images/Boltzmann%20Machine.png" alt="RBM" /></p>
<p>You pass the inputs up through the weights to the second layer, then back down again, the goal being to re-generate the input. You then compare how close the input and the re-created input are and adjust the weights accordingly to improve the hidden layer’s representation of the input. Unlike backpropagation, this process is not very computationally complex, so you can quickly train this two-layer network to do a good job of re-creating inputs of a given class of things.</p>
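<p>Here is a minimal sketch of one such update (a simplified CD-1 step, ignoring biases and the stochastic sampling a real implementation would do). The key point is that the same weight matrix is used for both the up and the down pass:</p>
<pre><code>// A simplified contrastive-divergence (CD-1) update for an RBM.
static void contrastiveDivergenceStep(double[] visible, double[][] weights, double rate) {
    double[] hidden      = propagate(visible, weights, false);    // up
    double[] recreated   = propagate(hidden, weights, true);      // down
    double[] hiddenAgain = propagate(recreated, weights, false);  // up again

    for (int i = 0; i &lt; visible.length; i++) {
        for (int j = 0; j &lt; hidden.length; j++) {
            // positive phase (data) minus negative phase (reconstruction)
            weights[i][j] += rate * (visible[i] * hidden[j] - recreated[i] * hiddenAgain[j]);
        }
    }
}

// The feed-forward step from before, generalized so it can also run
// downward (hidden to visible) through the transposed weights.
static double[] propagate(double[] in, double[][] weights, boolean downward) {
    int outSize = downward ? weights.length : weights[0].length;
    double[] out = new double[outSize];
    for (int o = 0; o &lt; outSize; o++) {
        double sum = 0.0;
        for (int i = 0; i &lt; in.length; i++) {
            sum += in[i] * (downward ? weights[o][i] : weights[i][o]);
        }
        out[o] = 1.0 / (1.0 + Math.exp(-sum)); // sigmoid
    }
    return out;
}</code></pre>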
<p>Remember the example of handwritten digits? The screenshot below shows what an RBM trained on digits looks like. The ‘3’ in the upper left is a never-before-seen test input, and the 3’s below it are the RBM’s recreations over nine up/down cycles. The boxes to the right are renderings of the connections in the RBM. Each one represents all the input weights to a single hidden node; green represents positive weights, red represents negative weights. You can see the weights have changed to extract features from the class of inputs. Some are specifically focused on a single digit class while others are an amalgamation of different digit features.</p>
<p><img src="/images/MinstRBM.png" alt="MNISTRBM" /></p>
<p>Now you might be asking yourself: what’s the value here if this network is only good at re-creating inputs? Well, first, it can recognize new inputs of the same class it’s never seen before and redraw them using the features it discovered from training (as above). Even better, it can flag inputs that in no way resemble the data it was trained on. For example, if I show it a face when it was trained on digits, you can easily look at the reconstruction error and tell it’s not a digit.</p>
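<p>That check is just the reconstruction error. A hedged sketch, reusing the <code>propagate</code> helper from above (the threshold is hypothetical and would be tuned on a validation set):</p>
<pre><code>// Does this input look like the data the RBM was trained on?
// Inputs from an unfamiliar class reconstruct poorly, so their error is high.
static boolean looksFamiliar(double[] input, double[][] weights, double threshold) {
    double[] hidden = propagate(input, weights, false);
    double[] recreated = propagate(hidden, weights, true);
    double error = 0.0;
    for (int i = 0; i &lt; input.length; i++) {
        double diff = input[i] - recreated[i];
        error += diff * diff;
    }
    return (error / input.length) &lt; threshold;
}</code></pre>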
<p>More interestingly, Hinton discovered that you could stack these Restricted Boltzmann Machines one on top of the other and create what’s called a <a href="http://en.wikipedia.org/wiki/Deep_learning">Deep Belief Network</a> (DBN for short). By taking the output of one RBM and using it as the input of another RBM, you can learn features of features and integrate different types of inputs together. This mirrors one theory of how the brain works:</p>
<blockquote><p> The neocortex of the brain [is] a hierarchy of filters where each layer captures some of the information in the operating environment, and then passes the remainder, as well as modified base signal, to other layers further up the hierarchy. The result of this process is a self-organizing stack of transducers, well-tuned to their operating environment. <a href="http://en.wikipedia.org/wiki/Deep_learning">1</a></p></blockquote>
<p>In Hinton’s online class, his example deep network builds on the handwriting example and connects four RBMs together.
The first RBM is as described above. The second RBM takes the output from the first and uses it as its input. The third RBM takes the output from the second and uses it as its input, along with 10 extra inputs that represent the digit class. Finally, the output of the third RBM is used as the input of the fourth. (The diagram below is from his coursera class.)</p>
<p><img src="/images/MinstDBN.png" alt="MNISTDBN" /></p>
<p>Training in its simplest form just builds on contrastive divergence. You train the bottom RBM (‘A’ below) with the raw data first till it reaches equilibrium. Then you start training ‘B’ by passing a training case through ‘A’ and using the output as an input for ‘B’. You do this over and over till ‘B’ is trained, then do the same with ‘C’. The difference with ‘C’ is that you concatenate the feature output of ‘B’ with the label input, so the last RBM learns to model the two together (sketched in code after the diagram below).</p>
<p><img src="/images/DBN.png" alt="TrainingDBN" /></p>
<p>Once this DBN is trained you can take a test case, pass it up through the layers, and see which of the label nodes are activated, so passing in an image of a ‘3’ will highlight the label ‘3’. With that we have built the functional equivalent of the classic neural network I described at the start, but it’s a whole lot simpler, faster, and more accurate.</p>
<p>Deep belief networks like this have subsequently been used to:</p>
<ul>
<li>Win the <a href="http://www.stat.osu.edu/~dmsl/GrandPrize2009_BPC_BellKor.pdf">Netflix prize</a></li>
<li>Power <a href="http://psych.stanford.edu/~jlm/pdfs/Hinton12IEEE_SignalProcessingMagazine.pdf">Google Voice</a></li>
<li>Improve <a href="http://blog.kaggle.com/2012/11/01/deep-learning-how-i-did-it-merck-1st-place-interview/">Predictive drug interactions</a></li>
</ul>
<p>Many variations on RBMs have been developed, and it’s fun to dig into how slight changes to this fundamental technique can make it good at learning things other than just static inputs.</p>
<h3>Generative Machines</h3>
<p>By far the most interesting thing I’ve learned about Deep Belief Networks is their generative properties, meaning you can look inside the ‘mind’ of a DBN and see what it’s imagining. Since deep belief networks are two-way, like restricted Boltzmann machines, you can make the hidden layers generate valid visible inputs. Continuing with our handwritten digit example, you can activate the ‘3’ label input, run the network in reverse, and out the other end pops a picture of a ‘3’ based on the features of the inner layers. This is the equivalent of our ability to visualize things from words: go ahead, imagine a ‘3’, now rotate it.</p>
<p><img src="/images/GenerativeDBN.png" alt="Generative DBN" /></p>
<h3>Summary and Sample Project</h3>
<p>Neural Networks have come a long way since I was in college, and what I like is that they now fit a model of the mind I can recognize. The
brain is truly an example of the whole being greater than the sum of its parts, and it seems like the stackable and generative properties of Restricted Boltzmann Machines capture some of that as well.</p>
<p>If you want to really learn about this stuff, <em>please take the <a href="https://www.coursera.org/course/neuralnets">coursera course I mentioned</a></em>. It’s taught by Geoffrey Hinton, the foremost expert on the subject, who created much of what I’ve covered here.</p>
<p>I’ve implemented a simple RBM and DBN to create the handwriting example you’ve seen above. If you want to take a look and play with it, the code is on my github. It’s Java, and you’ll see it’s really not that complex. A real implementation usually runs on a GPU to speed up training, but this one runs quickly enough and has no external dependencies.</p>
<p><a href="http://github.com/tjake/rbm-dbn-MNIST">Link to the github project</a></p>
<p>Next time I’ll dig a little deeper into the different RBM types out there and some of the more advanced training techniques (I’m still learning them).</p>
</article>
<nav role="navigation" id="pagination">
</nav>
</section>
<script type="text/javascript">
var disqus_shortname = 'tjakeblog';
var disqus_script = 'count.js';
(function () {
var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
dsq.src = 'http://' + disqus_shortname + '.disqus.com/' + disqus_script;
(document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
}());
</script>
<script type="text/javascript">
(function(){
var twitterWidgets = document.createElement('script');
twitterWidgets.type = 'text/javascript';
twitterWidgets.async = true;
twitterWidgets.src = 'http://platform.twitter.com/widgets.js';
document.getElementsByTagName('head')[0].appendChild(twitterWidgets);
})();
</script>
</body>
</html>