<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>Abhinav Vishnu, AMD Research</title>
<link rel="stylesheet" type="text/css" href="style.css" media="screen" />
<style type="text/css">
<!--
.style3 {font-size: medium}
.style9 {font-size: 10px}
.style13 {font-size: small}
-->
</style>
</head>
<body>
<div id="header">
<h1><a href="http://abhinavvishnu.github.io">Abhinav Vishnu, AMD Research</a></h1>
</div>
<div id="content">
<div id="right">
<table width="850" border="0">
<tr>
<th width="120" scope="col"><img src="abhinav_vishnu.jpeg" alt=""
width="120" height="160" /> </th>
<td width="600" align="left" scope="col"><h2><span style="color: rgb(153,
0, 0);">Abhinav Vishnu</span></h2>
<h3><a href="http://abhinavvishnu.github.io/abhinav_vishnu_cv.pdf">(CV)</a>
<a href="http://dblp.uni-trier.de/pers/hd/v/Vishnu:Abhinav">(DBLP)</a>
<a href="https://scholar.google.com/citations?user=PgLExogAAAAJ&amp;hl=en">(Google Scholar)</a></h3>
<!-- <h2> ( <img src="resource/name.gif" alt="" width="70" height="24"> -->
<!-- ) </h2> -->
<!-- <span class="style13" style="color: rgb(153,0,0);">(<img src="resource/name.gif" alt="" width="52" height="20">)</span><br> -->
<span class="style13"> Principal Member of Technical Staff,<br>
<a href="http://www.amd.com">AMD Research</a><br />
</span><br>
</td>
</tr>
</table>
<p>
I am a Principal Member of Technical Staff at AMD Research.
<br>
My research interests are in designing scalable, fault-tolerant, and
energy-efficient Machine Learning and Data Mining (MLDM) algorithms. Examples
include Deep Learning algorithms (with Keras, TensorFlow, and Caffe),
Support Vector Machines (SVM), Frequent Pattern Mining (FP-Growth),
K-Nearest Neighbors (k-NN), and k-means, implemented using MPI and PGAS
models such as Global Arrays. This MLDM research is integrated into the <a
href="https://github.com/matex-org/matex/wiki">Machine Learning Toolkit for Extreme Scale
(MaTEx)</a>. I am also interested in applications of Machine Learning to
fault modeling, performance modeling, and domain sciences.
<br>
<br>
Previously, I was involved in designing scalable programming models and communication
subsystems. A by-product of our research on PGAS programming models (<a
href="https://github.com/GlobalArrays/ga/wiki">Global Arrays</a>) is the
Communication Runtime for Extreme Scale (<a
href="http://hpc.pnl.gov/comex">ComEx</a>), which is released
with Global Arrays.
During my PhD, I was heavily involved in designing MPI runtimes
for InfiniBand and other interconnects. That research is integrated into <a
href="http://mvapich.cse.ohio-state.edu">MVAPICH</a> (300K downloads in the last
decade!).
</p>
<h2><span style="color: rgb(153, 0, 0);">Available Positions</span></h2><p>
<li>I am looking for students and post-doctorate RAs with passion for Deep Learning and large-scale computing. Please contact me for more details at abhinav DOT vishnu AT amd DOT com</li>
</p>
<h2><span style="color: rgb(153, 0, 0);">Research Interests</span></h2>
<ol>
<li>Extreme-Scale Machine Learning and Data Mining (MLDM) Algorithms</li>
<li>Scalable, Fault-Tolerant, and Energy-Efficient Runtime Systems</li>
<li>Applications of Machine Learning, such as Performance, Fault, and Energy Modeling</li>
</ol>
<h2><span style="color: rgb(153, 0, 0);">Recent Professional Activities</span></h2>
<ol>
<li><strong>Journal Editorships:</strong>
ParCo'16, ParCo'15 (Special Issue on Energy Efficient Supercomputing), ParCo'15 (Special Issue on Programming Models and Systems Software), ParCo'13 (Special Issue on Programming Models and Systems Software), JoSC'13 (Special Issue on Systems Software)</li>
<li><strong>Program Committees:</strong>
ICPP'17, IPDPS'17, ESPM2'16, COM-HPC'16, CCGrid'16, IPDPS'16, HiPC'16, FTXS'15, HiPC'15, CCGrid'14, IPDPS'14, HiPC'14, Cluster'15, Cluster'12, Cluster'10, ICPP'12, NCP'12, PASA'13, CASS'13, CASS'12</li>
<li><strong>Organizing Committees:</strong>
P2S2'17, P2S2'16, P2S2'15 (Program co-chair), ParLearning'14 (Program co-chair), P2S2'14 (Program co-chair),
P2S2'13 (Program co-chair), P2S2'12 (Program co-chair), E2SC'15 (Publicity chair), E2SC'14 (Proceedings chair)</li>
<li><strong>Panelists:</strong> DOE Machine Learning Workshop'2015, DOE SBIR'11</li>
</ol>
<h2><span style="color: rgb(153, 0, 0);">Select Recent Publications</span><span class="style3">
</h2>
<blockquote>
<p>
<strong>[EuroMPI/USA'17]</strong> "What does fault tolerant Deep Learning need from MPI?", V. Amatya, A. Vishnu, C. Siegel and J. Daily.
<br>
<strong>[ICS'17]</strong> "ScalaFSM: Enabling Scalability-Sensitive Speculative Parallelization for FSM Computations", J. Qiu, Z. Zhao, B. Wu, A. Vishnu, and S. Song.
<br>
<strong>[JCC'17]</strong> "Deep Learning on Computational Chemistry", G. Goh, A. Vishnu and N. Hodas.
<br>
<strong>[IPDPS'17]</strong> "Generating Performance Models for Irregular Applications", R. Friese, N. Tallent, A. Vishnu, D. Kerbyson and A. Hoisie.
<br>
<strong>[BigData'16, Arxiv'16 [2]]</strong> "Adaptive Neuron Apoptosis for Accelerating Deep Learning on Large Scale Systems", C. Siegel, J. Daily, and A. Vishnu.
<br>
<strong>[HiPC'16]</strong> "Fault Tolerant Frequent Pattern Mining", S. Shohdy, A. Vishnu, and G. Agrawal.
<br>
<strong>[ICPADS'16]</strong> "Accelerating Deep Learning with Shrinkage and Recall", S. Zheng, A. Vishnu, and C. Ding.
<br>
<strong>[ICPP'16]</strong> "Fault Tolerant Support Vector Machines", S. Shohdy, A. Vishnu, and G. Agrawal.
<br>
<strong>[Arxiv'16 [1]]</strong> "Distributed TensorFlow with MPI", C. Siegel, J. Daily, and A. Vishnu.
<br>
<strong>[IPDPS'16]</strong> "Fault Modeling of Extreme Scale Applications
using Machine Learning", A. Vishnu, H. v. Dam, N. Tallent, D. Kerbyson and
A. Hoisie.
<br>
<strong>[SC'15]</strong> "A Case for Application-Oblivious Energy Efficient
MPI Runtime", A. Venkatesh, A. Vishnu, K. Hamidouche, N. Tallent, D.
Kerbyson, A. Hoisie, D. Panda. (Best Student Paper Finalist, SC15)
<br>
<strong>[PPoPP'15]</strong> "Diagnosing the Causes and Severity of
One-sided Message Contention", N. Tallent, A. Vishnu, H. v. Dam, J. Daily,
D. Kerbyson, and A. Hoisie.
<br>
(Please refer to my CV for a complete list of publications.)
</p>
</blockquote>
</div>
<div id="left">
<div class="box">
<h2><small style="color: rgb(153, 0, 0);">News:</small></h2>
<small>
<p><strong>4/2018 </strong>: I have been invited to serve on the FTXS@SC'18 Program Committee!
<p><strong>4/2018 </strong>: I have been invited to serve on the HiPC'18 Program Committee!
<p><strong>3/2018 </strong>: I am serving as a Program co-chair for the GraML'18 workshop!
<p><strong>3/2018 </strong>: Our paper on <strong>Effective Machine Learning Based Format Selection and Performance Modeling for SpMV on GPUs</strong> has been accepted by iWAPT'18!
<p><strong>2/2018 </strong>: I have been invited to serve on the SC'18 Program Committee!
<p><strong>1/2018 </strong>: Our paper on <strong>NUMA-Caffe: NUMA-Aware Deep Learning Neural Networks</strong> is accepted for publication in <strong>TACO</strong>!!
<p><strong>1/2018 </strong>: Our paper on <strong>How Much Chemistry Does a Deep Neural Network Need to Know to Make Accurate Predictions?</strong> is accepted for publication at the <strong>Winter Conference on Applications of Computer Vision (WACV)</strong>!!
<p><strong>12/2017 </strong>: I have joined AMD Research as a Principal Member of Technical Staff.
<p><strong>7/2017 </strong>: Our paper on <strong>What does fault tolerant Deep Learning need from MPI?</strong> is accepted for publication at <strong>EuroMPI/USA'17</strong>!!
<p><strong>5/2017 </strong>: I have been appointed as Team Lead for Scalable Machine Learning at PNNL.
<p><strong>4/2017 </strong>: Our open source release of MaTEx-TensorFlow is now available at the MaTEx GitHub page. Kudos to the MaTEx team members!!
<p><strong>3/2017 </strong>: Our paper on <strong>ScalaFSM: Enabling Scalability-Sensitive Speculative Parallelization for FSM Computations</strong> is accepted by <strong>ICS'17</strong>!!
<p><strong>2/2017 </strong>: Our paper on <strong>Deep Learning on Computational Chemistry</strong> is accepted by <strong>JCC'17</strong>!!
<p><strong>2/2017 </strong>: Our paper on <strong>Comparing NVIDIA DGX-1/Pascal and Intel Knights Landing on Deep Learning Workloads</strong> is accepted by <strong>ParLearning'17</strong>!!
<p><strong>1/2017 </strong>: Our paper on <strong>Generating Performance Models for Irregular Applications</strong> is accepted by <strong>IPDPS'17</strong>!!
<p><strong>11/2016 </strong>: Our proposal on <strong>xGA: Global Arrays on Extreme Scale Architectures</strong> is accepted by the <strong>Exascale Computing Program (ECP)</strong>.
<p><strong>10/2016 </strong>: Our paper on <strong>Adaptive Neuron Apoptosis for Accelerating Deep Learning on Large Scale Systems</strong> is accepted at IEEE Conference on BigData'16.
<p><strong>9/2016 </strong>: Our research on <strong>Convergence of Machine Learning and Deep Learning for HPC Modeling and Simulation</strong> is funded by Advanced Scientific Computing Research (ASCR)!!
<p><strong>9/2016 </strong>: Our paper on <strong>Fault Tolerant Frequent Pattern Mining</strong> is accepted at HiPC'16.
<p><strong>9/2016 </strong>: Our proposal on <strong>Learning Control on Building Systems</strong> is accepted by the Control of Complex Systems Initiative (CCSI).
<p><strong>7/2016 </strong>: We have received the <strong>Oak Ridge Director's Discretionary Award</strong> for conducting research on Extreme Scale Deep Learning algorithms with MaTEx.
<p><strong>5/2016 </strong>: Our paper on <strong>Fault Tolerant Support Vector Machines</strong> is accepted at ICPP'16.
<p><strong>4/2016 </strong>: I am serving as a PC member for the NAS Conference and
a reviewer for the Computer and TPDS journals.
<p><strong>3/2016 </strong>: We released MaTEx with Distributed
TensorFlow using MPI, along with a paper -- Distributed TensorFlow with MPI.
<p><strong>1/2016 </strong>: I am serving as a co-editor on a Parallel Computing (ParCo) special issue
<p><strong>12/2015 </strong>: A paper on Application Fault Modeling using Machine Learning is accepted at IPDPS'16.
<p><strong>11/2015</strong>: Featured Presentation on Extreme Scale Machine Learning Research at DOE Booth @ SC'15.
<p><strong>11/2015</strong>: Invited Presentation on the role of Interconnects in Machine Learning at the Mellanox Booth @ SC'15.
<p><strong>11/2015</strong>: Akshay Venkatesh (Summer student - 2014) presented best student paper nominee @ SC'15. Kudos!
<p><strong>10/2015</strong>: Invited Presentation on Global Arrays at the Japan LENS workshop
<p><strong>10/2015</strong>: Presented a PNNL wide talk on <strong> What can Large Scale Machine Learning do for you?</strong>
<p><strong>9/2015</strong>: Paper Presentation in Cluster'15 on Extreme Scale Support Vector Machines
<p><strong>9/2015</strong>: Paper Presentation in Cluster'15 on Work Stealing based Frequent Pattern Mining
<p><strong>8/2015</strong>: Invited Presentation at MUG'15 on the role of MPI in Large Scale Machine Learning
<p><strong>7/2015</strong>: Joint work with OSU on Machine Learning accepted for publication at the OpenSHMEM Workshop
<p><strong>7/2015</strong>: Our SC'15 paper is nominated for best student paper!!
</small>
<div class="box">
<h2><small>Contact:</small></h2>
<ul>
<!-- <li><img src="resource/em.gif" alt="email" width="130" height="14" /></li> -->
<small>
<li>abhinav (DOT) vishnu (AT) amd (DOT) com</li>
<li>Phone: 509-372-4794</li>
<li>Mailing address:</li>
<li>PO Box 999, MSIN J4-30</li>
<li>Richland, WA, 99352</li>
</small>
</ul>
</div>
<div class="box">
<div style="font-size: 0.8em;">Last update: 10/2016</div>
</div>
</div>
</div>
<p> </p>
</body>
</html>