<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="description"
content="SSNLP 2023">
<meta name="author" content="">
<meta name="keywords"
content="SSNLP 2023, Singapore Symposium">
<title>SSNLP 2023: The 2023 Singapore Symposium on Natural Language Processing
</title>
<!-- Bootstrap core CSS -->
<link href="vendor/bootstrap/css/bootstrap.min.css" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="css/scrolling-nav.css" rel="stylesheet">
<link rel="shortcut icon" type="image/x-icon" href="favicon.png">
<link rel="stylesheet" type="text/css" href="fonts/font-awesome-4.7.0/css/font-awesome.min.css">
<!--===============================================================================================-->
<link rel="stylesheet" type="text/css" href="vendor/animate/animate.css">
<!--===============================================================================================-->
<link rel="stylesheet" type="text/css" href="vendor/select2/select2.min.css">
<!--===============================================================================================-->
<link rel="stylesheet" type="text/css" href="vendor/perfect-scrollbar/perfect-scrollbar.css">
<!--===============================================================================================-->
<link rel="stylesheet" type="text/css" href="css/util.css">
<link rel="stylesheet" type="text/css" href="css/main.css">
<style type="text/css">
.navbar-text > a {
color: inherit;
text-decoration: none;
}
.white_bg {
background-color: #eef7fa;
padding: 3px;
}
.line2 {
margin: 5px 0;
height: 2px;
background: repeating-linear-gradient(to right, black 0, black 10px, transparent 10px, transparent 12px);
/* 10px black then 2px transparent -> repeat this! */
}
.bordered, .hover2, xximg:hover {
border-color: #AAAAAA;
border-style: solid;
border-width: 1px;
border-collapse: separate /* otherwise does not work in IE inside tables */;
}
.hover2 {
-webkit-box-shadow: 2px 2px 2px rgba(0, 0, 120, 0.6);
-moz-box-shadow: 2px 2px 2px rgba(0, 0, 120, 0.6);
-o-box-shadow: 2px 2px 2px rgba(0, 0, 120, 0.6);
box-shadow: 0px 0px 10px rgba(0, 0, 120, 0.6);
}
figure figcaption {
text-align: center;
margin: 10px;
}
figure {
display: inline-block;
margin: 0px;
}
figure img {
vertical-align: top;
border: 1px solid #ddd;
border-radius: 0px;
padding: 0px;
}
figure img:hover {
opacity: 0.7;
filter: alpha(opacity=70);
-webkit-box-shadow: -2px 4px 10px 0px rgba(0, 0, 0, 1);
-moz-box-shadow: -2px 4px 10px 0px rgba(0, 0, 0, 1);
box-shadow: -2px 4px 10px 0px rgba(0, 0, 0, 1);
-webkit-transition: all .2s ease-in-out;
transition: all .2s ease-in-out;
}
</style>
</head>
<body id="page-top">
<!-- Navigation -->
<nav class="navbar navbar-expand-lg navbar-dark bg-dark fixed-top" id="mainNav">
<div class="container">
<div class="navbar-header">
<a class="navbar-brand js-scroll-trigger" href="#">
SSNLP 2023
</a>
</div>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarResponsive"
aria-controls="navbarResponsive" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarResponsive">
<ul class="navbar-nav ml-auto">
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#overview">Overview</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#programme">Programme</a>
</li>
<!-- Speakers -->
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#organizers">Organizers</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#partners">Partners</a>
</li>
<!-- Partners, Photos -->
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#location">Location</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#past-ssnlp">Past SSNLP</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#contact">Contact</a>
</li>
</ul>
</div>
</div>
</nav>
<header class="bg-primary text-white">
<div class="container text-center">
<h1 style="font-size: 60px;font-weight: bold;color: #e48a52;">SSNLP 2023</h1>
<h2 style="font-size: 35px;color: #ffffff;background-color: rgba(0, 123, 255, .25);">The 2023 Singapore Symposium on Natural Language Processing</h2>
</div>
</header>
<section id="overview" class="bg-light">
<div class="container">
<div class="row">
<div class="col-lg-8 mx-auto">
<h2>Welcome!</h2>
<p style="text-align: justify;">We are pleased to announce that <b>the Singapore Symposium on Natural Language Processing (SSNLP 2023)</b> will be held on <b>Monday, December 4</b> (full day). SSNLP is an annual Singapore-based pre-conference practice workshop where local students, practitioners, and faculty working in Natural Language Processing can present their work and network. It was successfully held in 2018, 2019, 2020, and 2022, and has become an increasingly popular and impactful event.
<!-- [ <a href="https://www.comp.nus.edu.sg/maps/photos#com1">Map</A> ]
[ <a href="https://www.comp.nus.edu.sg/images/resources/content/mapsvenues/COM1_L2.jpg">Floorplan</A> ]-->
</p>
<p style="text-align: justify;">We are also excited about the unique opportunity presented to Singapore on 6-10 December, when <a href="https://2023.emnlp.org/">EMNLP 2023</a>, a premier Computational Linguistics and NLP conference, will be held here. With many renowned researchers in attendance, we look forward to hosting them as part of SSNLP, and to inviting them to the School of Computing at the National University of Singapore (NUS) to engage with us.
</p>
<p style="text-align: justify;">
The symposium will be held at <b>the Shaw Foundation Alumni House Auditorium, NUS (11 Kent Ridge Dr, Singapore 119244, [<a href="https://www.google.com/maps/place/Shaw+Foundation+Alumni+House/@1.2932317,103.7708068,17z/data=!3m2!4b1!5s0x31da1aff3ee4a48f:0x732a458e6add7ca5!4m6!3m5!1s0x31da1aff3f1cf5b1:0x7ae21f4141402cfd!8m2!3d1.2932263!4d103.7733817!16s%2Fg%2F11bx1h3zc_?entry=ttu">Map</a>], [<a href="images/Shaw-1.jpg">Outdoor photo</a>], [<a href="images/Shaw-2.jpg">Indoor photo</a>])</b>.
We are planning a hybrid (onsite &amp; online) format for SSNLP to ensure maximal outreach.
Please note that our venue has limited seating capacity; in the event of oversubscription, we may offer online attendance to registered participants.
So please complete the <b>early bird registration</b> as soon as possible to secure a seat.
</p>
<center>
<p>
<a href="https://forms.office.com/r/P7GFnaHHqu" target="_blank" class="btn disabled btn-secondary">Registration closed</a>
</p>
</center>
<!-- <p>Due to fire code restrictions, our venues cannot accommodate additional onsite registrations. However, we have plenty of capacity for virtual attendance. Feel free to request the link for the virtual registration below.
</p> -->
<p>In-person registration is now closed, but you are welcome to register for online attendance by dropping us an email. We will send you a Zoom link.</p>
<center>
<p>
<a href="https://docs.google.com/forms/d/e/1FAIpQLScBFhd2XQf8ciLpRcNPAlim3mFXLc4CpDxTn-8Cef3AKd5j9Q/viewform?pli=1" target="_blank" class="btn disabled btn-secondary">Virtual Registration closed</a>
<!-- <a href="mailto:haofei37@nus.edu.sg?subject=SSNLP 2023 Virtual Registration&body=Hi, I'd like to receive the Zoom link for the upcoming SSNLP event on 5 December 2023. Can you send it to me? Thank you!" target="_blank" class="btn btn-primary">Register to get the Virtual Attendance Zoom links</a> -->
</p>
</center>
<h3>Latest news</h3>
<br>
<p><span class="white_bg"><strong>Dec 19, 2023</strong> — All the posters and relevant materials can be downloaded from <a href="https://drive.google.com/drive/folders/1Njknx9-8A1Vk2d40vP7Pmce6_6tTDAQ3?usp=sharing">here</a>.</span></p>
<p><span class="white_bg"><strong>Nov 30, 2023</strong> — The programme is confirmed; see you all there!</span></p>
<p><span class="white_bg"><strong>Oct 26, 2023</strong> — Registration is open; please register now.</span></p>
<p><span class="white_bg"><strong>Sep 30, 2023</strong> — The date is confirmed: December 4, 2023</span></p>
</div>
</div>
</div>
</section>
<section id="programme" class="bg-light">
<div class="container">
<div class="row">
<div class="col-lg-8 mx-auto">
<h2>Programme</h2>
<p style="text-align: justify;">We have planned oral and poster sessions for paper presentations, as well as invited keynote presentations.</p>
<div class="container-table100">
<div class="wrap-table100">
<div class="table100 ver5 m-b-10">
<table data-vertable="ver5">
<thead>
<tr class="row100 head">
<th class="column100 column1" data-column="column1"><strong>Time</strong></th>
<th class="column100 column2" data-column="column2"><strong>Event</strong></th>
</tr>
</thead>
<tbody>
<tr class="row100">
<td class="column100 column1" data-column="column1">08:00 - 08:15</td>
<td class="column100 column2" data-column="column2"><strong>Welcome and Opening
Remarks</strong>
<br> <em>introduced by:</em> TAN Kian Lee
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">08:15 - 09:15</td>
<td class="column100 column2" data-column="column2">
<a style="color: #ffc107;">
<strong>Paper session 1</strong>
</a>
<br> <em>session chair:</em> Sun Shuo
<!-- <br>
<em>speaker:</em> <font color="red">Heng Ji</font> :: <em>chaired
by:</em> Nancy Chen -->
</td>
</tr>
<!-- <tr class="row100">
<td class="column100 column1" data-column="column1">09:30 - 10:30</td>
<td class="column100 column2" data-column="column2"><strong>Tea Break</strong></td>
</tr> -->
<tr class="row100">
<td class="column100 column1" data-column="column1">09:30 - 10:30</td>
<td class="column100 column2" data-column="column2">
<a style="color: #000000;">
<strong>Keynote 1 and Keynote 2 (each 20mins + 10mins Q&A)</strong>
</a>
<br> <em>speakers:</em> <font color="red">Preslav</font> & <font color="red">Farah</font>
<br> <em>session chair:</em> Su Jian
<!-- <em>chaired by:</em> Francis Bond -->
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">10:30 - 11:30</td>
<td class="column100 column2" data-column="column2">
<a style="color: #ffc107;">
<strong>Paper session 2</strong>
</a>
<br> <em>session chair:</em> Tan Qingyu
<!-- <br> <em>speaker:</em> <font color="red">Rada
Mihalcea</font> :: <em>chaired by:</em> Francis Bond -->
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">11:30 - 12:30</td>
<td class="column100 column2" data-column="column2">
<a style="color: #000000;">
<strong>Keynote 3 and Keynote 4 (each 20mins + 10mins Q&A) </strong>
</a>
<br> <em>speakers:</em> <font color="red">Vivian</font> & <font color="red">Tanya</font>
<br> <em>session chair:</em> Nancy Chen
<!-- <em>chaired by:</em> Francis Bond -->
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">12:30 - 13:00</td>
<td class="column100 column2" data-column="column2">
<a style="color: #2bc94f;">
<strong>Poster lightning talks</strong>
</a>
<br> <em>session chair:</em> Yanxia Qin
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">13:00 - 14:00</td>
<td class="column100 column2" data-column="column2">
<a style="color: #007bff;"><strong>Poster session 1 and Lunch</strong> </a>
<!-- <br> <em>speaker:</em> <font color="red">Eduard Hovy</font> ::
<em>chaired by:</em> Li Haizhou -->
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">14:00 - 15:00</td>
<td class="column100 column2" data-column="column2">
<a style="color: #ffc107;">
<strong>Paper session 3</strong>
</a>
<br> <em>session chair:</em> Taha Aksu
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">15:00 - 16:00</td>
<td class="column100 column2" data-column="column2">
<a style="color: #000000;">
<strong>Keynote 5 and Keynote 6 (each 20mins + 10mins Q&A) </strong>
</a>
<br> <em>speakers:</em> <font color="red">Diyi</font> & <font color="red">Joao</font>
<br> <em>session chair:</em> Kokil Jaidka
<!-- :: <em>chaired by:</em> Li Haizhou -->
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">16:00 - 16:30</td>
<td class="column100 column2" data-column="column2">
<a style="color: #007bff;">
<strong>Poster session 2 and Tea</strong>
</a>
</td>
</tr>
<tr class="row100">
<td class="column100 column1" data-column="column1">16:30 - 18:30</td>
<td class="column100 column2" data-column="column2">
<a style="color: #000000;">
<strong>Industry session: Speaker presentations (each 20mins + 10mins Q&A)</strong>
</a>
<br> <em>speakers:</em> <font color="red">Daniel</font> & <font color="red">Huda</font> & <font color="red">Alessandro</font> & <font color="red">Lidong</font>
<br> <em>session chairs:</em> Suzanna Sia & Gao Wei
</td>
</tr>
<!-- <tr class="row100">
<td class="column100 column1" data-column="column1">18:00 - 18:30</td>
<td class="column100 column2" data-column="column2"><strong>Townhall</strong>
</td>
</tr> -->
<!-- <tr class="row100">
<td class="column100 column1" data-column="column1">17:30 - 18:00</td>
<td class="column100 column2" data-column="column2">
</td>
</tr> -->
</tbody>
</table>
</div>
</div>
</div>
<!-- <p>
Oral sessions are for 12 minutes plus 3 minutes for immediate questions. There will be ample time after a session to engage in the breaks directly after the session. Session chairs should record each session and check with the speakers if they want their post-recorded session made public.
Questions will be solicited via crowdsourcing via Padlets.
<div class="container-table">
<div class="wrap-table">
<div class="table ver5 m-b-10">
<table data-vertable="ver5">
<thead>
<tr class="row100 head">
<th class="column100" data-column="column1"><strong>5 December,
2023</strong></th>
<th class="column100" data-column="column2">Seminar Room 2 (SR2)</th>
<th class="column100" data-column="column3">Seminar Room 3 (SR3)</th>
</tr>
</thead>
<tbody>
<tr class="row100">
<td class="column100" data-column="column1">09:00—09:45</td>
<td class="column100" data-column="column2"><strong>Session 1A</strong><br/>
<strong>Information Extraction</strong><br/>(<i>Session Chair: Yanxia Qin</i>)
<p style="text-align:left; font-size:small">
[1] Yubo Ma, Zehao Wang, Yixin Cao, Mukai Li, Meiqi Chen, Kun Wang, Jing Shao. <i>Prompt for Extraction? PAIE: Prompting Argument Interaction for Event Argument Extraction.</i><BR/>
[2] Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng. <i>Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation.</i><BR/>
[3] Timothy Liu and De Wen Soh. <i>Towards Better Characterization of Paraphrases.</i>
</p>
</td>
<td class="column100" data-column="column3"><strong>Session 1B</strong><br/>
<strong>NLP Applications</strong><br/>(<i>Session Chair: Prathyusha Jwalapuram</i>)
<p style="text-align:left; font-size:small">
[1] Hai Ye, Hwee Tou Ng, Wenjuan Han. <i>On the Robustness of Question Rewriting Systems to Questions of Varying Hardness.</i><BR/>
[2] Mathieu Ravaut, Shafiq Joty, Nancy F. Chen. <i>SummaReranker: A Multi-Task Mixture-of-Experts Reranking Framework for Abstractive Summarization.</i><BR/>
[3] Xiaobing Sun, Wei Lu. <i>Implicit N-grams Induced by Recurrence.</i>
</p>
</td>
</tr>
<tr class="row100">
<td class="column100" data-column="column1">09:45—10:15</td>
<td colspan="2" class="column100" data-column="column2"><strong>Break 1</strong></td>
</tr>
<tr class="row100">
<td class="column100" data-column="column1">10:15—11:00</td>
<td class="column100" data-column="column2"><strong>Session 2A</strong><br/>
<strong>Syntax and Discourse</strong><br/>(<i>Session Chair: Qingyu Tan</i>)
<p style="text-align:left; font-size:small">
[1] Muhammad Reza Qorib, Seung-Hoon Na, Hwee Tou Ng. <i>Frustratingly Easy System Combination for Grammatical Error Correction.</i> <BR/>
[2] Liying Cheng, Lidong Bing, Ruidan He, Qian Yu, Yan Zhang, Luo Si. <i>IAM: A Comprehensive and Large-Scale Dataset for Integrated Argument Mining Tasks.</i><BR/>
[3] Prathyusha Jwalapuram, Shafiq Joty, Xiang Lin. <i>Rethinking Self-Supervision Objectives for Generalizable Coherence Modeling.</i>
</p>
</td>
<td class="column100" data-column="column-3"><strong>Session 2B</strong><BR/>
<strong>Multimodality</strong><br/>(<i>Session Chair: Liangming Pan</i>)
<p style="text-align:left; font-size:small">
[1] Allan Jie, Jierui Li, Wei Lu. <i>Learning to Reason Deductively: Math Word Problem Solving as Complex Relation Extraction.</i><BR/>
[2] Shankar Kantharaj, Rixie Leong, Xiang Lin, Ahmed Masry, Megh Thakkar, Enamul Hoque, Shafiq Joty. <i>Chart2Text: A large-scale benchmark for chart summarization. </i><BR/>
[3] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, Enamul Hoque. <i>ChartQA: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning.</i>
</p>
</td>
</tr>
<tr class="row100">
<td class="column100" data-column="column1">11:00—13:00</td>
<td colspan="2" class="column100" data-column="column2"><strong>Lunch (courtesy ByteDance)</strong></td>
</tr>
<tr class="row100">
<td class="column100" data-column="column1">12:00—13:00</td>
<td colspan="2" class="column100" data-column="column2"><strong>Poster Session 1</strong></td>
</tr>
<tr class="row100">
<td class="column100" data-column="column1">13:00—13:45</td>
<td class="column100" data-column="column2"><strong>Session 3A</strong><br/>
<strong>Dialog</strong><br/>(<i>Session Chair: Min-Yen Kan</i>)
<p style="text-align:left; font-size:small">
[1] Bosheng Ding, Junjie Hu, Lidong Bing, Mahani Aljunied, Shafiq Joty, Luo Si, Chunyan Miao. <i>GlobalWoZ: Globalizing MultiWoZ to Develop Multilingual Task-Oriented Dialogue Systems. </i><BR/>
[2] Hung Le, Nancy F. Chen, Steven Hoi. <i>Multimodal Dialogue State Tracking.</i> <BR/>
[3] Deepanway Ghosal, Siqi Shen, Navonil Majumder, Rada Mihalcea, Soujanya Poria. <i>CICERO: A Dataset for Contextualized Commonsense Inference in Dialogues.</i>
</p>
</td>
<td class="column100" data-column="column3"><strong>Session 3B</strong><br/>
<strong>Domain Adaptation / Few Shot Learning</strong><br/>(<i>Session Chair: Liying Cheng</i>)
<p style="text-align:left; font-size:small">
[1] Lulu Zhao, Fujia Zheng, Weihao Zeng, Keqing He, Weiran Xu, Huixing Jiang, Wei Wu, Yanan Wu. <i>Domain-Oriented Prefix-Tuning: Towards Efficient and Generalizable Fine-tuning for Zero-Shot Dialogue Summarization.</i> <BR/>
[2] Chengwei Qin, Shafiq Joty. <i>Continual Few-shot Relation Learning via Embedding Space Regularization and Data Augmentation.</i><BR/>
[3] Chia Yew Ken, Lidong Bing, Soujanya Poria, Luo Si. <i>RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction.</i>
</p>
</td>
</tr>
<tr class="row100">
<td class="column100" data-column="column-1">13:45—14:15</td>
<td colspan="2" class="column100" data-column="column-1"><strong>Break 2</strong></td>
</tr>
<tr class="row100">
<td class="column100" data-column="column-1">14:15—15:00</td>
<td class="column100" data-column="column-2"><strong>Session 4A</strong><BR/>
<strong>Language Models</strong><br/>(<i>Session Chair: Bosheng Ding</i>)
<p style="text-align:left; font-size:small">
[1] Minghuan Tan, Yong Dai, Duyu Tang, Zhangyin Feng, Guoping Huang, Jing Jiang, Jiwei Li, Shuming Shi. <i>Exploring and Adapting Chinese GPT to Pinyin Input Method.</i><BR/>
[2] Xiaosen Zheng, Jing Jiang. <i>An Empirical Study of Memorization in NLP.</i><BR/>
[3] Yunxiang Zhang, Liangming Pan, Samson Tan, Min-Yen Kan. <i>Interpreting the Robustness of Neural NLP Models to Textual Perturbations.</i>
</p>
</td>
<td class="column100" data-column="column3"><strong>Session 4B</strong><br/>
<strong>Generation</strong><br/>(<i>Session Chair: Yisong Miao</i>)
<p style="text-align:left; font-size:small">
[1] Abhinav Ramesh Kashyap, Devamanyu Hazarika, Min-Yen Kan, Roger Zimmermann, Soujanya Poria. <i>So Different Yet So Alike! Constrained Unsupervised Text Style Transfer.</i><BR/>
[2] Chenhui Shen, Liying Cheng, Ran Zhou, Lidong Bing, Yang You, Luo Si. <i>MReD: A Meta-Review Dataset for Structure-Controllable Text Generation.</i><BR/>
[3] Zhengyuan Liu, Nancy F. Chen. <i>Learning from Bootstrapping and Stepwise Reinforcement Reward: A Semi-Supervised Framework for Text Style Transfer.</i>
</p>
</td>
</tr>
<tr class="row100">
<td class="column100" data-column="column-1">15:00—16:00</td>
<td colspan="2" class="column100" data-column="column-2"><strong>Poster Session 2</strong>
</tr>
</table>
</div>
<h3>Posters</h3>
<p>Posters will be shown in the foyer of Seminar Room (SR2). Posters are short, workshop, work-in-progress papers or last minute additions to our programme. Poster boards can accommodate A1 sized posters, in either portrait or landscape.
<div class="table ver5 m-b-10">
<table data-vertable="ver5">
<tbody>
<tr class="row100">
<td class="column100" data-column="column1">
<p style="text-align:left; font-size:small">
[1] Ruixi Lin, Hwee Tou Ng. <i>Does BERT Know that the IS-A Relation is Transitive? </i><BR/>
[2] Saurabh Jain, Yisong Miao, Min-Yen Kan. <i>Comparative Snippet Generation.</i><BR/>
[3] Sicheng Yu, Qianru Sun, Hao Zhang, Jing Jiang. <i>Translate-Train Embracing Translationese Artifacts.</i> <BR/>
[4] Ibrahim Taha Aksu, Zhengyuan Liu, Min-Yen Kan and Nancy F. Chen. <i>N-Shot Learning for Augmenting Task-Oriented Dialogue State Tracking.</i><BR/>
[5] Hung Le, Nancy F. Chen, Steven Hoi. <i>VGNMN: Video-grounded Neural Module Networks for Video-Grounded Dialogue Systems.</i><BR/>
[6] Moxin Li, Fuli Feng, Hanwang Zhang, Xiangnan He, Fengbin Zhu, Tat-Seng Chua. <i>Learning to Imagine: Integrating Counterfactual Thinking in Neural Discrete Reasoning.</i><BR/>
[7] Fengzhu Zeng, Wei Gao. <i>Early Rumor Detection Using Neural Hawkes Process with a New Benchmark Dataset.</i>
</p>
</td>
</tr>
</table>
</div>
</div>
</div>-->
</div>
</div>
</div>
</section>
<section id="speakers" class="bg-light">
<div class="container">
<div class="row">
<div class="col-lg-8 mx-auto">
<h3>Oral & Poster Papers</h3>
<p style="text-align: justify;">Papers are drawn mainly from EMNLP 2023 and ACL 2023.
Oral presentations are 12 minutes long, plus 3 minutes for immediate questions.
<!-- Posters are short, workshop, work-in-progress papers or last-minute additions to our programme. -->
Poster boards can accommodate posters up to 1m x 1m, in either portrait or landscape orientation.
The posters are split into Poster session 1 and Poster session 2, divided as follows, with 13 poster boards in each session.
<!-- There will be ample time after a session to engage in the breaks directly after the session. Session chairs should record each session and check with the speakers if they want their post-recorded session made public. Questions will be solicited via crowdsourcing via Padlets. -->
</p>
<div id="Oral" data-toggle="collapse" data-parent="#accordion1">
<a href="#Oral-list" data-toggle="collapse"><b>Click to see the paper list ↓</b></a>
<div class="accordion1" id="accordion1">
<div id="Oral-list" class="collapse" data-parent="#accordion1">
<div class="table ver5 m-b-10">
<table data-vertable="ver5">
<tbody>
<tr class="row100">
<td valign="middle" style="font-weight: bold;">Paper session 1 </td>
<td class="column100" data-column="column1">
<p style="text-align:left; font-size:small">
[1] Ye, Hai, & Xie, Qizhe, & Ng, Hwee Tou. <i>Multi-Source Test-Time Adaptation as Dueling Bandits for Extractive Question Answering</i> (<b>Slot: <a style="color: red;">08:15 - 08:30</a></b>)<BR/>
[2] Zhengyuan Liu, Yong Keong Yap, Hai Leong Chieu and Nancy F. Chen. <i>Guiding Computational Stance Detection with Expanded Stance Triangle Framework</i> (<b>Slot: <a style="color: red;">08:30 - 08:45</a></b>)<BR/>
[3] Ahmed Masry*, Parsa Kavehzadeh*, Xuan Long Do, Enamul Hoque, Shafiq Joty. <i>UniChart: A Universal Vision-language Pretrained Model for Chart Comprehension and Reasoning</i> (<b>Slot: <a style="color: red;">08:45 - 09:00</a></b>)<BR/>
[4] Ibrahim Taha Aksu, Devamanyu Hazarika, Shikib Mehri, Seokhwan Kim, Dilek Hakkani-Tur, Yang Liu, Mahdi Namazifar. <i>CESAR: Automatic Induction of Compositional Instructions for Multi-turn Dialogs</i> (<b>Slot: <a style="color: red;">09:00 - 09:15</a></b>)<BR/>
</p>
</td>
</tr>
<tr class="row100">
<td valign="middle" style="font-weight: bold;">Paper session 2 </td>
<td class="column100" data-column="column1">
<p style="text-align:left; font-size:small">
[1] Hannan Cao, Liping Yuan, Yuchen Zhang, Hwee Tou Ng. <i>Unsupervised Grammatical Error Correction Rivaling Supervised Methods</i> (<b>Slot: <a style="color: red;">10:30 - 10:45</a></b>)<BR/>
[2] Moxin Li, Wenjie Wang, Fuli Feng, Yixin Cao, Jizhi Zhang, Tat-Seng Chua. <i>Robust Prompt Optimization for Large Language Models Against Distribution Shifts</i> (<b>Slot: <a style="color: red;">10:45 - 11:00</a></b>)<BR/>
[3] Ratish Puduppully, Anoop Kunchukuttan, Raj Dabre, Ai Ti Aw, Nancy F. Chen. <i>Decomposed Prompting for Machine Translation between Related Languages using Large Language Models</i> (<b>Slot: <a style="color: red;">11:00 - 11:15</a></b>)<BR/>
[4] Zhiyuan Liu, Sihang Li, Yanchen Luo, Hao Fei, Yixin Cao, Kenji Kawaguchi, Xiang Wang, Tat-Seng Chua. <i>MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter.</i> (<b>Slot: <a style="color: red;">11:15 - 11:30</a></b>)<BR/>
</p>
</td>
</tr>
<tr class="row100">
<td valign="middle" style="font-weight: bold;">Paper session 3 </td>
<td class="column100" data-column="column1">
<p style="text-align:left; font-size:small">
[1] Jinggui Liang, Lizi Liao. <i>ClusterPrompt: Cluster Semantic Enhanced Prompt Learning for New Intent Discovery</i> (<b>Slot: <a style="color: red;">14:00 - 14:15</a></b>)<BR/>
[2] Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, Roy Ka-Wei Lee. <i>LLM-Adapter: An Empirical Study of Adapter-based Parameter-Efficient Fine-Tuning for Large Language Models</i> (<b>Slot: <a style="color: red;">14:15 - 14:30</a></b>)<BR/>
[3] Minzhi Li, Taiwei Shi, Caleb Ziems, Min-Yen Kan, Nancy F Chen, Zhengyuan Liu, Diyi Yang. <i>CoAnnotating: Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation</i> (<b>Slot: <a style="color: red;">14:30 - 14:45</a></b>)<BR/>
[4] Quanyu Long, Wenya Wang, Sinno Jialin Pan. <i>Adapt in Contexts: Retrieval-Augmented Domain Adaptation via In-Context Learning</i> (<b>Slot: <a style="color: red;">14:45 - 15:00</a></b>)<BR/>
</p>
</td>
</tr>
<tr class="row100">
<td valign="middle" style="font-weight: bold;">Poster session 1 </td>
<td class="column100" data-column="column1">
<p style="text-align:left; font-size:small">
[1] Yikang Pan, Liangming Pan, Wenhu Chen, Preslav Nakov, Min-Yen Kan. <i>On the Risk of Misinformation Pollution with Large Language Models</i> (<b>Board: <a style="color: red;">P101</a></b>)<BR/>
[2] Fengzhu Zeng, Wei Gao. <i>Prompt to be Consistent is Better than Self-Consistent? Few-Shot and Zero-Shot Fact Verification with Pre-trained Language Models</i> (<b>Board: <a style="color: red;">P102</a></b>)<BR/>
[3] Shuo Sun, Yuchen Zhang, Jiahuan Yan, Yuze GAO, Donovan Ong, Bin Chen, Jian Su. <i>Battle of the Large Language Models: Dolly vs LLaMA vs Vicuna vs Guanaco vs Bard vs ChatGPT - A Text-to-SQL Parsing Comparison</i> (<b>Board: <a style="color: red;">P105</a></b>)<BR/>
[4] Liangming Pan, Alon Albalak, Xinyi Wang, William Yang Wang. <i>Logic-LM: Empowering large language models with symbolic solvers for faithful logical reasoning</i> (<b>Board: <a style="color: red;">P106</a></b>)<BR/>
[5] Jiaxi Li and Wei Lu. <i>Contextual Distortion Reveals Constituency: Masked Language Models are Implicit Parsers</i> (<b>Board: <a style="color: red;">P109</a></b>)<BR/>
[6] Shaz Furniturewala, Abhinav Java, Surgan Jandial, Simra Shahid, Pragyan Banerjee, Balaji Krishnamurthy, Sumit Bhatia and Kokil Jaidka. <i>Evaluating the Efficacy of Prompting Techniques for Debiasing Language Model Outputs</i> (<b>Board: <a style="color: red;">P110</a></b>)<BR/>
[7] Qingyu Tan, Lu Xu, Lidong Bing, Hwee Tou Ng. <i>Class-Adaptive Self-Training for Relation Extraction with Incompletely Annotated Training Data</i> (<b>Board: <a style="color: red;">P103</a></b>)<BR/>
[8] Yixi Ding, Yanxia Qin, Qian Liu, Min-Yen Kan. <i>CocoSciSum: A Scientific Summarization Toolkit with Compositional Controllability</i> (<b>Board: <a style="color: red;">P104</a></b>)<BR/>
[9] Shaurya Rohatgi, Yanxia Qin, Benjamin Aw, Niranjana Unnithan, Min-Yen Kan. <i>The ACL OCL Corpus: Advancing Open Science in Computational Linguistics</i> (<b>Board: <a style="color: red;">P107</a></b>)<BR/>
[10] Xinyuan Lu, Liangming Pan, Qian Liu, Preslav Nakov, Min-Yen Kan. <i>SCITAB: A Challenging Benchmark for Compositional Reasoning and Claim Verification on Scientific Tables</i> (<b>Board: <a style="color: red;">P108</a></b>)<BR/>
[11] Gerard Yeo, Kokil Jaidka. <i>The PEACE-Reviews dataset: Modeling Cognitive Appraisals in Emotion Text Analysis</i> (<b>Board: <a style="color: red;">P111</a></b>)<BR/>
[12] Kankan Zhou, Eason Lai, Wei Bin Au Yeong, Kyriakos Mouratidis, Jing Jiang. <i>ROME: Evaluating Pre-trained Vision-Language Models on Reasoning beyond Visual Common Sense</i> (<b>Board: <a style="color: red;">P112</a></b>)<BR/>
[13] Xiaobing Sun, Jiaxi Li, and Wei Lu. <i>Unraveling Feature Extraction Mechanisms in Neural Networks</i> (<b>Board: <a style="color: red;">P113</a></b>)<BR/>
</p>
</td>
</tr>
<tr class="row100">
<td valign="middle" style="font-weight: bold;">Poster session 2 </td>
<td class="column100" data-column="column1">
<p style="text-align:left; font-size:small">
[1] Xuan Long Do, Bowei Zou, Shafiq Joty, Anh Tai Tran, Liangming Pan, Nancy F. Chen, Ai Ti Aw. <i>Modeling What-to-ask and How-to-ask for Answer-unaware Conversational Question Generation</i> (<b>Board: <a style="color: red;">P201</a></b>)<BR/>
[2] Rui Cao, Jing Jiang. <i>Modularized Zero-shot VQA with Pre-trained Models</i> (<b>Board: <a style="color: red;">P202</a></b>)<BR/>
[3] Bin Wang, Zhengyuan Liu, Nancy F. Chen. <i>Instructive Dialogue Summarization with Query Aggregations</i> (<b>Board: <a style="color: red;">P205</a></b>)<BR/>
[4] Huy Quang Dao, Lizi Liao, Dung D. Le, Yuxiang Nie. <i>Reinforced Target-driven Conversational Promotion</i> (<b>Board: <a style="color: red;">P206</a></b>)<BR/>
[5] Ibrahim Taha Aksu, Min-Yen Kan and Nancy F. Chen. <i>Zero-shot Adaptive Prefixes for Dialogue State Tracking Domain Adaptation</i> (<b>Board: <a style="color: red;">P209</a></b>)<BR/>
[6] Guangsheng Bao, Zhiyang Teng, Hao Zhou, Jianhao Yan, Yue Zhang. <i>Non-Autoregressive Document-Level Machine Translation</i> (<b>Board: <a style="color: red;">P210</a></b>)<BR/>
[7] Liangming Pan, Xiaobao Wu, Xinyuan Lu, Anh Tuan Luu, William Yang Wang, Min-Yen Kan, Preslav Nakov. <i>Fact-Checking Complex Claims with Program-Guided Reasoning</i> (<b>Board: <a style="color: red;">P203</a></b>)<BR/>
[8] Muhammad Reza Qorib, Hwee Tou Ng. <i>System Combination via Quality Estimation for Grammatical Error Correction</i> (<b>Board: <a style="color: red;">P204</a></b>)<BR/>
[9] Qingyu Tan, Hwee Tou Ng, Lidong Bing. <i>Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models</i> (<b>Board: <a style="color: red;">P207</a></b>)<BR/>
[10] Ruichao Yang, Wei Gao, Jing Ma, Zhiwei Yang. <i>WSDMS: Debunk Fake News via Weakly Supervised Detection of Misinforming Sentences with Contextualized Social Wisdom</i> (<b>Board: <a style="color: red;">P208</a></b>)<BR/>
[11] Ratish Puduppully, Parag Jain, Nancy F. Chen and Mark Steedman. <i>Multi-Document Summarization with Centroid-Based Pretraining</i> (<b>Board: <a style="color: red;">P211</a></b>)<BR/>
[12] Mathieu Ravaut, Shafiq Joty, Nancy F. Chen. <i>Unsupervised Summarization Re-ranking</i> (<b>Board: <a style="color: red;">P212</a></b>)<BR/>
[13] Zhiqiang Hu, Nancy F. Chen, Roy Ka-Wei Lee. <i>Adapter-TST: A Parameter Efficient Method for Multiple-Attribute Text Style Transfer</i> (<b>Board: <a style="color: red;">P213</a></b>)<BR/>
</p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
</div>
</section>
<section id="speakers" class="bg-light">
<div class="container">
<div class="row">
<div class="col-lg-8 mx-auto">
<h3>Keynote Speakers</h3>
<p>
The following speakers are invited to give keynotes at SSNLP 2023. Please click a speaker's profile image to view the
details of their talk.
</p>
<div class="accordion" id="accordion">
<div class="table-responsive">
<table class="table">
<tbody>
<tr align="center">
<td>
<a href="#Preslav" data-toggle="collapse">
<figure>
<img src="images/speaker/preslav-nakov.jpeg" class="hover2" height="185">
<figcaption><h5>Preslav Nakov</h5></figcaption>
</figure>
</a>
</td>
<td>
<a href="#Farah" data-toggle="collapse">
<figure>
<img src="images/speaker/Farah-Benamara.jpeg" class="hover2" align="center"
height="185">
<figcaption>
<h5>Farah Benamara</h5>
</figcaption>
</figure>
</a>
</td>
<td>
<a href="#Vivian" data-toggle="collapse">
<figure>
<img src="images/speaker/Vivian-Chen.jpg" class="hover2" height="185">
<figcaption><h5>Vivian Chen</h5></figcaption>
</figure>
</a>
</td>
<td>
<a href="#Tanya" data-toggle="collapse">
<figure>
<img src="images/speaker/tanya-goyal.jpeg" class="hover2"
height="185">
<figcaption><h5>Tanya Goyal</h5></figcaption>
</figure>
</a>
</td>
</tr>
</tbody>
</table>
</div>
<div id="Preslav" class="collapse" data-parent="#accordion">
<strong>Title: </strong>Jais and Jais-chat: Building the World's Best Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models
<br>
<strong>Speaker: </strong><a href="https://mbzuai.ac.ae/study/faculty/preslav-nakov/">Preslav Nakov</a>
<br>
<br>
<p style="text-align: justify;">
<strong> Abstract: </strong> I will discuss Jais and Jais-chat, two state-of-the-art Arabic-centric foundation and instruction-tuned open generative large language models (LLMs). The models are based on the GPT-3 decoder-only architecture and are pretrained on a mixture of Arabic and English texts, including source code in various programming languages. The models demonstrate better knowledge and reasoning capabilities in Arabic than previous open Arabic and multilingual models by a sizable margin, based on extensive evaluation. Moreover, they are competitive in English compared to English-centric open models of similar size, despite being trained on much less English data. I will discuss the training, the tuning, the safety alignment, and the evaluation, as well as the lessons we learned. </p>
<br>
<p style="text-align: justify;">
<strong>Bio:</strong>
Dr. Preslav Nakov is Professor and Department Chair for NLP at the Mohamed bin Zayed University of Artificial Intelligence. Previously, he was Principal Scientist at the Qatar Computing Research Institute, HBKU, where he led the Tanbih mega-project, developed in collaboration with MIT, which aims to limit the impact of "fake news", propaganda and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking. He received his PhD degree in Computer Science from the University of California at Berkeley, supported by a Fulbright grant. He is Chair-Elect of the European Chapter of the Association for Computational Linguistics (EACL), Secretary of ACL SIGSLAV, and Secretary of the Truth and Trust Online board of trustees. Formerly, he was PC chair of ACL 2022, and President of ACL SIGLEX. He is also a member of the editorial boards of several journals, including Computational Linguistics, TACL, ACM TOIS, IEEE TASL, IEEE TAC, CS&L, NLE, AI Communications, and Frontiers in AI. He authored a Morgan & Claypool book on Semantic Relations between Nominals, two books on computer algorithms, and 250+ research papers. He received a Best Paper Award at ACM WebSci'2022, a Best Long Paper Award at CIKM'2020, a Best Demo Paper Award (Honorable Mention) at ACL'2020, a Best Task Paper Award (Honorable Mention) at SemEval'2020, a Best Poster Award at SocInfo'2019, and the Young Researcher Award at RANLP'2011. He was also the first to receive the Bulgarian President's John Atanasoff award, named after the inventor of the first automatic electronic digital computer. His research was featured by over 100 news outlets, including Reuters, Forbes, Financial Times, CNN, Boston Globe, Aljazeera, DefenseOne, Business Insider, MIT Technology Review, Science Daily, Popular Science, Fast Company, The Register, WIRED, and Engadget, among others.
</p><br>
<br>
</div>
<div id="Farah" class="collapse" data-parent="#accordion">
<strong>Title: </strong>Generative AI for Social Good: A Myth or a Reality?
<br>
<strong>Speaker: </strong><a href="https://www.irit.fr/~Farah.Benamara/">Farah Benamara</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract: </strong>
Linguistically-informed processing of unstructured textual interactions offers an important testing ground for hybrid AI and AI for social good, in particular when the attempt is to automatically understand beyond what is said. This talk is about the implicit nature of linguistic expressions, investigating the role of context in their automatic processing: if humans need context, what about machines? I attempt to answer this question through two particular NLP applications: hate speech detection and crisis management. I review the main findings of current studies and question the use of generative AI models in applications with great social and ethical implications for society.
</p>
<br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Farah Benamara is a Full Professor of computer science at Toulouse University Paul Sabatier. She is a member of the IRIT laboratory and co-head of the MELODI group. Her research concerns Natural Language Processing and focuses on the development of semantic and pragmatic models for language understanding, with particular attention to evaluative language processing, discourse processing and information extraction from texts. She has published more than 100 papers in peer-reviewed international conferences and journals. She has served as an area chair at ACL 2019, EACL 2021 and EACL 2024, and as a Senior Area Chair at NAACL 2024. She is a member of the editorial boards of Dialogue and Discourse, IEEE Transactions on Affective Computing and Traitement Automatique des Langues. She co-edited a special issue on contextual phenomena in evaluative language processing in the journal Computational Linguistics. She is PI of several ongoing projects, among which DesCartes at CNRS@CREATE Singapore on hybrid AI for NLP; Sterheotypes, an EU project on the detection of racial stereotypes; QualityOnto, an ANR-DFG project on fact-checking for knowledge graph validation; and finally INTACT, a CNRS prematuration project on NLP-based crisis management from social media.
</p><br>
<br>
</div>
<div id="Vivian" class="collapse" data-parent="#accordion">
<strong>Title: </strong> From Bots to Buddies: Making Conversational Agents More Human-Like
<br>
<strong>Speaker: </strong><a href="https://www.csie.ntu.edu.tw/~yvchen/">Vivian Chen</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract: </strong>
While today's conversational agents are equipped with impressive capabilities, there remains a clear distinction between the intuitive prowess of humans and the operational limits of machines. An example of this disparity is evident in the human ability to infer implicit intents from users' utterances, subsequently guiding conversations toward specific topics or recommending appropriate tasks or products. This talk aims to elevate conversational agents to a more human-like realm, enhancing user experience and practicality. By exploring innovative strategies and frameworks that leverage commonsense knowledge, we delve into the potential ways conversational agents can evolve to offer more seamless, contextually aware, and user-centric interactions. The goal is not only to close the gap between human and machine interactions but also to unlock new possibilities in how conversational agents can be utilized in our daily lives.
</p><br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Yun-Nung (Vivian) Chen is currently an associate professor in the Department of Computer Science & Information Engineering at National Taiwan University. She earned her Ph.D. degree from Carnegie Mellon University, where her research interests focus on spoken dialogue systems and natural language processing. She was recognized as the Taiwan Outstanding Young Women in Science and received Google Faculty Research Awards, Amazon AWS Machine Learning Research Awards, MOST Young Scholar Fellowship, and FAOS Young Scholar Innovation Award. Her team was selected to participate in the first Alexa Prize TaskBot Challenge in 2021. Prior to joining National Taiwan University, she worked in the Deep Learning Technology Center at Microsoft Research Redmond.
</p><br>
<br>
</div>
<div id="Tanya" class="collapse" data-parent="#accordion">
<strong>Title: </strong>Evaluation in the era of GPT-4
<br>
<strong>Speaker: </strong><a href="https://tagoyal.github.io/">Tanya Goyal</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract:</strong>
As large language models become more embedded in user applications, there is a push to align their outputs with human preferences. But human preferences are highly subjective, making both model alignment and evaluation extremely challenging. In this talk, I will first outline work that highlights this subjectivity, for a relatively well-defined task like summarization, and its effects on downstream model evaluations. Next, I will discuss how effectively trained models can capture human preferences and the impact of integrating these models into RLHF pipelines.
</p>
<br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Tanya Goyal is an incoming (Fall 2024) assistant professor of Computer Science at Cornell University. For the 2023-2024 academic year, she is a postdoctoral researcher at the Princeton Language and Intelligence (PLI) group. Her current research focuses on designing scalable and cost-effective evaluation techniques for LLMs. Particularly, she is interested in understanding and modeling the subjectivity in human feedback, and how this affects both evaluation and training of LLMs at scale. Previously, she received her Ph.D. in computer science from the University of Texas at Austin in 2023, advised by Dr. Greg Durrett. Her thesis research focused on building tools to automatically detect attribution errors in generated text.
</p>
<br>
<br>
</div>
<div class="table-responsive">
<table class="table">
<tbody>
<tr align="center">
<td>
<a href="#Diyi" data-toggle="collapse">
<figure>
<img src="images/speaker/Diyi_Yang.jpg" class="hover2"
height="185">
<figcaption><h5>Diyi Yang</h5></figcaption>
</figure>
</a>
</td>
<td>
<a href="#Joao" data-toggle="collapse">
<figure>
<img src="images/speaker/joao-sedoc.webp" class="hover2" height="185">
<figcaption><h5>João Sedoc</h5></figcaption>
</figure>
</a>
</td>
<td>
<a href="#Daniel" data-toggle="collapse">
<figure>
<img src="images/speaker/Daniel Preotiuc.jpeg" class="hover2"
height="185">
<figcaption><h5>Daniel Preoțiuc-Pietro</h5></figcaption>
</figure>
</a>
</td>
<td>
<a href="#Huda" data-toggle="collapse">
<figure>
<img src="images/speaker/HudaKhayrallah.jpg" class="hover2"
height="185">
<figcaption><h5>Huda Khayrallah</h5></figcaption>
</figure>
</a>
</td>
</tr>
</tbody>
</table>
</div>
<div id="Diyi" class="collapse" data-parent="#accordion">
<strong>Title: </strong> Human-AI Interaction in the Age of LLMs
<br>
<strong>Speaker: </strong><a href="https://cs.stanford.edu/~diyiy/">Diyi Yang</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract:</strong>
Large language models have revolutionized the way humans interact with AI systems, transforming a wide range of fields and disciplines. In this talk, I share two distinct approaches to empowering human-AI interaction using LLMs. The first one explores how large language models transform computational social science, and how human-AI collaboration can reduce costs and improve the efficiency of social science research. The second part focuses on social skill learning via LLMs by empowering therapists with LLM-empowered feedback and deliberative practices. These two works demonstrate how human-AI interaction via LLMs can foster positive change.
</p>
<br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Diyi Yang is an assistant professor in the Computer Science Department at Stanford University. Her research focuses on natural language processing for social impact. She has received multiple best paper awards and recognitions at leading conferences in NLP and HCI. She is a recipient of IEEE “AI 10 to Watch” (2020), Intel Rising Star Faculty Award (2021), Samsung AI Researcher of the Year (2021), Microsoft Research Faculty Fellowship (2021), NSF CAREER Award (2022), and an ONR Young Investigator Award (2023).
</p><br>
<br>
</div>
<div id="Joao" class="collapse" data-parent="#accordion">
<strong>Title: </strong>New Dimensions for the Evaluation of Conversational Agents
<br>
<strong>Speaker: </strong><a href="https://www.stern.nyu.edu/faculty/bio/joao-sedoc">João Sedoc</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract: </strong>
The rapid advances in large language models have brought about disruptive innovations in the field of conversational agents. However, recent advances also present new challenges in evaluating the quality of such systems, as well as the underlying models and methods. As conversational agents increasingly match or even surpass human performance in dimensions like 'coherence,' we must shift our focus to the qualities of conversational agents that are fundamental to human-like conversation (e.g., empathy and emotion). In this talk, I will focus on how we can integrate psychological metrics for evaluating conversational agents along dimensions such as emotion, empathy, and user traits.
</p><br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. João Sedoc is an Assistant Professor of Information Systems in the Department of Technology, Operations and Statistics at New York University Stern School of Business. He is also affiliated with the ML^2 Lab at the NYU Center for Data Science. His research areas are at the intersection of machine learning and natural language processing. His interests include conversational agents, hierarchical models, deep learning, and time series analysis. Before joining NYU Stern, he worked as an Assistant Research Professor in the Department of Computer Science at Johns Hopkins University. He received his PhD in Computer and Information Science from the University of Pennsylvania.
</p><br>
<br>
</div>
<div id="Daniel" class="collapse" data-parent="#accordion">
<strong>Title: </strong> Modular Language Modeling through Model Merging
<br>
<strong>Speaker: </strong><a href="https://www.preotiuc.ro/">Daniel Preoțiuc-Pietro</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract:</strong>
Pre-trained language models are the cornerstone of most NLP applications and their performance is dependent on using large and diverse data sets. Combining knowledge of multiple data sets in a single model either in pretraining or fine-tuning can lead to better overall performance on in-domain data and can better generalize on out-of-domain data. This talk will present methods and experiments with model merging, defined as combining multiple models into a single one in parameter space without access to data or retraining. This enables models to be modular by design, where models trained on individual data sets can be dynamically used, combined or, if needed, removed under arbitrary constraints. Merging is compute and parameter efficient and allows leveraging models without access to potentially private data used in their training.
</p><br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Daniel Preoțiuc-Pietro is a Senior Research Scientist and the manager of the NLP Platforms group in the Bloomberg AI Engineering group. The group's work powers products for news, financial documents, social media and search. His main research interests are in understanding and modelling the social, pragmatic and temporal aspects of text, especially from social media, with applications in domains such as Psychology, Law, Political Science and Journalism. His research has been featured in the popular press, including the Washington Post, BBC, Scientific American and FiveThirtyEight. He is a co-organizer of the Natural Legal Language Processing workshop series. Prior to joining Bloomberg, he obtained his PhD from the University of Sheffield and was a postdoctoral researcher at the University of Pennsylvania.
</p><br>
<br>
</div>
<div id="Huda" class="collapse" data-parent="#accordion">
<strong>Title: </strong> Perplexity-Driven Case Encoding Needs Augmentation for CAPITALIZATION Robustness
<br>
<strong>Speaker: </strong><a href="https://khayrallah.github.io/">Huda Khayrallah</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract:</strong>
For most NLP models, upper and lower case letters are represented with distinct code-points. In contrast, most people naturally connect upper and lower-cased letters as highly similar and therefore expect NLP models to perform similarly on inputs that only differ in casing. However, that is often not the case, and NLP models are often unstable on non-standard casings. Subword segmentation methods (e.g., BPE (Sennrich et al., 2016) and SPM (Kudo and Richardson, 2018)) handle the sparsity introduced by a variety of linguistic features (e.g., concatenative morphology) by learning a segmentation of words into shorter sequences of characters. However, such methods do not currently handle the sparsity introduced by casing well and can lead to terrible quality on ALL CAPS data. Prior work (Berard et al., 2019; Etchegoyhen and Gete, 2020) overcame the quality drop in machine translation but did so in a way that breaks the encoding optimality of perplexity-driven methods, leading to impractical sequence length/runtime. In this work, we re-encode capitalization to allow the perplexity-driven subword segmentation model to learn how to best segment this linguistic feature. Naturally occurring data accurately describes the prevalence of capitalization but underestimates the importance humans ascribe to capitalization robustness. We propose data augmentation to fill this gap. Overall, we increase translation quality on data with different casings (compared to standard SPM), with minimal impact on decoding speed on standard cased data and large speed improvements on ALL CAPS data.
</p><br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Huda Khayrallah is a senior researcher at Microsoft, working on the Microsoft Translator team. She holds a PhD in computer science from The Johns Hopkins University (JHU), where she was advised by Philipp Koehn. She also holds a bachelor’s in computer science from UC Berkeley. She has worked on a variety of topics in machine translation and NLP including: low resource MT, noisy data in MT, domain adaptation, chatbots, and more.
</p><br>
<br>
</div>
<div class="table-responsive">
<table class="table">
<tbody>
<tr align="center">
<td>
<a href="#Alessandro" data-toggle="collapse">
<figure>
<img src="images/speaker/Alessandro Moschitti.png" class="hover2" height="185">
<figcaption><h5>Alessandro Moschitti</h5></figcaption>
</figure>
</a>
</td>
<td>
<a href="#Lidong" data-toggle="collapse">
<figure>
<img src="images/speaker/lidong_bing.jpeg" class="hover2"
height="185">
<figcaption><h5>Lidong Bing</h5></figcaption>
</figure>
</a>
</td>
</tr>
</tbody>
</table>
</div>
<div id="Alessandro" class="collapse" data-parent="#accordion">
<strong>Title: </strong>Retrieval-Augmented Large Language Models for Personal Assistants
<br>
<strong>Speaker: </strong><a href="https://www.linkedin.com/in/alessandro-moschitti-10999a4/">Alessandro Moschitti</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract: </strong>
Recent work has shown that Large Language Models (LLMs) can potentially answer any question with high accuracy, also providing justifications of the generated output. At the same time, other research has shown that even the most powerful and accurate models, such as ChatGPT 4, produce hallucinations, which often invalidate their answers. Retrieval-Augmented LLMs are currently a practical solution that can effectively solve the above-mentioned problem. However, the quality of grounding is essential in order to improve the model, since noisy context deteriorates the overall performance. In this talk, we present our experience with Generative Question Answering, which uses basic search engines and accurate passage rerankers to augment relatively small language models. Interestingly, our approach provides a more direct interpretation of knowledge grounding for LLMs.
</p><br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Alessandro Moschitti is a Principal Research Scientist of Amazon Alexa AI, where he has been leading the science of the Alexa information service since 2018. He designed the Alexa QA system based on unstructured text and, more recently, the first Generative QA system to extend the answer skills of Alexa. He obtained his Ph.D. in CS from the University of Rome in 2003, and then did his postdoc at The University of Texas at Dallas for two years. He was a professor in the CS Dept. of the University of Trento, Italy, from 2007 to 2021. He participated in the Jeopardy! Grand Challenge with the IBM Watson Research center (2009 to 2011), and collaborated with them until 2015. He was a Principal Scientist of the Qatar Computing Research Institute (QCRI) for five years (2013-2018). His expertise concerns theoretical and applied machine learning in the areas of NLP, IR and Data Mining. He is well-known for his work on structural kernels and neural networks for syntactic/semantic inference over text, documented by more than 330 scientific articles. He has received four IBM Faculty Awards, one Google Faculty Award, and five best paper awards. He was the General Chair of EACL 2023 and EMNLP 2014, a PC co-Chair of CoNLL 2015, and has had a chair role in more than 70 conferences and workshops. He is currently a senior action/associate editor of ACM Computing Surveys and JAIR. He has led ~30 research projects, e.g., with MIT CSAIL.
</p><br>
<br>
</div>
<div id="Lidong" class="collapse" data-parent="#accordion">
<strong>Title: </strong>Research and Implementation of Large Language Models at Alibaba DAMO Academy
<br>
<strong>Speaker: </strong><a href="https://lidongbing.github.io/">Lidong Bing</a>
<br>
<br>
<p style="text-align: justify;"> <strong> Abstract:</strong>
Over the past year, Large Language Models (LLMs) have brought about a significant transformation in the field of Natural Language Processing (NLP) and artificial intelligence (AI). This presentation will provide an overview of the research and practical initiatives carried out by Alibaba DAMO Academy in the domain of LLMs. On the practical front, the team has introduced an LLM called <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b">SeaLLMs</a>, which demonstrates remarkable capabilities across major languages in the ASEAN region. When compared to models with similar parameter sizes, SeaLLMs has achieved state-of-the-art performance on various datasets, spanning from fundamental NLP tasks to complex general task solving. Additionally, SeaLLMs has been meticulously customized to enhance safety in these languages and improve its understanding of local cultures. On the research side, the presenter will introduce several recent projects undertaken by the team to advance the development of superior multilingual LLMs. These initiatives include the creation of a multilingual evaluation benchmark for LLMs, an extensive investigation into multilingual jailbreak, a framework that enhances LLMs by incorporating adaptive knowledge sources, a method for extending context length in pretraining, and a framework aimed at making LLMs more effective for low-resource languages. Lastly, the presenter will offer insights into the directions that the team will investigate in the near future. Additionally, he will provide information about career opportunities at DAMO Academy.
</p><br>
<p style="text-align: justify;"> <strong>Bio:</strong>
Dr. Lidong Bing is the director of the Language Technology Lab at DAMO Academy of Alibaba Group. He received a Ph.D. from The Chinese University of Hong Kong and was a postdoc research fellow at Carnegie Mellon University. His research interests include various low-resource and multilingual NLP problems, large language models and their applications, etc. He has published over 150 papers on these topics in top peer-reviewed venues. Currently, he is serving as an Action Editor for Transactions of the Association for Computational Linguistics (TACL) and ACL Rolling Review (ARR), as well as an Area Chair for AI conferences and an Associate Editor for AI journals. </p><br>
<br>
</div>
</div>
</div>
</div>
</section>
<!--
<section id="panelists" class="bg-light">
<div class="container">
<div class="row">
<div class="col-lg-8 mx-auto">
<h3>Panel Discussions: Ethics in AI</h3>
<p>
The following speakers have accepted to serve as panelists for the panel discussion at SSNLP 2023.
You can view their detailed
information by clicking the images. Eduard Hovy and other academic speakers will also be discussants
on the panel (TBC).
</p>
<div class="accordion" id="accordion2">
<div class="table-responsive">
<table class="table">
<tbody>
<tr align="center">
<td>
<a href="#liling" data-toggle="collapse">
<figure>
<img src="images/liling.png" class="hover2" width="160" height="192">
<figcaption><h5>Liling Tan</h5></figcaption>
</figure>
</a>
</td>
<td>
<a href="#shafiq" data-toggle="collapse">
<figure>
<img src="images/shafiq.jpg" class="hover2" align="center" width="160"
height="192">
<figcaption>
<h5>Shafiq Joty</h5>
</figcaption>
</figure>
</a>
</td>
<td>
<a href="#guillermo" data-toggle="collapse">