#topic:actor
#language:english
#medium:paper
Actor Model of Computation: Scalable Robust Information Systems
The Actor Model is a mathematical theory that treats “Actors” as the
universal primitives of digital computation. The model has been used
both as a framework for a theoretical understanding of concurrency, and as
the theoretical basis for several practical implementations of concurrent
systems. The advent of massive concurrency through client-cloud computing
and many-core computer architectures has galvanized interest in the Actor Model
^
@https://arxiv.org/pdf/1008.1459.pdf
#topic:actor
#language:english
#medium:paper
Formalizing common sense reasoning for scalable inconsistency-robust information integration using Direct LogicTM Reasoning and the Actor Model
People use common sense in their interactions with large information systems.
This common sense needs to be formalized so that it can be used by computer
systems. Unfortunately, previous formalizations have been inadequate. For
example, classical logic is not safe for use with pervasively inconsistent
information. The goal is to develop a standard foundation for reasoning in
large-scale Internet applications (including sense making for natural
language)
^
@https://arxiv.org/pdf/0812.4852.pdf
#topic:bigraphs
#topic:category theory
#language:english
#medium:paper
Bigraphical reactive systems: basic theory
A notion of bigraph is proposed as the basis for a model of mobile
interaction. A bigraph consists of two independent structures: a topograph
representing locality and a monograph representing connectivity. Bigraphs
are equipped with reaction rules to form bigraphical reactive systems (BRSs),
which include versions of the π-calculus and the ambient calculus. Bigraphs are
shown to be a special case of a more abstract notion, wide reactive systems
(WRSs), not assuming any particular graphical or other structure but equipped
with a notion of width, which expresses that agents, contexts and reactions
may all be widely distributed entities
^
@https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-523.pdf
#topic:mbus
#topic:wireless mbus
#topic:security
#language:english
#medium:paper
Wireless M-Bus Security Whitepaper
This work aims to analyse the security of the Meter Bus as specified
in the relevant International Organisation for Standardization (ISO)
documentation. M-Bus has its roots in the heat metering industries and
was continuously adapted to fit more complex applications. M-Bus is the
communication bus of choice of several meter manufacturers and its applications
span from drive-by wireless meter reading over to meter-to-meter and mesh
networking to meter-to-collector communication. M-Bus implementations support
different media types such as power line carrier (PLC) or twisted-pair
bus. To avoid the wiring efforts at the distribution level, utilities,
metering companies and manufacturers tend to more frequently choose wireless
protocols for communication. Accordingly, the analysis will mainly concentrate
on M-Bus wireless based communication – wM-Bus
^
@https://www.compass-security.com/fileadmin/Datein/Research/Praesentationen/blackhat_2013_wmbus_security_whitepaper.pdf
#topic:fpga
#topic:neural network
#topic:machine learning
#language:english
#medium:paper
FPGA Implementations of Neural Networks
During the 1980s and early 1990s there was significant work in the design
and implementation of hardware neurocomputers. Nevertheless, most of
these efforts may be judged to have been unsuccessful: at no time have
hardware neurocomputers been in wide use. This lack of success may be
largely attributed to the fact that earlier work was almost entirely aimed
at developing custom neurocomputers, based on ASIC technology, but for such
niche areas this technology was never sufficiently developed or competitive
enough to justify large-scale adoption. On the other hand, gate-arrays of
the period mentioned were never large enough nor fast enough for serious
artificial neural network (ANN) applications. But technology has now improved:
the capacity and performance of current FPGAs are such that they present
a much more realistic alternative. Consequently neurocomputers based on
FPGAs are now a much more practical proposition than they have been in the
past. This book summarizes some work towards this goal and consists of 12
papers that were selected, after review, from a number of submissions.
^
@http://lab.fs.uni-lj.si/lasin/wp/IMIT_files/neural/doc/Omondi2006.pdf
#topic:fpga
#topic:neural network
#topic:machine learning
#language:english
#medium:paper
A General Neural Network Hardware Architecture on FPGA
Field Programmable Gate Arrays (FPGAs) play an increasingly important
role in data sampling and processing industries due to their highly
parallel architecture, low power consumption, and flexibility for custom
algorithms. In the artificial intelligence field especially, training and
implementing neural networks and machine learning algorithms demand
energy-efficient hardware implementations and massively parallel computing
capacity. Therefore, many global companies have applied FPGAs to AI and
machine learning fields such as autonomous driving and Automatic Spoken
Language Recognition (Baidu) [1] [2] and Bing search (Microsoft) [3].
Considering the great potential of FPGAs in these fields, we implement
a general neural network hardware architecture on the XILINX ZU9CG
System On Chip (SOC) platform [4], which contains abundant hardware
resources and powerful processing capacity. The general neural network
architecture on the FPGA SOC platform can perform forward and backward
algorithms in deep neural networks (DNN) with high performance and can
easily be adjusted according to the type and scale of the neural networks.
^
@https://arxiv.org/ftp/arxiv/papers/1711/1711.05860.pdf
#topic:file system
#language:english
#medium:paper
TagFS: A simple tag-based filesystem
TagFS is a simple yet effective tag-based filesystem. Instead of organizing files and documents in
a strict hierarchy (like traditional filesystems), TagFS allows users to assign descriptive attributes
(called tags) to files and subsequently locate those files by searching for tags of interest.
^
@https://web.mit.edu/6.033/2011/wwwdocs/writing-samples/sbezek_dp1.pdf
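A minimal sketch of the core idea, assuming Python; the tag() and find()
helpers are hypothetical illustrations, not an API from the paper. A tag
index is a mapping from each tag to the set of files carrying it, and a
query intersects those sets instead of walking a directory hierarchy.

    from collections import defaultdict

    # Hypothetical sketch of the TagFS idea: files carry tags, and lookups
    # intersect a tag -> files index instead of traversing a hierarchy.
    index = defaultdict(set)

    def tag(path, *tags):
        for t in tags:
            index[t].add(path)

    def find(*tags):
        sets = [index[t] for t in tags]
        return set.intersection(*sets) if sets else set()

    tag("/docs/thesis.pdf", "draft", "phd")
    tag("/docs/budget.xls", "draft", "finance")
    print(find("draft"))           # both files
    print(find("draft", "phd"))    # only the thesis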
#topic:optimisation
#topic:neural network
#topic:genetic algorithms
#topic:differential evolution
#topic:bee swarm algorithm
#topic:ant colony optimisation
#language:english
#medium:book
Clever Algorithms: Nature-Inspired Programming Recipes
Implementing an Artificial Intelligence algorithm is difficult. Algorithm
descriptions may be incomplete, inconsistent, and distributed across a
number of papers, chapters and even websites. This can result in varied
interpretations of algorithms, undue attrition of algorithms, and ultimately
bad science. This book is an effort to address these issues by providing
a handbook of algorithmic recipes drawn from the fields of Metaheuristics,
Biologically Inspired Computation and Computational Intelligence, described in
a complete, consistent, and centralized manner. These standardized descriptions
were carefully designed to be accessible, usable, and understandable. Most
of the algorithms described were originally inspired by biological and
natural systems, such as the adaptive capabilities of genetic evolution
and the acquired immune system, and the foraging behaviors of birds, bees,
ants and bacteria. An encyclopedic algorithm reference, this book is intended
for research scientists, engineers, students, and interested amateurs. Each
algorithm description provides a working code example in the Ruby Programming
Language.
^
@https://raw.githubusercontent.com/clever-algorithms/CleverAlgorithms/master/release/clever_algorithms.pdf
#topic:category theory
#topic:haskell
#topic:c++
#topic:template metaprogramming
#topic:type system
#author:Bartosz Milewski
#language:english
#medium:book
Category Theory for Programmers
For some time now I’ve been floating the idea of writing a book about
category theory that would be targeted at programmers. Mind you, not computer
scientists but programmers — engineers rather than scientists. I know
this sounds crazy and I am properly scared. I can’t deny that there is
a huge gap between science and engineering because I have worked on both
sides of the divide. But I’ve always felt a very strong compulsion to
explain things. I have tremendous admiration for Richard Feynman who was
the master of simple explanations. I know I’m no Feynman, but I will try
my best. I’m starting by publishing this preface — which is supposed
to motivate the reader to learn category theory — in hopes of starting a
discussion and soliciting feedback.
^
@https://github.com/hmemcpy/milewski-ctfp-pdf/releases/download/v1.3.0/category-theory-for-programmers.pdf
#topic:category theory
#topic:version control
#author:Samuel Mimram
#author:Cinzia Di Giusto
#language:english
#medium:paper
A Categorical Theory of Patches
When working with distant collaborators on the same documents, one often
uses a version control system, which is a program tracking the history of
files and helping importing modifications brought by others as patches. The
implementation of such a system requires to handle lots of situations depending
on the operations performed by users on files, and it is thus difficult to
ensure that all the corner cases have been correctly addressed. Here, instead
of verifying the implementation of such a system, we adopt a complementary
approach: we introduce a theoretical model, which is defined abstractly
by the universal property that it should satisfy, and work out a concrete
description of it. We begin by defining a category of files and patches,
where the operation of merging the effect of two coinitial patches is defined
by pushout. Since two patches can be incompatible, such a pushout does not
necessarily exist in the category, which raises the question of which is the
correct category to represent and manipulate files in conflicting state. We
provide an answer by investigating the free completion of the category of files
under finite colimits, and give an explicit description of this category:
its objects are finite sets labeled by lines equipped with a transitive
relation and morphisms are partial functions respecting labeling and relations.
^
@https://arxiv.org/pdf/1311.3903
#topic:algorithm
#author:Eugene W. Myers
#language:english
#medium:paper
An O(ND) Difference Algorithm and Its Variations
The problems of finding a longest common subsequence of two sequences A
and B and a shortest edit script for transforming A into B have long been
known to be dual problems. In this paper, they are shown to be equivalent to
finding a shortest/longest path in an edit graph. Using this perspective,
a simple O(ND) time and space algorithm is developed where N is the sum of
the lengths of A and B and D is the size of the minimum edit script for A
and B. The algorithm performs well when differences are small (sequences are
similar) and is consequently fast in typical applications. The algorithm is
shown to have O(N + D^2) expected-time performance under a basic stochastic
model. A refinement of the algorithm requires only O(N) space, and the use
of suffix trees leads to an O(N lg N + D^2) time variation.
^
@http://www.xmailserver.org/diff2.pdf
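A minimal Python sketch (mine, not the paper's code) of the greedy forward
pass described above: it records the furthest-reaching path on each
diagonal k = x - y of the edit graph and returns the first D at which the
end point (N, M) is reached, i.e. the length of a shortest edit script.

    def myers_distance(a, b):
        # Greedy O(ND) forward search: extend furthest-reaching paths
        # diagonal by diagonal until the bottom-right corner is reached.
        n, m = len(a), len(b)
        max_d = n + m
        off = max_d + 1                     # offset so diagonal k indexes a list
        v = [0] * (2 * max_d + 3)           # v[k + off] = furthest x on diagonal k
        for d in range(max_d + 1):
            for k in range(-d, d + 1, 2):
                if k == -d or (k != d and v[k - 1 + off] < v[k + 1 + off]):
                    x = v[k + 1 + off]      # step down: insertion
                else:
                    x = v[k - 1 + off] + 1  # step right: deletion
                y = x - k
                while x < n and y < m and a[x] == b[y]:
                    x, y = x + 1, y + 1     # free diagonal moves (a "snake")
                v[k + off] = x
                if x >= n and y >= m:
                    return d                # size of the minimum edit script
        return max_d

    # myers_distance("kitten", "sitting") == 5 (insertions and deletions only)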
#topic:sorting
#author:Sergei Bespamyatnikh
#author:Michael Segal
#language:english
#medium:paper
Enumerating Longest Increasing Subsequences and Patience Sorting (2000)
In this paper we present three algorithms that solve three combinatorial
optimization problems related to each other. One of them is the patience
sorting game, invented as a practical method of sorting real decks of
cards. The second problem is computing the longest monotone increasing
subsequence of the given sequence of n positive integers in the range 1, ..., n.
The third problem is to enumerate all the longest monotone increasing
subsequences of the given permutation.
^
@http://www.ii.uni.wroc.pl/~lorys/IPL/article76-1-1.pdf
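A short sketch of the connection the paper exploits, in Python using the
standard-library bisect module: playing patience with the greedy
leftmost-pile rule produces a number of piles equal to the length of a
longest strictly increasing subsequence.

    import bisect

    def lis_length(seq):
        # Keep only the smallest possible top card of each pile; the pile
        # count equals the longest increasing subsequence length.
        piles = []
        for x in seq:
            i = bisect.bisect_left(piles, x)   # leftmost pile with top >= x
            if i == len(piles):
                piles.append(x)
            else:
                piles[i] = x
        return len(piles)

    # lis_length([3, 1, 4, 1, 5, 9, 2, 6]) == 4   (e.g. 1, 4, 5, 9)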
#topic:single static assignment
#topic:compiler
#topic:compiler optimisation
#author:Lori Carter
#author:Beth Simon
#author:Brad Calder
#author:Larry Carter
#author:Jeanne Ferrante
#language:english
#medium:paper
Predicated Static Single Assignment
Increases in instruction level parallelism are needed to exploit the potential
parallelism available in future wide issue architectures. Predicated execution
is an architectural mechanism that increases instruction level parallelism
by removing branches and allowing simultaneous execution of multiple paths
of control, only committing instructions from the correct path. In order
for the compiler to expose such parallelism, traditional compiler data-flow
analysis needs to be extended to predicated code. In this paper, we present
Predicated Static Single Assignment (PSSA) to enable aggressive predicated
optimization and instruction scheduling. PSSA removes false dependences by
exploiting renaming and information about the multiple control paths. We
demonstrate the usefulness of PSSA for Predicated Speculation and Control
Height Reduction. These two predicated code optimizations used during
instruction scheduling reduce the dependence length of the critical paths
through a predicated region. Our results show that using PSSA to enable
speculation and control height reduction reduces execution time from 10%
to 58%.
^
@http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.18.7756&rep=rep1&type=pdf
#topic:single static assignment
#topic:compiler
#topic:compiler optimisation
#language:english
#medium:book
Static Single Assignment Book
^
@http://ssabook.gforge.inria.fr/latest/book.pdf
#topic:neural network
#topic:auto-encoding
#author:Diederik P. Kingma
#author:Max Welling
#language:english
#medium:paper
Auto-Encoding Variational Bayes
How can we perform efficient inference and learning in directed probabilistic
models, in the presence of continuous latent variables with intractable
posterior distributions, and large datasets? We introduce a stochastic
variational inference and learning algorithm that scales to large datasets
and, under some mild differentiability conditions, even works in the
intractable case. Our contribution is two-fold. First, we show that a
reparameterization of the variational lower bound yields a lower bound
estimator that can be straightforwardly optimized using standard stochastic
gradient methods. Second, we show that for i.i.d. datasets with continuous
latent variables per datapoint, posterior inference can be made especially
efficient by fitting an approximate inference model (also called a recognition
model) to the intractable posterior using the proposed lower bound estimator.
Theoretical advantages are reflected in experimental results.
^
@https://arxiv.org/pdf/1312.6114.pdf
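A small numpy sketch of the two ingredients highlighted above; the
function names are mine, not the paper's. The reparameterization
z = mu + sigma * eps moves the randomness into eps so the lower bound can
be optimized with ordinary gradients, and the KL term against a standard
normal prior has a closed form for a diagonal Gaussian posterior.

    import numpy as np

    def reparameterize(mu, log_var, rng=np.random.default_rng()):
        # z = mu + sigma * eps, eps ~ N(0, I): gradients flow through mu, log_var.
        eps = rng.standard_normal(mu.shape)
        return mu + np.exp(0.5 * log_var) * eps

    def gaussian_kl(mu, log_var):
        # KL(q(z|x) || N(0, I)) = -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)
        return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))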
#topic:neural network
#author:Thomas N. Kipf
#author:Max Welling
#language:english
#medium:paper
Variational Graph Auto-Encoders
We introduce the variational graph auto-encoder (VGAE), a framework for
unsupervised learning on graph-structured data based on the variational
auto-encoder (VAE). This model makes use of latent variables and is capable
of learning interpretable latent representations for undirected graphs. We
demonstrate this model using a graph convolutional network (GCN) encoder and
a simple inner product decoder. Our model achieves competitive results on a
link prediction task in citation networks. In contrast to most existing models
for unsupervised learning on graph-structured data and link prediction, our
model can naturally incorporate node features, which significantly improves
predictive performance on a number of benchmark datasets.
^
@https://arxiv.org/pdf/1611.07308.pdf
#topic:neural network
#author:Carl Doersch
#language:english
#medium:paper
Tutorial on Variational Autoencoders
In just three years, Variational Autoencoders (VAEs) have emerged as one
of the most popular approaches to unsupervised learning of complicated
distributions. VAEs are appealing because they are built on top of standard
function approximators (neural networks), and can be trained with stochastic
gradient descent. VAEs have already shown promise in generating many kinds
of complicated data, including handwritten digits, faces, house numbers,
CIFAR images, physical models of scenes, segmentation, and predicting the
future from static images. This tutorial introduces the intuitions behind
VAEs, explains the mathematics behind them, and describes some empirical
behavior. No prior knowledge of variational Bayesian methods is assumed.
^
@https://arxiv.org/pdf/1606.05908.pdf
#topic:neural network
#author:Ronald Yu
#language:english
#medium:paper
A Tutorial on VAEs: From Bayes' Rule to Lossless Compression
The Variational Auto-Encoder (VAE) is a simple, efficient, and popular
deep maximum likelihood model. Though usage of VAEs is widespread, the
derivation of the VAE is not as widely understood. In this tutorial, we will
provide an overview of the VAE and a tour through various derivations and
interpretations of the VAE objective. From a probabilistic standpoint, we will
examine the VAE through the lens of Bayes' Rule, importance sampling, and the
change-of-variables formula. From an information theoretic standpoint, we will
examine the VAE through the lens of lossless compression and transmission
through a noisy channel. We will then identify two common misconceptions
over the VAE formulation and their practical consequences. Finally, we will
visualize the capabilities and limitations of VAEs using a code example
(with an accompanying Jupyter notebook) on toy 2D data.
^
@https://arxiv.org/pdf/2006.10273.pdf
#topic:signed distance field
#author:Chris Green
#language:english
#medium:paper
Improved Alpha-Tested Magnification for Vector Textures and Special Effects
A simple and efficient method is presented which allows improved rendering
of glyphs composed of curved and linear elements. A distance field is
generated from a high resolution image, and then stored into a channel of
a lower-resolution texture. In the simplest case, this texture can then be
rendered simply by using the alpha-testing and alpha-thresholding feature of
modern GPUs, without a custom shader. This allows the technique to be used
on even the lowest-end 3D graphics hardware. With the use of programmable
shading, the technique is extended to perform various special effect
renderings, including soft edges, outlining, drop shadows, multi-colored
images, and sharp corners.
^
@https://steamcdn-a.akamaihd.net/apps/valve/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
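A brute-force Python sketch (my illustration, not the paper's code) of the
pipeline described above: compute a signed distance field from a binary
glyph mask, then reconstruct a crisp edge simply by thresholding it, which
is what alpha testing does in hardware.

    import numpy as np

    def signed_distance_field(mask):
        # Positive inside the shape, negative outside; brute force for clarity.
        # Assumes the boolean mask contains both filled and empty pixels.
        inside = np.argwhere(mask)
        outside = np.argwhere(~mask)
        h, w = mask.shape
        sdf = np.empty((h, w))
        for y in range(h):
            for x in range(w):
                pts = outside if mask[y, x] else inside
                d = np.min(np.hypot(pts[:, 0] - y, pts[:, 1] - x))
                sdf[y, x] = d if mask[y, x] else -d
        return sdf

    def alpha_test(lowres_sdf, threshold=0.0):
        # Rendering reduces to thresholding a (bilinearly sampled) low-res field.
        return lowres_sdf >= threshold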
#topic:signed distance field
#author:Yue Jiang
#author:Dantong Ji
#author:Zhizhong Han
#author:Matthias Zwicker
#language:english
#medium:paper
SDFDiff: Differentiable Rendering of Signed Distance Fields for 3D Shape Optimization
We propose SDFDiff, a novel approach for image-based shape optimization using
differentiable rendering of 3D shapes represented by signed distance functions
(SDF). Compared to other representations, SDFs have the advantage that
they can represent shapes with arbitrary topology, and that they guarantee
watertight surfaces. We apply our approach to the problem of multi-view 3D
reconstruction, where we achieve high reconstruction quality and can capture
complex topology of 3D objects. In addition, we employ a multi-resolution
strategy to obtain a robust optimization algorithm. We further demonstrate
that our SDF-based differentiable renderer can be integrated with deep
learning models, which opens up options for learning approaches on 3D objects
without 3D supervision. In particular, we apply our method to single-view
3D reconstruction and achieve state-of-the-art results.
^
@http://www.cs.umd.edu/~yuejiang/papers/SDFDiff.pdf
#topic:signed distance field
#topic:multi-channel signed distance field
#author:Viktor Chlumský
#language:english
#medium:paper
Shape Decomposition for Multi-channel Distance Fields
This work explores the possible improvements to a popular text rendering
technique widely used in 3D applications and video games. It proposes a
universal and efficient method of constructing a multi-channel distance
field for vector-based shapes, such as font glyphs, and describes its usage
in rendering with improved fidelity.
^
@https://github.com/Chlumsky/msdfgen/files/3050967/thesis.pdf
#topic:neural network
#author:Jianwen Xie
#author:Ruiqi Gao
#author:Zilong Zheng
#author:Song-Chun Zhu
#author:Ying Nian Wu
#language:english
#medium:paper
Learning Dynamic Generator Model by Alternating Back-Propagation Through Time
This paper studies the dynamic generator model for spatiotemporal processes
such as dynamic textures and action sequences in video data. In this model,
each time frame of the video sequence is generated by a generator model,
which is a non-linear transformation of a latent state vector, where the
non-linear transformation is parametrized by a top-down neural network. The
sequence of latent state vectors follows a non-linear auto-regressive model,
where the state vector of the next frame is a non-linear transformation of the
state vector of the current frame as well as an independent noise vector that
provides randomness in the transition. The non-linear transformation of this
transition model can be parametrized by a feedforward neural network. We show
that this model can be learned by an alternating back-propagation through time
algorithm that iteratively samples the noise vectors and updates the parameters
in the transition model and the generator model. We show that our training
method can learn realistic models for dynamic textures and action patterns.
^
@http://www.stat.ucla.edu/~jxie/DynamicGenerator/DynamicGenerator_file/doc/DynamicGenerator.pdf
#topic:neural network
#topic:cellular neural network
#author:N H Wulff
#author:J A Hertz
#language:english
#medium:paper
Learning Cellular Automaton Dynamics with Neural Networks
We have trained networks of Σ-Π units with short-range connections
to simulate simple cellular automata that exhibit complex or chaotic
behaviour. Three levels of learning are possible (in decreasing order of
difficulty): learning the underlying automaton rule, learning asymptotic
dynamical behaviour, and learning to extrapolate the training history. The
levels of learning achieved with and without weight sharing for different
automata provide new insight into their dynamics.
^
@https://papers.nips.cc/paper/703-learning-cellular-automaton-dynamics-with-neural-networks.pdf
#topic:neural network
#topic:cellular neural network
#author:Paolo Arena
#author:Luigi Fortuna
#language:english
#medium:paper
Cellular neural networks: from chaos generation to complexity modelling.
^
@https://www.researchgate.net/profile/Paolo_Arena2/publication/221165625_Cellular_neural_networks_from_chaos_generation_to_compexity_modelling/links/0a85e539fe2db469ce000000/Cellular-neural-networks-from-chaos-generation-to-compexity-modelling.pdf?origin=publication_detail
#topic:cellular automata
#author:Maria Vittoria Avolio
#author:Alessia Errera
#author:Valeria Lupiano
#author:Paolo Mazzanti
#author:Salvatore Di Gregorio
#language:english
#medium:paper
VALANCA: A Cellular Automata Model for Simulating Snow Avalanches
Numerical modelling is a major challenge in the prevention of hazards related
to the occurrence of catastrophic phenomena. Cellular Automata methods were
developed for modelling large scale (extended for kilometres) dangerous surface
flows of different nature such as lava flows, pyroclastic flows, debris
flows, rock avalanches, etc. This paper presents VALANCA, a first version
of a Cellular Automata model, developed for the simulations of dense snow
avalanches. VALANCA is largely based on the most advanced models developed
for flow-like landslides, and adopts some innovations such as outflows
characterized by the position of mass centre and explicit velocity. First
simulations of well-documented snow avalanches that occurred in the Davos region,
Switzerland (i.e. the 2006 Rüchitobel and the 2006 Gotschnawang snow
avalanches) show a satisfying agreement concerning the avalanche path, snow
cover erosion depth, deposit thickness and areal distribution. Furthermore,
preliminary simulations of the Gotschnawang snow-avalanche, by considering
the presence of mitigation structures, were performed.
^
@https://www.nhazca.it/wp-content/uploads/2020/06/VALANCA_2017.pdf
#topic:neural network
#topic:associative neural network
#language:english
#medium:paper
Neural Associative Memories
Neural associative memories (NAM) are neural network models consisting of
neuronlike and synapse-like elements. At any given point in time the state of
the neural network is given by the vector of neural activities, it is called
the activity pattern. Neurons update their activity values based on the inputs
they receive (over the synapses). In the simplest neural network models the
input-output function of a neuron is the identity function or a threshold
operation. Note that in the latter case the neural activity state is binary:
active or inactive. Information to be processed by the neural network is
represented by activity patterns (for instance, the representation of a tree
can be an activity pattern where active neurons display a tree's picture). Thus,
activity patterns are the representations of the elements processed in the
network. A representation is called sparse if the ratio between active and
inactive neurons is small.
^
@https://courses.cit.cornell.edu/bionb330/readings/Associative%20Memories.pdf
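A tiny numpy sketch of one classic NAM, a Hopfield-style auto-associative
memory (my illustration, not from the text): patterns of +/-1 activities
are stored with a Hebbian outer-product rule, and threshold updates drive
a noisy cue back towards the closest stored pattern.

    import numpy as np

    def train(patterns):
        # Hebbian outer-product rule over rows of +/-1 patterns; zero diagonal.
        n = patterns.shape[1]
        w = sum(np.outer(p, p) for p in patterns) / n
        np.fill_diagonal(w, 0.0)
        return w

    def recall(w, cue, steps=10):
        # Synchronous threshold (sign) updates: auto-associative recall.
        s = cue.copy()
        for _ in range(steps):
            s = np.where(w @ s >= 0, 1, -1)
        return s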
#topic:neural network
#topic:associative neural network
#language:english
#medium:paper
Auto-associative Memory: The First Step in Solving Cocktail Party Problem
One of the most interesting and challenging problems in the area of Artificial
Intelligence is solving the Cocktail Party problem. This is the task of
attending to one speaker among several competing speakers and being able
to switch the attention from one speaker to another at any given time. The human
brain is remarkably efficient at solving this problem. There have been numerous
attempts to emulate this ability in machines. Independent Component Analysis
(ICA) and Blind Source Separation (BSS) are two of the most popular solutions
to this problem but they both fail to generate similar results as human brain
does. They are also very computationally expensive which makes them incapable
of producing results in real-time. Moreover, they generally require at least
two microphones to converge but, as we know, the human brain can also work with
one ear covered. This is evident from the fact that covering one ear does
not make attending to one speaker in a cocktail party any harder.
^
@http://cs229.stanford.edu/proj2013/Youssefi-AutoassociativeMemory.pdf
#topic:cellular automata
#topic:physarum
#author:Jeff Jones
#author:Andrew Adamatzky
#language:english
#medium:paper
Emergence of Self-Organized Amoeboid Movement in a Multi-Agent Approximation of Physarum polycephalum
The giant single-celled slime mould Physarum polycephalum exhibits complex
morphological adaptation and amoeboid movement as it forages for food
and may be seen as a minimal example of complex robotic behaviour. Swarm
computation has previously been used to explore how spatiotemporal complexity
can emerge from, and be distributed within, simple component parts and their
interactions. Using a particle based swarm approach we explore the question
of how to generate collective amoeboid movement from simple non-oscillatory
component parts in a model of P. polycephalum. The model collective behaves
as a cohesive and deformable virtual material, approximating the local
coupling within the plasmodium matrix. The collective generates de-novo and
complex oscillatory patterns from simple local interactions. The origin
of this motor behaviour is distributed within the collective, rendering
it morphologically adaptive, amenable to external influence, and robust to
simulated environmental insult. We show how to gain external influence over the
collective movement by simulated chemo-attraction (pulling towards nutrient
stimuli) and simulated light irradiation hazards (pushing from stimuli). The
amorphous and distributed properties of the collective are demonstrated by
cleaving it into two independent entities and fusing two separate entities
to form a single device, thus enabling it to traverse narrow, separate or
tortuous paths. We conclude by summarising the contribution of the model to
swarm based robotics and soft-bodied modular robotics and discuss the future
potential of such material approaches to the field.
^
@https://arxiv.org/ftp/arxiv/papers/1212/1212.0023.pdf
#topic:causal commutative arrows
#topic:category theory
#topic:pure functional programming
#author:Jeremy Yallop
#author:Hai Liu
#language:english
#medium:paper
Causal Commutative Arrows Revisited
Causal commutative arrows (CCA) extend arrows with additional constructs and
laws that make them suitable for modelling domains such as functional reactive
programming, differential equations and synchronous dataflow. Earlier work
has revealed that a syntactic transformation of CCA computations into normal
form can result in significant performance improvements, sometimes increasing
the speed of programs by orders of magnitude. In this work we reformulate
the normalization as a type class instance and derive optimized observation
functions via a specialization to stream transformers to demonstrate that
the same dramatic improvements can be achieved without leaving the language.
^
@https://www.cl.cam.ac.uk/~jdy22/papers/causal-commutative-arrows-revisited.pdf
#topic:neural network
#topic:generative adversarial network
#author:Ian J. Goodfellow
#author:Jean Pouget-Abadie
#author:Mehdi Mirza
#author:Bing Xu
#author:David Warde-Farley
#author:Sherjil Ozair
#author:Aaron Courville
#author:Yoshua Bengio
#language:english
#medium:paper
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial
process, in which we simultaneously train two models: a generative model
G that captures the data distribution, and a discriminative model D that
estimates the probability that a sample came from the training data rather
than G. The training procedure for G is to maximize the probability of D
making a mistake. This framework corresponds to a minimax two-player game. In
the space of arbitrary functions G and D, a unique solution exists, with G
recovering the training data distribution and D equal to 1/2 everywhere. In
the case where G and D are defined by multilayer perceptrons, the entire
system can be trained with backpropagation. There is no need for any Markov
chains or unrolled approximate inference networks during either training or
generation of samples. Experiments demonstrate the potential of the framework
through qualitative and quantitative evaluation of the generated samples.
^
@https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf
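A minimal sketch of the alternating minimax training described above,
assuming PyTorch; the toy 1-D data, network sizes, and learning rates are
illustrative choices of mine, not taken from the paper.

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0       # toy "data": N(3, 0.5)
        fake = G(torch.randn(64, 8))

        # D ascends log D(x) + log(1 - D(G(z)))
        opt_d.zero_grad()
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        loss_d.backward()
        opt_d.step()

        # G descends log(1 - D(G(z))); in practice it ascends log D(G(z))
        opt_g.zero_grad()
        loss_g = bce(D(fake), torch.ones(64, 1))
        loss_g.backward()
        opt_g.step()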
#topic:automata
#topic:compression
#author:Takahiro Ota
#author:Hiroyoshi Morita
#language:english
#medium:paper
Relationship between Antidictionary Automata and Compacted Substring Automata
There are two efficient static data compression algorithms called
an antidictionary coding and a lossless data compression via substring
enumeration coding. We prove that both of the encoders are isomorphic.
^
@http://ita.ucsd.edu/workshop/13/files/paper/paper_3113.pdf
#topic:emergence
#topic:cellular automata
#language:english
#medium:paper
An Emergent Approach to Game Design – Development and Play
Player enjoyment is the single-most important goal of games. Games that
are not enjoyable are not bought or played. Within games, enjoyment of the
gameplay hinges on the game world. However, game worlds are often static and
highly scripted, which leads to restricted and shallow gameplay that can
detract from player enjoyment. It is possible that player enjoyment could
be improved by the creation of more flexible game worlds that give players
more freedom and control. One way to create more flexible game worlds is
through the use of an emergent approach to designing game worlds. This thesis
investigates an emergent approach to designing game worlds, as well as the
issues, considerations and implications for game players and developers.
The research reported in this thesis consisted of three main components. The
first component involved conducting a focus group and questionnaire with
players to identify the aspects of current game worlds that affect their
enjoyment. The second component of the research involved investigating an
emergent approach to designing game worlds, in which the Emergent Games
Engine Technology (EmerGEnT) system was developed. The test-bed for the
EmerGEnT system was a strategy game world that was developed using a 3D
games engine, the Auran Jet. The EmerGEnT system consists of three main
components: the environment, objects and agents. The third component of the
research involved evaluating the EmerGEnT system against a set of criteria for
player enjoyment in games, which allowed the system’s role in facilitating
player enjoyment to be defined. In the player-centred studies, it was found
that players are dissatisfied with the static, inconsistent and unrealistic
elements of current games and that they desire more interactivity, realism
and control. The development and testing of the EmerGEnT system showed that
an emergent game world design, based on cellular automata, can facilitate
emergent behaviour in a limited domain. The domain modelled by the EmerGEnT
system was heat, fire, rain, fluid flow, pressure and explosions in a
strategy game world. The EmerGEnT system displayed advantages relating to
its ability to dynamically determine and accommodate the specific state of
the game world due to the underlying properties of the cells, objects and
agents. It also provided a model for emergent game worlds, which allowed
more complexity than emergent objects alone. Finally, the evaluation of
enjoyment revealed that incorporating an emergent game world (such as the
EmerGEnT system) into a game could improve player enjoyment in terms of
concentration, challenge, player skills, control and feedback by allowing
more intuitive, consistent and emergent interactions with the game world.
The implications of this research are that cellular automata can facilitate
emergence in games, at least in a limited domain. Also, emergence in games
has the potential to enhance player enjoyment in areas where current game
worlds are weak. Finally, the EmerGEnT system serves as a proof of concept
of using emergence in games, provides a model for simulating environmental
systems in games and was used to identify core issues and considerations
for future development and research of emergent game worlds.
^
@https://www.cp.eng.chula.ac.th/~vishnu/gameResearch/Sweetser_Thesis.pdf
#topic:raymarching
#author:Erik S. V. Jansson
#author:Matthäus G. Chajdas
#author:Jason Lacroix
#author:Ingemar Ragnemalm
#language:english
#medium:paper
Real-Time Hybrid Hair Rendering
Rendering hair is a challenging problem for real-time applications. Besides
complex shading, the sheer amount of it poses a lot of problems, as a human
scalp can have over 100,000 strands of hair, with animal fur often surpassing
a million. For rendering, both strand-based and volume-based techniques have
been used, but usually in isolation. In this work, we present a complete
hair rendering solution based on a hybrid approach. The solution requires no
pre-processing, making it a drop-in replacement, that combines the best of
strand-based and volume-based rendering. Our approach uses this volume not
only as a level-of-detail representation that is raymarched directly, but also
to simulate global effects, like shadows and ambient occlusion in real-time.
^
@https://eriksvjansson.net/papers/rthhr.pdf
#topic:raytracing
#author:Joost van Dongen
#language:english
#medium:paper
Interior Mapping A new technique for rendering realistic buildings
Interior Mapping is a new real-time shader technique that renders the
interior of a building when looking at it from the outside, without the need
to actually model or store this interior. With Interior Mapping, raycasting
in the pixel shader is used to calculate the positions of floors and walls
behind the windows. Buildings are modelled in the same way as without Interior
Mapping and are rendered on the GPU. The number of rooms rendered does not
influence the framerate or memory usage. The rooms are lit and textured
and can have furniture and animated characters. The interiors require very
little additional asset creation and no extra memory. Interior Mapping is
especially useful for adding more depth and detail to buildings in games
and other applications that are situated in large virtual cities
^
@http://www.proun-game.com/Oogst3D/CODING/InteriorMapping/InteriorMapping.pdf
#topic:cellular automata
#author:Hidenosuke Nishio
#author:Youichi Kobuchi
#language:english
#medium:paper
Fault Tolerant Cellular Spaces
This paper treats the problem of designing a fault tolerant cellular
space which simulates an arbitrary given cellular space in real time. A
cellular space is called fault tolerant if it behaves normally even when its
component cells misoperate. First such notions as simulation, misoperation,
and K-separated misoperation are defined. Then a new multidimensional coding
of configurations is introduced and explained using as typical example the
two-dimensional space. The first main result is Theorem 1, which states
that the introduced coding method is useful for correcting errors occurring
at most once in every K = 5 × 5 rectangle. The general theory is given
in Section 6, where the second main result is given in the form of Theorem
8. It gives a necessary and sufficient condition for testing whether or not
a given coding is adequate for error correction.
^
@https://core.ac.uk/download/pdf/82831011.pdf
#topic:wildfire
#author:Richard C. Rothermel
#language:english
#medium:book
How to Predict the Spread and Intensity of Forest and Range Fires
This manual documents the procedures for estimating the rate of forward
spread, intensity, flame length, and size of fires burning in forests and
rangelands. It contains instructions for obtaining fuel and weather data,
calculating fire behavior, and interpreting the results for application to
actual fire problems. Potential uses include fire prediction, fire planning,
dispatching, prescribed fires, and monitoring managed fires. Included are
sections that deal with fuel model selection, fuel moisture, wind, slope,
calculations with nomograms, TI-59 calculations, point source, line fire,
interpretations of outputs, and growth predictions.
^
@https://www.fs.fed.us/rm/pubs_int/int_gtr143.pdf
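For orientation, the underlying Rothermel (1972) spread model that the
manual's worksheets and nomograms implement (this equation is background,
not quoted from the manual) gives the forward rate of spread as

    R = \frac{I_R \, \xi \, (1 + \phi_w + \phi_s)}{\rho_b \, \varepsilon \, Q_{ig}}

where I_R is the reaction intensity, \xi the propagating flux ratio,
\phi_w and \phi_s the wind and slope factors, \rho_b the fuel-bed bulk
density, \varepsilon the effective heating number, and Q_{ig} the heat of
preignition.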
#topic:cellular automata
#author:Manzil Zaheer
#author:Michael Wick
#author:Jean-Baptiste Tristan
#author:Alex Smola
#author:Guy L. Steele Jr.
#language:english
#medium:paper
Exponential Stochastic Cellular Automata for Massively Parallel Inference
We propose an embarrassingly parallel, memory efficient inference algorithm for
latent variable models in which the complete data likelihood is in the exponential
family. The algorithm is a stochastic cellular automaton and converges to a valid
maximum a posteriori fixed point. Applied to latent Dirichlet allocation we find
that our algorithm is over an order of magnitude faster than the fastest current
approaches. A simple C++/MPI implementation on a 4-node cluster samples 570
million tokens per second. We process 3 billion documents and achieve predictive
power competitive with collapsed Gibbs sampling and variational inference.
^
@http://learningsys.org/papers/LearningSys_2015_paper_11.pdf
#topic:cellular automata
#language:english
#medium:paper
Neural Cellular Automata Manifold
Very recently, a deep Neural Cellular Automata (NCA) [1] has been proposed
to simulate the complex morphogenesis process with deep networks. This model
learns to grow an image starting from a fixed single pixel. In this paper,
we move a step further and propose a new model that extends the expressive
power of NCA from a single image to a manifold of images. In biological terms,
our approach would play the role of the transcription factors, modulating the
mapping of genes into specific proteins that drive cellular differentiation,
which occurs right before the morphogenesis. We accomplish this by introducing
dynamic convolutions inside an Auto-Encoder architecture, for the first time
used to join two different sources of information, the encoding and cell’s
environment information. The proposed model also extends the capabilities
of the NCA to a general purpose network, which can be used in a broad range
of problems. We thoroughly evaluate our approach in a dataset of synthetic
emojis and also in real images of CIFAR-10.
^
@https://arxiv.org/pdf/2006.12155.pdf
#topic:automata
#topic:polymorphism
#author:John Von Neumann
#author:Arthur W. Burks
#language:english
#medium:paper
Theory of self-reproducing automata
^
@https://archive.org/download/theoryofselfrepr00vonn_0/theoryofselfrepr00vonn_0.pdf
#topic:cellular automata
#author:William Gilpin
#language:english
#medium:paper
Cellular automata as convolutional neural networks
Deep learning techniques have recently demonstrated broad success in predicting
complex dynamical systems ranging from turbulence to human speech, motivating
broader questions about how neural networks encode and represent dynamical
rules. We explore this problem in the context of cellular automata (CA),
simple dynamical systems that are intrinsically discrete and thus difficult
to analyze using standard tools from dynamical systems theory. We show that
any CA may readily be represented using a convolutional neural network
with a network-in-network architecture. This motivates our development
of a general convolutional multilayer perceptron architecture, which we
find can learn the dynamical rules for arbitrary CA when given videos of
the CA as training data. In the limit of large network widths, we find
that training dynamics are nearly identical across replicates, and that
common patterns emerge in the structure of networks trained on different
CA rulesets. We train ensembles of networks on randomly-sampled CA, and we
probe how the trained networks internally represent the CA rules using an
information-theoretic technique based on distributions of layer activation
patterns. We find that CA with simpler rule tables produce trained networks
with hierarchical structure and layer specialization, while more complex
CA produce shallower representations—illustrating how the underlying
complexity of the CA’s rules influences the specificity of these internal
representations. Our results suggest how the entropy of a physical process
can affect its representation when learned by neural networks.
^
@https://arxiv.org/pdf/1809.02942.pdf
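A compact illustration of the representability claim, assuming Python with
scipy (my example, not the paper's code): one step of Conway's Game of
Life is a 3x3 convolution that counts neighbours followed by a pointwise
rule table, exactly the structure a small network-in-network CNN can learn.

    import numpy as np
    from scipy.signal import convolve2d

    KERNEL = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])

    def life_step(grid):
        # Convolution counts live neighbours; the rule table is applied pointwise.
        n = convolve2d(grid, KERNEL, mode="same", boundary="wrap")
        return ((n == 3) | ((grid == 1) & (n == 2))).astype(grid.dtype)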
#topic:cellular automata
#topic:self-replication
#topic:universal computer
#topic:universal constructor
#author:Tim J. Hutton
#language:english
#medium:paper
Codd’s self-replicating computer
Edgar Codd’s 1968 design for a self-replicating cellular automata machine
has never been implemented. Partly this is due to its enormous size but we
have also identified four problems with the original specification that would
prevent it from working. These problems potentially cast doubt on Codd’s
central assertion, that the 8-state space he presents supports the existence
of machines that can act as universal constructors and computers. However
all these problems were found to be correctable and we present a complete
and functioning implementation after making minor changes to the design and
transition table. The body of the final machine occupies an area that is
22,254 cells wide and 55,601 cells high, comprising over 45 million non-zero
cells in its unsheathed form. The data tape is 208 million cells long,
and self-replication is estimated to take at least 1.7 × 10^18 timesteps.
^
@http://www.sq3.org.uk/papers/Hutton_CoddsSelfReplicatingComputer_2010.pdf
#topic:string search
#author:Bertrand Meyer
#language:english
#medium:paper
Incremental String Matching
The problem studied in this paper is to search a given text for occurrences of
certain strings, in the particular case where the set of strings may change
as the search proceeds. A well-known algorithm by Aho and Corasick applies
to the simpler case when the set of strings is known beforehand and does
not change. This algorithm builds a transition diagram (finite automaton)
from the strings, and uses it as a guide to traverse the text. The search can
then be done in linear time. We show how this algorithm can be modified to
allow incremental diagram construction, so that new keywords may be entered
at any time during the search. The incremental algorithm presented essentially
retains the time and space complexities of the non-incremental one.
^
@http://se.ethz.ch/~meyer/publications/string/string_matching.pdf
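For context, a Python sketch of the standard, non-incremental Aho-Corasick
construction that the paper starts from (a keyword trie plus failure links
built by breadth-first search); the paper's actual contribution, updating
this machine while new keywords arrive mid-search, is not shown here.

    from collections import deque

    def build(keywords):
        goto, fail, out = [{}], [0], [set()]
        for word in keywords:                 # build the keyword trie
            s = 0
            for ch in word:
                if ch not in goto[s]:
                    goto.append({}); fail.append(0); out.append(set())
                    goto[s][ch] = len(goto) - 1
                s = goto[s][ch]
            out[s].add(word)
        queue = deque(goto[0].values())
        while queue:                          # BFS to fill failure links
            s = queue.popleft()
            for ch, t in goto[s].items():
                queue.append(t)
                f = fail[s]
                while f and ch not in goto[f]:
                    f = fail[f]
                fail[t] = goto[f][ch] if ch in goto[f] and goto[f][ch] != t else 0
                out[t] |= out[fail[t]]
        return goto, fail, out

    def search(text, goto, fail, out):
        s, hits = 0, []
        for i, ch in enumerate(text):
            while s and ch not in goto[s]:
                s = fail[s]
            s = goto[s].get(ch, 0)
            hits += [(i - len(w) + 1, w) for w in out[s]]
        return hits

    # search("ushers", *build(["he", "she", "his", "hers"])) finds she, he, hers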
#topic:string search
#author:Bruce William Watson
#language:english
#medium:paper
Taxonomies and Toolkits of Regular Language Algorithms
^
@https://web.archive.org/web/20120324141930if_/http://www.diku.dk:80/hjemmesider/ansatte/henglein/papers/watson1995.pdf
#topic:javascript
#topic:exception
#topic:continuation-passing style
#author:Florian Loitsch
#language:english
#medium:paper
Exceptional Continuations in JavaScript
JavaScript, the main language for web-site development, does not feature
continuations. However, as part of client-server communication they would be
useful as a means to suspend the currently running execution. In this paper
we present our adaption of exception-based continuations to JavaScript. The
enhanced technique deals with closures and features improvements that reduce
the cost of the work-around for the missing goto-instruction. We furthermore
propose a practical way of dealing with exception-based continuations in the
context of non-linear executions, which frequently happen due to callbacks.
Our benchmarks show that under certain conditions continuations are
still expensive, but are often viable. Especially compilers translating
to JavaScript could benefit from static control flow analyses to make
continuations less costly
^
@http://www.schemeworkshop.org/2007/procPaper4.pdf
#topic:file format
#author:James D Murray
#author:William VanRyper
#language:english
#medium:book
Encyclopedia of graphics file formats
^
@https://ia800202.us.archive.org/11/items/mac_Graphics_File_Formats_Second_Edition_1996/Graphics_File_Formats_Second_Edition_1996.pdf
#topic:dithering
#topic:parallel computation
#topic:error-diffusion
#author:Panagiotis Takis Metaxas
#language:english
#medium:paper
Parallel Digital Halftoning by Error-Diffusion
Digital halftoning, or dithering, is the technique commonly used to render a
color or grayscale image on a printer, a computer monitor or other bi-level
displays. A particular halftoning technique that has been used extensively
in the past is the so-called error diffusion technique. For a number of
years it was believed that this technique is inherently sequential and
could not be parallelized. In this paper we present and analyze a simple,
yet optimal, error-diffusion parallel algorithm for digital halftoning and we
discuss an implementation on a parallel machine. In particular, we describe
implementations on data-parallel computers that contain linear arrays and
two-dimensional meshes of processing elements. Our algorithm runs in 2·n+m
parallel steps, a considerable improvement over the 10·m·n sequential
algorithm. We expect that this research will lead to the development of
faster printers and larger high-resolution monitors
^
@http://cs.wellesley.edu/~pmetaxas/pck50-metaxas.pdf
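The sequential baseline being parallelized, as a numpy sketch of my own:
classic Floyd-Steinberg error diffusion quantizes each pixel in raster
order and pushes the quantization error onto the right and lower
neighbours with weights 7/16, 3/16, 5/16 and 1/16; the resulting chain of
data dependencies is what makes the algorithm look inherently sequential.

    import numpy as np

    def floyd_steinberg(img):
        # img: 2-D grayscale array with values in [0, 1]; returns a 0/1 image.
        out = img.astype(float)
        h, w = out.shape
        for y in range(h):
            for x in range(w):
                old = out[y, x]
                new = 1.0 if old >= 0.5 else 0.0
                out[y, x] = new
                err = old - new
                if x + 1 < w:
                    out[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        out[y + 1, x - 1] += err * 3 / 16
                    out[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        out[y + 1, x + 1] += err * 1 / 16
        return out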
#topic:dithering
#topic:error-diffusion
#topic:parallel computation
#author:Ishan Misra
#author:P J Narayanan
#author:Aditya Deshpande
#language:english
#medium:paper
Hybrid Implementation of Error Diffusion Dithering
Many image filtering operations provide ample parallelism, but progressive
non-linear processing of images is among the hardest to parallelize due to
long, sequential, and non-linear data dependency. A typical example of such
an operation is error diffusion dithering, exemplified by the Floyd-Steinberg
algorithm. In this paper, we present its parallelization on multicore CPUs
using a block-based approach and on the GPU using a pixel based approach. We
also present a hybrid approach in which the CPU and the GPU operate in parallel
during the computation. High Performance Computing has traditionally been
associated with high end CPUs and GPUs. Our focus is on everyday computers
such as laptops and desktops, where significant compute power is available
on the GPU as on the CPU. Our implementation can dither an 8K × 8K image on
an off-the-shelf laptop with an Nvidia 8600M GPU in about 400 milliseconds
when the sequential implementation on its CPU took about 4 seconds.
^
@http://www.cs.cmu.edu/~./imisra/projects/dithering/HybridDithering.pdf
#topic:data type
#author:Iavor S. Diatchki
#author:Mark P. Jones
#author:Rebekah Leslie
#language:english
#medium:paper
High-level Views on Low-level Representations
This paper explains how the high-level treatment of datatypes in
functional languages—using features like constructor functions and
pattern matching—can be made to coexist with bitdata. We use this term
to describe the bit-level representations of data that are required in the
construction of many different applications, including operating systems,
device drivers, and assemblers. We explain our approach as a combination of
two language extensions, each of which could potentially be adapted to any
modern functional language. The first adds simple and elegant constructs
for manipulating raw bitfield values, while the second provides a view-like
mechanism for defining distinct new bitdata types with fine-control over the
underlying representation. Our design leverages polymorphic type inference,
as well as techniques for improvement of qualified types, to track both the
type and the width of bitdata structures. We have implemented our extensions
in a small functional language interpreter, and used it to show that our
approach can handle a wide range of practical bitdata types.
^
@http://web.cecs.pdx.edu/~mpj/pubs/bitdata-icfp05.pdf
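What "manipulating raw bitfield values" amounts to, in a short Python
illustration of my own (the paper's constructs are extensions to a
functional language, not this): a bitdata view extracts a field by
shifting and masking, for example splitting an RGB565 pixel into channels.

    def bits(value, hi, lo):
        # View of the bitfield value[hi:lo], inclusive, with bit 0 as the LSB.
        return (value >> lo) & ((1 << (hi - lo + 1)) - 1)

    pixel = 0b11111_000000_11111            # a 16-bit RGB565 sample
    r, g, b = bits(pixel, 15, 11), bits(pixel, 10, 5), bits(pixel, 4, 0)
    assert (r, g, b) == (31, 0, 31)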
#topic:bayesian network
#topic:probability
#author:Daphne Koller
#author:Nir Friedman
#medium:book
#language:english
Probabilistic Graphical Models
Most tasks require a person or an automated system to reason: to take
the available information and reach conclusions, both about what might be
true in the world and about how to act. For example, a doctor needs to
take information about a patient — his symptoms, test results, personal
characteristics (gender, weight) — and reach conclusions about what
diseases he may have and what course of treatment to undertake. A mobile
robot needs to synthesize data from its sonars, cameras, and other sensors
to conclude where in the environment it is and how to move so as to reach
its goal without hitting anything. A speech-recognition system needs to take
a noisy acoustic signal and infer the words spoken that gave rise to it.
In this book, we describe a general framework that can be used to allow a
computer system to answer questions of this type. In principle, one could
write a special-purpose computer program for every domain one encounters and
every type of question that one may wish to answer. The resulting system,
although possibly quite successful at its particular task, is often very
brittle: If our application changes, significant changes may be required to
the program. Moreover, this general approach is quite limiting, in that it
is hard to extract lessons from one successful solution and apply it to one
which is very different.
^
@https://djsaunde.github.io/read/books/pdfs/probabilistic%20graphical%20models.pdf
#author:Marc Ohm
#author:Henrik Plate
#author:Arnold Sykosch
#author:Michael Meier
#topic:open source
#topic:security
#topic:vulnerability
#language:english
#medium:paper
Backstabber’s Knife Collection: A Review of Open Source Software Supply Chain Attacks
A software supply chain attack is characterized by the injection of malicious
code into a software package in order to compromise dependent systems
further down the chain. Recent years saw a number of supply chain attacks
that leverage the increasing use of open source during software development,
which is facilitated by dependency managers that automatically resolve,
download and install hundreds of open source packages throughout the software
life cycle. This paper presents a dataset of 174 malicious software packages
that were used in real-world attacks on open source software supply chains,
and which were distributed via the popular package repositories npm, PyPI,
and RubyGems. Those packages, dating from November 2015 to November 2019,
were manually collected and analyzed. The paper also presents two general
attack trees to provide a structured overview about techniques to inject