From de3a341b31f57045357a170d69fac4519e6a6880 Mon Sep 17 00:00:00 2001
From: Robert-Jan Bruintjes
Date: Fri, 15 Oct 2021 11:44:35 +0200
Subject: [PATCH] README fixes.

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 0c3e517..125065c 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 
 This repository contains the source code accompanying the paper:
 
-[FlexConv: Continuous Kernel Convolutions with Differentiable Kernel Sizes](https://arxiv.org/abs/2102.02611) [[Slides]](#)<br/>**[David W. Romero*](https://www.davidromero.ml/), [Robert-Jan Bruintjes*](https://rjbruin.github.io), [Jakub M. Tomczak](https://jmtomczak.github.io/), [Erik J. Bekkers](https://erikbekkers.bitbucket.io/), [Mark Hoogendoorn](https://www.cs.vu.nl/~mhoogen/) & [Jan C. van Gemert](https://jvgemert.github.io)**.
+[FlexConv: Continuous Kernel Convolutions with Differentiable Kernel Sizes](#) [[Slides]](#)<br/>**[David W. Romero*](https://www.davidromero.ml/), [Robert-Jan Bruintjes*](https://rjbruin.github.io), [Jakub M. Tomczak](https://jmtomczak.github.io/), [Erik J. Bekkers](https://erikbekkers.bitbucket.io/), [Mark Hoogendoorn](https://www.cs.vu.nl/~mhoogen/) & [Jan C. van Gemert](https://jvgemert.github.io)**.
 
 #### Abstract
 *When designing Convolutional Neural Networks (CNNs), one must select the size of the convolutional kernels before training. Recent works show CNNs benefit from different kernel sizes at different layers, but exploring all possible combinations is unfeasible in practice. A more efficient approach is to learn the kernel size during training. However, existing works that learn the kernel size have a limited bandwidth. These approaches scale kernels by dilation, and thus the detail they can describe is limited. In this work, we propose FlexConv, a novel convolutional operation with which high bandwidth convolutional kernels of learnable kernel size can be learned at a fixed parameter cost. FlexNets model long-term dependencies without the use of pooling, achieve state-of-the-art performance on several sequential datasets, outperform recent works with learned kernel sizes, and are competitive with much deeper ResNets on image benchmark datasets. Additionally, FlexNets can be deployed at higher resolutions than those seen during training. To avoid aliasing, we propose a novel kernel parameterization with which the frequency of the kernels can be analytically controlled. Our novel kernel parameterization shows higher descriptive power and faster convergence speed than existing parameterizations. This leads to important improvements in classification accuracy.*
@@ -67,7 +67,7 @@ Please see [the Experiments readme](experiments/readme.md) for details on reprod
 
 ### Pretrained models
 
-TODO(rjbruin)
+*TODO(rjbruin)*
 
 ### Cite
 If you found this work useful in your research, please consider citing:
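
For context on the abstract quoted in the patched README, here is a minimal, illustrative PyTorch sketch of the idea it describes: a continuous convolutional kernel generated by a small MLP over relative positions, multiplied by a Gaussian mask of learnable width, so the effective kernel size is learned at a fixed parameter cost. This is not the repository's implementation; all names (`KernelNet`, `FlexConv1d`, `log_mask_width`) and hyperparameters are hypothetical.

```python
# Illustrative sketch only, not the FlexConv repository's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class KernelNet(nn.Module):
    """Maps relative positions in [-1, 1] to kernel values (hypothetical helper)."""

    def __init__(self, in_channels, out_channels, hidden=32):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.net = nn.Sequential(
            nn.Linear(1, hidden),
            nn.GELU(),
            nn.Linear(hidden, in_channels * out_channels),
        )

    def forward(self, positions):
        # positions: (kernel_size, 1) -> kernel: (out_channels, in_channels, kernel_size)
        values = self.net(positions)  # (kernel_size, out_channels * in_channels)
        return values.t().reshape(self.out_channels, self.in_channels, -1)


class FlexConv1d(nn.Module):
    """Conv1d with an MLP-generated kernel masked by a Gaussian of learnable width."""

    def __init__(self, in_channels, out_channels, kernel_size=33):
        super().__init__()
        self.kernel_net = KernelNet(in_channels, out_channels)
        # Log-parameterization keeps the mask width (effective kernel size) positive.
        self.log_mask_width = nn.Parameter(torch.zeros(1))
        self.register_buffer(
            "positions", torch.linspace(-1.0, 1.0, kernel_size).unsqueeze(-1)
        )
        self.padding = kernel_size // 2

    def forward(self, x):
        # Sample the continuous kernel on the fixed position grid.
        kernel = self.kernel_net(self.positions)
        # The Gaussian mask shrinks or grows the kernel's effective support;
        # its width is trained jointly with the rest of the network.
        width = self.log_mask_width.exp()
        mask = torch.exp(-0.5 * (self.positions.squeeze(-1) / width) ** 2)
        return F.conv1d(x, kernel * mask, padding=self.padding)


if __name__ == "__main__":
    layer = FlexConv1d(in_channels=3, out_channels=8)
    out = layer(torch.randn(2, 3, 64))  # (batch, channels, length)
    print(out.shape)  # torch.Size([2, 8, 64])
```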