diff --git a/docs/SFRS.md b/docs/SFRS.md
index 1c28971..00273d6 100644
--- a/docs/SFRS.md
+++ b/docs/SFRS.md
@@ -3,5 +3,5 @@
 NetVLAD first proposed a VLAD layer trained with `triplet` loss, and then SARE introduced two softmax-based losses (`sare_ind` and `sare_joint`) to boost the training. Our SFRS is trained in generations with self-enhanced soft-label losses to achieve state-of-the-art performance.
 
-
+
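For context, the losses named in the paragraph above can be sketched roughly as follows. This is a minimal NumPy illustration over precomputed embedding distances, not the repository's implementation; the function names, the margin value, and the exact `sare_ind`/`sare_joint` formulations are assumptions based on the usual descriptions of these losses.

```python
import numpy as np

def triplet_loss(d_pos, d_neg, margin=0.1):
    # NetVLAD-style hinge: push the negative at least `margin`
    # farther from the anchor than the positive.
    return max(0.0, d_pos - d_neg + margin)

def sare_ind(d_pos, d_negs):
    # "Individual" softmax variant: each negative forms its own
    # two-way softmax against the positive; losses are summed.
    return sum(
        -np.log(np.exp(-d_pos) / (np.exp(-d_pos) + np.exp(-dn)))
        for dn in d_negs
    )

def sare_joint(d_pos, d_negs):
    # "Joint" variant: one softmax over the positive and all
    # negatives together.
    denom = np.exp(-d_pos) + sum(np.exp(-dn) for dn in d_negs)
    return -np.log(np.exp(-d_pos) / denom)
```

With a single negative, the two SARE variants coincide; they differ only in how multiple negatives are aggregated.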