A clean reimplementation of the K-Adapter paper (unofficial).
In this project we trained personalized transformer models for news recommendation using adapters (similar to (IA)^3). With layer-wise relevance propagation, we try to explain the recommendations to the user. Through a web interface that displays word clouds, the user can be assigned to a "filter bubble". This allows users to reflect on their behavior.
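As a rough illustration of the (IA)^3-style adapters mentioned above, the following minimal sketch shows the core idea: learned per-dimension scaling vectors applied to the keys and values of a frozen transformer layer, with only the scaling vectors trained. The class and parameter names (IA3Scaler, IA3AttentionAdapter, hidden_dim) are illustrative assumptions, not taken from the repository.

    import torch
    import torch.nn as nn

    class IA3Scaler(nn.Module):
        """Element-wise rescaling vector, initialized to 1 (identity at start of training)."""

        def __init__(self, dim: int):
            super().__init__()
            self.scale = nn.Parameter(torch.ones(dim))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Rescales each feature dimension; only `self.scale` is trainable.
            return x * self.scale

    class IA3AttentionAdapter(nn.Module):
        """Wraps the key/value activations of a frozen attention block with learned scaling."""

        def __init__(self, hidden_dim: int):
            super().__init__()
            self.scale_k = IA3Scaler(hidden_dim)
            self.scale_v = IA3Scaler(hidden_dim)

        def forward(self, keys: torch.Tensor, values: torch.Tensor):
            # The backbone weights stay frozen; personalization comes from these few vectors.
            return self.scale_k(keys), self.scale_v(values)

Because the adapter adds only a handful of parameters per layer, a separate set of scaling vectors can be kept per user while sharing one frozen backbone model.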
Code & Data for the paper "Fair and Argumentative Language Modeling for Computational Argumentation"