diff --git a/README.md b/README.md
index c86681f..4a5689e 100644
--- a/README.md
+++ b/README.md
@@ -1,85 +1,80 @@
# nsfwbot for Matrix
-nsfwbot is a Matrix bot plugin designed to detect NSFW (Not Safe For Work) content in images posted in Matrix chat rooms. This plugin leverages the [nsfwdetection](https://github.com/gsarridis/NSFW-Detection-Pytorch) model to analyze images and return a classification result, indicating whether the content is likely to be NSFW or SFW (Safe For Work).
+`nsfwbot` is a Matrix bot plugin that attempts to detect NSFW (Not Safe For Work) images posted in
+Matrix chat rooms. It uses [nsfwdetection](https://github.com/gsarridis/NSFW-Detection-Pytorch),
+which provides a small model that runs without a GPU and has low resource requirements.
## Features
-- **Image Analysis**: Automatically detects images posted in a Matrix chat and analyzes them using the NSFW detection model.
-- **Text Message Parsing**: Parses text messages for embedded `<img>` tags and analyzes those images as well.
-- **Configurable Concurrency**: Controls the number of concurrent image processing tasks using a configurable semaphore.
-- **Customizable `via_servers`**: Allows users to customize the list of servers used in the `matrix.to` URLs for linking back to the original message.
-- **Planned Features**: While the plugin currently only returns a classification of the image content, future updates are planned to include moderation actions such as automatically deleting or flagging unwanted images.
+- **Image Analysis**: Detects and analyses images posted in Matrix chats.
+- **Text Message Parsing**: Analyses images embedded in text messages.
+- **Configurable Concurrency**: Controls concurrent image processing tasks.
+- **Custom Actions**: Configurable actions for detected content, including reporting and redacting messages.
## Requirements
-- **Maubot**: The plugin is designed to run within the Maubot framework.
-- **Python Dependencies**: The plugin relies on the `nsfwdetection` and `beautifulsoup4` Python modules. These are automatically installed by the plugin if they are not already present.
+- **Maubot**: Runs within the Maubot framework.
+- **Python Dependencies**: `nsfwdetection` and `beautifulsoup4`.
+  > **Note**: `nsfwdetection` will not run on Alpine Linux, so the default Maubot Docker image
+  > will not work. I have built a custom Debian-based Maubot image, available as
+  > `ghcr.io/tcpipuk/maubot:debian`.
## Installation
-### 1. Use the Custom Maubot Docker Image
+1. **Use the Custom Maubot Docker Image**:
+ Replace the official Maubot image with a custom Debian-based image:
-The `nsfwdetection` plugin does not currently run on Alpine Linux, which is the base image for the official Maubot Docker container. To use `nsfwbot`, you need to switch to a Debian-based container.
+ ```bash
+ docker pull ghcr.io/tcpipuk/maubot:debian
+ ```
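+
+   If you run Maubot with Docker Compose, swapping the image is a one-line change. A minimal
+   sketch (the service name, volume path and port mapping are illustrative, not taken from this
+   repository):
+
+   ```yaml
+   services:
+     maubot:
+       # Debian-based drop-in replacement for the official dock.mau.dev/maubot/maubot image
+       image: ghcr.io/tcpipuk/maubot:debian
+       restart: unless-stopped
+       volumes:
+         - ./maubot-data:/data   # Maubot's data directory (config, databases, plugins)
+       ports:
+         - "29316:29316"         # Maubot's default listener port
+   ```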
-- Replace the official Maubot image with a custom Debian-based image:
+2. a. **Install the pre-built plugin** from the
+   [repository releases](https://github.com/tcpipuk/matrix-nsfwbot/releases) and upload it
+   through the Maubot admin interface, or
- ```
- ghcr.io/tcpipuk/maubot:debian
- ```
-
-- This custom image is a drop-in replacement for the official Maubot image and comes pre-installed with the `nsfwdetection` and `beautifulsoup4` modules, allowing for faster deployment of `nsfwbot`.
+ b. **Clone the Repository**:
-### 2. Clone the Repository
+ ```bash
+ git clone https://github.com/tcpipuk/matrix-nsfwbot
+ ```
-Clone the `nsfwbot` plugin repository from GitHub:
+   Zip the plugin files and upload them through the Maubot admin interface (see the example
+   below), then ensure the plugin is configured and enabled.
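+
+   A minimal sketch of building the plugin archive by hand (a Maubot `.mbp` file is a zip of
+   `maubot.yaml` plus the files it references; Maubot's `mbc build` tool can do the same):
+
+   ```bash
+   cd matrix-nsfwbot
+   # Package the plugin manifest and module files into a .mbp archive
+   zip -9 nsfwbot.mbp maubot.yaml base-config.yaml nsfwbot.py
+   ```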
-```bash
-git clone https://github.com/tcpipuk/matrix-nsfwbot
-```
-
-### 3. Configure the Plugin
-
-Edit the `base-config.yaml` file to customize the settings according to your needs:
+3. **Configure the Plugin**:
+   See the Configuration section below for a summary of the settings available in the Maubot UI.
-- **`via_servers`**: List of servers to include in the `via` parameter for `matrix.to` URLs.
-- **`max_concurrent_jobs`**: Maximum number of concurrent image processing jobs. This controls the Semaphore used to limit concurrency.
+## Configuration
-### 4. Upload the Plugin to Maubot
+Edit `base-config.yaml` to set the following (a complete example follows the list):
-Zip the plugin files and upload the plugin through the Maubot administration interface. Ensure that the plugin is configured and enabled.
+- `max_concurrent_jobs`: Number of concurrent jobs to allow.
+- `via_servers`: List of servers for `matrix.to` URLs.
+- `actions`:
+ - `ignore_sfw`: Ignore SFW images (default: `true`).
+ - `redact_nsfw`: Redact NSFW messages (default: `false`).
+ - `direct_reply`: Reply directly in the same room (default: `false`).
+ - `report_to_room`: Room ID for reporting (not enabled by default).
+    > **Note**: This can be a room alias (like `#room:server`), but a room ID is preferred: the
+    > bot must first resolve an alias to its room ID (like `!room:server`) before it can send
+    > messages there.
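+
+An example `base-config.yaml` with every setting filled in (the room ID is illustrative):
+
+```yaml
+max_concurrent_jobs: 2
+via_servers:
+  - matrix.org
+actions:
+  ignore_sfw: true
+  redact_nsfw: true
+  direct_reply: false
+  report_to_room: '!moderation:example.com'
+```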
## Usage
-Once installed and configured, `nsfwbot` will automatically analyze images posted in the chat. The plugin replies to the message containing the image with a classification result, indicating whether the image is NSFW or SFW.
+Once installed and configured, `nsfwbot` automatically analyses images posted in the chat and,
+depending on the configured actions, replies with a classification result, e.g.
-### Example Output
-
-When an image is detected and analyzed, `nsfwbot` will reply to the message with something like:
-
-```
+```markdown
mxc://matrix.org/abcd1234 in https://matrix.to/#/!roomid:matrix.org/$eventid?via=matrix.org appears NSFW with score 87.93%
```
If multiple images are detected in a text message:
-```
+```markdown
- mxc://matrix.org/abcd1234 in https://matrix.to/#/!roomid:matrix.org/$eventid?via=matrix.org appears SFW with score 2.45%
- mxc://matrix.org/efgh5678 in https://matrix.to/#/!roomid:matrix.org/$eventid?via=matrix.org appears NSFW with score 94.82%
```
-## Planned Features
-
-- **Automatic Moderation**: The ability to automatically take action (e.g., delete or flag messages) when NSFW content is detected.
-- **Custom Actions**: Allowing users to configure specific actions for different types of detected content.
-
-## Notes
-
-- **Debian-based Maubot Container**: If using Docker, the custom container image at `ghcr.io/tcpipuk/maubot:debian` is required due to compatibility issues with Alpine. This image is a direct replacement for the official Maubot image and is necessary for running `nsfwbot`.
-- **Manual Installations**: If you prefer to use a different environment, ensure that `nsfwdetection` and `beautifulsoup4` are installed, and note that Alpine is not supported.
-
## Contributing
-Contributions to `nsfwbot` are welcome! Feel free to open an issue or submit a pull request on GitHub.
+Contributions are welcome! Open an issue or submit a pull request on GitHub.
## License
diff --git a/base-config.yaml b/base-config.yaml
index 09e8ab5..b09c20a 100644
--- a/base-config.yaml
+++ b/base-config.yaml
@@ -1,6 +1,11 @@
-# Maximum concurrent image processing jobs allowed
max_concurrent_jobs: 1
-
-# Via servers to use in matrix.to URLs
via_servers:
- matrix.org
+# All actions are true/false, except report_to_room,
+# which should be the room ID of the target room in
+# single quotes, e.g. '!room:myserver.local'
+actions:
+ ignore_sfw: true
+ redact_nsfw: false
+ direct_reply: false
+ report_to_room: ""
diff --git a/maubot.yaml b/maubot.yaml
index 419b390..3121f5a 100644
--- a/maubot.yaml
+++ b/maubot.yaml
@@ -1,6 +1,6 @@
maubot: 0.1.0
id: uk.tcpip.nsfwbot
-version: 0.1.0
+version: 0.2.0
license: AGPL-3.0-or-later
modules:
- nsfwbot
diff --git a/nsfwbot.py b/nsfwbot.py
index b584839..878f436 100644
--- a/nsfwbot.py
+++ b/nsfwbot.py
@@ -7,10 +7,12 @@
EventID,
MediaMessageEventContent,
MessageType,
+ RoomAlias,
RoomID,
TextMessageEventContent,
)
from mautrix.util.config import BaseProxyConfig, ConfigUpdateHelper
+from mautrix.errors import MBadJSON, MForbidden
from maubot import Plugin, MessageEvent # type:ignore
from maubot.handlers import command
from nsfw_detector import Model
@@ -25,19 +27,22 @@ class Config(BaseProxyConfig):
def do_update(self, helper: ConfigUpdateHelper) -> None:
helper.copy("max_concurrent_jobs")
helper.copy("via_servers")
+ helper.copy("actions")
class NSFWModelPlugin(Plugin):
model = Model()
semaphore = Semaphore(1)
via_servers = []
+ actions = {}
+ report_to_room = ""
@classmethod
def get_config_class(cls) -> Type[BaseProxyConfig]:
return Config
async def start(self) -> None:
- """Initializes the plugin by loading the configuration and setting up resources."""
+ """Initialise plugin by loading config and setting up semaphore."""
await super().start()
# Check if config exists
if not isinstance(self.config, Config):
@@ -46,11 +51,22 @@ async def start(self) -> None:
# Load in config
self.config.load_and_update()
# Load via_servers from config, with a default fallback
- self.via_servers = self.config["via_servers"] # type:ignore
- # Initialize the Semaphore based on the max_concurrent_jobs setting
- max_concurrent_jobs = self.config["max_concurrent_jobs"] # type:ignore
+ self.via_servers = self.config["via_servers"]
+ # Load actions from config
+ self.actions = self.config["actions"]
+ # Initialise the Semaphore based on the max_concurrent_jobs setting
+ max_concurrent_jobs = self.config["max_concurrent_jobs"]
self.semaphore = Semaphore(max_concurrent_jobs)
- # Initialize the NSFW model
+ # Resolve room ID from alias if it starts with #
+ self.report_to_room = str(self.actions.get("report_to_room", ""))
+ if self.report_to_room.startswith("#"):
+ report_to_info = await self.client.resolve_room_alias(
+ RoomAlias(self.report_to_room)
+ )
+ self.report_to_room = report_to_info.room_id
+ elif self.report_to_room and not self.report_to_room.startswith("!"):
+ self.log.warning("Invalid room ID or alias provided for report_to_room")
+        # The NSFW model is already initialised as a class attribute
self.log.info("Loaded nsfwbot successfully")
@command.passive(
@@ -66,9 +82,10 @@ async def handle_image_message(self, evt: MessageEvent, url: Tuple[str]) -> None
results = await self.process_images([evt.content.url])
# Create matrix.to URL for the original message
matrix_to_url = self.create_matrix_to_url(evt.room_id, evt.event_id)
- # Prepare and send the response message
+ # Prepare the response message
response = self.format_response(results, matrix_to_url)
- await evt.respond(response)
+ # Send responses based on actions
+ await self.send_responses(evt, response, results)
@command.passive(
         '^<img',
     )
     async def handle_text_message(self, evt: MessageEvent) -> None:
all_results = await self.process_images([ContentURI(url) for url in img_urls])
# Create matrix.to URL for the original message
matrix_to_url = self.create_matrix_to_url(evt.room_id, evt.event_id)
- # Prepare and send the response message
+ # Prepare the response message
response = self.format_response(all_results, matrix_to_url)
- await evt.respond(response)
+ # Send responses based on actions
+ await self.send_responses(evt, response, all_results)
async def process_images(self, mxc_urls: List[ContentURI]) -> dict:
"""Download and process the images using the NSFW model."""
@@ -141,3 +159,35 @@ def format_response(self, results: dict, matrix_to_url: str) -> str:
return "- " + "\n- ".join(response_parts)
else:
return "\n".join(response_parts)
+
+ async def send_responses(self, evt: MessageEvent, response: str, results: dict) -> None:
+ """Send responses or take actions based on config."""
+ # Check if we should ignore SFW images
+ ignore_sfw = self.actions.get("ignore_sfw", False)
+ nsfw_results = [res for res in results.values() if res["Label"] == "NSFW"]
+ # If all images were SFW and should be ignored
+ if ignore_sfw and not nsfw_results:
+ self.log.info(f"Ignored SFW images in {evt.room_id}")
+ return
+
+ # Direct reply in the same room
+ if self.actions.get("direct_reply", False):
+ await evt.reply(response)
+ self.log.info(f"Replied to {evt.room_id}")
+
+ # Report to a specific room
+ if self.report_to_room:
+ try:
+ await self.client.send_text(room_id=RoomID(self.report_to_room), text=response)
+ self.log.info(f"Sent report to {RoomID(self.report_to_room)}")
+            except (MForbidden, MBadJSON) as e:
+ self.log.warning(f"Failed to send message to {RoomID(self.report_to_room)}: {e}")
+
+ # Redact the message if it's NSFW and redacting is enabled
+ redact_nsfw = self.actions.get("redact_nsfw", False)
+ if nsfw_results and redact_nsfw:
+ try:
+ await self.client.redact(room_id=evt.room_id, event_id=evt.event_id, reason="NSFW")
+ self.log.info(f"Redacted NSFW message in {evt.room_id}")
+ except MForbidden:
+ self.log.warning(f"Failed to redact NSFW message in {evt.room_id}")