Commit: Arm64 build (#61)
UTXOnly authored Jun 30, 2024
1 parent d0dbc64 commit a7bb700
Showing 9 changed files with 117 additions and 77 deletions.
53 changes: 34 additions & 19 deletions README.md
@@ -2,26 +2,25 @@

![Pylint_score](./pylint.svg)

A purely Python, easy to deploy nostr relay using `asyncio` & `websockets` to server Nostr clients
A purely Python, easy to deploy nostr relay using `asyncio` & `websockets` to serve Nostr clients

## Description

![Image 2023-09-15 at 9 53 46 AM](https://github.com/UTXOnly/nost-py/assets/49233513/724cfbeb-03a0-4d10-b0d1-6b638ac153c4)



A containerized Python relay paired with a Postgres database, reachable via an NGINX reverse proxy. This has been tested on [Nostrudel](https://nostrudel.ninja/), [Iris.to](https://Iris.to) and [Snort.social](https://Snort.social) clients and works for the NIPs listed below.
A 100% containerized Python relay backed by a Postgres database, behind an NGINX reverse proxy. This has been tested on [Nostrudel](https://nostrudel.ninja/), [Iris.to](https://Iris.to), [Snort.social](https://Snort.social), and [Damus.io](https://damus.io/) clients and works for the NIPs listed below.

Numerous branches in development, trying to improve performance, reliability and ease of use. The Datadog branch deploys a Datadog agent container to collect logs, metrics and traces to better observe application performance.
Numerous branches are in development to improve performance, reliability, and ease of use.

### Requirements

* Ubuntu 22.04 amd64 host server (Will likely work on other versions but this is all that has been tested)
* At least 2 GB of available RAM (4 GB recommended)
* Ubuntu 22.04 server (will likely work on other versions, but this is all that has been tested)
* Both `arm64` and `amd64` supported
* At least 2 GB of available RAM
* Your own domain
* Right now the main branch deploys the Datadog agent along with the application containers and has APM, DBM, and NPM preconfigured as well as some custom nostr StatsD metrics.
* If you don't have a Datadog developer account, you can apply for a developer account [here](https://partners.datadoghq.com/s/login/?ec=302&startURL=%2Fs%2F), or sign up for a trial [here](https://www.datadoghq.com/free-datadog-trial/) to get a Datadog API key.
* If you don't want to use the Datadog agent, simply don't enter the `DD_API_KEY` variable in the `.env` file and comment the service out from the `docker-compose.yaml` file.


## Instructions

@@ -30,23 +29,33 @@ Numerous branches in development, trying to improve performance, reliability and
To set up this program, you need to update the variables in `nostpy/docker_stuff/.env`, for example:

```
POSTGRES_DB=nostr
POSTGRES_USER=nostr
POSTGRES_PASSWORD=nostr
POSTGRES_PORT=5432
POSTGRES_HOST=172.28.0.4
PGDATABASE_WRITE=<POSTGRES_WRITE_DATABASE>
PGUSER_WRITE=<POSTGRES_WRITE_USER>
PGPASSWORD_WRITE=<POSTGRES_WRITE_PASSWORD>
PGPORT_WRITE=<POSTGRES_WRITE_PORT>
PGHOST_WRITE=<POSTGRES_WRITE_HOST>
PGDATABASE_READ=<POSTGRES_READ_DATABASE>
PGUSER_READ=<POSTGRES_READ_USER>
PGPASSWORD_READ=<POSTGRES_READ_PASSWORD>
PGPORT_READ=<POSTGRES_READ_PORT>
PGHOST_READ=<POSTGRES_READ_HOST>
DD_ENV=<DATADOG_ENV_TAG>
DD_API_KEY=<YOUR_DATADOG_API_KEY>
DOMAIN_NAME=<YOUR_DOMAIN_NAME>
HEX_PUBKEY=<YOUR_HEX_PUBLIC_KEY_FOR_NIP_11>
CONTACT=<YOUR_EMAIL_OR_NPUB>
EVENT_HANDLER_PORT=8009
EVENT_HANDLER_SVC=172.28.0.3 # hostname or IP for event handler service
WS_PORT=8008 # Websocket handler port
REDIS_HOST=redis
REDIS_PORT=6379
DD_API_KEY=<DATADOG_API_KEY> # only needed if using the Datadog exporter for the OTel collector
DOMAIN=<YOUR_DOMAIN_NAME>
HEX_PUBKEY=<RELAY_ADMIN_HEX_PUBKEY>
CONTACT=<RELAY_ADMIN_EMAIL>
ENV_FILE_PATH=./docker_stuff/.env
NGINX_FILE_PATH=/etc/nginx/sites-available/default
VERSION=v0.8
VERSION=v1.0.0
```
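
To sanity-check your configuration before bringing the stack up, you can print the values the services will actually see. This is a minimal sketch, not part of the repo, and it assumes `python-dotenv` is installed; the variable list mirrors the sample above:

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

# Load the same file the containers are pointed at
load_dotenv("docker_stuff/.env")

# Variables the event handler and websocket handler look up at startup
for key in (
    "PGHOST_WRITE", "PGHOST_READ", "EVENT_HANDLER_SVC", "EVENT_HANDLER_PORT",
    "WS_PORT", "REDIS_HOST", "REDIS_PORT", "DOMAIN", "HEX_PUBKEY", "CONTACT",
):
    value = os.getenv(key)
    print(f"{key}={value}" if value else f"{key} is MISSING")
```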

Aside from adding the environmental variables, all you need to do is run the `menu.py` script to load the menu. Once you select the `Execute server setup script` option, the script will install all dependencies, set up your NGINX reverse proxy server and request a TLS certificate, load environmental variables, build and launch the application and database containers. From there you are ready to start relaying notes!
Aside from adding the environment variables, all you need to do is run the `menu.py` script to load the menu. Once you select the `Execute server setup script` option, the script will install all dependencies and create all service containers locally, including setting up your NGINX reverse proxy and requesting a TLS certificate. From there you are ready to start relaying notes!

To get started, run the command below from the main repo directory to bring up the NostPy menu:

@@ -74,7 +83,13 @@ This will bring up the menu below and you can control the program from there!

* [Youtube video showing you how to clone and run nostpy](https://www.youtube.com/watch?v=9Fmu7K2_t6Y)

## Monitoring

This compose stack comes with a preconfigured OpenTelemetry collector container and some custom instrumentation. The existing configuration collects system metrics from the host and Docker containers, as well as distributed traces between the services.

Log support will be added soon, giving you full visibility into the health of your relay.

![Screenshot from 2024-06-15 10-45-06](https://github.com/UTXOnly/nost-py/assets/49233513/36afbaf4-cf7d-497b-8bb1-d2a90b7fa0af)
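
For reference, the span-export pattern the Python services use (visible in the `event_handler.py` diff below) boils down to a few lines of OpenTelemetry setup. This is a minimal sketch; the collector endpoint matches the compose-network address in `docker-compose.yaml`, and the span name is illustrative:

```python
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Ship spans to the collector container on the compose network
trace.set_tracer_provider(TracerProvider())
otlp_exporter = OTLPSpanExporter(endpoint="http://172.28.0.7:4317", insecure=True)
trace.get_tracer_provider().add_span_processor(BatchSpanProcessor(otlp_exporter))

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("example_span"):
    pass  # spans created here are batched and exported to the collector
```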


### Future plans
1 change: 1 addition & 0 deletions changelog.md
@@ -8,6 +8,7 @@
* OTel collector/Datadog exporter
* Support for both x86/ARM64 architecture for Python containers
* Upgraded Python containers to 3.11-slim base image
* Config option for separate read/write database instances

**Removed**
* Nginx reverse proxy on host
7 changes: 6 additions & 1 deletion docker_stuff/.env
@@ -4,11 +4,16 @@ PGPASSWORD=nostr
PGPORT=5432
PGHOST=postgres
DD_ENV=psycopg_otel
EVENT_HANDLER_PORT=
EVENT_HANDLER_SVC=172.28.0.3
WS_PORT=8008
REDIS_HOST=redis
REDIS_PORT=6379
DD_API_KEY=<YOUR_API_KEY>
DOMAIN=<YOUR_DOMAIN_HERE>
HEX_PUBKEY=<YOUR_HEX_PUBKEY>
CONTACT=your_email@email.com
ENV_FILE_PATH=./docker_stuff/.env
NGINX_FILE_PATH=/etc/nginx/sites-available/default
VERSION=v1.0
VERSION=v1.0.0

12 changes: 2 additions & 10 deletions docker_stuff/Dockerfile.event_handler
@@ -1,6 +1,5 @@
FROM python:3.11-slim

# Update package lists and install necessary dependencies
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
gcc \
@@ -9,18 +8,11 @@ RUN apt-get update \
g++ \
make \
&& rm -rf /var/lib/apt/lists/*

# Set the working directory in the container
WORKDIR /app

# Copy requirements file and install dependencies
COPY eh_requirements.txt .
RUN pip install --no-cache-dir -r eh_requirements.txt

# Copy application code
COPY ./python_stuff/event_handler.py .
COPY ./python_stuff/event_classes.py .
#"opentelemetry-instrument",
# Instrument the application with OpenTelemetry and set the command to run the application
CMD [ "python", "event_handler.py"]
COPY ./python_stuff/event*.py ./

CMD ["python", "event_handler.py"]
7 changes: 3 additions & 4 deletions docker_stuff/Dockerfile.websocket_handler
@@ -9,11 +9,10 @@ RUN apt-get update \
g++ \
make \
&& rm -rf /var/lib/apt/lists/*

COPY ws_requirements.txt .
RUN pip install --no-cache-dir -r ws_requirements.txt

COPY ./python_stuff/websocket_handler.py .
COPY ./python_stuff/websocket_classes.py .

COPY ./python_stuff/websocket*.py ./

CMD [ "python", "websocket_handler.py"]
CMD ["python", "websocket_handler.py"]
29 changes: 19 additions & 10 deletions docker_stuff/docker-compose.yaml
@@ -6,6 +6,9 @@ services:
dockerfile: Dockerfile.websocket_handler
environment:
- OTEL_EXPORTER_OTLP_ENDPOINT=http://172.28.0.7:4317
- EVENT_HANDLER_SVC=${EVENT_HANDLER_SVC}
- EVENT_HANDLER_PORT=${EVENT_HANDLER_PORT}
- WS_PORT=${WS_PORT}
ports:
- 8008:8008
networks:
@@ -18,12 +21,19 @@ services:
dockerfile: Dockerfile.event_handler
environment:
- OTEL_EXPORTER_OTLP_ENDPOINT=http://172.28.0.7:4317
- EVENT_HANDLER_PORT=${EVENT_HANDLER_PORT}
- REDIS_HOST=${REDIS_HOST}
- PGDATABASE=${PGDATABASE}
- PGUSER=${PGUSER}
- PGPASSWORD=${PGPASSWORD}
- PGPORT=${PGPORT}
- PGHOST=${PGHOST}
- REDIS_PORT=${REDIS_PORT}
- PGDATABASE_WRITE=${PGDATABASE_WRITE}
- PGUSER_WRITE=${PGUSER_WRITE}
- PGPASSWORD_WRITE=${PGPASSWORD_WRITE}
- PGPORT_WRITE=${PGPORT_WRITE}
- PGHOST_WRITE=${PGHOST_WRITE}
- PGDATABASE_READ=${PGDATABASE_READ}
- PGUSER_READ=${PGUSER_READ}
- PGPASSWORD_READ=${PGPASSWORD_READ}
- PGPORT_READ=${PGPORT_READ}
- PGHOST_READ=${PGHOST_READ}
networks:
nostpy_network:
ipv4_address: 172.28.0.3
@@ -34,9 +44,9 @@
postgres:
image: postgres:14
environment:
- POSTGRES_DB=${PGDATABASE}
- POSTGRES_USER=${PGUSER}
- POSTGRES_PASSWORD=${PGPASSWORD}
- POSTGRES_DB=${PGDATABASE_WRITE}
- POSTGRES_USER=${PGUSER_WRITE}
- POSTGRES_PASSWORD=${PGPASSWORD_WRITE}
ports:
- 5432:5432
networks:
@@ -65,11 +75,10 @@ services:
build:
context: .
dockerfile: Dockerfile.nginx
#image: bhartford419/nginx-certbot:nostpyv4
environment:
- DOMAIN=${DOMAIN}
- DOCKER_SVC=172.17.0.1
- SVC_PORT=8008
- SVC_PORT=${EVENT_HANDLER_PORT}
- VERSION=${VERSION}
- CONTACT=${CONTACT}
- HEX_PUBKEY=${HEX_PUBKEY}
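
Note how many more environment variables the event handler now receives; a missing one only surfaces as a runtime failure inside the container. Below is a small pre-flight check you could run before `docker compose up`. It is hypothetical and not part of the repo; the variable names come from the compose file above:

```python
import os
import sys

# Every variable docker-compose.yaml passes through to the services
REQUIRED = [
    f"PG{name}_{side}"
    for side in ("WRITE", "READ")
    for name in ("DATABASE", "USER", "PASSWORD", "PORT", "HOST")
] + ["EVENT_HANDLER_SVC", "EVENT_HANDLER_PORT", "WS_PORT", "REDIS_HOST", "REDIS_PORT"]

missing = [key for key in REQUIRED if not os.getenv(key)]
if missing:
    sys.exit(f"Missing environment variables: {', '.join(missing)}")
print("All required variables are set")
```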
57 changes: 37 additions & 20 deletions docker_stuff/python_stuff/event_handler.py
@@ -22,6 +22,14 @@

from event_classes import Event, Subscription

# Initialize the logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)

# from otel_metrics import PythonOTEL

app = FastAPI()
@@ -35,7 +43,6 @@
span_processor = BatchSpanProcessor(otlp_exporter)
otlp_tracer = trace.get_tracer_provider().add_span_processor(span_processor)


# py_otel = PythonOTEL()

# Set up a separate tracer provider for Redis
@@ -51,28 +58,35 @@

# Instrument Redis with the separate tracer provider
RedisInstrumentor().instrument(tracer_provider=redis_tracer_provider)
redis_client = redis.Redis(host=os.getenv("REDIS_HOST"), port=6379)

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
redis_client = redis.Redis(host=os.getenv("REDIS_HOST"), port=os.getenv("REDIS_PORT"))


def get_conn_str() -> str:
return f"""
dbname={os.getenv('PGDATABASE')}
user={os.getenv('PGUSER')}
password={os.getenv('PGPASSWORD')}
host={os.getenv('PGHOST')}
port={os.getenv('PGPORT')}
"""

def get_conn_str(db_suffix: str) -> str:
return (
f"dbname={os.getenv(f'PGDATABASE_{db_suffix}')} "
f"user={os.getenv(f'PGUSER_{db_suffix}')} "
f"password={os.getenv(f'PGPASSWORD_{db_suffix}')} "
f"host={os.getenv(f'PGHOST_{db_suffix}')} "
f"port={os.getenv(f'PGPORT_{db_suffix}')} "
)

@asynccontextmanager
async def lifespan(app: FastAPI):
app.async_pool = AsyncConnectionPool(conninfo=get_conn_str())
conn_str_write = get_conn_str('WRITE')
conn_str_read = get_conn_str('READ')
logger.info(f"Write conn string is: {conn_str_write}")
logger.info(f"Read conn string is: {conn_str_read}")

app.write_pool = AsyncConnectionPool(conninfo=conn_str_write)
app.read_pool = AsyncConnectionPool(conninfo=conn_str_read)

yield
await app.async_pool.close()

await app.write_pool.close()
await app.read_pool.close()




app = FastAPI(lifespan=lifespan)
Expand All @@ -86,7 +100,8 @@ def initialize_db() -> None:
"""
try:
conn = psycopg.connect(get_conn_str())
logger.info(f"conn string is {get_conn_str('WRITE')}")
conn = psycopg.connect(get_conn_str('WRITE'))
with conn.cursor() as cur:
# Create events table if it doesn't already exist
cur.execute(
@@ -138,7 +153,7 @@ async def handle_new_event(request: Request) -> JSONResponse:
with tracer.start_as_current_span("add_event") as span:
current_span = trace.get_current_span()
current_span.set_attribute(SpanAttributes.DB_SYSTEM, "postgresql")
async with request.app.async_pool.connection() as conn:
async with request.app.write_pool.connection() as conn:
async with conn.cursor() as cur:
if event_obj.kind in [0, 3]:
await event_obj.delete_check(conn, cur)
@@ -231,7 +246,7 @@ async def handle_subscription(request: Request) -> JSONResponse:
current_span.set_attribute(SpanAttributes.DB_STATEMENT, sql_query)
current_span.set_attribute("service.name", "postgres")
current_span.set_attribute("operation.name", "postgres.query")
async with app.async_pool.connection() as conn:
async with app.read_pool.connection() as conn:
async with conn.cursor() as cur:
await cur.execute(query=sql_query)
query_results = await cur.fetchall()
@@ -293,5 +308,7 @@ async def handle_subscription(request: Request) -> JSONResponse:


if __name__ == "__main__":
logger.info(f"Write conn string is: {get_conn_str('WRITE')}")
logger.info(f"Read conn string is: {get_conn_str('READ')}")
initialize_db()
uvicorn.run(app, host="0.0.0.0", port=8009)
uvicorn.run(app, host="0.0.0.0", port=int(os.getenv("EVENT_HANDLER_PORT")))
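
The net effect of the changes above: `/new_event` writes through `app.write_pool` while `/subscription` reads through `app.read_pool`, so the two pools can target separate Postgres instances, or the same one as the sample `.env` does. Below is a stripped-down sketch of the read side using the same `psycopg_pool` API; the environment values are assumed to be set as in the sample above:

```python
import asyncio
import os

from psycopg_pool import AsyncConnectionPool


def get_conn_str(db_suffix: str) -> str:
    # Same helper shape as event_handler.py: one env-var family per pool
    return (
        f"dbname={os.getenv(f'PGDATABASE_{db_suffix}')} "
        f"user={os.getenv(f'PGUSER_{db_suffix}')} "
        f"password={os.getenv(f'PGPASSWORD_{db_suffix}')} "
        f"host={os.getenv(f'PGHOST_{db_suffix}')} "
        f"port={os.getenv(f'PGPORT_{db_suffix}')} "
    )


async def main() -> None:
    # Reads go through the READ pool; the WRITE pool is built the same way
    async with AsyncConnectionPool(conninfo=get_conn_str("READ")) as pool:
        async with pool.connection() as conn:
            async with conn.cursor() as cur:
                await cur.execute("SELECT 1;")
                print(await cur.fetchone())


if __name__ == "__main__":
    asyncio.run(main())
```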
14 changes: 8 additions & 6 deletions docker_stuff/python_stuff/websocket_handler.py
@@ -1,6 +1,7 @@
import asyncio
import json
import logging
import os
from typing import Any, Dict, Tuple

import aiohttp
@@ -36,15 +37,17 @@
otlp_exporter = OTLPSpanExporter()
span_processor = BatchSpanProcessor(
otlp_exporter
) # we don't want to export every single trace by itself but rather batch them
)
otlp_tracer = trace.get_tracer_provider().add_span_processor(span_processor)


logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")

EVENT_HANDLER_SVC = os.getenv("EVENT_HANDLER_SVC")
EVENT_HANDLER_PORT = os.getenv("EVENT_HANDLER_PORT")


async def handle_websocket_connection(
websocket: websockets.WebSocketServerProtocol,
@@ -89,7 +92,6 @@ async def handle_websocket_connection(
)
with tracer.start_as_current_span("send_event_to_handle") as span:
current_span = trace.get_current_span()
# current_span.set_attribute("service.name", "websocket_handler")
current_span.set_attribute(
"operation.name", "send.event.handler"
)
@@ -150,7 +152,7 @@ async def send_event_to_handler(
event_dict: Dict[str, Any],
websocket: websockets.WebSocketServerProtocol,
) -> None:
url: str = "http://event_handler:8009/new_event"
url: str = f"http://{EVENT_HANDLER_SVC}:{EVENT_HANDLER_PORT}/new_event"
try:
async with session.post(url, data=json.dumps(event_dict)) as response:
current_span = trace.get_current_span()
@@ -173,7 +175,7 @@ async def send_subscription_to_handler(
subscription_id: str,
websocket: websockets.WebSocketServerProtocol,
) -> None:
url: str = "http://event_handler:8009/subscription"
url: str = f"http://{EVENT_HANDLER_SVC}:{EVENT_HANDLER_PORT}/subscription"
payload: Dict[str, Any] = {
"event_dict": event_dict,
"subscription_id": subscription_id,
@@ -213,7 +215,7 @@ async def send_subscription_to_handler(
rate_limiter = TokenBucketRateLimiter(tokens_per_second=1, max_tokens=50000)

try:
start_server = websockets.serve(handle_websocket_connection, "0.0.0.0", 8008)
start_server = websockets.serve(handle_websocket_connection, "0.0.0.0", os.getenv("WS_PORT"))
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()

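With the handlers wired together through `EVENT_HANDLER_SVC` and `EVENT_HANDLER_PORT`, a quick way to confirm the relay answers is to send a minimal NIP-01 `REQ` over a websocket. Below is a hypothetical smoke test, not part of the repo, assuming the relay is reachable on `ws://localhost:8008`:

```python
import asyncio
import json

import websockets  # same library the relay itself uses


async def smoke_test() -> None:
    # Assumes the relay is listening on WS_PORT=8008 locally
    async with websockets.connect("ws://localhost:8008") as ws:
        # Minimal REQ per NIP-01: a subscription id plus one filter
        await ws.send(json.dumps(["REQ", "smoke-test", {"kinds": [1], "limit": 1}]))
        print(await asyncio.wait_for(ws.recv(), timeout=5))


if __name__ == "__main__":
    asyncio.run(smoke_test())
```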
