diff --git a/posts/clox.md b/posts/clox.md
new file mode 100644
index 0000000..4407ec4
--- /dev/null
+++ b/posts/clox.md
@@ -0,0 +1,76 @@
+---
+title: My bytecode VM Lox interpreter
+date: "2024-11-06"
+---
+
+The aim of this post is to describe the general operation of the program and some of the mechanisms we consider to be of interest. For full details, a link to the source code is available at the bottom of the page.
+
+So I continued with “Crafting Interpreters” by Robert Nystrom after making [My Tree-Walker Lox interpreter](/posts/jlox). In this part I tried to do as many challenges as possible and really understand how a bytecode VM works.
+
+This version is written in C, which means we have to write a lot of code ourselves, but we don't use any external libraries.
+
+## Compiler
+
+The primary purpose of our compiler is to generate a chunk of code in bytecode form for interpretation by our bytecode virtual machine. Here are a few interesting features of the front end.
+
+### Scanner
+
+The token scanner is very classic, with only one thing to say: the function responsible for identifying the language's native keywords is very dirty. The author has chosen to use a large switch statement instead of implementing a sorting function, which is certainly efficient but not very elegant.
+
+### Parser
+
+An interesting point to note is that the author chose not to use a syntax tree for the front end. We therefore implemented a single-pass compiler (it directly converts compilation units into bytecode).
+
+We also implemented a Vaughan Pratt parser, in our case a “top-down operator precedence parser”. This means we have to define operator precedence in advance. Here's what it looks like in code.
+
+```c
+typedef enum {
+  PREC_NONE,
+  PREC_ASSIGNMENT, // =
+  PREC_OR, // or
+  PREC_AND, // and
+  PREC_EQUALITY, // == !=
+  PREC_COMPARISON, // < > <= >=
+  PREC_TERM, // + -
+  PREC_FACTOR, // * /
+  PREC_UNARY, // ! -
+  PREC_CALL, // . ()
+  PREC_PRIMARY
+} Precedence;
+```
+
+This precedence is simply used to control the parsing of expressions: while parsing at a given precedence level, rules with a lower precedence are not consumed.
+
+## Bytecode
+
+To manage conditions, we emit an `OP_JUMP` operation code. If the condition expression evaluates to false, it jumps to the end of the conditional block / expression. To do this, we use the concept of backpatching: we overwrite the immediate value of the instruction in the chunk during compilation.
+
+In my implementation, all immediate values are encoded in 8 bits, with the exception of constants, which have a size of 24 bits.
+
+## Virtual Machine
+
+The VM is centered on a stack where we push operands, local variables, etc.
+
+Everything at runtime is managed by call frames; even the top-level code is embedded within a function object.
+
+## Example
+
+Here is a simple Lox example that can be evaluated by my interpreter.
+ +```text +fun fib(n) { + if (n < 2) { + return n; + } + + return fib(n - 2) + fib(n - 1); +} + +print fib(10); +``` + +## Links + +[https://github.com/theobori/lox-virtual-machine](https://github.com/theobori/lox-virtual-machine) + +  diff --git a/posts/lox.md b/posts/jlox.md similarity index 73% rename from posts/lox.md rename to posts/jlox.md index f346a79..589824d 100644 --- a/posts/lox.md +++ b/posts/jlox.md @@ -1,5 +1,5 @@ --- -title: Yet another Lox interpreter +title: My Tree-Walker Lox interpreter date: "2024-03-22" --- @@ -7,21 +7,7 @@ I wanted to learn more about designing an interpreter, so I looked around and fo I read parts I and II, which focus on concepts, common techniques and language behavior. Since I have recently read these parts, writing helps me to better understand and even re-understand certain things. -For the moment I'm not quite done, I've implemented the features below. - -- *Tokens and lexing* -- *Abstract syntax trees* -- *Recursive descent parsing* -- *Prefix and infix expressions* -- *Runtime representation of objects* -- *Interpreting code using the Visitor pattern* -- *Lexical scope* -- *Environment chains for storing variables* -- *Control flow* -- *Functions with parameters* -- *Closures* -- *Static variable resolution and error detection* -  +The aim was to have a Lox interpreter that at least supported functions and closures, so we could have a taste of the basics. ## What is lox ? @@ -48,45 +34,6 @@ Scanning is also known as lexing or lexical analysis. It takes a linear stream o The scanner must group characters into the smalles possible sequence that represents something. This blobs of characters are called lexemes. -Here are some examples of token kinds. - -```python -... -from enum import Enum -... - -class TokenKind(Enum): - """ - Represents every available token kinds - """ - - # Single-character tokens - LEFT_PAREN = "left_paren", - RIGHT_PAREN = "right_paren", - LEFT_BRACE = "left_brace", - RIGHT_BRACE = "right_brace", - ... - - # One or two character tokens - BANG = "bang", - BANG_EQUAL = "bang_equal", - EQUAL = "equal", - ... - - # Literals - IDENTIFIER = "identifier", - STRING = "string", - NUMBER = "number", - - # Keywords - AND = "and", - CLASS = "class", - ELSE = "else", - FALSE = "false", - ... - - EOF = "eof" -```   ### Parsing @@ -146,7 +93,40 @@ So here, a valid strings could be the one below. The best explanation here is probably the one in the book. > *Recursive descent is considered a top-down parser because it starts from the top or outermost grammar rule (here expression ) and works its way down into the nested subexpressions before finally reaching the leaves of the syntax tree.* +  + +## Examples + +Here are some Lox examples that can be evaluated by my interpreter. + +```text +var b = 1; +var a = "hello"; +{ + var a = b + b; + + print a; +} + +print a; + +fun fibonacci(n) { + if (n <= 1) return n; + return fibonacci(n - 1) + fibonacci(n - 2); +} + +print fibonacci(5); + +print "helo" + "world"; + +fun echo(n) { + print n; + return n; +} + +print echo(echo(1) + echo(2)) + echo(echo(4) + echo(5)); +```   diff --git a/posts/nix.md b/posts/nix.md index 209d82b..d79f8ec 100644 --- a/posts/nix.md +++ b/posts/nix.md @@ -9,6 +9,8 @@ date: "2024-06-24"   Here I share some notes and other things I've learned about Nix that I find interesting. The content of this post is mainly about me learning Nix, it's not about understanding the whole tool and language. + +Also, it's important to note that I use Nix as a non-NixOS user.   
## What is Nix? @@ -314,7 +316,8 @@ Below is a Nix expression I wrote for the Python module [callviz](https://pypi.o nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable"; }; - outputs = { self, nixpkgs }: + outputs = + { self, nixpkgs }: let supportedSystems = [ "x86_64-linux" @@ -323,26 +326,31 @@ Below is a Nix expression I wrote for the Python module [callviz](https://pypi.o "aarch64-darwin" ]; - forEachSupportedSystem = f: nixpkgs.lib.genAttrs supportedSystems (system: f { - pkgs = import nixpkgs { inherit system; }; - }); + forEachSupportedSystem = + f: nixpkgs.lib.genAttrs supportedSystems (system: f { pkgs = import nixpkgs { inherit system; }; }); in { - packages = forEachSupportedSystem ({ pkgs }: { - default = pkgs.callPackage ./. { inherit (pkgs) python311; }; - }); - - devShells = forEachSupportedSystem ({ pkgs }: { - default = pkgs.mkShell { - venvDir = ".venv"; - packages = with pkgs; [ python311 ] ++ - (with pkgs.python311Packages; [ - pip - venvShellHook - graphviz - ]); - }; - }); + # ... + # I usually also declare a default package, a code checker and formatter + devShells = forEachSupportedSystem ( + { pkgs }: + { + default = pkgs.mkShell { + venvDir = ".venv"; + packages = + with pkgs; + [ + python3 + graphviz + ] + ++ (with pkgs.python3Packages; [ + pip + venvShellHook + graphviz + ]); + }; + } + ); }; } ``` @@ -376,23 +384,24 @@ Here's what the package looks like. SDL2_mixer, zlib, unstableGitUpdater, + makeWrapper, }: - stdenv.mkDerivation (finalAttrs: { pname = "supermariowar"; - version = "2.0-unstable-2024-06-22"; + version = "2023-unstable-2024-09-17"; src = fetchFromGitHub { owner = "mmatyas"; repo = "supermariowar"; - rev = "e646679c119a3b6c93c48e505564e8d24441fe4e"; - hash = "sha256-bA/Pu47Rm1MrnJHIrRvOevI3LXj207GFcJloP94/LOA="; + rev = "6b8ff8c669ca31a116754d23b6ff65e42ac50733"; + hash = "sha256-P0jV7G81thj0UJoYLd5+H5SjjaVu4goJxc9IkbzxJgs="; fetchSubmodules = true; }; nativeBuildInputs = [ cmake pkg-config + makeWrapper ]; buildInputs = [ @@ -410,17 +419,15 @@ stdenv.mkDerivation (finalAttrs: { mkdir -p $out/bin for app in smw smw-leveledit smw-worldedit; do - chmod +x $out/games/$app - - cat << EOF > $out/bin/$app - $out/games/$app --datadir $out/share/games/smw - EOF - chmod +x $out/bin/$app + makeWrapper $out/games/$app $out/bin/$app \ + --add-flags "--datadir $out/share/games/smw" done ln -s $out/games/smw-server $out/bin/smw-server ''; + passthru.updateScript = unstableGitUpdater { }; + meta = { description = "A fan-made multiplayer Super Mario Bros. style deathmatch game"; homepage = "https://github.com/mmatyas/supermariowar"; diff --git a/public_gemini/callviz.gmi b/public_gemini/callviz.gmi new file mode 100644 index 0000000..e5c2a97 --- /dev/null +++ b/public_gemini/callviz.gmi @@ -0,0 +1,51 @@ +# A toy to visualize recursive function calls +## 2024-06-29 +Recently, I did a little project in +=> https://python.org Python +to visualise function calls, especially recursive functions. + +It takes the form of a +=> https://python.org Python +decorator applied to the desired functions. The data structure used is fairly basic, a tree with nodes that have a parent and an indefinite number of children. Each node represents a function call, and the nodes also include the arguments passed to the function when it is called and, optionally, a return value. + +To generate a visual and have an overview of all the function calls, I used +=> https://graphviz.org/ Graphviz +to manage a graph and save it as a file (DOT, SVG, PNG, etc.). 
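To make that structure a little more concrete, here is a minimal sketch of such a call-tree node; the class and field names are assumptions for illustration, not the actual callviz internals.

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class CallNode:
    """One function call: its arguments, an optional result, a parent and children."""
    args: tuple
    kwargs: dict
    result: Optional[Any] = None
    parent: Optional["CallNode"] = None
    children: List["CallNode"] = field(default_factory=list)

    def add_child(self, node: "CallNode") -> "CallNode":
        node.parent = self
        self.children.append(node)
        return node
```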
+ +The decorator also supports memoization, which can also be represented on the final visual. + +## How is it used? + +These are two clear examples of how the decorator is used. + +```python +from callviz.core import callviz, set_output_dir + +set_output_dir("out") + +@callviz( + _format="png", + memoization=True, + open_file=True, + show_node_result=True, +) +def fib(n: int): + if n < 2: + return n + + return fib(n - 2) + fib(n - 1) + +@callviz(_format="png", show_link_value=False) +def rev(arr, new): + if arr == []: + return new + + return rev(arr[1:], [arr[0]] + new) + +fib(5) +rev(list(range(6, 0, -1)), []) +``` + +## Links + +=> https://github.com/theobori/callviz https://github.com/theobori/callviz diff --git a/public_gemini/clox.gmi b/public_gemini/clox.gmi new file mode 100644 index 0000000..be0bcd6 --- /dev/null +++ b/public_gemini/clox.gmi @@ -0,0 +1,73 @@ +# My bytecode VM Lox interpreter +## 2024-11-06 +The aim of this post is to describe the general operation of the program and some of the mechanisms that we consider to be of interest. For full details, a link to the source code is available at the bottom of the page. + +So I continued with “Crafting Interpreters” by Robert Nystrom after making +=> /posts/jlox My Tree-Walker Lox interpreter +. In this part I tried to do as many challenges as possible and really understand how a VM bytecode works. + +This version is written in C, which means we have to write a lot of code ourselves, but we don't use any external libraries. + +## Compiler + +The primary purpose of our compiler is to generate a chunk of code in bytecode form for interpretation by our bytecode virtual machine. Here are a few interesting features of the front end. + +### Scanner + +The token scanner is very classic, with only one thing to say: the function responsible for identifying the language's native keywords is very dirty. The author has chosen to use a large switch statement instead of implementing a sorting function, which is certainly powerful but not very elegant. + +### Parser + +An interesting point to note is that the author chose not to use a syntax tree for the front end. We therefore implemented a single-pass compiler (directly converts compile units into bytecode). + +We also implemented a Vaughan Pratt's parser, in our case a “top-down operator precedence parser”. This means we have to define operator precedence in advance. Here's what it looks like in code. + +```c +typedef enum { + PREC_NONE, + PREC_ASSIGNMENT, // = + PREC_OR, // or + PREC_AND, // and + PREC_EQUALITY, // == != + PREC_COMPARISON, // < > <= >= + PREC_TERM, // + - + PREC_FACTOR, // * / + PREC_UNARY, // ! - + PREC_CALL, // . () + PREC_PRIMARY +} Precedence; +``` + +This precedence is simply used to control the parsing of expressions. A rule with a lower precedence than the last parsed expression is not allowed. + +## Bytecode + +To manage conditions, we emit OP_JUMP operation code for conditions. If a condition expression is evaluated to false, it jumps to the end of the conditionnal block / expression. To do this, we use the concept of backpatching: we overwrite the immediate value of the instruction in the chunk during compilation. + +In my implementation, all immediate values are encoded on 8 bits, with the exception of constants, which have a size of 24 bits. + +## Virtual Machine + +The VM is centered on a stack where we push operands, local variables, etc.. + +Everything at runtime is managed by callframes, even the top-level code is embed within a function object. 
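To give an idea of what that means, here is a rough C sketch of the call frame and stack bookkeeping. It stays close to the book's clox, but the names and the simplified Value type are illustrative, not copied from my sources.

```c
#include <stdint.h>

#define FRAMES_MAX 64
#define STACK_MAX (FRAMES_MAX * 256)

typedef struct ObjFunction ObjFunction; /* holds a chunk of bytecode and an arity */
typedef double Value;                   /* simplified: the real Value is a tagged union */

typedef struct {
  ObjFunction *function; /* the function whose chunk is being executed */
  uint8_t *ip;           /* next instruction inside that chunk */
  Value *slots;          /* this frame's window into the value stack */
} CallFrame;

typedef struct {
  CallFrame frames[FRAMES_MAX];
  int frameCount;
  Value stack[STACK_MAX];
  Value *stackTop;
} VM;
```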
+ +## Example + +Here is a simple Lox example that can be evaluated by my interpreter. + +```text +fun fib(n) { + if (n < 2) { + return n; + } + + return fib(n - 2) + fib(n - 1); +} + +print fib(10); +``` + +## Links + +=> https://github.com/theobori/lox-virtual-machine https://github.com/theobori/lox-virtual-machine diff --git a/public_gemini/ebpf.gmi b/public_gemini/ebpf.gmi index ed61ee5..ccfddec 100644 --- a/public_gemini/ebpf.gmi +++ b/public_gemini/ebpf.gmi @@ -1,8 +1,10 @@ -# eBPF essentials +# My eBPF exploration ## 2024-01-11 Having discovered eBPF and read a few books about it, I'm writing here the essentials to remember about the basics. It's mainly a mix of my personal notes from the books "Learning eBPF" by Liz Rice and "Linux Observability with BPF" by David Calavera. The aim is to write down the essentials without going into too much technical detail, a sort of memo. +You can find my eBPF (XDP) projects at the bottom of the page. + ## What is eBPF ? eBPF stands for extended Berkeley Packet Filter. It's a virtual machine with a minimalist instructions set in the kernel (Linux) that lets you run BPF programs from user space. These BPF programs are attached to objects in the kernel and executed when these objects are triggered by events. diff --git a/public_gemini/homelab.gmi b/public_gemini/homelab.gmi new file mode 100644 index 0000000..3975133 --- /dev/null +++ b/public_gemini/homelab.gmi @@ -0,0 +1,181 @@ +# My homelab +## 2024-06-06 +I've got an old laptop that I don't use anymore, so I thought I'd turn it into a server and deploy some free, open-source web services on it. + +The aim is to create a private homelab, i.e. the machine should only be accessible via the local network. None of the services are exposed to the Internet, with the exception of Wireguard, which lets me access the services from the outside. + +The aim of this post is to present the main steps I've taken and explain how the homelab works. + +My laptop is an +=> https://laptopmedia.com/series/asus-rog-g750/ ASUS ROG G750 +with 8GB of memory and 2 HDDs of around 600GB each. It hasn't been used for about five or six years and the battery is dead. + +## First steps + +First, I decided to make an old USB key bootable. I install +=> https://www.ventoy.net/ Ventoy +on it to be able to load different image disks (ISO) without having to rewrite each time directly on the USB key. + +I put +=> https://www.memtest.org/ Memtest86+ +to test the memory, +=> https://github.com/PartialVolume/shredos.x86_64 shredos.x86_64 +to wipe the HDDs and finally +=> https://cdimage.debian.org/debian-cd/12.5.0/amd64/iso-cd/ Debian 12 +which will be the main OS. + +So when I boot on the USB key, it loads the "multiboot" boot-loader (Ventoy) and I can then load one of the three programs. + +## Pre configuration + +To be able to deploy the system configuration and reproduce it later, I'm writing an Ansible playbook and testing it on a local VM (virt-manager + KVM). + +The entire configuration is available at the bottom of the page. + +## TLS certificates + +I want communication with web applications to be encrypted and secure, so I need an HTTPS server, so I need TLS certificates and, to make things easier, a domain name. + +For the domain name I used +=> https://www.duckdns.org/ Duck DNS +and reserved the sub-domain +=> https://theobori.duckdns.org theobori.duckdns.org +which for the moment corresponds to the IPv4 of my virtual machine accessible only from the host system. 
+ +In fact, I only need to manage one certificate with two SANs: + +* theobori.duckdns.org +* *.theobori.duckdns.org + +## Services + +Every application is deployed with the Ansible playbook are conteuneurized and managed with Docker. + +They are accessible only through port 443 managed by +=> https://traefik.io/ Traefik +. Each sub-domain of +=> https://theobori.duckdns.org theobori.duckdns.org +corresponds to a service, with the exception of the homepage, which is associated with the domain itself. + +## Firewall + +To filter incoming network traffic, I manipulate iptables with the ufw tool. There are only four ports open as declared below in the Ansible playbook configuration. + +```yaml +- role: weareinteractive.ufw + tags: ufw + ufw_enabled: true + ufw_packages: ["ufw"] + ufw_rules: + - logging: "full" + - rule: allow + to_port: "443" + - rule: allow + to_port: "80" + - rule: allow + {% raw %} to_port: "{{ ssh_port }}" {% endraw %} + # Wireguard + - rule: allow + to_port: "51820" + proto: udp + # Delete default rule + - rule: allow + name: Anywhere + delete: true + ufw_manage_config: true + ufw_config: + IPV6: "yes" + DEFAULT_INPUT_POLICY: DROP + DEFAULT_OUTPUT_POLICY: ACCEPT + DEFAULT_FORWARD_POLICY: DROP + DEFAULT_APPLICATION_POLICY: SKIP + MANAGE_BUILTINS: "no" + IPT_SYSCTL: /etc/ufw/sysctl.conf + IPT_MODULES: "" +``` + +## Identity provider + +Services with integration for protocols to verify user identity or determine permissions are all linked to the +=> https://goauthentik.io/ Authentik +user directory. + +I needed OAuth2 for +=> https://www.portainer.io/ Portainer +and LDAP for several other services such as +=> https://owncloud.com/ Owncloud +. + +If I remember correctly, the OAuth2 Outpost is embedded in the application by default, whereas the LDAP Outpost had to be configured with specific parameters for Docker. + +Here's a diagram of several services trying to retrieve the identity of an +=> https://goauthentik.io/ Authentik +user. + +## Access management + +With +=> https://goauthentik.io/ Authentik +, group policies have been created to authorize only certain groups of users to access certain services. + +For example, for +=> https://jellyfin.org/ Jellyfin +, only users in the Jellyfin group are authorized to connect. + +In this way, I was able to secure all administration services by authorizing only users in groups reserved for administration. + +I also used +=> https://traefik.io/ Traefik +and +=> https://goauthentik.io/ Authentik +to secure access to services not protected by authentication. + +I added middleware to the reverse proxy to enable HTTP ForwardAuth with +=> https://goauthentik.io/ Authentik +. In practical terms, this places a connection portal in front of the targeted web services. + +Let's say I want to access +=> https://duplicati.theobori.duckdns.org duplicati.theobori.duckdns.org +, it could be schematized as follows. + +## Media stack + +One of the main objectives was to be able to manage movies and series and watch them from any device on the local network. + +So I set up a stack for managing and downloading media, which would then be streamed to devices by +=> https://jellyfin.org/ Jellyfin +. + +Here's what the media stack looks like. + +## Backup and restore + +To back up container data, I use +=> https://duplicati.com/ Duplicati +. It lets you encrypt data and manage retention very easily via a web interface. + +These backups can then be restored on my old computer. 
+ +## Monitoring + +To keep abreast of service status, I've opted for +=> https://uptime.kuma.pet/ Uptime Kuma +, which will alert me via Discord when a service is down for n seconds. + +I also have a +=> https://prometheus.io/ Prometheus +and +=> https://grafana.com/ Grafana +stack that lets me collect metrics on the system and on Docker containers. As for +=> https://uptime.kuma.pet/ Uptime Kuma +, I'm alerted by Discord according to limits defined for RAM and available storage space, for example. + +This is how the monitoring stack looks. + +## Final home page + +Here's an overview of the dashboard, featuring all the services exposed to the local network. In a way it's the end result of service implementation. + +## Links + +=> https://github.com/theobori/homelab https://github.com/theobori/homelab diff --git a/public_gemini/index.gmi b/public_gemini/index.gmi index a1f194e..4dd9f12 100644 --- a/public_gemini/index.gmi +++ b/public_gemini/index.gmi @@ -8,31 +8,32 @@ Hi, I'm Théo, -I support FOSS, FLOSS and pubnix(es) values, I love Linux and UNIX systems. +I support F(L)OSS and pubnixes values, I love UNIX systems and I also really like Arch Linux and Nix. +Currently I'm maintaining teedata.net (skins.tw until 2024) and I offer free services that respect privacy. -Everything I make is open source and available on GitHub and Gitea. -I also have a CTFtime and a LinkedIn profile. - -Currently I'm maintaining teedata.net (skins.tw until 2024). -I offer free services that respect privacy. +If you're interested you can have a look at my blog posts, everything I make is open source and available on GitHub and Gitea. => https://git.theobori.cafe/nagi Gitea => https://www.github.com/theobori GitHub -=> https://www.linkedin.com/in/theo-bori/ LinkedIn -=> https://ctftime.org/user/67138/ CTFtime => https://teedata.net teedata.net => https://teedata.net skins.tw => https://services.theobori.cafe services ## Contact -I can be reached via Discord (b0th) or via nagi@cock.li. -=> /pgp.gmi PGP +My links and contact details are available here. +=> https://links.theobori.cafe LinkStack ## Other protocols => https://theobori.cafe HTTPS => gopher://tilde.pink:70/1/~nagi Gopher ## Posts +=> /~nagi/clox.gmi My bytecode VM Lox interpreter - nov 2024 +=> /~nagi/callviz.gmi A toy to visualize recursive function calls - jun 2024 +=> /~nagi/nix.gmi My Nix exploration - jun 2024 +=> /~nagi/homelab.gmi My homelab - jun 2024 +=> /~nagi/terraform_chaos_teeworlds.gmi Teeworlds Terraform chaos engineering - mar 2024 +=> /~nagi/jlox.gmi My Tree-Walker Lox interpreter - jun 2024 => /~nagi/openbsd_ports.gmi Porting X11 apps to OpenBSD - mar 2024 => /~nagi/chezmoi.gmi Manage dotfiles with chezmoi - mar 2024 => /~nagi/ebpf.gmi eBPF essentials - jan 2024 @@ -43,7 +44,7 @@ I can be reached via Discord (b0th) or via nagi@cock.li. 
=> /~nagi/knockd_ufw.gmi OpenSSH port knocking with UFW - oct 2023
=> /~nagi/teeworlds-utilities.gmi Teeworlds utilities - jul 2023
=> /~nagi/tf-neuvector.gmi Terraform NeuVector provider - jun 2023
-=> /~nagi/tf-doom.gmi Terraform chaos engineering - jun 2023
+=> /~nagi/terraform_chaos_doom.gmi Terraform chaos engineering - jun 2023
=> /~nagi/tinywad.gmi DOOM modding library - may 2023
=> /~nagi/websites.gmi Interesting websites - mar 2023
=> /~nagi/tinychip.gmi CHIP-8 emulator - mar 2023
diff --git a/public_gemini/jlox.gmi b/public_gemini/jlox.gmi
new file mode 100644
index 0000000..7f655c5
--- /dev/null
+++ b/public_gemini/jlox.gmi
@@ -0,0 +1,112 @@
+# My Tree-Walker Lox interpreter
+## 2024-03-22
+I wanted to learn more about designing an interpreter, so I looked around and found the free "Crafting Interpreters" by Robert Nystrom.
+
+I read parts I and II, which focus on concepts, common techniques and language behavior. Since I have recently read these parts, writing helps me to better understand and even re-understand certain things.
+
+The aim was to have a Lox interpreter that at least supported functions and closures, so we could have a taste of the basics.
+
+## What is Lox?
+
+To sum up
+=> https://craftinginterpreters.com/the-lox-language.html this page
+, Lox is a small, high-level scripting language, with dynamic types and automatic memory management. It is similar to JavaScript, Lua and Scheme.
+
+A cool fact is that Lox is Turing complete, which means it can simulate a Turing machine.
+
+## Essential basics
+
+I've learned some key concepts, and here are a few of the most important.
+
+### Scanning
+
+Scanning is also known as lexing or lexical analysis. It takes a linear stream of characters and chunks them into tokens (words).
+
+The scanner must group characters into the smallest possible sequence that represents something. These blobs of characters are called lexemes.
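As a tiny illustration, a line similar to the book's scanning example would be grouped into the lexemes below.

```text
var average = (min + max) / 2;

"var"  "average"  "="  "("  "min"  "+"  "max"  ")"  "/"  "2"  ";"
```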
+ +> *Recursive descent is considered a top-down parser because it starts from the top or outermost grammar rule (here expression ) and works its way down into the nested subexpressions before finally reaching the leaves of the syntax tree.* + +## Examples + +Here are some Lox examples that can be evaluated by my interpreter. + +```text +var b = 1; +var a = "hello"; + +{ + var a = b + b; + + print a; +} + +print a; + +fun fibonacci(n) { + if (n <= 1) return n; + return fibonacci(n - 1) + fibonacci(n - 2); +} + +print fibonacci(5); + +print "helo" + "world"; + +fun echo(n) { + print n; + return n; +} + +print echo(echo(1) + echo(2)) + echo(echo(4) + echo(5)); +``` + +## Links + +=> https://github.com/theobori/tinylox https://github.com/theobori/tinylox diff --git a/public_gemini/nix.gmi b/public_gemini/nix.gmi new file mode 100644 index 0000000..4453263 --- /dev/null +++ b/public_gemini/nix.gmi @@ -0,0 +1,450 @@ +# My Nix exploration +## 2024-06-24 + +Here I share some notes and other things I've learned about Nix that I find interesting. The content of this post is mainly about me learning Nix, it's not about understanding the whole tool and language. + +Also, it's important to note that I use Nix as a non-NixOS user. + +## What is Nix? + +Nix is actually several things! + +It's a cross platform package manager. It would be a little more accurate to say that it's a deployment tool used as a package manager. + +And it's also a purely functional programming language, dynamically typed and lazily evaluated. + +## Learning the programming language + +I started by learning the basics of the language and then went on to explore it in a bit more depth. + +### The basics + +I read +=> https://nix.dev/tutorials/nix-language#reading-nix-language Nix language basics +and to get used to the language I practised with +=> https://nixcloud.io/tour A tour of Nix +which has several levels of difficulty from "easy" to "hard". + +One interesting thing about this language is that it has only one argument per function. To simulate several arguments, you can, for example, write a function with one argument that returns a function with one argument that returns a function with one argument, and so on. The syntax of the language makes it easy to do this. + +I was taught that it has a name, it's called +=> https://en.wikipedia.org/wiki/Currying Currying +. It's the transformation of a function with several arguments into a function with one argument that returns a function on the rest of the arguments. Here's an example with arguments 3 and 4. + +```nix +nix-repl> (a: b: a + b) 3 4 +7 +``` + +A Python equivalent might be something like the following. + +```python +>>> (lambda a: lambda b: a + b)(3)(4) +7 +``` + +Another solution that is often used, particularly in +=> https://github.com/NixOS/nixpkgs Nixpkgs +, is to have an attribute set as a parameter to the function, and to use the attributes as arguments. For example, this might look like the expression below. + +```nix +nix-repl> ({a, b}: a + b){a = 3; b = 4;} +7 +``` + +### Fake dynamic binding + +Although the blog post +=> http://r6.ca/blog/20140422T142911Z.html How to Fake Dynamic Binding in Nix +talks about this very well, I find it interesting to offer my own thoughts and approach. + +The language is statically scoped, i.e. binding decisions are made according to the scope at declaration time. + +Let's look at the rec keyword, which allows an attribute set to access its own attributes (recursive binding). Here's an example. 
+ +```nix +nix-repl> rec { a = 1; b = a + 1;} +{ + a = 1; + b = 2; +} +``` + +This is an interesting feature, but it remains static because the binding is done before the runtime. This poses problems, particularly when it comes to overriding attributes, as shown in the example below. + +```nix +nix-repl> rec { a = 1; b = a + 1; } // { a = 10; } +{ + a = 10; + b = 2; +} +``` + +In this example, we would like b to be equal to 11, not 2. + +To solve this problem, we can look at the concept of a fixed point. A fixed point is a value of x that validates the equation x = f(x). + +We can therefore write the following function. + +```nix +nix-repl> fix = f: let + result = f result; +in + result +``` + +So here we have the function fix which takes a function f as a parameter and returns the fixed point result of the function f. + +You might be tempted to say that the f function calls itself ad infinitum (f(f(f(f(..))))), but Nix evaluates expressions lazily, so this isn't the case. + +We can literally see that the f function returns a fixed point (result), because result = f result, which respects the definition of a fixed point. + +The fix function will allow us to emulate the rec keyword, as shown in the example below. + +```nix +nix-repl> fix (self: { a = 3; b = 4; c = self.a + self.b; }) +{ + a = 3; + b = 4; + c = 7; +} +``` + +To better understand how it works, I've written the result of the fix function differently with the argument used previously. + +```nix +nix-repl> let + result = { a = 3; b = 4; c = result.a + result.b;}; +in + { a = 3; b = 4; c = result.a + result.b;} +{ + a = 3; + b = 4; + c = 7; +} +``` + +Finally, I've written the following function, which will allow the attributes to be overridden dynamically as initially intended. + +```nix +nix-repl> fix = let + fixWithOverride = f: overrides: let + result = (f result) // overrides; + in + result // { override = x: fixWithOverride f x; }; +in +f: fixWithOverride f {} + +attrFunction = self: { a = 3; b = 4; c = self.a+self.b; } + +attrFunctionFixedPoint = fix attrFunction + +nix-repl> attrFunctionFixedPoint +{ + a = 3; + b = 4; + c = 7; + override = «lambda override @ «string»:5:30»; +} + +nix-repl> attrFunctionFixedPoint.override { b = 1; } +{ + a = 3; + b = 1; + c = 4; + override = «lambda override @ «string»:5:30»; +} +``` + +## The essential Nix tool + +As already mentioned, the main use of Nix is cross platform package management. In this section I'm just trying to share and summarise some of the essential parts of my notes. If you want more details, I recommend you read the excellent +=> https://nixos.org/guides/nix-pills/ Nix Pills +. It's rather long but well worth the read! + +### How does it work ? + +To sum up, I'd say that the Nix language has a very interesting native function called derivation ( +=> https://nix.dev/manual/nix/2.22/language/derivations see documentation +) on which many Nix expressions are based. I'm not going to redefine the term because the documentation has a very comprehensible version, but the important thing to remember is that a derivation is a construction specification, it's an immutable Nix building block. With another package manager, you could see it as a literal package. + +Nix technology will enable us to build these derivations, in the following stages. + +The .drv files contain specifications on how to build the derivation, they are intermediate files comparable to .o files, and the .nix files are comparable to .c files. 
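To make the analogy concrete, the two steps can also be run by hand; this is only a sketch and the store paths below are placeholders.

```bash
# "Compile" the Nix expression into a .drv file (roughly the .c -> .o step)
nix-instantiate default.nix
# => /nix/store/<hash>-<name>.drv

# Build the .drv into its immutable output in the store
nix-store --realise /nix/store/<hash>-<name>.drv
# => /nix/store/<hash>-<name>
```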
+ +The construction result is immutable and will be stored in /nix/store/, a synchronisation with the +=> https://www.sqlite.org/ SQLite +database. I said it was immutable, in fact it is because Nix creates a hash for the path in the /nix/store/ from the input derivation (not from the construction result). + +It's pretty hard to imagine all this, so I'll give you a concrete example. Let's imagine I want to create a derivation for the famous software +=> https://www.gnu.org/software/hello/ GNU Hello +. The Nix derivation could look something like this. + +```nix +# default.nix + +let + pkgs = import { }; +in + { + hello = pkgs.stdenv.mkDerivation { + pname = "hello"; + version = "2.12.1"; + + src = fetchTarball { + url = "https://ftp.gnu.org/gnu/hello/hello-2.12.1.tar.gz"; + sha256 = "1kJjhtlsAkpNB7f6tZEs+dbKd8z7KoNHyDHEJ0tmhnc="; + }; + }; + } +``` + +> The mkDerivation function is based on the derivation builtin function. + +It can be built with the following command. + +```bash +nix-build +``` + +The build result has been created in /nix/store/x9cc4jsylk5q01iaxmxf941b59chws5h-hello-2.12.1 and a symbolic link named result pointing to this folder has been created in the current folder. We can then find the binary in ./result/bin/hello. + +Before the build, a .drv file was created, which can be found by running the following command. + +```bash +nix derivation show ./result | jq "keys[0]" +``` + +The full path to the .drv file is found in the first key of the JSON object, so the path to the .drv file is /nix/store/dp5z62k3chf019biikg77p2acmz17phx-hello-2.12.1.drv. + +As it is in binary format we can use nix derivation show to display the construction information it contains with the following command. + +```bash +nix derivation show (nix derivation show ./result | jq "keys[0]" | tr -d "\"") +# Or +nix derivation show /nix/store/dp5z62k3chf019biikg77p2acmz17phx-hello-2.12.1.drv +# ^ +# | Same output +# v +nix derivation show ./result +``` + +### Nixpkgs + +In the Nix expression used previously (the +=> https://www.gnu.org/software/hello/ GNU Hello +derivation), I used the mkDerivation function from stdenv. + +This function is not builtin, it comes from the pkgs identifier which has the value import { };. + +Before explaining this import, I think it's very important to understand what +=> https://github.com/NixOS/nixpkgs Nixpkgs +is. It's a Git repository that contains all the Nix expressions and modules. When this folder is evaluated, it produces an attribute set containing stdenv, which is itself an attribute set containing our mkDerivation function. + +Getting back to pkgs, is just a special Nix syntax, which, when evaluated, gives a path to a folder containing a collection of Nix expressions, i.e. Nixpkgs. + +Incidentally has an equivalence in Nix as shown below. + +```nix +nix-repl> +/home/nagi/.nix-defexpr/channels/nixpkgs + +nix-repl> builtins.findFile builtins.nixPath "nixpkgs" +/home/nagi/.nix-defexpr/channels/nixpkg + +nix-repl> :p builtins.nixPath +[ + { + path = "/home/nagi/.nix-defexpr/channels"; + prefix = ""; + } +] +``` + +### Managing multiple Python versions + +One of the advantages of Nix is that it naturally offers the possibility of managing several versions of the same application. Taking +=> https://www.python.org/ Python +as an example, let's say I want a Nix shell with version 3.7 and version 3.13. 
+ +To do this, we can check for which version of +=> https://github.com/NixOS/nixpkgs Nixpkgs +Python was built on version 3.7 and target a specific version of +=> https://github.com/NixOS/nixpkgs Nixpkgs +in our Nix expression. + +To do this, there's the +=> https://floxdev.com/ flox +tool which works very well, but to make it easier to understand I prefer to use +=> https://www.nixhub.io nixhub.io +. + +So I'm looking for a version of the Nix packages that corresponds to Python version 3.7, and I find nixpkgs/aca0bbe791c220f8360bd0dd8e9dce161253b341#python37. + +```nix +# shell.nix + +let + pkgs = import (fetchTarball "https://github.com/NixOS/nixpkgs/tarball/nixos-23.11") { }; + nixpkgs-python = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/aca0bbe791c220f8360bd0dd8e9dce161253b341.tar.gz") { }; +in + pkgs.mkShell { + buildInputs = [ + nixpkgs-python.python37 + pkgs.python313 + ]; + } +``` + +You can build Python derivations and enter a Nix shell with the following command. + +```bash +nix-shell +``` + +And we see that we have access to the two versions requested with the commands python3.7 and python3.13 ! + +## A Virtual environment in Python with Nix flakes + +I've recently created a development environment with Nix flakes ( +=> https://nixos.wiki/wiki/Flakes see documentation +), it's very handy as it provides a ready to use environment for Python 3.11 with the desired modules. + +Below is a Nix expression I wrote for the Python module +=> https://pypi.org/project/callviz/ callviz +, it has all the necessary dependencies and a virtual Python environment. + +```nix +# flake.nix + +{ + inputs = { + nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable"; + }; + + outputs = + { self, nixpkgs }: + let + supportedSystems = [ + "x86_64-linux" + "aarch64-linux" + "x86_64-darwin" + "aarch64-darwin" + ]; + + forEachSupportedSystem = + f: nixpkgs.lib.genAttrs supportedSystems (system: f { pkgs = import nixpkgs { inherit system; }; }); + in + { + # ... + # I usually also declare a default package, a code checker and formatter + devShells = forEachSupportedSystem ( + { pkgs }: + { + default = pkgs.mkShell { + venvDir = ".venv"; + packages = + with pkgs; + [ + python3 + graphviz + ] + ++ (with pkgs.python3Packages; [ + pip + venvShellHook + graphviz + ]); + }; + } + ); + }; +} +``` + +Note that the default package and the default development shell are compatible with all systems (supportedSystems)! + +To realise the derivations and enter the Nix shell, I can run the following command. + +```bash +nix develop +``` + +## Nixpkgs contribution + +Once I'd finished exploring and learning Nix, I wanted to make a package for +=> http://smwstuff.net/game Super Mario War +and add it to +=> https://github.com/NixOS/nixpkgs Nixpkgs +. + +Here's what the package looks like. 
+ +```nix +{ + lib, + stdenv, + fetchFromGitHub, + cmake, + pkg-config, + enet, + yaml-cpp, + SDL2, + SDL2_image, + SDL2_mixer, + zlib, + unstableGitUpdater, + makeWrapper, +}: +stdenv.mkDerivation (finalAttrs: { + pname = "supermariowar"; + version = "2023-unstable-2024-09-17"; + + src = fetchFromGitHub { + owner = "mmatyas"; + repo = "supermariowar"; + rev = "6b8ff8c669ca31a116754d23b6ff65e42ac50733"; + hash = "sha256-P0jV7G81thj0UJoYLd5+H5SjjaVu4goJxc9IkbzxJgs="; + fetchSubmodules = true; + }; + + nativeBuildInputs = [ + cmake + pkg-config + makeWrapper + ]; + + buildInputs = [ + enet + yaml-cpp + SDL2 + SDL2_image + SDL2_mixer + zlib + ]; + + cmakeFlags = [ "-DBUILD_STATIC_LIBS=OFF" ]; + + postInstall = '' + mkdir -p $out/bin + + for app in smw smw-leveledit smw-worldedit; do + makeWrapper $out/games/$app $out/bin/$app \ + --add-flags "--datadir $out/share/games/smw" + done + + ln -s $out/games/smw-server $out/bin/smw-server + ''; + + passthru.updateScript = unstableGitUpdater { }; + + meta = { + description = "A fan-made multiplayer Super Mario Bros. style deathmatch game"; + homepage = "https://github.com/mmatyas/supermariowar"; + changelog = "https://github.com/mmatyas/supermariowar/blob/${finalAttrs.src.rev}/CHANGELOG"; + license = lib.licenses.gpl2Plus; + maintainers = with lib.maintainers; [ theobori ]; + mainProgram = "smw"; + platforms = lib.platforms.linux; + }; +}) +``` diff --git a/public_gemini/openbsd_ports.gmi b/public_gemini/openbsd_ports.gmi index 0891e43..ad446c9 100644 --- a/public_gemini/openbsd_ports.gmi +++ b/public_gemini/openbsd_ports.gmi @@ -20,7 +20,7 @@ Before making the game compatible with the distribution, it's best to fetch the ## OpenBSD environment -My test environment is just a virtual machine managed by VirtualBox on which +My test environment is a virtual machine managed by virt-manager (using libvirt to interact with KVM) on which => https://www.openbsd.org/74.html OpenBSD 7.4 has been installed, following the steps => https://www.openbsdhandbook.com/installation/ here diff --git a/public_gemini/teeworlds-utilities.gmi b/public_gemini/teeworlds-utilities.gmi index bc7feb9..0e9187f 100644 --- a/public_gemini/teeworlds-utilities.gmi +++ b/public_gemini/teeworlds-utilities.gmi @@ -11,7 +11,7 @@ and for the Teedata Discord bot. Indirectly, other people use it, for example, to render skins in a Discord channel that displays messages in real time (fokkonaut's Discord server) or in other projects like => https://teeassembler.developer.li/ TeeAssembler 2.0 -that used some part of the **teeworlds-utilites** code. +that used some part of the Teeworlds utilities code. ## Use case examples diff --git a/public_gemini/teeworlds.gmi b/public_gemini/teeworlds.gmi index 6a5b150..8fcd7df 100644 --- a/public_gemini/teeworlds.gmi +++ b/public_gemini/teeworlds.gmi @@ -79,9 +79,9 @@ Now we can create and start a new container with the teeworlds client image we j I consider that you're using X as your windowing system, rather than something like Wayland or something else. -### X Window System +### X display server -So that the game can work and we can play it. I assume you are using the X window system and that you have a X server listening at a UNIX domain socket. +So that the game can work and we can play it. I assume you are using an X display server and that it is listening at a UNIX domain socket. That is why we are forwarding the /tmp/.X11-unix/ directory that contains the UNIX domain socket(s) for the X server. 
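For the record, the forwarding described above usually boils down to a couple of docker run flags; the image name and exact flags here are illustrative, not the ones used in the post.

```bash
docker run --rm \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  teeworlds-client
```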
diff --git a/public_gemini/terraform_chaos_doom.gmi b/public_gemini/terraform_chaos_doom.gmi new file mode 100644 index 0000000..2f9d136 --- /dev/null +++ b/public_gemini/terraform_chaos_doom.gmi @@ -0,0 +1,25 @@ +# DOOM Terraform chaos engineering +## 2023-06-03 +I first saw +=> https://github.com/storax/kubedoom kubedoom +and thought it was pretty cool, so I decided to do the same for Terraform, knowing that I was working with it for professional projects. + +The principle is very simple, each enemy represents a Terraform resource, if an enemy dies, the associated resource is destroyed. + +## How does it work? + +The main program is terraform-doom, which creates a UNIX socket, listens to it and simultaneously launches an X11 virtual server (Xvfb), a VNC server (x11vnc) attached to this X session and psdoom (DOOM writing to the UNIX socket). + +The binaries Xvfb and x11vnc are used to create a cross-platform graphical access to psdoom inside the container. + +At runtime psdoom will continuously write to the UNIX socket to signal terraform-doom to send Terraform resource information. When an enemy is killed, psdoom writes the associated resource name to the socket. + +Everything we've just described will be encapsulated in a Docker container. + +## Demonstration + +This demonstration has been realized with the example Terraform project, every steps to reproduce it are detailed in the README file on the repository. + +## Links + +=> https://github.com/theobori/terraform-doom https://github.com/theobori/terraform-doom diff --git a/public_gemini/terraform_chaos_teeworlds.gmi b/public_gemini/terraform_chaos_teeworlds.gmi new file mode 100644 index 0000000..a842b62 --- /dev/null +++ b/public_gemini/terraform_chaos_teeworlds.gmi @@ -0,0 +1,32 @@ +# Teeworlds Terraform chaos engineering +## 2024-06-05 +After doing some +=> /posts/terraform_chaos_doom/ chaos engineering for Terraform with the DOOM game +, I wanted to make a version for Teeworlds, specifically for its version 0.7 (its latest version). + +The difference with the DOOM version is that in this project, a player must capture the flag for a Terraform resource to be randomly destroyed. + +## How does it work? + +When configuring a Teeworlds server, the values below can be entered. + +```bash +# Econ configuration +ec_port 7000 +ec_password "hello_world" +ec_output_level 2 +``` + +These are prefixed with ec_ because they are associated with the econ server. This configuration binds a TCP port which will expose the Telnet protocol-based econ server. + +Through the latter, we'll be able to retrieve events from the Teeworlds server, such as a message sent, a player killed or a flag captured ! + +## Demonstration + +This demonstration has been realized with the example Terraform project, every steps to reproduce it are detailed in the README file on the repository. + +## Links + +=> https://github.com/theobori/terraform-teeworlds https://github.com/theobori/terraform-teeworlds + +=> https://github.com/theobori/teeworlds-econ https://github.com/theobori/teeworlds-econ diff --git a/public_gemini/tinychip.gmi b/public_gemini/tinychip.gmi index a999d2c..90e3cb3 100644 --- a/public_gemini/tinychip.gmi +++ b/public_gemini/tinychip.gmi @@ -1,10 +1,10 @@ -# CHIP-8 emulator -## 2023-03-08 +# CHIP-8 emulator +## 2023-03-08 I wanted to learn the basics of emulator development and emulation in general. So I decided to make a CHIP-8 emulator. 
-In fact it's a misuse of language to say that it's an "emulator" because CHIP-8 is a language, so we should rather say "interpreter". +In fact it's a misuse of language to say that it's an emulator because CHIP-8 is a language, so we should rather say interpreter. -## How doe it works ? +## How does it work ? So, basically there are three main components that make it works. The CPU, the API and the Core (kernel). @@ -22,4 +22,4 @@ I implemented the 36 instructions + the 4 I was taking before to be compatible w ## Links -=> https://github.com/theobori/tinychip https://github.com/theobori/tinychip +=> https://github.com/theobori/tinychip https://github.com/theobori/tinychip diff --git a/public_gopher/callviz.gph b/public_gopher/callviz.gph new file mode 100644 index 0000000..22f517c --- /dev/null +++ b/public_gopher/callviz.gph @@ -0,0 +1,55 @@ +A toy to visualize recursive function calls +2024-06-29 +Last edit: 2024-06-29 +--------------------- + +Recently, I did a little project in +[h|Python|URL:https://python.org|tilde.pink|70] + to visualise function calls, especially recursive functions. + +It takes the form of a +[h|Python|URL:https://python.org|tilde.pink|70] + decorator applied to the desired functions. The data structure used is fairly basic, a tree with nodes that have a parent and an indefinite number of children. Each node represents a function call, and the nodes also include the arguments passed to the function when it is called and, optionally, a return value. + +To generate a visual and have an overview of all the function calls, I used +[h|Graphviz|URL:https://graphviz.org/|tilde.pink|70] + to manage a graph and save it as a file (DOT, SVG, PNG, etc.). + +The decorator also supports memoization, which can also be represented on the final visual. + +## How is it used? + +These are two clear examples of how the decorator is used. + +```python +from callviz.core import callviz, set_output_dir + +set_output_dir("out") + +@callviz( + _format="png", + memoization=True, + open_file=True, + show_node_result=True, +) +def fib(n: int): + if n < 2: + return n + + return fib(n - 2) + fib(n - 1) + +@callviz(_format="png", show_link_value=False) +def rev(arr, new): + if arr == []: + return new + + return rev(arr[1:], [arr[0]] + new) + +fib(5) +rev(list(range(6, 0, -1)), []) +``` + +## Links + +[h|https://github.com/theobori/callviz|URL:https://github.com/theobori/callviz|tilde.pink|70] + diff --git a/public_gopher/clox.gph b/public_gopher/clox.gph new file mode 100644 index 0000000..a8a84ac --- /dev/null +++ b/public_gopher/clox.gph @@ -0,0 +1,75 @@ +My bytecode VM Lox interpreter +2024-11-06 +Last edit: 2024-11-06 +--------------------- + +The aim of this post is to describe the general operation of the program and some of the mechanisms that we consider to be of interest. For full details, a link to the source code is available at the bottom of the page. + +So I continued with “Crafting Interpreters” by Robert Nystrom after making [My Tree-Walker Lox interpreter](/posts/jlox). In this part I tried to do as many challenges as possible and really understand how a VM bytecode works. + +This version is written in C, which means we have to write a lot of code ourselves, but we don't use any external libraries. + +## Compiler + +The primary purpose of our compiler is to generate a chunk of code in bytecode form for interpretation by our bytecode virtual machine. Here are a few interesting features of the front end. 
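As a reminder of what a chunk is, here is a simplified C sketch: a growable array of opcodes plus the constants they refer to. It follows the book's layout, with a deliberately simplified Value type.

```c
#include <stdint.h>

typedef double Value; /* simplified: the real Value is a tagged union */

typedef struct {
  int count;
  int capacity;
  Value *values;
} ValueArray;

typedef struct {
  int count;
  int capacity;
  uint8_t *code;        /* the bytecode itself */
  int *lines;           /* source line of each byte, for error reporting */
  ValueArray constants; /* literals referenced by the opcodes */
} Chunk;
```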
+ +### Scanner + +The token scanner is very classic, with only one thing to say: the function responsible for identifying the language's native keywords is very dirty. The author has chosen to use a large switch statement instead of implementing a sorting function, which is certainly powerful but not very elegant. + +### Parser + +An interesting point to note is that the author chose not to use a syntax tree for the front end. We therefore implemented a single-pass compiler (directly converts compile units into bytecode). + +We also implemented a Vaughan Pratt's parser, in our case a “top-down operator precedence parser”. This means we have to define operator precedence in advance. Here's what it looks like in code. + +```c +typedef enum { + PREC_NONE, + PREC_ASSIGNMENT, // = + PREC_OR, // or + PREC_AND, // and + PREC_EQUALITY, // == != + PREC_COMPARISON, // < > <= >= + PREC_TERM, // + - + PREC_FACTOR, // * / + PREC_UNARY, // ! - + PREC_CALL, // . () + PREC_PRIMARY +} Precedence; +``` + +This precedence is simply used to control the parsing of expressions. A rule with a lower precedence than the last parsed expression is not allowed. + +## Bytecode + +To manage conditions, we emit `OP_JUMP` operation code for conditions. If a condition expression is evaluated to false, it jumps to the end of the conditionnal block / expression. To do this, we use the concept of backpatching: we overwrite the immediate value of the instruction in the chunk during compilation. + +In my implementation, all immediate values are encoded on 8 bits, with the exception of constants, which have a size of 24 bits. + +## Virtual Machine + +The VM is centered on a stack where we push operands, local variables, etc.. + +Everything at runtime is managed by callframes, even the top-level code is embed within a function object. + +## Example + +Here is a simple Lox example that can be evaluated by my interpreter. + +```text +fun fib(n) { + if (n < 2) { + return n; + } + + return fib(n - 2) + fib(n - 1); +} + +print fib(10); +``` + +## Links + +[h|https://github.com/theobori/lox-virtual-machine|URL:https://github.com/theobori/lox-virtual-machine|tilde.pink|70] + diff --git a/public_gopher/ebpf.gph b/public_gopher/ebpf.gph index cf9f3f1..8135134 100644 --- a/public_gopher/ebpf.gph +++ b/public_gopher/ebpf.gph @@ -1,10 +1,12 @@ -eBPF essentials +My eBPF exploration 2024-01-11 Last edit: 2024-01-11 --------------------- Having discovered eBPF and read a few books about it, I'm writing here the essentials to remember about the basics. It's mainly a mix of my personal notes from the books "Learning eBPF" by Liz Rice and "Linux Observability with BPF" by David Calavera. The aim is to write down the essentials without going into too much technical detail, a sort of memo. +You can find my eBPF (XDP) projects at the bottom of the page. + ## What is eBPF ? eBPF stands for extended Berkeley Packet Filter. It's a virtual machine with a minimalist instructions set in the kernel (Linux) that lets you run BPF programs from user space. These BPF programs are attached to objects in the kernel and executed when these objects are triggered by events. diff --git a/public_gopher/homelab.gph b/public_gopher/homelab.gph new file mode 100644 index 0000000..699a9b3 --- /dev/null +++ b/public_gopher/homelab.gph @@ -0,0 +1,168 @@ +My homelab +2024-06-06 +Last edit: 2024-06-06 +--------------------- + +I've got an old laptop that I don't use anymore, so I thought I'd turn it into a server and deploy some free, open-source web services on it. 
+ +The aim is to create a private homelab, i.e. the machine should only be accessible via the local network. None of the services are exposed to the Internet, with the exception of Wireguard, which lets me access the services from the outside. + +The aim of this post is to present the main steps I've taken and explain how the homelab works. + +My laptop is an +[h|ASUS ROG G750|URL:https://laptopmedia.com/series/asus-rog-g750/|tilde.pink|70] + with 8GB of memory and 2 HDDs of around 600GB each. It hasn't been used for about five or six years and the battery is dead. + +## First steps + +First, I decided to make an old USB key bootable. I install +[h|Ventoy|URL:https://www.ventoy.net/|tilde.pink|70] + on it to be able to load different image disks (ISO) without having to rewrite each time directly on the USB key. + +I put +[h|Memtest86+](https://www.memtest.org/) to test the memory, [shredos.x86_64](https://github.com/PartialVolume/shredos.x86_64) to wipe the HDDs and finally [Debian 12|URL:https://cdimage.debian.org/debian-cd/12.5.0/amd64/iso-cd/|tilde.pink|70] + which will be the main OS. + +So when I boot on the USB key, it loads the "multiboot" boot-loader (Ventoy) and I can then load one of the three programs. + +## Pre configuration + +To be able to deploy the system configuration and reproduce it later, I'm writing an Ansible playbook and testing it on a local VM (`virt-manager` + KVM). + +The entire configuration is available at the bottom of the page. + +## TLS certificates + +I want communication with web applications to be encrypted and secure, so I need an HTTPS server, so I need TLS certificates and, to make things easier, a domain name. + +For the domain name I used +[h|Duck DNS](https://www.duckdns.org/) and reserved the sub-domain [theobori.duckdns.org|URL:https://theobori.duckdns.org|tilde.pink|70] + which for the moment corresponds to the IPv4 of my virtual machine accessible only from the host system. + +In fact, I only need to manage one certificate with two SANs: +- `theobori.duckdns.org` +- `*.theobori.duckdns.org` + +## Services + +Every application is deployed with the Ansible playbook are conteuneurized and managed with Docker. + +They are accessible only through port 443 managed by +[h|Traefik](https://traefik.io/). Each sub-domain of [theobori.duckdns.org|URL:https://theobori.duckdns.org|tilde.pink|70] + corresponds to a service, with the exception of the homepage, which is associated with the domain itself. + +## Firewall + +To filter incoming network traffic, I manipulate iptables with the ufw tool. There are only four ports open as declared below in the Ansible playbook configuration. + +```yaml +- role: weareinteractive.ufw + tags: ufw + ufw_enabled: true + ufw_packages: ["ufw"] + ufw_rules: + - logging: "full" + - rule: allow + to_port: "443" + - rule: allow + to_port: "80" + - rule: allow + {% raw %} to_port: "{{ ssh_port }}" {% endraw %} + # Wireguard + - rule: allow + to_port: "51820" + proto: udp + # Delete default rule + - rule: allow + name: Anywhere + delete: true + ufw_manage_config: true + ufw_config: + IPV6: "yes" + DEFAULT_INPUT_POLICY: DROP + DEFAULT_OUTPUT_POLICY: ACCEPT + DEFAULT_FORWARD_POLICY: DROP + DEFAULT_APPLICATION_POLICY: SKIP + MANAGE_BUILTINS: "no" + IPT_SYSCTL: /etc/ufw/sysctl.conf + IPT_MODULES: "" +``` + +## Identity provider + +Services with integration for protocols to verify user identity or determine permissions are all linked to the +[h|Authentik|URL:https://goauthentik.io/|tilde.pink|70] + user directory. 
+
+I needed OAuth2 for
+[h|Portainer|URL:https://www.portainer.io/|tilde.pink|70]
+ and LDAP for several other services such as
+[h|Owncloud|URL:https://owncloud.com/|tilde.pink|70]
+.
+
+If I remember correctly, the OAuth2 Outpost is embedded in the application by default, whereas the LDAP Outpost had to be configured with specific parameters for Docker.
+
+Here's a diagram of several services trying to retrieve the identity of an
+[h|Authentik|URL:https://goauthentik.io/|tilde.pink|70]
+ user.
+
+## Access management
+
+With
+[h|Authentik|URL:https://goauthentik.io/|tilde.pink|70]
+, group policies have been created to authorize only certain groups of users to access certain services.
+
+For example, for
+[h|Jellyfin|URL:https://jellyfin.org/|tilde.pink|70]
+, only users in the `Jellyfin` group are authorized to connect.
+
+In this way, I was able to secure all administration services by authorizing only users in groups reserved for administration.
+
+I also used
+[h|Traefik|URL:https://traefik.io/|tilde.pink|70]
+ and
+[h|Authentik|URL:https://goauthentik.io/|tilde.pink|70]
+ to secure access to services not protected by authentication.
+
+I added middleware to the reverse proxy to enable HTTP ForwardAuth with
+[h|Authentik|URL:https://goauthentik.io/|tilde.pink|70]
+. In practical terms, this places a login portal in front of the targeted web services.
+
+Let's say I want to access
+[h|duplicati.theobori.duckdns.org|URL:https://duplicati.theobori.duckdns.org|tilde.pink|70]
+; it could be schematized as follows.
+
+## Media stack
+
+One of the main objectives was to be able to manage movies and series and watch them from any device on the local network.
+
+So I set up a stack for managing and downloading media, which would then be streamed to devices by
+[h|Jellyfin|URL:https://jellyfin.org/|tilde.pink|70]
+.
+
+Here's what the media stack looks like.
+
+## Backup and restore
+
+To back up container data, I use
+[h|Duplicati|URL:https://duplicati.com/|tilde.pink|70]
+. It lets you encrypt data and manage retention very easily via a web interface.
+
+These backups can then be restored on my old computer.
+
+## Monitoring
+
+To keep abreast of service status, I've opted for
+[h|Uptime Kuma|URL:https://uptime.kuma.pet/|tilde.pink|70]
+, which will alert me via Discord when a service is down for n seconds.
+
+I also have a
+[h|Prometheus|URL:https://prometheus.io/|tilde.pink|70]
+ and
+[h|Grafana|URL:https://grafana.com/|tilde.pink|70]
+ stack that lets me collect metrics on the system and on Docker containers. As for
+[h|Uptime Kuma|URL:https://uptime.kuma.pet/|tilde.pink|70]
+, I'm alerted by Discord according to limits defined for RAM and available storage space, for example.
+
+This is how the monitoring stack looks.
+
+## Final home page
+
+Here's an overview of the dashboard, featuring all the services exposed to the local network. In a way it's the end result of the service implementation.
+
+## Links
+
+[h|https://github.com/theobori/homelab|URL:https://github.com/theobori/homelab|tilde.pink|70]
+
diff --git a/public_gopher/index.gph b/public_gopher/index.gph
index a1bd134..cc6c19d 100644
--- a/public_gopher/index.gph
+++ b/public_gopher/index.gph
@@ -6,31 +6,32 @@
 
 Hi, I'm Théo,
 
-I support FOSS, FLOSS and pubnix(es) values, I love Linux and UNIX systems.
+I support F(L)OSS and pubnix values; I love UNIX systems, and I also really like Arch Linux and Nix.
+Currently I'm maintaining teedata.net (skins.tw until 2024) and I offer free services that respect privacy.
 
-Everything I make is open source and available on GitHub and Gitea.
-I also have a CTFtime and a LinkedIn profile.
-
-Currently I'm maintaining teedata.net (skins.tw until 2024).
-I offer free services that respect privacy.
+If you're interested, you can have a look at my blog posts; everything I make is open source and available on GitHub and Gitea.
 
 [h|Gitea|URL:https://git.theobori.cafe/nagi|tilde.pink|70]
 [h|GitHub|URL:https://www.github.com/theobori|tilde.pink|70]
-[h|LinkedIn|URL:https://www.linkedin.com/in/theo-bori|tilde.pink|70]
-[h|CTFtime|URL:https://ctftime.org/user/67138|tilde.pink|70]
 [h|teedata.net|URL:https://teedata.net|tilde.pink|70]
 [h|skins.tw|URL:https://teedata.net|tilde.pink|70]
 [h|services|URL:https://services.theobori.cafe|tilde.pink|70]
 
-## Contact
-I can be reached via Discord (b0th) or via nagi@cock.li.
-[1|PGP|/~nagi/pgp.gph|tilde.pink|70]
+Contact
+My links and contact details are available here.
+[h|LinkStack|URL:https://links.theobori.cafe|tilde.pink|70]
 
-## Other protocols
+Other protocols
 [h|HTTPS|URL:https://theobori.cafe|tilde.pink|70]
 gemini://tilde.pink/~nagi
 
-## Posts
+Posts
+[1|My bytecode VM Lox interpreter - nov 2024|/~nagi/clox.gph|tilde.pink|70]
+[1|A toy to visualize recursive function calls - jun 2024|/~nagi/callviz.gph|tilde.pink|70]
+[1|My Nix exploration - jun 2024|/~nagi/nix.gph|tilde.pink|70]
+[1|My homelab - jun 2024|/~nagi/homelab.gph|tilde.pink|70]
+[1|Teeworlds Terraform chaos engineering - jun 2024|/~nagi/terraform_chaos_teeworlds.gph|tilde.pink|70]
+[1|My Tree-Walker Lox interpreter - mar 2024|/~nagi/jlox.gph|tilde.pink|70]
 [1|Porting X11 apps to OpenBSD - mar 2024|/~nagi/openbsd_ports.gph|tilde.pink|70]
 [1|Manage dotfiles with chezmoi - mar 2024|/~nagi/chezmoi.gph|tilde.pink|70]
 [1|eBPF essentials - jan 2024|/~nagi/ebpf.gph|tilde.pink|70]
@@ -41,7 +42,7 @@ gemini://tilde.pink/~nagi
 [1|OpenSSH port knocking with UFW - oct 2023|/~nagi/knockd_ufw.gph|tilde.pink|70]
 [1|Teeworlds utilities - jul 2023|/~nagi/teeworlds-utilities.gph|tilde.pink|70]
 [1|Terraform NeuVector provider - jun 2023|/~nagi/tf-neuvector.gph|tilde.pink|70]
-[1|Terraform chaos engineering - jun 2023|/~nagi/tf-doom.gph|tilde.pink|70]
+[1|Terraform chaos engineering - jun 2023|/~nagi/terraform_chaos_doom.gph|tilde.pink|70]
 [1|DOOM modding library - may 2023|/~nagi/tinywad.gph|tilde.pink|70]
 [1|Interesting websites - mar 2023|/~nagi/websites.gph|tilde.pink|70]
 [1|CHIP-8 emulator - mar 2023|/~nagi/tinychip.gph|tilde.pink|70]
diff --git a/public_gopher/jlox.gph b/public_gopher/jlox.gph
new file mode 100644
index 0000000..df91e40
--- /dev/null
+++ b/public_gopher/jlox.gph
@@ -0,0 +1,115 @@
+My Tree-Walker Lox interpreter
+2024-03-22
+Last edit: 2024-03-22
+---------------------
+
+I wanted to learn more about designing an interpreter, so I looked around and found the free "Crafting Interpreters" by Robert Nystrom.
+
+I read parts I and II, which focus on concepts, common techniques and language behavior. Since I have recently read these parts, writing helps me to better understand and even re-understand certain things.
+
+The aim was to have a Lox interpreter that at least supported functions and closures, so we could have a taste of the basics.
+
+## What is Lox ?
+
+To sum up
+[h|this page|URL:https://craftinginterpreters.com/the-lox-language.html|tilde.pink|70]
+, Lox is a small, high-level scripting language, with dynamic types and automatic memory management. It is similar to JavaScript, Lua and Scheme.
+
+A cool fact is that Lox is Turing complete, meaning it can simulate a Turing machine.
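+
+To give a rough idea of the syntax, here is a tiny snippet I wrote for illustration (it is not taken from the book), showing dynamic typing and a closure.
+
+```text
+var value = 1;
+value = "now a string"; // dynamic typing
+
+fun makeCounter() {
+  var count = 0;
+  fun increment() {
+    count = count + 1;
+    print count;
+  }
+  return increment;
+}
+
+var counter = makeCounter();
+counter(); // 1
+counter(); // 2
+```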
+
+## Essential basics
+
+I've learned some key concepts; here are a few of the most important.
+
+### Scanning
+
+Scanning is also known as lexing or lexical analysis. It takes a linear stream of characters and chunks them into tokens (words).
+
+The scanner must group characters into the smallest possible sequence that represents something. These blobs of characters are called lexemes.
+
+### Parsing
+
+It takes the flat sequence of tokens and builds a tree structure that represents the nested nature of the grammar. This tree is called an Abstract Syntax Tree (AST).
+
+The Lox interpreter I made is a Tree-Walk Interpreter, meaning it traverses the AST one branch and leaf at a time and evaluates each node as it goes.
+
+### Context-Free Grammars
+
+A context-free grammar is a formal grammar: a finite set of rules that can define an infinite set of strings that are in the grammar, specifying which strings are valid and which are not.
+
+### Rules for grammars
+
+We use rules to generate strings that are in the grammar; this is called derivation, because each string is derived from the rules of the grammar.
+
+> *The rules are called productions because they produce strings in the grammar*
+
+Each production has a head (its name) and a body (a list of symbols).
+
+A symbol can be:
+- A terminal: it is like an endpoint, the rule simply produces it.
+- A non-terminal: it refers to another rule in the grammar.
+
+A grammar example from the book, see below.
+
+```python
+breakfast → protein ( "with" breakfast "on the side" )?
+          | bread ;
+
+protein → "really"+ "crispy" "bacon"
+        | "sausage"
+        | ( "scrambled" | "poached" | "fried" ) "eggs" ;
+
+bread → "toast" | "biscuits" | "English muffin" ;
+```
+
+The punctuation is based on regex behavior; for example, the `?` means the element is optional.
+
+So here, a valid string could be the one below.
+
+```python
+"poached" "eggs" "with" "toast" "on the side"
+```
+
+### Recursive Descent Parsing
+
+The best explanation here is probably the one in the book.
+
+> *Recursive descent is considered a top-down parser because it starts from the top or outermost grammar rule (here expression ) and works its way down into the nested subexpressions before finally reaching the leaves of the syntax tree.*
+
+## Examples
+
+Here are some Lox examples that can be evaluated by my interpreter.
+
+```text
+var b = 1;
+var a = "hello";
+
+{
+  var a = b + b;
+
+  print a;
+}
+
+print a;
+
+fun fibonacci(n) {
+  if (n <= 1) return n;
+  return fibonacci(n - 1) + fibonacci(n - 2);
+}
+
+print fibonacci(5);
+
+print "helo" + "world";
+
+fun echo(n) {
+  print n;
+  return n;
+}
+
+print echo(echo(1) + echo(2)) + echo(echo(4) + echo(5));
+```
+
+## Links
+
+[h|https://github.com/theobori/tinylox|URL:https://github.com/theobori/tinylox|tilde.pink|70]
+
diff --git a/public_gopher/nix.gph b/public_gopher/nix.gph
new file mode 100644
index 0000000..74fecae
--- /dev/null
+++ b/public_gopher/nix.gph
@@ -0,0 +1,444 @@
+My Nix exploration
+2024-06-24
+Last edit: 2024-06-24
+---------------------
+
+Here I share some notes and other things I've learned about Nix that I find interesting. The content of this post is mainly about me learning Nix; it's not about understanding the whole tool and language.
+
+Also, it's important to note that I use Nix as a non-NixOS user.
+
+## What is Nix?
+
+Nix is actually several things!
+
+It's a cross-platform package manager. It would be a little more accurate to say that it's a deployment tool used as a package manager.
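+
+As a quick illustration (assuming a standard Nix installation with a nixpkgs channel available), it can for example drop you into a throwaway shell where a package is present, without installing anything system-wide.
+
+```bash
+# enter an ephemeral shell containing GNU hello, run it, then throw the shell away
+nix-shell -p hello --run hello
+```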
+ +And it's also a purely functional programming language, dynamically typed and lazily evaluated. + +## Learning the programming language + +I started by learning the basics of the language and then went on to explore it in a bit more depth. + +### The basics + +I read +[h|Nix language basics](https://nix.dev/tutorials/nix-language#reading-nix-language) and to get used to the language I practised with [A tour of Nix|URL:https://nixcloud.io/tour|tilde.pink|70] + which has several levels of difficulty from "easy" to "hard". + +One interesting thing about this language is that it has only one argument per function. To simulate several arguments, you can, for example, write a function with one argument that returns a function with one argument that returns a function with one argument, and so on. The syntax of the language makes it easy to do this. + +I was taught that it has a name, it's called +[h|Currying|URL:https://en.wikipedia.org/wiki/Currying|tilde.pink|70] +. It's the transformation of a function with several arguments into a function with one argument that returns a function on the rest of the arguments. Here's an example with arguments `3` and `4`. + +```nix +nix-repl> (a: b: a + b) 3 4 +7 +``` + +A Python equivalent might be something like the following. + +```python +>>> (lambda a: lambda b: a + b)(3)(4) +7 +``` + +Another solution that is often used, particularly in +[h|Nixpkgs|URL:https://github.com/NixOS/nixpkgs|tilde.pink|70] +, is to have an attribute set as a parameter to the function, and to use the attributes as arguments. For example, this might look like the expression below. + +```nix +nix-repl> ({a, b}: a + b){a = 3; b = 4;} +7 +``` + +### Fake dynamic binding + +Although the blog post +[h|How to Fake Dynamic Binding in Nix|URL:http://r6.ca/blog/20140422T142911Z.html|tilde.pink|70] + talks about this very well, I find it interesting to offer my own thoughts and approach. + +The language is statically scoped, i.e. binding decisions are made according to the scope at declaration time. + +Let's look at the `rec` keyword, which allows an attribute set to access its own attributes (recursive binding). Here's an example. + +```nix +nix-repl> rec { a = 1; b = a + 1;} +{ + a = 1; + b = 2; +} +``` + +This is an interesting feature, but it remains static because the binding is done before the runtime. This poses problems, particularly when it comes to overriding attributes, as shown in the example below. + +```nix +nix-repl> rec { a = 1; b = a + 1; } // { a = 10; } +{ + a = 10; + b = 2; +} +``` + +In this example, we would like `b` to be equal to `11`, not `2`. + +To solve this problem, we can look at the concept of a fixed point. A fixed point is a value of `x` that validates the equation `x = f(x)`. + +We can therefore write the following function. + +```nix +nix-repl> fix = f: let + result = f result; +in + result +``` + +So here we have the function `fix` which takes a function `f` as a parameter and returns the fixed point `result` of the function `f`. + +You might be tempted to say that the `f` function calls itself ad infinitum (`f(f(f(f(..))))`), but Nix evaluates expressions lazily, so this isn't the case. + +We can literally see that the `f` function returns a fixed point (`result`), because `result = f result`, which respects the definition of a fixed point. + +The `fix` function will allow us to emulate the `rec` keyword, as shown in the example below. 
+```nix
+nix-repl> fix (self: { a = 3; b = 4; c = self.a + self.b; })
+{
+  a = 3;
+  b = 4;
+  c = 7;
+}
+```
+
+To better understand how it works, I've written the result of the `fix` function differently with the argument used previously.
+
+```nix
+nix-repl> let
+  result = { a = 3; b = 4; c = result.a + result.b;};
+in
+  { a = 3; b = 4; c = result.a + result.b;}
+{
+  a = 3;
+  b = 4;
+  c = 7;
+}
+```
+
+Finally, I've written the following function, which will allow the attributes to be overridden dynamically as initially intended.
+
+```nix
+nix-repl> fix = let
+  fixWithOverride = f: overrides: let
+    result = (f result) // overrides;
+  in
+    result // { override = x: fixWithOverride f x; };
+in
+f: fixWithOverride f {}
+
+attrFunction = self: { a = 3; b = 4; c = self.a+self.b; }
+
+attrFunctionFixedPoint = fix attrFunction
+
+nix-repl> attrFunctionFixedPoint
+{
+  a = 3;
+  b = 4;
+  c = 7;
+  override = «lambda override @ «string»:5:30»;
+}
+
+nix-repl> attrFunctionFixedPoint.override { b = 1; }
+{
+  a = 3;
+  b = 1;
+  c = 4;
+  override = «lambda override @ «string»:5:30»;
+}
+```
+
+## The essential Nix tool
+
+As already mentioned, the main use of Nix is cross-platform package management. In this section I'm just trying to share and summarise some of the essential parts of my notes. If you want more details, I recommend you read the excellent
+[h|Nix Pills|URL:https://nixos.org/guides/nix-pills/|tilde.pink|70]
+. It's rather long but well worth the read!
+
+### How does it work ?
+
+To sum up, I'd say that the Nix language has a very interesting native function called `derivation` (
+[h|see documentation|URL:https://nix.dev/manual/nix/2.22/language/derivations|tilde.pink|70]
+) on which many Nix expressions are based. I'm not going to redefine the term because the documentation has a very comprehensible version, but the important thing to remember is that a derivation is a construction specification; it's an immutable Nix building block. With another package manager, you could see it as a literal package.
+
+Nix builds these derivations in stages, much like a compiler toolchain.
+
+The `.drv` files contain the specification of how to build the derivation; they are intermediate files comparable to `.o` files, while the `.nix` files are comparable to `.c` files.
+
+The construction result is immutable and will be stored in `/nix/store/`, which is kept in sync with an
+[h|SQLite|URL:https://www.sqlite.org/|tilde.pink|70]
+ database. I said it was immutable: this works because Nix derives the hash in the `/nix/store/` path from the input derivation (not from the construction result).
+
+It's pretty hard to imagine all this, so I'll give you a concrete example. Let's imagine I want to create a derivation for the famous software
+[h|GNU Hello|URL:https://www.gnu.org/software/hello/|tilde.pink|70]
+. The Nix derivation could look something like this.
+
+```nix
+# default.nix
+
+let
+  pkgs = import <nixpkgs> { };
+in
+  {
+    hello = pkgs.stdenv.mkDerivation {
+      pname = "hello";
+      version = "2.12.1";
+
+      src = fetchTarball {
+        url = "https://ftp.gnu.org/gnu/hello/hello-2.12.1.tar.gz";
+        sha256 = "1kJjhtlsAkpNB7f6tZEs+dbKd8z7KoNHyDHEJ0tmhnc=";
+      };
+    };
+  }
+```
+
+> The `mkDerivation` function is based on the `derivation` builtin function.
+
+It can be built with the following command.
+
+```bash
+nix-build
+```
+
+The build result has been created in `/nix/store/x9cc4jsylk5q01iaxmxf941b59chws5h-hello-2.12.1` and a symbolic link named `result` pointing to this folder has been created in the current folder. We can then find the binary in `./result/bin/hello`.
+
+Before the build, a `.drv` file was created, which can be found by running the following command.
+```bash
+nix derivation show ./result | jq "keys[0]"
+```
+
+The full path to the `.drv` file is found in the first key of the JSON object, so the path to the `.drv` file is `/nix/store/dp5z62k3chf019biikg77p2acmz17phx-hello-2.12.1.drv`.
+
+As it is not easy to read on its own, we can use `nix derivation show` to display the construction information it contains with the following command.
+
+```bash
+nix derivation show $(nix derivation show ./result | jq "keys[0]" | tr -d "\"")
+# Or
+nix derivation show /nix/store/dp5z62k3chf019biikg77p2acmz17phx-hello-2.12.1.drv
+# ^
+# | Same output
+# v
+nix derivation show ./result
+```
+
+### Nixpkgs
+
+In the Nix expression used previously (the
+[h|GNU Hello|URL:https://www.gnu.org/software/hello/|tilde.pink|70]
+ derivation), I used the `mkDerivation` function from `stdenv`.
+
+This function is not a builtin; it comes from the `pkgs` identifier which has the value `import <nixpkgs> { };`.
+
+Before explaining this import, I think it's very important to understand what
+[h|Nixpkgs|URL:https://github.com/NixOS/nixpkgs|tilde.pink|70]
+ is. It's a Git repository that contains all the Nix expressions and modules. When this folder is evaluated, it produces an attribute set containing `stdenv`, which is itself an attribute set containing our `mkDerivation` function.
+
+Getting back to `pkgs`, `<nixpkgs>` is just a special Nix syntax, which, when evaluated, gives a path to a folder containing a collection of Nix expressions, i.e. Nixpkgs.
+
+Incidentally `<nixpkgs>` has an equivalence in Nix as shown below.
+
+```nix
+nix-repl> <nixpkgs>
+/home/nagi/.nix-defexpr/channels/nixpkgs
+
+nix-repl> builtins.findFile builtins.nixPath "nixpkgs"
+/home/nagi/.nix-defexpr/channels/nixpkgs
+
+nix-repl> :p builtins.nixPath
+[
+  {
+    path = "/home/nagi/.nix-defexpr/channels";
+    prefix = "";
+  }
+]
+```
+
+### Managing multiple Python versions
+
+One of the advantages of Nix is that it naturally offers the possibility of managing several versions of the same application. Taking
+[h|Python|URL:https://www.python.org/|tilde.pink|70]
+ as an example, let's say I want a Nix shell with version 3.7 and version 3.13.
+
+To do this, we can look for a revision of
+[h|Nixpkgs|URL:https://github.com/NixOS/nixpkgs|tilde.pink|70]
+ in which Python was at version 3.7 and target that specific revision of
+[h|Nixpkgs|URL:https://github.com/NixOS/nixpkgs|tilde.pink|70]
+ in our Nix expression.
+
+For that, there's the
+[h|flox|URL:https://floxdev.com/|tilde.pink|70]
+ tool which works very well, but to make it easier to understand I prefer to use
+[h|nixhub.io|URL:https://www.nixhub.io|tilde.pink|70]
+.
+
+So I'm looking for a version of the Nix packages that corresponds to Python version 3.7, and I find `nixpkgs/aca0bbe791c220f8360bd0dd8e9dce161253b341#python37`.
+ +```nix +# shell.nix + +let + pkgs = import (fetchTarball "https://github.com/NixOS/nixpkgs/tarball/nixos-23.11") { }; + nixpkgs-python = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/aca0bbe791c220f8360bd0dd8e9dce161253b341.tar.gz") { }; +in + pkgs.mkShell { + buildInputs = [ + nixpkgs-python.python37 + pkgs.python313 + ]; + } +``` + +You can build Python derivations and enter a Nix shell with the following command. + +```bash +nix-shell +``` + +And we see that we have access to the two versions requested with the commands `python3.7` and `python3.13` ! + +## A Virtual environment in Python with Nix flakes + +I've recently created a development environment with Nix flakes ( +[h|see documentation|URL:https://nixos.wiki/wiki/Flakes|tilde.pink|70] +), it's very handy as it provides a ready to use environment for Python 3.11 with the desired modules. + +Below is a Nix expression I wrote for the Python module +[h|callviz|URL:https://pypi.org/project/callviz/|tilde.pink|70] +, it has all the necessary dependencies and a virtual Python environment. + +```nix +# flake.nix + +{ + inputs = { + nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable"; + }; + + outputs = + { self, nixpkgs }: + let + supportedSystems = [ + "x86_64-linux" + "aarch64-linux" + "x86_64-darwin" + "aarch64-darwin" + ]; + + forEachSupportedSystem = + f: nixpkgs.lib.genAttrs supportedSystems (system: f { pkgs = import nixpkgs { inherit system; }; }); + in + { + # ... + # I usually also declare a default package, a code checker and formatter + devShells = forEachSupportedSystem ( + { pkgs }: + { + default = pkgs.mkShell { + venvDir = ".venv"; + packages = + with pkgs; + [ + python3 + graphviz + ] + ++ (with pkgs.python3Packages; [ + pip + venvShellHook + graphviz + ]); + }; + } + ); + }; +} +``` + +Note that the default package and the default development shell are compatible with all systems (`supportedSystems`)! + +To realise the derivations and enter the Nix shell, I can run the following command. +```bash +nix develop +``` + +## Nixpkgs contribution + +Once I'd finished exploring and learning Nix, I wanted to make a package for [Super Mario War](http://smwstuff.net/game) and add it to +[h|Nixpkgs|URL:https://github.com/NixOS/nixpkgs|tilde.pink|70] +. + +Here's what the package looks like. + +```nix +{ + lib, + stdenv, + fetchFromGitHub, + cmake, + pkg-config, + enet, + yaml-cpp, + SDL2, + SDL2_image, + SDL2_mixer, + zlib, + unstableGitUpdater, + makeWrapper, +}: +stdenv.mkDerivation (finalAttrs: { + pname = "supermariowar"; + version = "2023-unstable-2024-09-17"; + + src = fetchFromGitHub { + owner = "mmatyas"; + repo = "supermariowar"; + rev = "6b8ff8c669ca31a116754d23b6ff65e42ac50733"; + hash = "sha256-P0jV7G81thj0UJoYLd5+H5SjjaVu4goJxc9IkbzxJgs="; + fetchSubmodules = true; + }; + + nativeBuildInputs = [ + cmake + pkg-config + makeWrapper + ]; + + buildInputs = [ + enet + yaml-cpp + SDL2 + SDL2_image + SDL2_mixer + zlib + ]; + + cmakeFlags = [ "-DBUILD_STATIC_LIBS=OFF" ]; + + postInstall = '' + mkdir -p $out/bin + + for app in smw smw-leveledit smw-worldedit; do + makeWrapper $out/games/$app $out/bin/$app \ + --add-flags "--datadir $out/share/games/smw" + done + + ln -s $out/games/smw-server $out/bin/smw-server + ''; + + passthru.updateScript = unstableGitUpdater { }; + + meta = { + description = "A fan-made multiplayer Super Mario Bros. 
style deathmatch game"; + homepage = "https://github.com/mmatyas/supermariowar"; + changelog = "https://github.com/mmatyas/supermariowar/blob/${finalAttrs.src.rev}/CHANGELOG"; + license = lib.licenses.gpl2Plus; + maintainers = with lib.maintainers; [ theobori ]; + mainProgram = "smw"; + platforms = lib.platforms.linux; + }; +}) +``` + diff --git a/public_gopher/openbsd_ports.gph b/public_gopher/openbsd_ports.gph index ab96c93..28e6483 100644 --- a/public_gopher/openbsd_ports.gph +++ b/public_gopher/openbsd_ports.gph @@ -19,7 +19,7 @@ Before making the game compatible with the distribution, it's best to fetch the ## OpenBSD environment -My test environment is just a virtual machine managed by VirtualBox on which +My test environment is a virtual machine managed by `virt-manager` (using libvirt to interact with KVM) on which [h|OpenBSD 7.4](https://www.openbsd.org/74.html) has been installed, following the steps [here|URL:https://www.openbsdhandbook.com/installation/|tilde.pink|70] . diff --git a/public_gopher/teeworlds-utilities.gph b/public_gopher/teeworlds-utilities.gph index 4ed38d5..62ba51d 100644 --- a/public_gopher/teeworlds-utilities.gph +++ b/public_gopher/teeworlds-utilities.gph @@ -13,10 +13,9 @@ So I decided to make my own toolbox to manipulate Teeworlds assets, which we use Indirectly, other people use it, for example, to render skins in a Discord channel that displays messages in real time (fokkonaut's Discord server) or in other projects like [h|TeeAssembler 2.0|URL:https://teeassembler.developer.li/|tilde.pink|70] - that used some part of the **`teeworlds-utilites`** code. + that used some part of the Teeworlds utilities code. ## Use case examples - ### Teeworlds skin rendering Render a Teeworlds 4K skin with default and custom colors. diff --git a/public_gopher/teeworlds.gph b/public_gopher/teeworlds.gph index 2eb298a..f8c8e6d 100644 --- a/public_gopher/teeworlds.gph +++ b/public_gopher/teeworlds.gph @@ -82,9 +82,9 @@ Now we can create and start a new container with the teeworlds client image we j I consider that you're using X as your windowing system, rather than something like Wayland or something else. -### X Window System +### X display server -So that the game can work and we can play it. I assume you are using the X window system and that you have a X server listening at a UNIX domain socket. +So that the game can work and we can play it. I assume you are using an X display server and that it is listening at a UNIX domain socket. That is why we are forwarding the `/tmp/.X11-unix/` directory that contains the UNIX domain socket(s) for the X server. diff --git a/public_gopher/terraform_chaos_doom.gph b/public_gopher/terraform_chaos_doom.gph new file mode 100644 index 0000000..612e576 --- /dev/null +++ b/public_gopher/terraform_chaos_doom.gph @@ -0,0 +1,29 @@ +DOOM Terraform chaos engineering +2023-06-03 +Last edit: 2023-06-03 +--------------------- + +I first saw +[h|kubedoom|URL:https://github.com/storax/kubedoom|tilde.pink|70] + and thought it was pretty cool, so I decided to do the same for Terraform, knowing that I was working with it for professional projects. + +The principle is very simple, each enemy represents a Terraform resource, if an enemy dies, the associated resource is destroyed. + +## How does it work? + +The main program is `terraform-doom`, which creates a UNIX socket, listens to it and simultaneously launches an X11 virtual server (Xvfb), a VNC server (x11vnc) attached to this X session and `psdoom` (DOOM writing to the UNIX socket). 
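+
+To make this more concrete, here is a minimal, purely illustrative sketch of the kill-to-destroy loop; the socket path, the message format and the exact commands are assumptions, not the real terraform-doom code.
+
+```bash
+# Read the resource names that psdoom writes to the UNIX socket,
+# one per killed enemy, and destroy the matching Terraform resource.
+socat -u UNIX-LISTEN:/tmp/terraform-doom.sock,fork - | while read -r resource; do
+  terraform destroy -auto-approve -target="$resource"
+done
+```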
+ +The binaries `Xvfb` and `x11vnc` are used to create a cross-platform graphical access to `psdoom` inside the container. + +At runtime `psdoom` will continuously write to the UNIX socket to signal `terraform-doom` to send Terraform resource information. When an enemy is killed, `psdoom` writes the associated resource name to the socket. + +Everything we've just described will be encapsulated in a Docker container. + +## Demonstration + +This demonstration has been realized with the example Terraform project, every steps to reproduce it are detailed in the README file on the repository. + +## Links + +[h|https://github.com/theobori/terraform-doom|URL:https://github.com/theobori/terraform-doom|tilde.pink|70] + diff --git a/public_gopher/terraform_chaos_teeworlds.gph b/public_gopher/terraform_chaos_teeworlds.gph new file mode 100644 index 0000000..119485f --- /dev/null +++ b/public_gopher/terraform_chaos_teeworlds.gph @@ -0,0 +1,34 @@ +Teeworlds Terraform chaos engineering +2024-06-05 +Last edit: 2024-06-05 +--------------------- + +After doing some [chaos engineering for Terraform with the DOOM game](/posts/terraform_chaos_doom/), I wanted to make a version for Teeworlds, specifically for its version 0.7 (its latest version). + +The difference with the DOOM version is that in this project, a player must capture the flag for a Terraform resource to be randomly destroyed. + +## How does it work? + +When configuring a Teeworlds server, the values below can be entered. + +```bash +# Econ configuration +ec_port 7000 +ec_password "hello_world" +ec_output_level 2 +``` + +These are prefixed with `ec_` because they are associated with the `econ` server. This configuration binds a TCP port which will expose the Telnet protocol-based econ server. + +Through the latter, we'll be able to retrieve events from the Teeworlds server, such as a message sent, a player killed or a flag captured ! + +## Demonstration + +This demonstration has been realized with the example Terraform project, every steps to reproduce it are detailed in the README file on the repository. + +## Links + +[h|https://github.com/theobori/terraform-teeworlds|URL:https://github.com/theobori/terraform-teeworlds|tilde.pink|70] + +[h|https://github.com/theobori/teeworlds-econ|URL:https://github.com/theobori/teeworlds-econ|tilde.pink|70] + diff --git a/public_gopher/tinychip.gph b/public_gopher/tinychip.gph index ba423fc..52a1544 100644 --- a/public_gopher/tinychip.gph +++ b/public_gopher/tinychip.gph @@ -5,9 +5,9 @@ Last edit: 2023-03-08 I wanted to learn the basics of emulator development and emulation in general. So I decided to make a CHIP-8 emulator. -In fact it's a misuse of language to say that it's an "emulator" because CHIP-8 is a language, so we should rather say "interpreter". +In fact it's a misuse of language to say that it's an `emulator` because CHIP-8 is a language, so we should rather say `interpreter`. -## How does it works ? +## How does it work ? So, basically there are three main components that make it works. The CPU, the API and the Core (kernel). diff --git a/public_gopher/tinywad.gph b/public_gopher/tinywad.gph index 0b7657f..104c4a2 100644 --- a/public_gopher/tinywad.gph +++ b/public_gopher/tinywad.gph @@ -5,9 +5,9 @@ Last edit: 2023-05-03 This project is a WAD library/manager, it can be used as a base for other WAD projects like a GUI, a CLI, etc.. -I have played around with some well known `IWAD` like `doom.wad` and `doom2.wad` (registered). 
+I have played around with some well known IWAD like `doom.wad` and `doom2.wad` (registered). -To test the `IWAD`/`PWAD` generated, I have used two engines: +To test the IWAD/PWAD generated, I have used two engines: - [h|GZDoom|URL:https://zdoom.org/index|tilde.pink|70] (tests + screenshots) @@ -43,7 +43,7 @@ fn main() -> Result<(), WadError> { } ``` -So basically (above) it loads a first `IWAD` file, in our case it is `doom2.wad`. It borrows a lump (`GATE3`) into the variable `gate`, then we load a second `IWAD` which is `doom1.wad`, it selects desired lumps, then it update the selected lumps in `DOOM1` and finally overwrite the file. +So basically (above) it loads a first IWAD file, in our case it is `doom2.wad`. It borrows a lump (`GATE3`) into the variable `gate`, then we load a second IWAD which is `doom1.wad`, it selects desired lumps, then it update the selected lumps in `DOOM1` and finally overwrite the file. ### Screenhot(s) @@ -172,7 +172,7 @@ fn main() -> Result<(), WadError> { } ``` -To take the screenshot (below) `doom1_patch.wad` has been injected into GZDOOM with the `IWAD` `doom.wad` (registered). +To take the screenshot (below) `doom1_patch.wad` has been injected into GZDOOM with the IWAD `doom.wad` (registered). ### Result @@ -184,7 +184,7 @@ To take the screenshot (below) `doom1_patch.wad` has been injected into GZDOOM w ### Extracting MIDI lumps -Extracting every musics from the `IWAD` `doom.wad`. +Extracting every musics from the IWAD `doom.wad`. ```rust use tinywad::error::WadError;