From 64737f4ae06cd4ebfd1ff4fb0ac32053572aa44f Mon Sep 17 00:00:00 2001
From: Will McGugan
Date: Mon, 2 Sep 2024 16:05:45 +0100
Subject: [PATCH] Deployed 563b11936 with MkDocs version: 1.6.0

---
 .../index.html                          |   18 +
 .../index.html                          | 6808 +++++++++++++++++
 blog/archive/2024/index.html            |   49 +
 blog/category/devlog/index.html         |   98 +-
 blog/category/devlog/page/2/index.html  |  107 +-
 blog/category/devlog/page/3/index.html  | 6525 ++++++++++++++++
 blog/index.html                         |   96 +-
 blog/page/2/index.html                  |   95 +-
 blog/page/3/index.html                  |   95 +-
 blog/page/4/index.html                  |   47 +
 feed_json_created.json                  |    2 +-
 feed_json_updated.json                  |    2 +-
 feed_rss_created.xml                    |    2 +-
 feed_rss_updated.xml                    |    2 +-
 guide/reactivity/index.html             |  574 +-
 guide/widgets/index.html                |  117 +-
 how-to/render-and-compose/index.html    | 1516 ++--
 search/search_index.json                |    2 +-
 sitemap.xml                             |  552 +-
 sitemap.xml.gz                          |  Bin 2579 -> 2605 bytes
 tutorial/index.html                     |  546 +-
 widgets/digits/index.html               |  110 +-
 22 files changed, 15518 insertions(+), 1845 deletions(-)
 create mode 100644 blog/2024/09/15/anatomy-of-a-textual-user-interface/index.html
 create mode 100644 blog/category/devlog/page/3/index.html

diff --git a/blog/2024/09/15/anatomy-of-a-textual-user-interface/index.html b/blog/2024/09/15/anatomy-of-a-textual-user-interface/index.html
new file mode 100644
index 0000000000..24b2858450
--- /dev/null
+++ b/blog/2024/09/15/anatomy-of-a-textual-user-interface/index.html

Anatomy of a Textual User Interface


I recently wrote a TUI to chat to an LLM in the terminal. I'm not the first to do this (shout out to Elia and Paita), but I may be the first to have it reply as if it were the AI from the Aliens movies?


Here's a video of it in action:


Now let's dissect the code like Bishop dissects a facehugger.


All right, sweethearts, what are you waiting for? Breakfast in bed?


At the top of the file we have some boilerplate:

```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "llm",
#     "textual",
# ]
# ///
from textual import on, work
from textual.app import App, ComposeResult
from textual.widgets import Header, Input, Footer, Markdown
from textual.containers import VerticalScroll
import llm

SYSTEM = """Formulate all responses as if you were the sentient AI named Mother from the Aliens movies."""
```

The text in the comment is a relatively new addition to the Python ecosystem: inline script metadata, specified in PEP 723. It allows you to specify dependencies inline so that tools can set up an environment automatically. The only tool I know of that uses it is uv.


After this comment we have a bunch of imports: textual for the UI, and llm to talk to ChatGPT (the library also supports other LLMs).
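
If you haven't seen llm before, its core API is pleasingly small. Here's a minimal sketch (mine, not from the post) of the non-UI half of what this app does; the prompt text is just an example:

```python
import llm

# Hypothetical stand-alone use of the llm library, outside Textual.
model = llm.get_model("gpt-4o")
response = model.prompt("Say hello in one sentence.", system="Be brief.")
for chunk in response:  # responses can be consumed as a stream of chunks
    print(chunk, end="", flush=True)
```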


Finally, we define SYSTEM, which is the system prompt for the LLM.


Look, those two specimens are worth millions to the bio-weapons division.


Next up we have the following:

```python
class Prompt(Markdown):
    pass


class Response(Markdown):
    BORDER_TITLE = "Mother"
```

These two classes define the widgets which will display the text the user enters and the response from the LLM. They both extend the builtin Markdown widget, since LLMs like to talk in that format.
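
If you haven't met it before, here's a minimal sketch (my illustration, not part of the app) of the stock Markdown widget rendering some markup on its own:

```python
from textual.app import App, ComposeResult
from textual.widgets import Markdown

# The Markdown widget renders headings, emphasis, code, and lists
# directly in the terminal.
EXAMPLE = """\
# Hello

Some *emphasis*, some `code`, and a list:

- one
- two
"""


class MarkdownDemo(App):
    def compose(self) -> ComposeResult:
        yield Markdown(EXAMPLE)


if __name__ == "__main__":
    MarkdownDemo().run()
```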


Well, somebody's gonna have to go out there. Take a portable terminal, go out there and patch in manually.


Following on from the widgets we have the following:

```python
class MotherApp(App):
    AUTO_FOCUS = "Input"

    CSS = """
    Prompt {
        background: $primary 10%;
        color: $text;
        margin: 1;
        margin-right: 8;
        padding: 1 2 0 2;
    }

    Response {
        border: wide $success;
        background: $success 10%;
        color: $text;
        margin: 1;
        margin-left: 8;
        padding: 1 2 0 2;
    }
    """
```

This defines an app, which is the top-level object for any Textual app.


The AUTO_FOCUS string is a classvar which causes a particular widget to receive input focus when the app starts. In this case it is the Input widget, which we will define later.


The classvar is followed by a string containing CSS. Technically, TCSS or Textual Cascading Style Sheets, a variant of CSS for terminal interfaces.


This isn't a tutorial, so I'm not going to go into detail, but we're essentially setting properties on widgets which define how they look. Here I styled the prompt and response widgets to have different colors, and tried to give the response a retro tech look with a green background and border.


We could express these styles in code. Something like this:

```python
self.styles.color = "red"
self.styles.margin = 8
```

Which is fine, but CSS shines when the UI gets more complex.


Look, man. I only need to know one thing: where they are.


After the app constants, we have a method called compose:

```python
    def compose(self) -> ComposeResult:
        yield Header()
        with VerticalScroll(id="chat-view"):
            yield Response("INTERFACE 2037 READY FOR INQUIRY")
        yield Input(placeholder="How can I help you?")
        yield Footer()
```

This method adds the initial widgets to the UI.


Header and Footer are builtin widgets.


Sandwiched between them is a VerticalScroll container widget, which automatically adds a scrollbar (if required). It is pre-populated with a single Response widget to show a welcome message (the with syntax places a widget within a parent widget). Below that is an Input widget where we can enter text for the LLM.


This is all we need to define the layout of the TUI. In Textual, layout is defined with styles (in the same way as color and margin). Virtually any layout is possible, and you never have to do any math to calculate the sizes of widgets—it is all done declaratively.


We could add a little CSS to tweak the layout, but the defaults work well here. The header and footer are docked to an appropriate edge. The VerticalScroll widget is styled to consume any available space, leaving room for widgets with a defined height (like our Input).
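
If you did want to tweak it, a sketch of what that might look like in TCSS (my illustration, not part of the app; 1fr means one fraction of the remaining space):

```css
/* Hypothetical tweak: have the chat view consume whatever space
   the docked Header and Footer (and the Input) leave behind. */
#chat-view {
    height: 1fr;
}
```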


Look into my eye.


The next method is an event handler.

```python
    def on_mount(self) -> None:
        self.model = llm.get_model("gpt-4o")
```

This method is called when the app receives a Mount event, which is one of the first events sent and is typically used for any setup operations.


It gets a Model object for our LLM of choice, which we will use later.


Note that the llm library supports a large number of models, so feel free to replace the string with the model of your choice.
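
For example (an illustration, not from the original code), swapping in a smaller OpenAI model is a one-line change:

```python
    def on_mount(self) -> None:
        # Hypothetical swap: any model an installed llm plugin exposes should work.
        self.model = llm.get_model("gpt-4o-mini")
```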


We're in the pipe, five by five.


The next method is also a message handler:

```python
    @on(Input.Submitted)
    async def on_input(self, event: Input.Submitted) -> None:
        chat_view = self.query_one("#chat-view")
        event.input.clear()
        await chat_view.mount(Prompt(event.value))
        await chat_view.mount(response := Response())
        response.anchor()
        self.send_prompt(event.value, response)
```

The decorator tells Textual to handle the Input.Submitted event, which is sent when the user hits return in the Input.


More on event handlers


There are two ways to receive events in Textual: a naming convention or the decorator. Handler methods aren't defined on a base class, because apps and widgets can receive arbitrary events.
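
For comparison, here's a sketch of the same handler written with the naming convention rather than the decorator (equivalent in spirit; the method body is elided):

```python
    # The name on_input_submitted is derived from the Input.Submitted
    # message, so no @on decorator is required.
    async def on_input_submitted(self, event: Input.Submitted) -> None:
        ...
```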


When that happens, this method clears the input and adds the prompt text to the VerticalScroll. It also adds a Response widget to contain the LLM's response, and anchors it. Anchoring a widget will keep it at the bottom of a scrollable view, which is just what we need for a chat interface.


Finally in that method we call send_prompt.


We're on an express elevator to hell, going down!


Here is send_prompt:

```python
    @work(thread=True)
    def send_prompt(self, prompt: str, response: Response) -> None:
        response_content = ""
        llm_response = self.model.prompt(prompt, system=SYSTEM)
        for chunk in llm_response:
            response_content += chunk
            self.call_from_thread(response.update, response_content)
```

You'll notice that it is decorated with @work, which turns this method into a worker; in this case, a threaded worker. Workers are a layer over async and threads, which takes some of the pain out of concurrency.


This worker is responsible for sending the prompt, and then reading the response piece by piece. It calls the Markdown widget's update method, which replaces its content with new Markdown, to give that funky streaming text effect.
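
The call_from_thread part matters: widgets aren't thread-safe, so a threaded worker should hand UI updates back to the app's event loop rather than touching widgets directly. As a general sketch of the pattern (blocking_fetch and the #status widget are hypothetical):

```python
    @work(thread=True)
    def background_job(self) -> None:
        result = blocking_fetch()  # hypothetical blocking call
        # Marshal the widget update back to the UI thread.
        self.call_from_thread(self.query_one("#status", Markdown).update, result)
```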


Game over man, game over!


The last few lines create an app instance and run it:

```python
if __name__ == "__main__":
    app = MotherApp()
    app.run()
```

You may need to have your API key set in an environment variable. Or, if you prefer, you could set it in the on_mount method with the following:

```python
self.model.key = "... key here ..."
```
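
If you go the environment variable route, the default OpenAI backend reads OPENAI_API_KEY (other llm plugins look for their own variables):

```bash
# Assumes the default OpenAI plugin; other backends use different variables.
export OPENAI_API_KEY="..."
```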

Not bad, for a human.


Here's the code for the Mother AI.


Run the following in your shell of choice to launch mother.py (assumes you have uv installed):

```bash
uv run mother.py
```

You know, we manufacture those, by the way.


Join our Discord server to discuss more 80s movies (or possibly TUIs).
