diff --git a/docs/.vitepress/config.mts b/docs/.vitepress/config.mts
index fc2af2b7..66908075 100644
--- a/docs/.vitepress/config.mts
+++ b/docs/.vitepress/config.mts
@@ -50,6 +50,20 @@ export default withMermaid(
},
],
},
+ {
+ text: 'Technology',
+ collapsed: false,
+ items: [
+ {
+ text: 'Our Approach',
+ link: '/docs/technology/introduction',
+ },
+ {
+ text: 'Reviews & Awards',
+ link: '/docs/technology/awards',
+ },
+ ],
+ },
{
text: 'Fact Sheets',
collapsed: false,
diff --git a/docs/docs/technology/assets/dictionary-pipeline.png b/docs/docs/technology/assets/dictionary-pipeline.png
new file mode 100644
index 00000000..c34306e7
Binary files /dev/null and b/docs/docs/technology/assets/dictionary-pipeline.png differ
diff --git a/docs/docs/technology/assets/sign-tube-example.png b/docs/docs/technology/assets/sign-tube-example.png
new file mode 100644
index 00000000..fa421abf
Binary files /dev/null and b/docs/docs/technology/assets/sign-tube-example.png differ
diff --git a/docs/docs/technology/awards.md b/docs/docs/technology/awards.md
new file mode 100644
index 00000000..a9b07ea8
--- /dev/null
+++ b/docs/docs/technology/awards.md
@@ -0,0 +1,29 @@
+# Awards and Recognition
+
+On this page, we list the awards and recognition that the technology behind sign.mt has received in recent years, alongside independent public reviews.
+
+## 2024
+
+- We received an `EMNLP Outstanding Demo Paper Award` for our demonstration of "sign.mt: Real-Time Multilingual Sign Language Translation Application"[^emnlp2024-award].
+
+[^emnlp2024-award]: sign.mt. 2024. [Twitter Post](https://x.com/signmt_/status/1857181686045540787).
+
+- Dr. Amit Moryossef (among others) received the `SwissNLP Award` for his "outstanding contribution to Sign Language Translation in Switzerland"[^swissnlp2024-award].
+
+[^swissnlp2024-award]: SwissNLP. 2024. [SwissNLP Award](https://swissnlp.org/home/activities/swissnlp-award/).
+
+- sign.mt was independently reviewed for Kazakh-Russian Sign Language translation, reporting accuracy of 37% compared to 98% for human translation[^kazakh-russian-review].
+
+[^kazakh-russian-review]: Imashev et al. 2024. [Comparative Analysis of Sign Language Interpreting Agents Perception: A Study of the Deaf](https://aclanthology.org/2024.lrec-main.319/).
+
+## 2023
+
+- Dr. Amit Moryossef (among others) received the `ACL Outstanding Paper Award` for "Considerations for Meaningful Sign Language Machine Translation"[^acl2023-award].
+
+[^acl2023-award]: Müller et al. 2023. [Considerations for Meaningful Sign Language Machine Translation](https://aclanthology.org/2023.acl-short.60/).
+
+## 2021
+
+- Dr. Amit Moryossef (among others) received the `ACL Best Theme Paper Award` for "Including Signed Languages in Natural Language Processing"[^acl2021-award].
+
+[^acl2021-award]: Yin et al. 2021. [Including Signed Languages in Natural Language Processing](https://aclanthology.org/2021.acl-long.570/).
diff --git a/docs/docs/technology/introduction.md b/docs/docs/technology/introduction.md
new file mode 100644
index 00000000..6aeda82b
--- /dev/null
+++ b/docs/docs/technology/introduction.md
@@ -0,0 +1,126 @@
+# Our Approach
+
+Following the research published by Dr. Amit Moryossef in his PhD thesis[^amit-thesis], we aim to develop a sign language translation system that separates the computer vision tasks from the language translation tasks.
+
+[^amit-thesis]: Amit Moryossef. 2024. [Real-Time Multilingual Sign Language Processing](https://arxiv.org/abs/2412.01991).
+
+## Spoken to Signed Language Translation
+
+The following flowchart shows the current translation pipeline from spoken to signed language.
+Each node represents a different module or function in the pipeline, with a link to the relevant code repository.
+
+- Green edges represent high-quality modules.
+- Orange edges represent existing, low-quality modules.
+- Red edges represent modules that still need to be implemented.
+
+```mermaid
+flowchart TD
+ A0[Spoken Language Audio] --> A1(Spoken Language Text)
+ A1[Spoken Language Text] --> B[Language Identification]
+ A1 --> C(Normalized Text)
+ B --> C
+ C & B --> Q(Sentence Splitter)
+ Q & B --> D(SignWriting)
+ C -.-> M(Glosses)
+ M -.-> E
+ D --> E(Pose Sequence)
+ D -.-> I(Illustration)
+ N --> H(3D Avatar)
+ N --> G(Skeleton Viewer)
+ N --> F(Human GAN)
+ H & G & F --> J(Video)
+ J --> K(Share Translation)
+ D -.-> L(Description)
+ O --> N(Fluent Pose Sequence)
+ E --> O(Pose Appearance Transfer)
+
+linkStyle default stroke:green;
+linkStyle 3,5,7 stroke:lightgreen;
+linkStyle 10,11,12,15 stroke:red;
+linkStyle 6,8,9,14,19,20 stroke:orange;
+```
+
+This pipeline in fact represents two different approaches to translation:
+
+1. **Dictionary-Based Translation**
+2. **SignWriting-Based Machine Translation**
+
+### Dictionary-Based Translation[^dictionary-baseline]
+
+[^dictionary-baseline]: Moryossef et al. 2023. [An Open-Source Gloss-Based Baseline for Spoken to Signed Language Translation](https://aclanthology.org/2023.at4ssl-1.3/).
+
+The dictionary-based translation approach simplifies the translation task, but sacrifices the fluency and natural expressiveness needed for accurate sign language communication. It can be characterized by the following general steps:
+
+```mermaid
+flowchart LR
+ a[Spoken Language Text] --> b[Glosses] --> c[Pose Sequence] --> d[Video]
+```
+
+![Visualization of one example through the dictionary-based translation pipeline](./assets/dictionary-pipeline.png)
+
+#### **Main Translation Steps**
+
+1. **Text-to-gloss translation**: The input spoken language text is transformed into glosses using techniques such as lemmatization, word reordering, and dropping unnecessary words (such as articles).
+
+2. **Gloss-to-pose conversion**: Each gloss is mapped to a corresponding skeletal pose extracted from a sign language dictionary.
+
+3. **Pose-to-video rendering**: The pose sequences are stitched together and rendered into a synthesized human video.
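The three steps above can be sketched as a minimal pipeline. The function names, the toy gloss dictionary, and the string-based pose placeholders are illustrative assumptions for demonstration, not the actual sign.mt implementation:

```python
# Minimal sketch of the dictionary-based pipeline (illustrative only).
ARTICLES = {"the", "a", "an"}

# Hypothetical gloss-to-pose dictionary; real entries are skeletal pose sequences
# extracted from dictionary videos, not strings.
POSE_DICTIONARY = {
    "I": ["pose_i_1"],
    "BUY": ["pose_buy_1"],
    "HOUSE": ["pose_house_1", "pose_house_2"],
}

def text_to_gloss(text: str) -> list[str]:
    """Naive text-to-gloss: uppercase each word and drop articles."""
    return [w.upper() for w in text.split() if w.lower() not in ARTICLES]

def gloss_to_pose(glosses: list[str]) -> list[str]:
    """Look up each gloss in the dictionary and stitch the poses together."""
    poses: list[str] = []
    for gloss in glosses:
        poses.extend(POSE_DICTIONARY.get(gloss, []))  # skip unknown glosses
    return poses

print(gloss_to_pose(text_to_gloss("I buy a house")))
# → ['pose_i_1', 'pose_buy_1', 'pose_house_1', 'pose_house_2']
```

Even this toy version shows the approach's limitation: any gloss missing from the dictionary is silently dropped, and word-by-word lookup ignores sign language grammar.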
+
+#### **Data Requirements**
+
+A dictionary of sign language videos for isolated letters, words, and phrases. This dictionary cannot represent the full expressiveness of sign language.
+
+#### **Potential Quality**
+
+Even if the dictionary is complete, this approach cannot fully capture the complexity of sign language. While it attempts to translate from text to a gloss-based structure and map that to movements (poses), it fails to account for the full grammatical and syntactic richness of sign language. Glosses are an incomplete representation of actual signs, and grammar in sign language differs substantially from spoken languages.
+
+### SignWriting-Based Machine Translation
+
+The machine translation approach aims to achieve translation quality comparable to spoken language translation systems such as Google Translate, with potentially high fluency and natural expressiveness. This approach further allows for bi-directional translation between signed and spoken languages.
+
+```mermaid
+flowchart LR
+ a[Spoken Language Text] --> b[SignWriting] --> c[Pose Sequence] --> d[Video]
+```
+
+![Visualization of one example through the SignWriting-based translation pipeline](./assets/sign-tube-example.png)
+
+#### **Main Translation Steps**
+
+1. **Text-to-SignWriting translation**: Spoken language text is translated into SignWriting sequences by machine translation models that learn from large amounts of data and generalize from it.
+
+2. **SignWriting-to-pose conversion**: The written signs are converted into a fluent pose sequence, illustrating how the signs would be physically performed by a signer.
+
+3. **Pose-to-video rendering**: The pose sequence is then rendered into a human video.
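SignWriting signs are commonly serialized as Formal SignWriting (FSW) strings, which a text-based translation model can emit directly. As a rough illustration, each sign is a placement box followed by positioned symbols; the example string and the minimal parser below are assumptions for demonstration, not the sign.mt codebase:

```python
import re

# A Formal SignWriting (FSW) sign: a placement box marker ("M" = middle) with
# its coordinate, followed by symbols ("S" + 5-character key) at x,y positions.
FSW_EXAMPLE = "M518x529S14c20481x471S14c10417x471"

SYMBOL_RE = re.compile(r"(S[0-9a-f]{5})(\d{3})x(\d{3})")

def parse_fsw(fsw: str) -> list[tuple[str, int, int]]:
    """Extract (symbol_key, x, y) triples from an FSW sign string."""
    return [(key, int(x), int(y)) for key, x, y in SYMBOL_RE.findall(fsw)]

print(parse_fsw(FSW_EXAMPLE))
# → [('S14c20', 481, 471), ('S14c10', 417, 471)]
```

Because a sign reduces to a plain string like this, standard text-to-text machine translation machinery can be applied on both sides of the pipeline.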
+
+#### **Data Requirements**
+
+By combining a relatively small dataset of transcribed individual signs (~100k) with a relatively small dataset of segmented continuous signs, and leveraging large sign language video/text datasets, we can automatically transcribe the latter. This process generates large synthesized datasets for both **text-to-SignWriting** and **SignWriting-to-pose** conversion.
+
+#### **Potential Quality**
+
+The system aims to accurately represent sign language grammar and structure, allowing for a good translation of both lexical and non-lexical signs, expressions, and classifiers. Potentially, the system can be as good as a deaf human translator, given quality data.
+
+## Signed to Spoken Language Translation
+
+The following flowchart shows the current translation pipeline from signed to spoken language.
+
+```mermaid
+flowchart TD
+ A0[Upload Sign Language Video] --> A3[Video]
+ A1[Camera Sign Language Video] --> A3
+ A3 --> B(Pose Estimation)
+ B --> C(Segmentation)
+ C & B --> D(SignWriting Transcription)
+ A2[Language Selector] --> E(Spoken Language Text)
+ D --> E
+ E --> F(Spoken Language Audio)
+ E --> G(Share Translation)
+ C -.-> H(Sign Image)
+
+
+linkStyle 1,2 stroke:orange;
+linkStyle 4,5,6 stroke:lightgreen;
+linkStyle 3,5,7,10 stroke:red;
+linkStyle 0,8,9 stroke:green;
+```
diff --git a/src/app/modules/settings/settings/settings.component.html b/src/app/modules/settings/settings/settings.component.html
index 4eadac22..ad2c8531 100644
--- a/src/app/modules/settings/settings/settings.component.html
+++ b/src/app/modules/settings/settings/settings.component.html
@@ -1,4 +1,4 @@
-
+
@for (s of availableSettings; track s) {
diff --git a/src/app/pages/landing/about/about-numbers/about-numbers.component.html b/src/app/pages/landing/about/about-numbers/about-numbers.component.html
new file mode 100644
index 00000000..77f5a9f4
--- /dev/null
+++ b/src/app/pages/landing/about/about-numbers/about-numbers.component.html
@@ -0,0 +1,22 @@
+
+
Our Vision: To Improve the Lives of 70 Million Deaf People Worldwide