
Amazon IVS WebGPU Captions Demo

A demo web application that showcases state-of-the-art client-side transcription, with everything running directly in your browser. By leveraging Transformers.js and ONNX Runtime Web, this demo enables WebGPU-accelerated real-time in-browser transcription for Amazon IVS Low-latency and Real-time streams.
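Under the hood, the flow resembles the following sketch (a minimal illustration rather than the demo's actual code; the model name is one of the samples listed in src/constants.js):

import { pipeline } from '@huggingface/transformers';

// Load a Whisper ONNX model and run it on the GPU via ONNX Runtime Web.
const transcriber = await pipeline(
  'automatic-speech-recognition',
  'onnx-community/whisper-tiny.en',
  {
    dtype: { encoder_model: 'q4', decoder_model_merged: 'q4' },
    device: 'webgpu', // or 'wasm' on devices without WebGPU
  },
);

// `audio` is assumed to be a 16 kHz mono Float32Array captured from the stream.
const { text } = await transcriber(audio);
console.log(text);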

Screenshot: a video with captioned subtitles

Warning

This is an experimental demo designed exclusively for educational purposes. By using this solution, you understand and accept its risks and limitations.

Prerequisites

  • A WebGPU-capable device and browser (a quick way to check for support is sketched after this list).
  • Node.js v20.10.0 and the Node package manager (npm).
    • If you have Node Version Manager (nvm) installed, run nvm use to sync your Node version with this project.
  • The API_URL from the deployed serverless infrastructure for this demo.
  • AWS CLI version 2.
  • Access to an AWS account with at least the following permissions:
    • Create IAM roles
    • Create AWS Lambda functions
    • Create Amazon IVS stages
    • Create Amazon S3 buckets
    • Create Amazon CloudFront distributions
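To quickly check whether a browser exposes WebGPU, you can run something like the following in the devtools console (a minimal sketch using the standard WebGPU API, not code from this repository):

if (!navigator.gpu) {
  console.log('WebGPU is not supported in this browser.');
} else {
  const adapter = await navigator.gpu.requestAdapter();
  console.log(adapter ? 'WebGPU is available.' : 'No suitable GPU adapter found.');
}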

Running the demo

Follow these instructions to run the demo:

Deploy backend infrastructure

  1. Initialize the infrastructure: npm run deploy:init
  2. Deploy the backend stack: npm run deploy:backend

For more details, review the Amazon IVS WebGPU Captions Demo Serverless Infrastructure documentation.

Run client app

  1. Run: npm ci
  2. Run: npm run dev

Deploy client app

The following command deploys the client website to a public CloudFront URL.

  1. Run: npm run deploy:website

Replace the low-latency IVS stream

Replace the PLAYBACK_URL in src/constants.js with your IVS Playback URL.
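For reference, the constant might look like the following (the exact shape may differ in the actual file; the URL below is a placeholder for your channel's playback URL):

// src/constants.js
export const PLAYBACK_URL =
  'https://<playback-id>.<region>.playback.live-video.net/api/video/v1/<region>.<account-id>.channel.<channel-id>.m3u8';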

Customize the available models

Modify the SAMPLE_MODELS in src/constants.js to add or remove the models shown in the UI. Additional models may be found in the Hugging Face ONNX Community.

{
  label: 'Model name',
  description: 'A short description of the model.',
  value: 'huggingface_model_name', // for example, 'onnx-community/whisper-tiny.en'
  sizeInBytes: 77000000, // the model's download size in bytes (example value)
  modelOptions: {
    dtype: {
      encoder_model: 'q4', // 'q4', 'q8', 'fp16', or 'fp32' (some values may not work with all models)
      decoder_model_merged: 'q4', // 'q4', 'q8', 'fp16', or 'fp32' (some values may not work with all models)
    },
    device: 'webgpu', // or 'wasm'
  },
},
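These modelOptions appear to map directly onto the options accepted by the Transformers.js pipeline() factory (see the sketch near the top of this README). A dtype variant that a given model was not exported with will typically fail to load, so q4 on webgpu is a reasonable starting point.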

Known issues and limitations

  • The application is meant for demonstration purposes and not for production use.
  • This application is only tested and supported on browsers and devices that support WebGPU. Other browsers and devices, including mobile browsers and smartphones, may work with this tool, but are not officially supported at this time.
  • Muting a low-latency video will stop captions from being generated; real-time videos are unaffected (see the note after this list).
  • In some cases, the application may experience a memory leak, which appears to be related to huggingface/transformers.js#860.
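A plausible explanation for the muting limitation: if captions are generated from audio captured off the video element, muting the element also silences the captured track. A minimal sketch of that capture path (hypothetical; the demo's actual audio plumbing may differ):

const video = document.querySelector('video');
const mediaStream = video.captureStream(); // captured audio follows the element's mute state
const [audioTrack] = mediaStream.getAudioTracks();
// This track would feed the transcriber; muting the element yields silence.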

About Amazon IVS

Amazon Interactive Video Service (Amazon IVS) is a managed livestreaming and stream chat solution that is quick and easy to set up, and ideal for creating interactive video experiences. Learn more.
