FACSvatar: An Open Source Modular Framework From Face to FACS-Based Avatar Animation (Unity3D / Blender)


Notes 2023-01-12

  • I wanted to simplify the quick start and smooth some rough edges before releasing version v0.4.0, but life held me up and I never merged this into the master/main branch. This version is better than v0.3.4 in every respect, so I'm merging it after all in its current state.
  • Version v0.5.0 is on its way though (no release date yet). Small spoilers:
    • Python version 3.10+
    • Support for Blender LTS: (2023-06-17) the FACSvatar-Blender add-on has been updated to support Blender LTS v3.3 & v3.6
    • A proper GUI, written in Vue 3 (a JavaScript framework), that communicates with Python.

What is FACSvatar? (v0.4.0-Alpha)

FACSvatar is an Open Source Modular Framework for Real-Time FACS-based Facial Animation.

Or in plain English:

Track facial expressions with any software and visualize that data on any avatar in real-time, powered by the FACS representation. No more need to modify your avatar to support your tracking software. All written in your favorite programming language, on any OS, and across machines.

(Diagram: the FACS advantage; muscle image source credited in the original README.)

  • Facial Action Coding System (FACS): A description of how muscle groups in the human face contract/relax to make any facial configuration possible. (learn more).
    • Action Unit (AU): The action of a single muscle group; its value expresses the strength of contraction.
  • Modular: Software and OS independent. You only need to know what data goes in and what comes out.
  • Extendable: Write your code, add a ZeroMQ message socket, and let it talk to other modules (see the sketch after this list).
  • Real-time: Create lively avatars that respond to your user.
  • Machine/Deep Learning: Facial configurations become plain numeric data that you can feed into, or generate from, your models.
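
To make "Extendable" concrete, here is a minimal sketch of what a module can look like, assuming the pyzmq package. The addresses, ports, topic handling, and two-frame message layout are illustrative assumptions, not FACSvatar's actual defaults.

```python
# Minimal sketch of a FACSvatar-style module (assumptions: pyzmq installed,
# two-frame [topic, JSON] messages; ports/addresses are placeholders).
import json
import zmq

ctx = zmq.Context()

# Receive AU data from an upstream module (e.g. a tracker bridge).
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5570")       # placeholder address
sub.setsockopt_string(zmq.SUBSCRIBE, "")  # subscribe to every topic

# Publish processed data for downstream modules (e.g. an avatar).
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5571")          # placeholder address

while True:
    topic, payload = sub.recv_multipart()
    msg = json.loads(payload)
    # ... process FACS data here, e.g. smooth or rescale AU values ...
    pub.send_multipart([topic, json.dumps(msg).encode()])
```

Any program that speaks this socket protocol can join the pipeline, regardless of language or OS; that is all "modular" requires.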

FACSvatar demo 2018-09

(Above demo video link: https://www.youtube.com/watch?v=J2FvrIl-ypU)

A message to:

  • Animators: Copy facial expressions from a video/webcam to your avatar.
  • Affective Computing: Enable Human-Agent Interaction (HAI) by feeding your human-behavior analysis into an ML model, outputting FACS values, and having your Embodied Conversational Agent (ECA) display them.
  • Psychologists: Create stimuli with the same facial configurations across avatars of different sex, age, and ethnicity.

FACSvatar already works with:

  • Tracking software:
    • OpenFace: Extract facial AUs from videos/webcam.
  • Visualization software:
    • Unity3D (the unity_FACSvatar project)
    • Blender (via the FACSvatar-Blender add-on)
  • Modules for additional data processing, allowing m trackers to drive n avatars (modules folder); a bridge sketch follows this list.
  • ZeroMQ: This framework's glue, allowing modules to communicate with each other.
  • Containerization with Docker to run FACSvatar modules everywhere.
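
For the m-trackers-to-n-avatars part, a common ZeroMQ pattern is an XSUB/XPUB proxy that fans messages from many publishers out to many subscribers. The sketch below shows that general pattern only; it is not necessarily how FACSvatar's own bridge module is implemented, and the ports are placeholders.

```python
# m-to-n fan-out as a ZeroMQ XSUB/XPUB proxy (a pattern sketch, not
# necessarily FACSvatar's actual bridge; ports are placeholders).
import zmq

ctx = zmq.Context()

frontend = ctx.socket(zmq.XSUB)  # all trackers connect their PUB sockets here
frontend.bind("tcp://*:5570")

backend = ctx.socket(zmq.XPUB)   # all avatars connect their SUB sockets here
backend.bind("tcp://*:5571")

zmq.proxy(frontend, backend)     # blocks, forwarding messages between the two
```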

Disclaimers: This is an open-source project that aims to be flexible enough for your facial animation needs. It is not software supported by a company or commercially, but by users like you. If you need a new capability, you will likely have to code it yourself (or ask/hire someone), but questions for guidance are always welcome (open a GitHub issue)! For commercial usage, please check the license page. Read more about FACSvatar's limitations (TODO doc link).

Full documentation

Read the Docs: https://facsvatar.readthedocs.io/

Paper

Please cite the following paper when you use this framework in a publication:

van der Struijk, Stef; Huang, Hung-Hsuan; Mirzaei, Maryam Sadat; Nishida, Toyoaki. "FACSvatar: An Open Source Modular Framework for Real-Time FACS based Facial Animation." In Proceedings of the 18th ACM International Conference on Intelligent Virtual Agents (pp. 159-164). ACM, 2018.

New in v0.4.0-alpha (2020-07-??) TODO UNFINISHED

  • COMPLETE re-write of the documentation: Check it out!
  • Python modules:
    • Standardization pass over all modules / code clean-up
    • Consistency fix: ROUTER / DEALER sockets use JSON-formatted data (an illustrative message shape follows this list)
    • Docstring per class and function
    • Logger instead of print() statements
    • Debug option to enable the logger
    • File structure for proper import of modules / pip?
    • Use a config file (in addition to command-line arguments) + a config filepath argument
  • Easy run: Docker container per module + Docker Compose
  • Demo video
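
For reference, a JSON-formatted message in this framework could look roughly like the dictionary below. The field names (frame, timestamp, au_r, pose) and value ranges are illustrative assumptions; check the documentation for the exact schema of your version.

```python
# Hypothetical shape of a FACSvatar JSON message (field names are assumptions).
msg = {
    "frame": 42,
    "timestamp": 1594040000.123,
    "au_r": {           # Action Unit intensities
        "AU01": 0.00,   # inner brow raiser: relaxed
        "AU06": 1.35,   # cheek raiser
        "AU12": 2.10,   # lip corner puller; AU06 + AU12 reads as a smile
    },
    "pose": {"pitch": 0.02, "yaw": -0.10, "roll": 0.00},
}
```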

See all changelogs

Quickstart

FACSvatar is tested on Ubuntu and Windows, but should also work on macOS.

This quickstart has 2 parts:

  1. Start FACSvatar modules using Docker - modules in containers (see here for Python instructions)
  2. Visualize in Unity3D or Blender

Dockerized modules

  1. Downloads - Go to the release page of this GitHub repo and download:

    • (Real-time only) openface_2.1.0_zeromq.zip
      • Unzip and execute download_models.sh or .ps1 to download trained models
    • Windows 7 / 8 / 10 Home version <2004 : unity_FACSvatar_standalone_docker-ip.zip
    • Windows 10 Home v2004+ / Pro / Enterprise / Education: unity_FACSvatar_standalone.zip
    • Windows / Linux / Mac: Unity3D editor (documentation)
    • Source code (zip / tar.gz) or download this repository with:
      • git clone https://github.com/NumesSanguis/FACSvatar.git
      • Press the green Clone or Download button on this page --> Download ZIP
  2. Docker Install - Lets you execute applications without worrying about OS or programming language.

  3. Docker Modules - Open a terminal (W7/8: cmd.exe / W10: PowerShell), navigate to the folder FACSvatar/modules, then execute:

    1. docker-compose pull (downloads the FACSvatar Docker images)
    2. docker-compose up (starts the downloaded containers)
  4. See visualization engine instructions

Offline version:

  1. Open a 2nd terminal in the folder FACSvatar/modules and execute: docker-compose exec facsvatar_facsfromcsv bash
  2. Inside the Docker container - Start facial animation with: python main.py --pub_ip facsvatar_bridge (see the sketch below for what this does conceptually)
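
Conceptually, that module replays pre-recorded Action Unit values over ZeroMQ. Below is a minimal sketch of the idea, assuming pyzmq and a CSV file with one column per Action Unit; the file name, address, topic, message layout, and frame rate are illustrative assumptions, not the module's actual defaults.

```python
# Sketch of replaying recorded AUs over ZeroMQ, in the spirit of the
# facsfromcsv module (CSV layout, port, topic, and pacing are assumptions).
import csv
import json
import time
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.connect("tcp://facsvatar_bridge:5570")  # placeholder bridge address

with open("openface_recording.csv", newline="") as f:  # hypothetical file
    for frame, row in enumerate(csv.DictReader(f)):
        aus = {k: float(v) for k, v in row.items() if k.startswith("AU")}
        msg = {"frame": frame, "timestamp": time.time(), "au_r": aus}
        pub.send_multipart([b"facs", json.dumps(msg).encode()])
        time.sleep(1 / 30)  # replay at roughly 30 fps
```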

With webcam for real-time (Windows-only for now):

  1. Navigate inside the folder openface_x.x.x_zeromq
  2. (Windows 7/8/10 Home version <2004 - only) Get the Docker machine IP by opening a 2nd terminal and executing: docker-machine ip (likely to be 192.168.99.100)
  3. (Windows 7/8/10 Home version <2004 - only) Open config.xml, change <IP>127.0.0.1</IP> to the machine IP from step 2 (e.g. <IP>192.168.99.100</IP>), then save and close.
  4. Double-click OpenFaceOffline.exe -> menu: File -> Open Webcam

Visualization engines

Unity3D

Tested on version: 2018.2.20f1

  1. Open the folder unity_FACSvatar as a project with Unity3D
  2. Press play (now it's waiting for facial data)

OR (Windows-only TODO):

  1. Navigate inside unzipped folder unity_FACSvatar_standalone(_docker-ip) and double-click unity_FACSvatar.exe

Extra: Use the numbers 0, 1, 2 on your keyboard to switch cameras.

FACSvatar Blender add-on

Follow instructions here: https://github.com/NumesSanguis/FACSvatar-Blender

Quickstart video

See the quickstart video (:warning: note that the Blender script part (from 15:15) is outdated due to the new FACSvatar Blender add-on):

FACSvatar Quickstart 2019-01 (v0.3.4)

Find out more

Full documentation

Read the FACSvatar documentation!

2017 promotion poster (English & Japanese)

FACSvatar details in English and Japanese
