This project provides automatic visualisation for University Radio York's Studio Red using Python. The script automates visualised radio shows and podcasts in Studio Red by cutting the live camera to whoever is speaking, and by switching to the Timelord visualisation automatically during songs or the news to avoid copyright strikes.
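At its core the switching logic is a polling loop: sample each microphone's level, cut to the camera of the loudest mic above a speech threshold, and fall back to Timelord when nobody is talking. A minimal sketch of that loop (the helper names, button numbers, and threshold here are illustrative, not the actual implementation):

```python
import time

# Hypothetical mapping of mic inputs to Companion buttons that cut the
# ATEM/OBS to the matching camera. Button numbers are examples only.
MIC_TO_CAMERA_BUTTON = {"presenter": 1, "guest": 2}
TIMELORD_BUTTON = 8     # button that switches to the Timelord visual
THRESHOLD = 0.05        # speech level threshold (see test_threashold.py)

def get_mic_level(mic_name):
    """Placeholder: return the current RMS level of a mic input (0.0-1.0)."""
    raise NotImplementedError

def press_companion_button(button):
    """Placeholder: fire a Companion button via its HTTP API."""
    raise NotImplementedError

while True:
    levels = {mic: get_mic_level(mic) for mic in MIC_TO_CAMERA_BUTTON}
    loudest, level = max(levels.items(), key=lambda kv: kv[1])
    if level >= THRESHOLD:
        press_companion_button(MIC_TO_CAMERA_BUTTON[loudest])  # cut to speaker
    else:
        press_companion_button(TIMELORD_BUTTON)  # nobody talking: show Timelord
    time.sleep(0.5)  # poll interval; real code would also debounce switches
```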
You will need a device running Bitfocus Companion: the Python libraries for controlling OBS and Blackmagic ATEMs are a little dicey, but HTTP requests to a program that can do it for me aren't. Set up connections in Companion to your OBS instance and to the ATEM, then import the Companion page config file `page99.companionconfig` to a page of your choice (making sure to set the page number in `credentials.py`).
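Companion exposes an HTTP API for pressing buttons remotely, which is how a script can drive OBS and the ATEM without native Python libraries. A minimal sketch of such a press, assuming the legacy `GET /press/bank/<page>/<button>` endpoint on Companion's default port 8000 (newer Companion releases use `/api/location/<page>/<row>/<column>/press` instead):

```python
import requests

COMPANION_HOST = "192.168.0.10"  # example IP; set yours in credentials.py
COMPANION_PORT = 8000            # Companion's default HTTP port
PAGE = 99                        # the page you imported the config onto

def press_button(button):
    """Press a Companion button over HTTP (legacy endpoint)."""
    url = f"http://{COMPANION_HOST}:{COMPANION_PORT}/press/bank/{PAGE}/{button}"
    requests.get(url, timeout=2)

press_button(1)  # e.g. cut to camera 1
```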
To use this project, follow these steps:

- Clone the repository onto the studio PC: `git clone https://github.com/JamboPE/auto-vis-python.git`
- Install the required dependencies: `pip install -r dependencies`
- Configure the script credentials/settings: edit the `credentials.py` file and update the necessary parameters, adding your MyRadio API key, Bitfocus Companion host IP, Companion page, and microphone audio input devices (an illustrative sketch of these settings appears after this list).
- Run `test_threashold.py` to work out a good mic threshold to set in `credentials.py` for detecting speech (see the level-measurement sketch below).
- Delete anything not inside the `python program` folder, as it is no longer needed.
- Run the script and pray to the Python and Linux audio gods: `python main.py`
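For orientation, the settings in `credentials.py` look roughly like the following; the names and values here are illustrative, so check the actual file for the real parameter names:

```python
# credentials.py (illustrative only; the real file defines its own names)
MYRADIO_API_KEY = "your-myradio-api-key"
COMPANION_HOST = "192.168.0.10"   # device running Bitfocus Companion
COMPANION_PAGE = 99               # page you imported page99.companionconfig to
MIC_DEVICES = {                   # audio input device per presenter position
    "presenter": "hw:1,0",
    "guest": "hw:2,0",
}
MIC_THRESHOLD = 0.05              # speech threshold from test_threashold.py
```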
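And a sketch of the kind of level measurement `test_threashold.py` performs: record a few seconds from a mic, print the RMS level, and repeat once in silence and once while speaking to find a threshold between the two (assumes the `sounddevice` and `numpy` packages):

```python
import numpy as np
import sounddevice as sd

DEVICE = None        # None = default input; or a device index/name
SAMPLE_RATE = 44100
SECONDS = 3

# Record a short mono sample from the chosen input device.
recording = sd.rec(int(SECONDS * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                   channels=1, device=DEVICE)
sd.wait()  # block until recording finishes

# RMS of the samples: pick a threshold between the silent and speaking runs.
rms = float(np.sqrt(np.mean(recording ** 2)))
print(f"RMS level: {rms:.4f}")
```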
Contributions are welcome! If (for some reason) you would like to contribute to this project, please follow these steps:

- Create a new branch: `git checkout -b feature/your-feature`
- Make your changes and commit them: `git commit -m 'Add your feature'`
- Push to the branch: `git push origin feature/your-feature`
- Submit a pull request.
There is a much better auto-visualisation repository here that includes features like clipping your shows and more! I only made this because I don't understand TypeScript, so I would recommend trying that project before this one.