Recently I found an interesting repo, ChatGPT to API. It lets you deploy a fake API backed by the web version of ChatGPT. However, its docs are not very specific. This repo helps you easily configure and deploy ChatGPT-to-API.
- git
- python3
- Docker (if deploying in Docker)
- golang and the `go` command in PATH (if deploying on the host)
- One or more ChatGPT accounts
1. Clone this repo to somewhere (suppose `/dcta/`).
2. Edit the following variables in `run.py`:
   - `proxy`: format: `host:port`. If you don't need a proxy, set it to `""`
   - `proxy_type`: possible values: `"socks5"` or `"http"`
   - `accounts`: a dictionary of account info. Multiple users are supported. See the example below
   - `server_host`: the host you want ChatGPT-to-API to listen on
   - `server_port`: the port you want ChatGPT-to-API to listen on
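A minimal sketch of what the edited section of `run.py` might look like. The variable names come from this README; the fields inside each account entry (`email`, `password`) and the concrete values are assumptions, so adjust them to match your actual `run.py`:

```python
# Sketch of the configuration variables in run.py.
# The account fields below are an assumption -- match them to your run.py.
proxy = ""              # "host:port", or "" if you don't need a proxy
proxy_type = "socks5"   # "socks5" or "http"

# One entry per ChatGPT account; multiple users are supported.
accounts = {
    "user1": {"email": "user1@example.com", "password": "password1"},
    "user2": {"email": "user2@example.com", "password": "password2"},
}

server_host = "127.0.0.1"  # host ChatGPT-to-API listens on
server_port = 8080         # port ChatGPT-to-API listens on
```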
3. If deploying on the host: run `pip3 install -r requirements.txt`.
   If deploying in Docker: nothing to do in this step.
4. Run `build.py` and follow the instructions.
5. Run the service:
   - If deploying in Docker: open a terminal in `/dcta/` and run `docker compose up -d`.
   - If deploying on the host: open a terminal in `/dcta/` and run `run.py`.
6. Enjoy~
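Once the service is up, you can try a request against it. This sketch assumes ChatGPT-to-API exposes an OpenAI-style `/v1/chat/completions` route and that you kept the host and port from `run.py` as `127.0.0.1:8080`; both are assumptions, so adjust them to your setup:

```python
import json
import urllib.request

# Assumed endpoint: host/port from run.py plus an OpenAI-style route.
url = "http://127.0.0.1:8080/v1/chat/completions"

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
}
body = json.dumps(payload).encode("utf-8")

req = urllib.request.Request(
    url,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once the service is actually running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```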
- Q: What if the OpenAI access_token expires?
  A: When this happens, requests to the fake API return a `500` status code from ChatGPT-to-API. I use Python to detect this status code, then regenerate the access_token and restart ChatGPT-to-API automatically.
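The restart logic above can be sketched as a small watchdog loop. The function names and the "probe" callback here are hypothetical, not the repo's actual API; only the trigger condition (a `500` status means the token expired) comes from this README:

```python
import time

def action_for(status_code: int) -> str:
    """Map a probe's response status to the supervisor's next action.

    A 500 from ChatGPT-to-API is treated as an expired access_token,
    per the FAQ above; anything else leaves the service running.
    """
    if status_code == 500:
        # The real script would regenerate the access_token here
        # and then restart ChatGPT-to-API.
        return "regenerate-and-restart"
    return "keep-running"

def watch(probe, checks: int = 3, interval: float = 0.0) -> list:
    """Poll the fake API via `probe()` (a hypothetical callback that
    returns an HTTP status code) and collect the decided actions."""
    actions = []
    for _ in range(checks):
        actions.append(action_for(probe()))
        time.sleep(interval)  # in real use, e.g. interval=60.0
    return actions
```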