Effibot: A ChatGPT server with tree-based data structure for efficient context management, providing a mind map-like Q&A experience.

  • By David Wang
  • Last update: Jul 26, 2023
  • Comments: 2



A ChatGPT server that stores and processes data using a tree-based data structure, providing users with a mind map-like Q&A experience with ChatGPT. The tree structure greatly optimizes the transmission of context (tokens) and provides a better experience when used within a company.


The image shows a Demo client; the UI is for reference only.

In work scenarios, deeply nested follow-up questions on the same topic are relatively rare, so in most cases the token count can be kept under 2000. GPT-3.5's token limit (4096) is therefore sufficient, and there is no need to move to GPT-4 just for context capacity.


Less than 5 minutes elapsed between the two screenshots. Because multiple users were active, detailed logs are needed to attribute token consumption to each of the five questions mentioned, but overall token consumption is clearly under control.


The demo environment is deployed on a cloud server with no OpenAI token set, so it runs in mock mode.

📢Update Plan

  • User login
  • Data persistence storage
  • Quick deployment script, uploaded to Docker Hub
  • Add support for writing long-form fiction scenarios

Updates will be made as needed: more frequently if the project is widely used, and based on interest if it has fewer users.

Feel free to develop a Web UI based on this project! The UI in the demo was written by me, a beginner in UI design; PRs are welcome!


User input is organized into a multi-branch tree, and only the content of the current branch is passed to GPT as context. The amount of content transmitted each time therefore equals the depth of the current node. The multi-branch tree optimizes which context is selected and transmitted.

A balanced binary tree with n nodes has a depth of O(log n); this depth corresponds to the amount of context we need to pass to the GPT API. If we don't process the context at all, the conversation is effectively a degenerate tree that collapses into a single line of turns, which is naturally the worst case. By organizing the session into a tree structure, we also obtain a mind map.
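The branch-only context idea can be sketched in Go. This is an illustrative sketch, not effibot's actual internal types: each Q&A turn keeps a parent pointer, and the prompt is assembled by walking from the current node back to the root, so its length equals the node's depth rather than the size of the whole conversation.

```go
package main

import (
	"fmt"
	"strings"
)

// Node is one Q&A turn in the conversation tree (hypothetical sketch,
// not effibot's real data structure).
type Node struct {
	Question string
	Answer   string
	Parent   *Node
	Children []*Node
}

// Ask adds a child turn under n and returns it.
func (n *Node) Ask(question string) *Node {
	child := &Node{Question: question, Parent: n}
	n.Children = append(n.Children, child)
	return child
}

// Context walks parent pointers up to the root, so only the current
// branch is sent to the GPT API; sibling branches are skipped entirely.
func (n *Node) Context() []string {
	var branch []*Node
	for cur := n; cur != nil; cur = cur.Parent {
		branch = append([]*Node{cur}, branch...) // prepend: root first
	}
	msgs := make([]string, 0, len(branch))
	for _, t := range branch {
		msgs = append(msgs, "Q: "+t.Question)
	}
	return msgs
}

func main() {
	root := &Node{Question: "What is a B-tree?"}
	left := root.Ask("How does insertion work?")
	root.Ask("How does it compare to an LSM tree?") // sibling branch
	leaf := left.Ask("What about node splits?")

	// Only the current branch (depth 3) is transmitted; the sibling is not.
	fmt.Println(strings.Join(leaf.Context(), " | "))
}
```

With five branches of three questions each, a naive linear history would resend all fifteen turns, while the tree sends at most three.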

Environment Notice

It is recommended to choose a server location in a country or region supported by OpenAI. Data centers and cloud hosts are both acceptable, and the following clouds have been tested:

  1. Azure
  2. AWS

If you insist on testing from an unsupported country or region, this project fully supports proxies, but the proxy itself may affect the experience and pose risks. See Spec.GPT.TransportUrl in the configuration file for proxy settings.

The use of proxies is not recommended. Use at your own risk.
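For reference, routing requests through a proxy in Go typically looks like the sketch below. This is an illustrative example, not effibot's actual code path; it only assumes that Spec.GPT.TransportUrl holds the proxy URL passed in.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// newProxiedClient builds an HTTP client that sends all requests through
// the given proxy URL (e.g. the value of Spec.GPT.TransportUrl).
func newProxiedClient(transportURL string) (*http.Client, error) {
	proxy, err := url.Parse(transportURL)
	if err != nil {
		return nil, err
	}
	return &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyURL(proxy)},
	}, nil
}

func main() {
	client, err := newProxiedClient("http://localhost:4002")
	if err != nil {
		panic(err)
	}
	fmt.Println(client.Transport != nil) // prints "true"
}
```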

Clone and enter project directory

git clone https://github.com/finishy1995/effibot.git
cd effibot


The default configuration is mock mode, which means it will not actually call the GPT API but will return the user's input as the response. The default REST API port is 4001, and all settings can be modified in the http/etc/http-api.yaml file.

vi http/etc/http-api.yaml
Name: http-api
Port: 4001 # Port of the HTTP server, default 4001
Timeout: 30000 # Timeout of HTTP requests, default 30000 (ms)
Log:
  Level: debug
  Mode: file # Log mode, default console; options: console or file
  Path: ../logs # Log file path, default ../logs
GPT:
#  Token: "sk-" # OpenAI token; mock mode is used if not set
#  TransportUrl: "http://localhost:4002" # OpenAI proxy URL; no proxy if not set
  Timeout: 20s # Timeout of OpenAI requests, default 20s
  MaxToken: 1000 # Max tokens of an OpenAI response, default 1000

After modifying the file, if you want one-click deployment or container deployment, execute the following commands:

mkdir -p ./effibot_config
cp http/etc/http-api.yaml ./effibot_config

One-click deployment

Please ensure that docker and docker-compose are correctly installed and enabled.

docker-compose up -d

The demo client will run on port 4000, and the REST API will run on both ports 4000 and 4001.

If you don't have docker-compose, you can use the following command:

docker network create effibot
docker run -p 4001:4001 -v $(pwd)/effibot_config:/app/etc --network effibot --name effibot -d finishy/effibot:latest
docker run -p 4000:4000 --network effibot --name effibot-demo -d finishy/effibot-demo:latest

Custom Development/Deployment

Server Development

Ensure that Go 1.18+ is installed and configured.

cd http
go run http.go # go build http.go && ./http

Exit the directory

cd ..

Container Packaging

docker build -t effibot:latest -f http/Dockerfile .

Container Network Creation

docker network create effibot

Container Deployment

# Modify the configuration file as needed, e.g. add the OpenAI token and change the log mode to console
docker run -p 4001:4001 -v $(pwd)/effibot_config:/app/etc --network effibot --name effibot -d effibot:latest

Demo Client Container Packaging

docker build -t effibot-demo:latest -f demo/Dockerfile .

Demo Client Container Deployment

docker run -p 4000:4000 --network effibot --name effibot-demo -d effibot-demo:latest

Demo Client Development

The demo client is built with Vue.js + Vite + TypeScript and requires a Node.js 14+ environment.

cd demo
yarn && yarn dev

The demo client will automatically open at http://localhost:5173.




  • 1


    1. A question for the author: does this project use a single API token for all calls? If a company with hundreds of employees frequently calls the same paid API, is there a high risk of the account being banned?
    2. One more thing to share: this npm package is also quite good. It wraps conversation requests and tracks a conversation by passing parentMessageId to the API: https://www.npmjs.com/package/chatgpt
  • 2


    https://github.com/achousa/gpt-graph

    That project does some similar work. A few of my thoughts, which may not be fully formed:

    1. Use the initial Q&A to automatically build the mind-map skeleton, as that project does (this can be made optional rather than manual, so only minor adjustments are needed)
    2. Then freely pick a node to continue asking; when asking, GPT automatically brings in the parent node's information and outputs a further skeleton
    3. Allow the skeleton content to be adjusted freely at any time, i.e. manual tuning. I also have another idea: use a directed-graph approach to compress the existing data into a prompt, used as a preset part of each question.