Azure OpenAI Service Proxy. Converts official OpenAI API requests into Azure OpenAI API requests. Supports GPT-4 and Embeddings.

  • By Zhiqiang Li
  • Last update: Aug 15, 2023
  • Comments: 15

azure-openai-proxy



Azure OpenAI Service Proxy converts official OpenAI API requests into Azure OpenAI API requests. It supports all models, including GPT-4 and Embeddings.

(Architecture diagram: aoai-proxy.jpg)

Verified supported projects:

  • chatgpt-web
  • chatbox
  • langchain

Getting Started

Retrieve key and endpoint

To successfully make a call against Azure OpenAI, you'll need the following:

  • AZURE_OPENAI_ENDPOINT: This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal. Alternatively, you can find it in Azure OpenAI Studio > Playground > Code View. An example endpoint is https://docs-test-001.openai.azure.com/. Default: none (required).
  • AZURE_OPENAI_API_VER: The Azure OpenAI API version; see the Azure OpenAI documentation or Azure OpenAI Studio. Default: 2023-03-15-preview.
  • AZURE_OPENAI_MODEL_MAPPER: The mapping from official OpenAI model names to the custom names you chose for your deployments. Deployment names can be found under Resource Management > Deployments in the Azure portal, or under Management > Deployments in Azure OpenAI Studio. Default: gpt-3.5-turbo=gpt-35-turbo.
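
Roughly speaking, the proxy combines these three values into an upstream Azure request of the form below, with the deployment name looked up through the model mapper. This is the same pattern that appears in the proxy's "proxying request" log lines:

<AZURE_OPENAI_ENDPOINT>/openai/deployments/<deployment name>/chat/completions?api-version=<AZURE_OPENAI_API_VER>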

AZURE_OPENAI_MODEL_MAPPER maps official OpenAI model names to the names of the models you have deployed in Azure OpenAI. You can separate multiple mappings with commas.

Format:

AZURE_OPENAI_MODEL_MAPPER: <OpenAI Model Name>=<Azure OpenAI deployment model name>

OpenAI Model Names: https://platform.openai.com/docs/models

Azure Deployment Names: Resource Management > Deployments

Example:

AZURE_OPENAI_MODEL_MAPPER: gpt-3.5-turbo=azure-gpt-35
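
To map several models at once, chain the pairs with commas. The deployment names in this example are illustrative, not defaults:

AZURE_OPENAI_MODEL_MAPPER: gpt-3.5-turbo=azure-gpt-35,gpt-4=azure-gpt-4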

(Screenshot: the overview UI for an Azure OpenAI resource in the Azure portal, with the endpoint and access key locations circled in red.)

API Key: This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal. You can use either KEY1 or KEY2.
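
Before putting the proxy in front, it can help to sanity-check the key, endpoint, and deployment name with a direct call to Azure. This is a minimal sketch using the placeholder endpoint and example deployment name from the table above; substitute your own values:

curl "https://docs-test-001.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-03-15-preview" \
  -H "api-key: <KEY1 or KEY2>" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'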

Use Docker

# configure via environment variables
docker run -d -p 8080:8080 --name=azure-openai-proxy \
  --env AZURE_OPENAI_ENDPOINT=your_azure_endpoint \
  --env AZURE_OPENAI_API_VER=your_azure_api_ver \
  --env AZURE_OPENAI_MODEL_MAPPER=your_azure_deploy_mapper \
  stulzq/azure-openai-proxy:latest

# configure via a config file
docker run -d -p 8080:8080 --name=azure-openai-proxy \
  -v /path/to/config-file.yaml:/app/config/config.yaml \
  stulzq/azure-openai-proxy:latest
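
To confirm the container started and parsed your settings, check its logs. On startup the proxy prints the configured endpoint, the parsed model mapper, and the listen port, roughly like the lines below (a sketch based on the logs quoted in the comments; your values will differ):

docker logs azure-openai-proxy
# AzureOpenAIEndpoint:  https://docs-test-001.openai.azure.com/
# AzureOpenAIModelMapper:  map[gpt-3.5-turbo:gpt-35-turbo]
# Server listening at :8080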

Call the API:

curl --location --request POST 'localhost:8080/v1/chat/completions' \
-H 'Authorization: Bearer <Azure OpenAI Key>' \
-H 'Content-Type: application/json' \
-d '{
    "max_tokens": 1000,
    "model": "gpt-3.5-turbo",
    "temperature": 0.8,
    "top_p": 1,
    "presence_penalty": 1,
    "messages": [
        {
            "role": "user",
            "content": "Hello"
        }
    ],
    "stream": true
}'
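
Embeddings requests are supported as well and go through the same proxy. This sketch assumes you have an Azure embedding deployment and have added a mapping for it (for example text-embedding-ada-002=your-embedding-deployment, which is illustrative, not a default):

curl --location --request POST 'localhost:8080/v1/embeddings' \
-H 'Authorization: Bearer <Azure OpenAI Key>' \
-H 'Content-Type: application/json' \
-d '{
    "model": "text-embedding-ada-002",
    "input": "Hello"
}'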

Use ChatGPT-Web

ChatGPT Web: https://github.com/Chanzhaoyu/chatgpt-web


Environment variables:

  • OPENAI_API_KEY: Azure OpenAI API key
  • AZURE_OPENAI_ENDPOINT: Azure OpenAI API endpoint
  • AZURE_OPENAI_MODEL_MAPPER: Azure OpenAI deployment name mappings

docker-compose.yml:

version: '3'

services:
  chatgpt-web:
    image: chenzhaoyu94/chatgpt-web
    ports:
      - 3002:3002
    environment:
      OPENAI_API_KEY: <Azure OpenAI API Key>
      OPENAI_API_BASE_URL: http://azure-openai:8080
      # OPENAI_API_MODEL: gpt-4
      AUTH_SECRET_KEY: ""
      MAX_REQUEST_PER_HOUR: 1000
      TIMEOUT_MS: 60000
    depends_on:
      - azure-openai
    links:
      - azure-openai
    networks:
      - chatgpt-ns

  azure-openai:
    image: stulzq/azure-openai-proxy
    ports:
      - 8080:8080
    environment:
      AZURE_OPENAI_ENDPOINT: <Azure OpenAI API Endpoint>
      AZURE_OPENAI_MODEL_MAPPER: <Azure OpenAI API Deployment Mapper>
      AZURE_OPENAI_API_VER: 2023-03-15-preview
    networks:
      - chatgpt-ns

networks:
  chatgpt-ns:
    driver: bridge

Run:

docker compose up -d
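
If the web UI returns errors, watching the proxy's logs usually shows what is going on. A successful request produces a "proxying request" line mapping the OpenAI path to your Azure deployment, in the same form as the logs quoted in the comments below (the URL here is illustrative):

docker compose logs -f azure-openai
# proxying request [gpt-3.5-turbo] /v1/chat/completions -> https://<your-resource>.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-03-15-preview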

Download

azure-openai-proxy.zip

Comments (15)

  • 1

    Supportive, but there is one issue

    I'm also an Azure user. For https://github.com/Chanzhaoyu/chatgpt-web I previously implemented the Azure interface myself in Python, and the idea behind this project is very good. But after reading the code, I think one major issue is still unsolved. The biggest difference between the official OpenAI API and Azure is the prompt: the official API takes a JSON array, while Azure takes a single string.

    Official example: message=[{"role": "user", "content": prompt},{"role": "assistant", "content": resp}]

    Azure example: prompt=<|im_start|>user\n prompt\n<|im_end|>\n<|im_start|>assistant\n resp\n<|im_end|>\n

    Azure also requires setting stop=["<|im_end|>"]. I hope this can be addressed; if it works well, I'll switch over to this project too.

  • 2

    Unable to use GPT-4

    I get an error when trying to use an Azure OpenAI GPT-4 deployment.

    The docker-compose.yml is configured with: AZURE_OPENAI_MODEL_MAPPER: gpt-4=gpt4

    After startup there are two key-value pairs in the mapper (the first is the default, the second is the one set in the yml), as shown in the screenshot below.

    When I start a conversation, the error is: undefined [ChatGPT error 404: {"error":{"code":"DeploymentNotFound", "message":"The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."}}]

    After several attempts I found that if AZURE_OPENAI_MODEL_MAPPER is set with a key other than "gpt-3.5-turbo", the mapper ends up with an extra key-value pair instead of the default first pair being replaced, and the program always uses the value of the first pair. That value has no matching resource name in my Azure account, hence the resource-not-found error. If I set the model mapper to AZURE_OPENAI_MODEL_MAPPER: gpt-3.5-turbo=gpt4, there is only one key-value pair and conversations work, but judging from the program output it is still using the gpt-3.5 model rather than gpt-4.

  • 3

    Add retry logic for errors

    The Azure API is quite unstable and frequently returns errors such as {"error":{"code":"DeploymentNotFound", "message":"The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."}} or "invalid key", and then works again a moment later. I suggest adding retries on the reverse-proxy side: if the response status is not 200, retry a few times.

  • 4

    gpt-35-turbo works, but text-davinci-003 doesn't. Looking for help

    I deployed two models: gpt-35-turbo, named gpt35, and text-davinci-003, named davinci3.

    In my yaml configuration I first tried AZURE_OPENAI_MODEL_MAPPER: gpt-3.5-turbo=gpt35, which runs fine.

    Then I changed that line to AZURE_OPENAI_MODEL_MAPPER: text-davinci-003=davinci3, and chatting fails with: ChatGPT error 404: {"error":{"code":"DeploymentNotFound", "message":"The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."}}

    How can I fix this? I checked the OpenAI docs and the name should be correct. I tested again a few hours later with the same result.

  • 5

    ChatGPT error 404: {"error":{"code":"404","message": "Resource not found"}}

    Hello, I'd like to ask for some advice.

    My setup basically follows your yaml file; I only changed the four parameters below.

    AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_MODEL_MAPPER, AZURE_OPENAI_API_VER (2023-03-15-preview), and OPENAI_API_KEY.

    Changed to:

    environment:
      OPENAI_API_KEY: 500000dedefr98999900
      OPENAI_API_BASE_URL: (unchanged)
      AUTH_SECRET_KEY: ""
      MAX_REQUEST_PER_HOUR: 1000
      TIMEOUT_MS: 60000

    environment:
      AZURE_OPENAI_ENDPOINT: (the myname endpoint URL)
      AZURE_OPENAI_MODEL_MAPPER: gpt-3.5-turbo-16k=gpt-35-turbo
      AZURE_OPENAI_API_VER: 2023-05-15

    Result: the web page runs, but when I type hello, GPT returns: ChatGPT error 404: {"error":{"code":"404","message": "Resource not found"}}

    Could anyone offer some advice? Much appreciated.

  • 6

    Connection refused: connect ECONNREFUSED 127.0.0.1:8080

    After docker compose up -d, sending a message on the ChatGPT web page gets no reply; the error is: request to http://127.0.0.1:8080/v1/chat/completions failed, reason: connect ECONNREFUSED 127.0.0.1:8080. Could someone check whether my settings are wrong? Thanks a lot! The yml is as follows:

    services:
      chatgpt-web:
        image: chenzhaoyu94/chatgpt-web
        ports:
          - 3002:3002
        environment:
          OPENAI_API_KEY: xxxxxxxxxxxx
          OPENAI_API_BASE_URL: http://localhost:8080
          OPENAI_API_MODEL: "gpt-3.5-turbo"
          AUTH_SECRET_KEY: "Bearer"
          MAX_REQUEST_PER_HOUR: 1000
          TIMEOUT_MS: 60000
        depends_on:
          - azure-openai
        links:
          - azure-openai
        networks:
          - chatgpt-ns
    
      azure-openai:
        image: stulzq/azure-openai-proxy
        ports:
          - 8080:8080
        environment:
          AZURE_OPENAI_ENDPOINT: https://xxx.openai.azure.com
          AZURE_OPENAI_MODEL_MAPPER: gpt-3.5-turbo=gpt-35-turbo
          AZURE_OPENAI_API_VER: 2023-05-15
        networks:
          - chatgpt-ns
    
    networks:
      chatgpt-ns:
        driver: bridge
    
  • 7

    It deployed successfully before, but a recent redeployment now errors; asking for help

    After a successful docker compose deployment, chatting fails with ChatGPT error 404: {"error":{"code":"404","message": "Resource not found"}}. It says the resource cannot be found, but my Azure OpenAI resource is fine.

  • 8

    GPT-4 support

    Hi, I've been granted trial access to the GPT-4 model on Azure, but after finishing the configuration I get an error. The docker-compose configuration is as follows:

    version: '3'

    services:
      chatgpt4-web:
        #image: chenzhaoyu94/chatgpt-web
        image: chenzhaoyu94/chatgpt-web:latest
        ports:
          - 3003:3002
        environment:
          OPENAI_API_KEY: xxxxxxx
          OPENAI_API_BASE_URL: http://10.18.72.88:8086
          AUTH_SECRET_KEY: ""
          MAX_REQUEST_PER_HOUR: 1000
          TIMEOUT_MS: 60000
        depends_on:
          - azure4-openai
        links:
          - azure4-openai
        networks:
          - chatgpt4-ns

      azure4-openai:
        image: ishadows/azure-openai-proxy
        ports:
          - 8086:8080
        environment:
          AZURE_OPENAI_ENDPOINT: https://xxxxxxx.openai.azure.com/
          AZURE_OPENAI_MODEL_MAPPER: gpt-4=zzr_test
          AZURE_OPENAI_API_VER: 2023-03-15-preview
        networks:
          - chatgpt4-ns

    networks:
      chatgpt4-ns:
        driver: bridge

    Error message: (screenshot)

    The proxy's error log:

    [GIN] 2023/04/17 - 08:33:45 | 404 | 1.018µs | 172.27.0.1 | GET "/v1/dashboard/billing/usage?start_date=2023-04-01&end_date=2023-04-30"
    2023/04/17 08:34:11 proxying request [gpt-3.5-turbo] /v1/chat/completions -> https://.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-03-15-preview
    [GIN] 2023/04/17 - 08:34:14 | 404 | 3.201908308s | 172.27.0.1 | POST "/v1/chat/completions"

    Could you advise how to resolve this?

  • 9

    "code":"404","message": "Resource not found"

    I deployed with Docker as instructed and tested the API with curl; the response is: {"error":{"code":"404","message": "Resource not found"}}. Could someone take a look at what's wrong? docker logs (the API key has been redacted):

    2023/06/16 16:35:34 AzureOpenAIAPIVer:  apikey
    2023/06/16 16:35:34 AzureOpenAIEndpoint:  https://useastgpt.openai.azure.com/
    2023/06/16 16:35:34 AzureOpenAIModelMapper:  map[gpt-3.5-turbo:gpt-35-turbo]
    2023/06/16 16:35:34 Server listening at :8080
    2023/06/16 16:42:46 proxying request [] /v1/chat/completions -> https://useastgpt.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=apikey
    [GIN] 2023/06/16 - 16:42:47 | 404 |    826.7726ms |      172.17.0.1 | POST     "/v1/chat/completions"
    2023/06/16 16:45:36 Server Shutdown...
    2023/06/16 16:45:36 Server exiting
    2023/06/16 17:28:13 AzureOpenAIAPIVer:  apikey
    2023/06/16 17:28:13 AzureOpenAIEndpoint:  https://useastgpt.openai.azure.com/
    2023/06/16 17:28:13 AzureOpenAIModelMapper:  map[gpt-3.5-turbo:gpt-35-turbo]
    2023/06/16 17:28:13 Server listening at :8080
    2023/06/16 17:28:25 proxying request [] /v1/chat/completions -> https://useastgpt.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=apikey
    [GIN] 2023/06/16 - 17:28:26 | 404 |  856.023736ms |      172.17.0.1 | POST     "/v1/chat/completions"
    
  • 10

    chatgpt-web doesn't seem to open after docker compose

    After docker compose finishes, the web UI on port 3002 won't open; docker log shows:

    > [email protected] prod /app
    > node ./build/index.mjs
    
     ELIFECYCLE  Command failed.
    

    Testing the proxy with curl works fine. How can I fix this?

  • 11

    what's AZURE_OPENAI_MODEL_MAPPER?

    The proxy doesn't work on my Azure server.

    If I use an OpenAI key it works fine, but if I use an Azure key through the proxy it doesn't.

    (screenshot)

    I think the problem is in the yaml file. (screenshot)

    What is AZURE_OPENAI_MODEL_MAPPER? How can I get it? Could you give me an example? Thanks a lot!

  • 12

    Possible to reverse what this repo is doing: Convert from Azure OpenAI to official OpenAI?

    Azure's OpenAI API is apparently more complicated and config/resource management is notoriously a pain. I didn't see why one would want to convert from OpenAI to Azure OpenAI. But is there a way to convert from Azure to official OpenAI?

  • 13

    [OpenAI] Incorrect API key provided

    Could someone check what's wrong with my configuration? It reports an incorrect API key. The Azure model name is gpt-4-32k, model version 0314. The API key and endpoint are definitely correct.

    version: '3'

    services:
      chatgpt-web:
        image: chenzhaoyu94/chatgpt-web
        ports:
          - 3002:3002
        environment:
          OPENAI_API_KEY: b77000000000000000000003
          #OPENAI_API_BASE_URL: http://azure-openai:8080
          OPENAI_API_MODEL: gpt-4-32k
          AUTH_SECRET_KEY: ""
          MAX_REQUEST_PER_HOUR: 1000
          TIMEOUT_MS: 60000
        depends_on:
          - azure-openai
        links:
          - azure-openai
        networks:
          - chatgpt-ns

      azure-openai:
        image: stulzq/azure-openai-proxy
        ports:
          - 8080:8080
        environment:
          AZURE_OPENAI_ENDPOINT: https://my.openai.azure.com/
          AZURE_OPENAI_MODEL_MAPPER: gpt-4-32k-0314=gpt-4-32k
          AZURE_OPENAI_API_VER: "2023-03-14"
        networks:
          - chatgpt-ns

    networks:
      chatgpt-ns:
        driver: bridge

  • 14

    Request: support for the DALL-E model

    Greetings. Please consider adding support for DALL-E. In Azure OpenAI, the API route is: https://xxx.openai.azure.com/openai/images/generations:submit?api-version=2023-06-01-preview

  • 15

    panic

    azure-openai-gpt4 | 2023/04/18 04:49:37 [Recovery] 2023/04/18 - 04:49:37 panic recovered:
    azure-openai-gpt4 | net/http: abort Handler
    azure-openai-gpt4 | net/http/httputil/reverseproxy.go:363 (0x30666f)
    azure-openai-gpt4 | github.com/stulzq/azure-openai-proxy/azure/proxy.go:65 (0x31cffb)
    azure-openai-gpt4 | github.com/gin-gonic/[email protected]/context.go:174 (0x30dc7b)
    azure-openai-gpt4 | github.com/gin-gonic/[email protected]/recovery.go:102 (0x30dc5c)
    azure-openai-gpt4 | github.com/gin-gonic/[email protected]/context.go:174 (0x30cf1b)
    azure-openai-gpt4 | github.com/gin-gonic/[email protected]/logger.go:240 (0x30cef8)
    azure-openai-gpt4 | github.com/gin-gonic/[email protected]/context.go:174 (0x30c01f)
    azure-openai-gpt4 | github.com/gin-gonic/[email protected]/gin.go:620 (0x30bd08)
    azure-openai-gpt4 | github.com/gin-gonic/[email protected]/gin.go:576 (0x30ba5b)
    azure-openai-gpt4 | net/http/server.go:2947 (0x22246b)
    azure-openai-gpt4 | net/http/server.go:1991 (0x21e9f3)
    azure-openai-gpt4 | runtime/asm_arm64.s:1172 (0x75343)