Last update: Apr 27, 2023

GPTBot

Question Answering Bot powered by OpenAI GPT models.

Installation

$ go get -u github.com/go-aie/gptbot

Quick Start

package main

import (
    "context"
    "fmt"
    "os"

    "github.com/go-aie/gptbot"
)

func main() {
    ctx := context.Background()
    apiKey := os.Getenv("OPENAI_API_KEY")
    encoder := gptbot.NewOpenAIEncoder(apiKey, "")
    store := gptbot.NewLocalVectorStore()

    // Feed documents into the vector store.
    feeder := gptbot.NewFeeder(&gptbot.FeederConfig{
        Encoder: encoder,
        Updater: store,
    })
    err := feeder.Feed(ctx, &gptbot.Document{
        ID:   "1",
        Text: "Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model released in 2020 that uses deep learning to produce human-like text. Given an initial text as prompt, it will produce text that continues the prompt.",
    })
    if err != nil {
        fmt.Printf("err: %v\n", err)
        return
    }

    // Chat with the bot to get answers.
    bot := gptbot.NewBot(&gptbot.BotConfig{
        APIKey:  apiKey,
        Encoder: encoder,
        Querier: store,
    })

    question := "When was GPT-3 released?"
    answer, _, err := bot.Chat(ctx, question)
    if err != nil {
        fmt.Printf("err: %v\n", err)
        return
    }
    fmt.Printf("Q: %s\n", question)
    fmt.Printf("A: %s\n", answer)

    // Output:
    //
    // Q: When was GPT-3 released?
    // A: GPT-3 was released in 2020.
}

NOTE:

  • The above example uses a local vector store. If you have a larger dataset, please consider using a vector search engine (e.g. Milvus).
  • With the help of GPTBot Server, you can even upload documents as files and then start chatting via HTTP!

Design

GPTBot is an implementation of the method demonstrated in Question Answering using Embeddings.

[architecture diagram]
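At the heart of this method is a retrieval step: encode the question as a vector, rank the stored chunk embeddings by cosine similarity, and feed the top matches to the model as context. A minimal self-contained sketch of that ranking step (illustrative only, not GPTBot's implementation):

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// cosine returns the cosine similarity between two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

type chunk struct {
	Text      string
	Embedding []float64
}

// topK sorts chunks by similarity to the query embedding and returns the best k.
func topK(query []float64, chunks []chunk, k int) []chunk {
	sort.Slice(chunks, func(i, j int) bool {
		return cosine(query, chunks[i].Embedding) > cosine(query, chunks[j].Embedding)
	})
	if k > len(chunks) {
		k = len(chunks)
	}
	return chunks[:k]
}

func main() {
	// Toy 3-dimensional embeddings; real encoders produce much longer vectors.
	chunks := []chunk{
		{"GPT-3 was released in 2020.", []float64{0.9, 0.1, 0.0}},
		{"Go is a statically typed language.", []float64{0.0, 0.2, 0.9}},
	}
	query := []float64{0.8, 0.2, 0.1} // stands in for the embedded question
	best := topK(query, chunks, 1)
	fmt.Println(best[0].Text) // GPT-3 was released in 2020.
}
```

The top-ranked chunks are then interpolated into the prompt, which is what lets the bot answer from your documents rather than from the model's training data alone.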

Core Concepts

| Concept      | Description                                            | Built-in Support                        |
|--------------|--------------------------------------------------------|-----------------------------------------|
| Preprocessor | Preprocesses the documents by splitting them into chunks. | Preprocessor (customizable)          |
| Encoder      | Creates an embedding vector for each chunk.            | OpenAIEncoder (customizable)            |
| VectorStore  | Stores and queries document chunk embeddings.          | LocalVectorStore, Milvus (customizable) |
| Feeder       | Feeds the documents into the vector store.             | /                                       |
| Bot          | Question answering bot to chat with.                   | /                                       |
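To make the Preprocessor's job concrete, here is a naive word-based splitter. Real preprocessors typically split by token count rather than word count; the `chunkSize` and `overlap` parameters below are illustrative assumptions, not GPTBot's defaults:

```go
package main

import (
	"fmt"
	"strings"
)

// splitIntoChunks cuts a document into overlapping word-based chunks so each
// piece fits within an encoder's input limit. Overlap preserves context that
// would otherwise be lost at chunk boundaries. Requires chunkSize > overlap.
func splitIntoChunks(text string, chunkSize, overlap int) []string {
	words := strings.Fields(text)
	var chunks []string
	step := chunkSize - overlap
	for start := 0; start < len(words); start += step {
		end := start + chunkSize
		if end > len(words) {
			end = len(words)
		}
		chunks = append(chunks, strings.Join(words[start:end], " "))
		if end == len(words) {
			break
		}
	}
	return chunks
}

func main() {
	doc := "one two three four five six seven eight"
	for i, c := range splitIntoChunks(doc, 4, 1) {
		fmt.Printf("chunk %d: %s\n", i, c)
	}
}
```

Each resulting chunk is then handed to the Encoder, and the chunk/embedding pairs go into the VectorStore via the Feeder.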

License

MIT

Comments(2)

  • 1

    Question on LocalVectorStore embeddings.

    Hello,

    I'm trying to wrap my head around the LocalVectorStore. What I know is that it's primarily intended for small-scale document sets, and that it can load embeddings from disk in JSON format.

    But it seems to lack the ability to serialize ("save") a store to disk in JSON format, right?

    Doesn't it make sense to persist embeddings across runs of the application so that you don't have to recreate them each time?

    If you think this makes sense I could probably help contribute this ability and write the code to do that.

    Thanks,

    -deckarep

  • 2

    Adds a StoreJSON function to the LocalVectorStore, with unit tests

    Hello @RussellLuo ,

    Per issue #1, here is my contribution: a StoreJSON function on the LocalVectorStore, which allows serializing the data to the file system for starters. Let me know if this looks good or if you'd like any changes.

    Thanks for the input and allowing this contribution!

    Cheers,

    -Deckarep