Last Stop is more than just DLP. On top of increased security visibility, it gives individuals and organizations a single, self-hosted place to send all of their LLM prompts.

  • By Circulate
  • Last update: Apr 12, 2023

Last Stop

Welcome to the Last Stop for all of your LLM prompts

Last Stop benefits the individual and the organization

  • Deploy within minutes (individuals & organizations benefit!)
    • Check out our installation notes below; it only takes a few minutes to get going!
  • Cheaper usage of ChatGPT (individuals & organizations benefit!)
    • Why pay $20 a month when you can pay per request and host it yourself at a fraction of the cost*? (*Naturally, this depends on individual usage, but most will likely save money and gain security.)
  • Data Loss Prevention (DLP)
    • Organizations benefit from maintaining their own instance of ChatGPT on their servers
    • Allow your employees to access ChatGPT without bringing their own accounts
    • Monitor for potential data leaks (emails, names, code, etc.), then sanitize the requests, block them, or anything in between
  • Build a company corpus
    • A corpus is a collection of data, such as the prompts and the responses
    • Share prompts among the team so nobody has to search the same thing twice!
    • Provides better, quicker results and faster knowledge sharing
  • Gain insights into common prompts and train your employees based on the commonalities
  • API Security

The best benefit of all? This is intended to run entirely in your own network or on your own device!
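The detect-then-sanitize-or-block flow described above can be sketched with simple pattern matching. This is a minimal illustration, not Last Stop's actual rules: the regexes, class names, and redaction format below are all placeholders.

```python
import re

# Illustrative patterns for the kinds of identifiers mentioned above
# (emails, SSN-like numbers); a real deployment would use richer detectors.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(prompt: str) -> dict:
    """Return which pattern classes matched, so the proxy can decide to
    sanitize, block, or pass the request through."""
    hits = {name: pat.findall(prompt) for name, pat in PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

def sanitize(prompt: str) -> str:
    """Replace every match with a [REDACTED:<class>] placeholder."""
    for name, pat in PATTERNS.items():
        prompt = pat.sub(f"[REDACTED:{name}]", prompt)
    return prompt

if __name__ == "__main__":
    p = "Contact alice@example.com about ticket 123-45-6789."
    print(scan_prompt(p))
    print(sanitize(p))
```

The same scan result could drive any policy in between the extremes: log only, redact and forward, or reject the request outright.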

Our mission

Last Stop's mission is to provide accessibility and security to the individual and to the organization. Second to that is providing a quality experience with all LLMs in one location: query one or query them all (as more APIs are released, of course).

Example image of LLM functionality

Why we started this

Countries and organizations are banning ChatGPT altogether, but we believe that there is a happy medium. New innovation should not be stifled but encouraged safely, and that's exactly what we're here to empower.

Don't be like the cautionary examples making headlines.

Have more questions? Reach out to us through our community channels.

How to deploy on your local machine

  1. To deploy for the first time, you need Docker (with the Docker Compose plugin) installed
  2. Add your OpenAI API key to the lib/docker-compose.yml file, at OPENAI_APIKEY=
  3. In /lib, run docker compose up --build to spin up the environment
  4. Navigate to localhost:8080 to begin using the UI
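Under the assumptions in the steps above (Docker with the Compose plugin as the only dependency, and the key living in lib/docker-compose.yml), a first deploy could be scripted roughly like this:

```shell
#!/bin/sh
# Sketch of the install steps above; the compose file path and env var
# name come from the README, everything else is an assumption.
set -e
command -v docker >/dev/null || { echo "Docker is required"; exit 1; }
# Put your key in lib/docker-compose.yml first:
#   environment:
#     - OPENAI_APIKEY=sk-...
cd lib
docker compose up --build -d
echo "UI available at http://localhost:8080"
```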

How to deploy to the cloud

Coming soon - starting with AWS. If you would like to see more cloud configurations just let us know.

Current Status & Roadmap

Immediate concerns:

- Responsive web design
- Build infrastructure as code to deploy to cloud

We plan to:

- Build in-network ML solutions for DLP detection
    - e.g. token classification for names, emails, code, etc.
- Build in-network ML solutions for data sanitization
- Provide a data store for organizations to build knowledge bases
- Provide an API layer for organizations to leverage for internal usage
- Continue to build a quality Open Source UI experience
- Build a mobile experience
- Much more - feel free to create an issue


Can I use GPT-4?

Yes, but first you must apply to OpenAI's GPT-4 API waitlist; it is not generally available yet.

Can I use Bard or Anthropic?

We have applied for access to their APIs so we can begin building for these LLMs. We look forward to a growing market of models and will support more as they are released.



  • 1

    Restructure project to docker containers and servers

    Create a cluster of services that can be deployed on any cloud by using docker, then providing cloud-specific IaC or the ability to host on a machine using docker-compose

  • 2

    Consider system rearchitect to move website onto public subnet

    Currently the website resides in S3 behind CloudFront. While the internet is unable to access the website, this still leaves a risk of exposure. It may be better to move frontend hosting into Elastic Beanstalk for a higher level of security.

    API Gateway is secured within the public subnet and cannot be accessed externally

  • 3

    Infrastructure TODO

    • Complete modules (audit-log, gpt3-cc, last-stop), for each module...
      • Add necessary outputs
      • Provision in
    • Adjust to refer to module outputs correctly
    • Complete elastic beanstalk private deploy
    • Complete API GW private deploy
  • 4

    Error Handling in SFN

    Steps that do not return an API Gateway response will output a failure message. This failure message should be handled in the SFN (Step Functions state machine) so that a response is still returned to the frontend
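    One way to do this is an Amazon States Language `Catch` that routes any failure to a state shaping a frontend-friendly payload. This is a hedged sketch, not the project's actual state machine: state names, ARNs, and fields below are assumptions.

    ```python
    import json

    # Sketch: a Task state with a Catch so failures flow to a Pass state
    # that formats an error response instead of killing the execution.
    state_machine = {
        "StartAt": "DataFiltering",
        "States": {
            "DataFiltering": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:data-filtering",
                "Catch": [{
                    "ErrorEquals": ["States.ALL"],
                    "ResultPath": "$.error",
                    "Next": "FormatFailure",
                }],
                "Next": "CallModel",
            },
            "FormatFailure": {
                "Type": "Pass",
                "Parameters": {"statusCode": 500, "body.$": "$.error.Cause"},
                "End": True,
            },
            "CallModel": {"Type": "Succeed"},
        },
    }

    print(json.dumps(state_machine, indent=2))
    ```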

  • 5

    Data Filtering Step

    This step will initially store the request in the LastStopAuditLog table in DynamoDB, as well as in a LastStopConversation table.

    The LastStopAuditLog ID will be passed into the next step to begin validation checks of the data being submitted.
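    As a sketch, the first action of this step might build the audit item like so. Only the table names come from the issue text above; the attribute names and helper are assumptions.

    ```python
    import time
    import uuid

    # Hypothetical helper: build the item written to LastStopAuditLog.
    # The generated audit_id is what gets passed to the next step for
    # the validation checks described above.
    def build_audit_item(conversation_id: str, prompt: str) -> dict:
        return {
            "audit_id": str(uuid.uuid4()),
            "conversation_id": conversation_id,
            "prompt": prompt,
            "created_at": int(time.time()),
        }

    # With boto3, the write itself would look roughly like:
    #   import boto3
    #   table = boto3.resource("dynamodb").Table("LastStopAuditLog")
    #   table.put_item(Item=build_audit_item(cid, prompt))

    if __name__ == "__main__":
        print(sorted(build_audit_item("conv-1", "hello")))
    ```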

  • 6

    Prompt handling and sharing

    • Ability to share the initial conversation, or a compiled conversation, with others as the new starting point
    • Look into Langchain for opening up other possibilities with these prompts
  • 7

    Create IaC for AWS Cloud Deployment

    This deployment will leverage RDS and ideally Elastic Beanstalk.

    Ultimately, the goal is for an organization to assign a Route 53 address to the frontend and host it internally. From there, companies will be able to redirect any requests to their internally hosted chat instance.

  • 8

    Use ChatGPT to name conversations

    In order to have better naming conventions for each conversation, fork a request to ChatGPT that names the conversation something relevant, so the user can find and reflect on it later
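    A hedged sketch of that forked naming request, as a payload for OpenAI's chat completions endpoint. The system prompt, model choice, and token limit below are illustrative, not the project's settings.

    ```python
    import json

    # Hypothetical helper: build the request body asking the model for a
    # short title based on the conversation's opening message.
    def build_naming_request(first_user_message: str) -> dict:
        return {
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system",
                 "content": "Reply with a short (<= 6 word) title for this conversation."},
                {"role": "user", "content": first_user_message},
            ],
            "max_tokens": 16,
        }

    # The payload would be POSTed to https://api.openai.com/v1/chat/completions
    # using the same OPENAI_APIKEY as the rest of the stack, in parallel with
    # the user's real request.

    if __name__ == "__main__":
        print(json.dumps(build_naming_request("How do I rotate AWS keys?"), indent=2))
    ```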