Last Stop
Welcome to the Last Stop for all of your LLM prompts
Last Stop benefits the individual and the organization
- Deploy within minutes (individuals & organizations benefit!)
  - Check out our installation notes below; it only takes a few minutes to get going!
- Cheaper usage of ChatGPT (individuals & organizations benefit!)
  - Why pay $20 a month when you can pay per request and host it yourself at a fraction of the cost*? (*Naturally, this depends on individual usage, but most users will likely save money and gain security.)
- Data Loss Prevention (DLP)
  - Organizations benefit from maintaining their own ChatGPT gateway on their own servers
  - Allow your employees to access ChatGPT without bringing their own accounts
  - Monitor for potential data leaks (emails, names, code, etc.), sanitize the requests, block them, or anything in between
- Build a company corpus
  - A corpus is a collection of data, such as prompts and their responses
  - Share prompts among the team so nobody has to search for the same thing twice!
  - Get better results faster, along with quicker knowledge sharing
  - Gain insights into common prompts and train your employees based on the commonalities
- API Security
  - OpenAI's APIs come with heightened security, including an explicit "opt-in" before your data can be shared. Don't be the product: use the APIs!
The best benefit of all? This is intended to run entirely in your own network or on your own device!
Our mission
Last Stop's mission is to provide accessibility and security to the individual and to the organization. Second to that is providing a quality experience with all LLMs in one location: query one or query them all (as more APIs are released, of course).
Why we started this
Countries and organizations are banning ChatGPT altogether, but we believe there is a happy medium. Innovation should not be stifled; it should be encouraged safely, and that's exactly what we're here to empower.
Don't be like these examples:
- 11% of the data that employees paste into ChatGPT contains sensitive information
- Samsung Proprietary Information Leak
Have more questions? Reach out to us in some of the following places:
How to deploy on your local machine
- To deploy for the first time, you must have the following dependencies:
  - An API key for ChatGPT
    - If you want us to retrieve and manage the API key on your behalf, let us know!
  - Docker (Rancher or Docker Desktop)
- Add your OpenAI API key to the `lib/docker-compose.yml` file, at
  ```
  OPENAI_APIKEY=
  ```
- In `/lib`, run
  ```
  docker compose up --build
  ```
  to spin up the environment
- Navigate to `localhost:8080` to begin using the UI
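For orientation, the relevant part of `lib/docker-compose.yml` might look like the sketch below; the service name, build settings, and port are illustrative assumptions (only the `OPENAI_APIKEY` variable comes from the step above):

```yaml
services:
  laststop:                              # hypothetical service name
    build: .
    ports:
      - "8080:8080"                      # UI reachable at localhost:8080
    environment:
      - OPENAI_APIKEY=sk-your-key-here   # paste your OpenAI API key here
```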
How to deploy to the cloud
Coming soon - starting with AWS. If you would like to see more cloud configurations just let us know.
Current Status & Roadmap
Immediate concerns:
- Responsive web design
- Build infrastructure as code to deploy to cloud
We plan to:
- Build in-network ML solutions for DLP detection
  - e.g., token classification for names, emails, code, etc.
- Build in-network ML solutions for data sanitization
- Provide a data store for organizations to build knowledge bases
- Provide an API layer for organizations to leverage for internal usage
- Continue to build a quality Open Source UI experience
- Build a mobile experience
- Much more - feel free to create an issue
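As a toy illustration of the planned in-network DLP detection and sanitization, a regex-based filter could look like the sketch below. The regex is a stand-in assumption for the eventual token-classification model, and the redaction policy is illustrative:

```python
import re

# Simple email pattern; a real deployment would use an ML token classifier
# covering names, code snippets, and other sensitive categories.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def sanitize_prompt(prompt: str) -> tuple[str, bool]:
    """Redact email addresses before the prompt leaves the network.

    Returns (sanitized_prompt, flagged) so callers can choose to block
    the request entirely instead of forwarding the sanitized version.
    """
    sanitized, hits = EMAIL_RE.subn("[REDACTED_EMAIL]", prompt)
    return sanitized, hits > 0
```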
FAQ:
Can I use GPT-4?
Yes, but first you must apply to the waitlist at `platform.openai.com`. It is not generally available yet.
Can I use Bard or Anthropic?
We have applied for access to their APIs so we can begin building for these LLMs. We look forward to a market of models and will support more as they are released.
Restructure project to docker containers and servers
Create a cluster of services that can be deployed on any cloud using Docker, then provide cloud-specific IaC or the ability to host on a single machine using docker-compose.
Consider rearchitecting the system to move the website onto the public subnet
Currently the website resides in S3 behind CloudFront. While the internet is unable to access the website, this still leaves a risk of exposure. It may be better to move frontend hosting into Beanstalk for a higher level of security.
API Gateway is secured within the public subnet and cannot be accessed externally
Infrastructure TODO
Error Handling in SFN
Steps that do not return an API Gateway response will output a failure message. This failure message should be handled in the SFN so that a response can still be returned to the frontend.
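One way to handle this is a `Catch` clause in the state machine definition that routes failures to a state responsible for shaping a frontend response; the state names and `ResultPath` below are hypothetical:

```json
"InvokeModel": {
  "Type": "Task",
  "Resource": "arn:aws:states:::lambda:invoke",
  "Catch": [
    {
      "ErrorEquals": ["States.ALL"],
      "ResultPath": "$.error",
      "Next": "BuildFailureResponse"
    }
  ],
  "Next": "BuildSuccessResponse"
}
```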
Data Filtering Step
This step will initially store the request in the LastStopAuditLog table in DynamoDB, as well as in a LastStopConversation table.
The LastStopAuditLog ID is then passed to the next step to begin validation checks on the submitted data.
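A minimal sketch of this step is below. The table arguments are hypothetical stand-ins for DynamoDB tables (anything exposing `put_item`, such as a boto3 `Table`), and the request fields are illustrative; only the table names and the ID hand-off come from the description above:

```python
import uuid

def data_filtering_step(request: dict, audit_log_table, conversation_table) -> dict:
    """Store the request, then hand the audit-log ID to the next SFN state."""
    audit_log_id = str(uuid.uuid4())
    # Record the raw request in LastStopAuditLog for auditing.
    audit_log_table.put_item(Item={"id": audit_log_id, "request": request})
    # Also append it to the LastStopConversation history.
    conversation_table.put_item(Item={
        "conversation_id": request.get("conversation_id"),
        "prompt": request.get("prompt"),
    })
    # The next state uses audit_log_id to run validation checks on the data.
    return {"audit_log_id": audit_log_id, "request": request}
```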
Prompt handling and sharing
Create IaC for AWS Cloud Deployment
This deployment will leverage RDS and ideally Elastic Beanstalk.
Ultimately, the goal is for an organization to assign a Route 53 address to the frontend and host it internally. From there, companies can redirect any https://chat.openai.com/ requests to their internally hosted chat instance.
Use ChatGPT to name conversations
To give each conversation a more useful name, fork a side request to ChatGPT asking it to name the conversation something relevant, so the user can find it again later.
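The fork above could be sketched as follows. Here `complete` is any callable that sends a prompt to an LLM and returns its text reply (e.g. a thin wrapper around the OpenAI client); the prompt wording and the length cap are illustrative assumptions:

```python
def build_naming_prompt(first_user_message: str) -> str:
    """Build the side-request prompt that asks the model for a title."""
    return (
        "Suggest a short title (five words or fewer, no quotes) for a "
        "conversation that starts with:\n\n" + first_user_message
    )

def name_conversation(first_user_message: str, complete) -> str:
    """Fork a naming request and return a sidebar-friendly title."""
    title = complete(build_naming_prompt(first_user_message))
    # Trim whitespace and cap the length so titles fit a sidebar listing.
    return title.strip()[:60]
```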