# ChatSBT

ChatSBT is a full-stack chat application that allows users to interact with multiple AI models through a web interface. The application features real-time streaming responses, chat history persistence, and support for various language models.

## Features

- Real-time chat with multiple AI models
- Server-Sent Events (SSE) for streaming responses
- Chat history persistence
- Responsive web interface built with Svelte
- Docker support for easy deployment

## Technology Stack

### Backend

- **Python 3.11+**
- **Starlette** - ASGI framework for building asynchronous web applications
- **Langchain** - Framework for developing applications with LLMs
- **Masonite ORM** - Database ORM for Python
- **SQLite** - Default database (configurable)
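
The following is a minimal, hypothetical sketch of how these backend pieces can fit together: a Starlette app exposing one JSON route and one Server-Sent Events route. It is not the project's actual code; the route paths simply mirror the API described below, and the token generator stands in for the streaming output of a Langchain chat model.

```python
# Illustrative sketch only - not the ChatSBT implementation.
import asyncio

from starlette.applications import Starlette
from starlette.responses import JSONResponse, StreamingResponse
from starlette.routing import Route


async def list_models(request):
    # In the real app, this list would come from the configured providers.
    return JSONResponse({"models": ["gpt-4o-mini", "claude-3-haiku"]})


async def stream_response(request):
    async def event_stream():
        # Stand-in for tokens streamed from a Langchain chat model.
        for token in ("Hello", ", ", "world", "!"):
            yield f"data: {token}\n\n"  # one SSE data event per token
            await asyncio.sleep(0.05)
        yield "event: done\ndata: \n\n"  # signal completion to the client

    return StreamingResponse(event_stream(), media_type="text/event-stream")


app = Starlette(routes=[
    Route("/api/models", list_models),
    Route("/api/chats/{chat_id}/stream", stream_response),
])
```

You can run such a sketch with any ASGI server (for example `uvicorn sketch:app`) and watch the event stream with `curl -N`.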

### Frontend

- **Svelte 5** - Reactive UI framework
- **Vite** - Next-generation frontend tooling
- **DaisyUI** - Tailwind CSS components
- **Marked** - Markdown parser

### Infrastructure

- **Deno** - JavaScript/TypeScript runtime for frontend build
- **uv** - Python package installer and resolver

## Prerequisites

Before you begin, ensure you have the following installed:

- Python 3.11 or higher
- Deno 2.4 or higher
- uv (Python package installer)
- Docker (optional, for containerized deployment)

## Installation

### Backend Setup

1. Install Python dependencies using uv:

```bash
uv sync
```

2. Set up environment variables:

Create a `.env` file in the project root with the necessary configuration:

```env
# Database configuration
DB_CONNECTION=sqlite
DB_DATABASE=chatsbt.db

# AI Provider API keys (add as needed)
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
```
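
As a quick sanity check, you can confirm the settings are visible to Python before starting the server. The snippet below is only an illustration and assumes the values from `.env` have been exported into (or otherwise loaded into) the process environment:

```python
# Print which of the expected settings are present in the environment.
import os

for name in ("DB_CONNECTION", "DB_DATABASE", "OPENAI_API_KEY", "ANTHROPIC_API_KEY"):
    print(f"{name}: {'set' if os.getenv(name) else 'not set'}")
```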

### Frontend Setup

1. Navigate to the frontend directory:

```bash
cd frontend
```

2. Install frontend dependencies:

```bash
deno install --allow-scripts
```

3. Build the frontend:

```bash
deno run build
```

## Running the Application

### Development Mode

To run the application in development mode with hot reloading:

1. Start the backend server:

```bash
uv run app.py
```

The backend will be available at http://localhost:8000

2. In a separate terminal, start the frontend development server:

```bash
cd frontend
deno run dev
```

The frontend will be available at http://localhost:5173

### Production Mode

To run the application in production mode:

1. Build the frontend:

```bash
cd frontend
deno run build
cd ..
```

2. Start the backend server:

```bash
uv run app.py
```

The application will be available at http://localhost:8000

### Using Docker

To run the application using Docker:

1. Build the Docker image:

```bash
docker build -t chatsbt .
```

2. Run the container:

```bash
docker run -p 8000:8000 chatsbt
```

If the application needs the API keys from your `.env` file inside the container, pass them with Docker's `--env-file` flag, for example: `docker run --env-file .env -p 8000:8000 chatsbt`.

The application will be available at http://localhost:8000

## API Endpoints

The backend exposes the following RESTful API endpoints:

### Models

- `GET /api/models` - Retrieve the list of available AI models
  - Response: `{"models": ["model1", "model2", ...]}`
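
For example, a hypothetical client could fetch the model list like this (the `requests` library is used purely for illustration; any HTTP client works):

```python
# List the models the backend currently exposes.
import requests

resp = requests.get("http://localhost:8000/api/models")
resp.raise_for_status()
print(resp.json()["models"])
```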

### Chats

- `POST /api/chats` - Create a new chat session
  - Request: `{"model": "model_name"}`
  - Response: `{"id": "chat_id", "model": "model_name"}`

- `GET /api/chats/{chat_id}` - Retrieve chat history
  - Response: `{"messages": [{"role": "human|assistant", "content": "message_text"}, ...]}`
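
Continuing the illustration, creating a chat and reading back its history might look like this (the identifiers shown are placeholders):

```python
# Create a chat session for a chosen model, then fetch its (initially empty) history.
import requests

BASE = "http://localhost:8000"

chat = requests.post(f"{BASE}/api/chats", json={"model": "model_name"}).json()
chat_id = chat["id"]

history = requests.get(f"{BASE}/api/chats/{chat_id}").json()
print(history["messages"])
```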

### Messages

- `POST /api/chats/{chat_id}/messages` - Send a message to a chat
  - Request: `{"message": "user_message", "model": "model_name"}`
  - Response: `{"status": "queued", "message_id": "message_id"}`

- `GET /api/chats/{chat_id}/stream?message_id={message_id}` - Stream the AI response
  - Server-Sent Events (SSE) endpoint that streams the AI response token by token
  - Events:
    - `data: token_content` - Individual tokens from the AI response
    - `event: done` - Indicates the response is complete
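
A hypothetical end-to-end flow (send a message, then consume the SSE stream) could look like the following. The parsing is deliberately minimal and only handles `data:` and `event:` lines; again, `requests` is used purely for illustration:

```python
# Send a message to an existing chat, then stream the AI response via SSE.
import requests

BASE = "http://localhost:8000"
chat_id = "chat_id"  # placeholder: use the id returned by POST /api/chats

queued = requests.post(
    f"{BASE}/api/chats/{chat_id}/messages",
    json={"message": "Hello!", "model": "model_name"},
).json()
message_id = queued["message_id"]

# Stream the response token by token until the server sends the "done" event.
with requests.get(
    f"{BASE}/api/chats/{chat_id}/stream",
    params={"message_id": message_id},
    stream=True,
) as resp:
    for raw in resp.iter_lines(decode_unicode=True):
        if not raw:
            continue  # blank lines separate SSE events
        if raw.startswith("event: done"):
            break
        if raw.startswith("data: "):
            print(raw[len("data: "):], end="", flush=True)
print()
```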