Compare commits


10 commits

Author SHA1 Message Date
b5015a8c4c serve chat route 2025-08-06 00:40:42 -04:00
687a4cccb6 hurl api test script 2025-08-04 14:06:28 -04:00
c49636c766 database support 2025-08-04 14:05:54 -04:00
f7b23a3cec separate API routes 2025-08-02 00:57:39 -04:00
2f74e893bf vite rolldown upgrade 2025-08-01 23:54:11 -04:00
c4eebafff6 more docker fixes 2025-07-31 22:50:03 -04:00
e2bd322814 fix dockerfile 2025-07-31 21:51:38 -04:00
33c5908b9f add dockerfile 2025-07-31 19:00:28 -04:00
66eaa56429 fixes icon and get parameter logic 2025-07-31 18:30:25 -04:00
16f084bfcd rename frontend subfolder 2025-07-31 16:48:55 -04:00
36 changed files with 1359 additions and 925 deletions

.gitignore (vendored, 3 changed lines)

@@ -10,3 +10,6 @@ wheels/
 .venv
 .python-version
 .env
+
+# Databases
+*.sqlite3

Dockerfile (new file, 42 lines)

@@ -0,0 +1,42 @@
# Multi-stage build for a Python backend with Deno frontend

# Stage 1: Build the frontend
FROM denoland/deno:2.4.3 AS frontend-builder

# Set working directory for frontend
WORKDIR /app/frontend

# Copy frontend files
COPY frontend/ .

# Install dependencies and build the frontend
RUN deno install --allow-scripts
RUN deno run build

# Stage 2: Set up the Python backend with uv
FROM python:3.11-slim AS backend

# Install uv
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv

# Set working directory
WORKDIR /app

# Copy Python project files
COPY pyproject.toml uv.lock ./

# Install Python dependencies
RUN uv sync --frozen

# Copy backend source code and .env file
COPY *.py ./
COPY .env ./

# Copy built frontend from previous stage
COPY --from=frontend-builder /app/frontend/dist ./frontend/dist

# Expose the port (adjust if your app uses a different port)
EXPOSE 8000

# Run the application
CMD ["uv", "run", "app.py"]

(deleted file, 82 lines)

@@ -1,82 +0,0 @@
# ChatSBT - Multi-Model Chat Application
A modern chat application supporting multiple AI models through OpenRouter API.
## Features
- Chat with multiple AI models (Qwen, Deepseek, Kimi)
- Real-time streaming responses
- Conversation history
- Simple REST API backend
- Modern Svelte frontend
## Tech Stack
### Frontend
- Svelte
- DaisyUI (Tailwind component library)
- Vite
### Backend
- Starlette (async Python web framework)
- LangChain (LLM orchestration)
- LangGraph (for potential future agent workflows)
- OpenRouter API (multi-model provider)
## API Endpoints
| Method | Path | Description |
|--------|------|-------------|
| POST | /chats | Create new chat session |
| GET | /chats/{chat_id} | Get chat history |
| POST | /chats/{chat_id}/messages | Post new message |
| GET | /chats/{chat_id}/stream | Stream response from AI |
## Prerequisites
- Python 3.11+
- Deno
- UV (Python package manager)
- OpenRouter API key (set in `.env` file)
## Installation
1. Clone the repository
2. Set up environment variables:
```bash
echo "OPENROUTER_API_KEY=your_key_here" > .env
echo "OPENROUTER_BASE_URL=https://openrouter.ai/api/v1" >> .env
```
3. Install frontend dependencies:
```bash
cd chatsbt
deno install
```
## Running
1. Start backend server:
```bash
uv run app.py
```
2. Start the frontend (another terminal):
```bash
cd chatsbt
deno run dev
```
The application will be available at `http://localhost:5173`.
## Configuration
Available models:
- `qwen/qwen3-235b-a22b-2507`
- `deepseek/deepseek-r1-0528`
- `moonshotai/kimi-k2`

TESTING.md (new file, 131 lines)

@@ -0,0 +1,131 @@
# Backend API Testing with Hurl
This document provides instructions for testing the chat backend API using Hurl.
## Prerequisites
1. **Install Hurl**:
- macOS: `brew install hurl`
- Ubuntu/Debian: `sudo apt update && sudo apt install hurl`
- Windows: Download from [hurl.dev](https://hurl.dev)
- Or use Docker: `docker pull ghcr.io/orange-opensource/hurl:latest`
2. **Start the backend server**:
```bash
# Make sure you're in the project root directory
python -m uvicorn app:application --host 0.0.0.0 --port 8000 --reload
```
## Running the Tests
### Basic Test Run
```bash
# Run all tests
hurl test-backend.hurl
# Run with verbose output
hurl --verbose test-backend.hurl
# Run with color output
hurl --color test-backend.hurl
```
### Advanced Options
```bash
# Run with detailed report
hurl --report-html report.html test-backend.hurl
# Run specific tests (first 5 tests)
hurl --to-entry 5 test-backend.hurl
# Run tests with custom variables
hurl --variable host=localhost --variable port=8000 test-backend.hurl
# Run with retry on failure
hurl --retry 3 test-backend.hurl
```
## Test Coverage
The test script covers:
### ✅ Happy Path Tests
- **GET /api/models** - Retrieves available AI models
- **POST /api/chats** - Creates new chat sessions
- **GET /api/chats/{id}** - Retrieves chat history
- **POST /api/chats/{id}/messages** - Sends messages to chat
- **GET /api/chats/{id}/stream** - Streams AI responses via SSE
### ✅ Error Handling Tests
- Invalid model names
- Non-existent chat IDs
- Missing required parameters
### ✅ Multi-turn Conversation Tests
- Multiple message exchanges
- Conversation history persistence
- Different model selection
## Test Structure
The tests are organized in a logical flow:
1. **Model Discovery** - Get available models
2. **Chat Creation** - Create new chat sessions
3. **Message Exchange** - Send messages and receive responses
4. **History Verification** - Ensure messages are persisted
5. **Error Scenarios** - Test edge cases and error handling
## Environment Variables
You can customize the test environment:
```bash
# Set custom host/port
export HURL_host=localhost
export HURL_port=8000
# Or pass as arguments
hurl --variable host=127.0.0.1 --variable port=8080 test-backend.hurl
```
## Troubleshooting
### Common Issues
1. **Connection refused**
- Ensure the backend is running on port 8000
- Check firewall settings
2. **Tests fail with 404 errors**
- Verify the backend routes are correctly configured
- Check if database migrations have been run
3. **SSE streaming tests timeout**
- Increase timeout: `hurl --max-time 30 test-backend.hurl`
- Check if AI provider is responding
### Debug Mode
Run tests with maximum verbosity:
```bash
hurl --very-verbose test-backend.hurl
```
## Continuous Integration
Add to your CI/CD pipeline:
```yaml
# GitHub Actions example
- name: Run API Tests
run: |
hurl --test --report-junit results.xml test-backend.hurl
```
## Test Results
After running tests, you'll see:
- ✅ **Green** - Tests passed
- ❌ **Red** - Tests failed with details
- 📊 **Summary** - Total tests, duration, and success rate

app.py (27 changed lines)

@@ -1,8 +1,11 @@
 from starlette.applications import Starlette
-from starlette.routing import Route
+from starlette.routing import Route, Mount
+from starlette.staticfiles import StaticFiles
+from starlette.responses import FileResponse
 from controllers import create_chat, post_message, chat_stream, history, get_models
 from starlette.middleware import Middleware
 from starlette.middleware.cors import CORSMiddleware
+import os

 middleware = [
     Middleware(
@@ -14,12 +17,24 @@ middleware = [
     )
 ]

+async def serve_frontend(request):
+    """Serve the frontend index.html file"""
+    return FileResponse(os.path.join("frontend", "dist", "index.html"))
+
+async def serve_chat(request):
+    """Serve the chat.html file for specific chat routes"""
+    return FileResponse(os.path.join("frontend", "dist", "chat.html"))
+
 routes = [
-    Route("/models", get_models, methods=["GET"]),
-    Route("/chats", create_chat, methods=["POST"]),
-    Route("/chats/{chat_id:str}", history, methods=["GET"]),
-    Route("/chats/{chat_id:str}/messages", post_message, methods=["POST"]),
-    Route("/chats/{chat_id:str}/stream", chat_stream, methods=["GET"]),
+    Route("/", serve_frontend, methods=["GET"]),
+    Route("/chats/{chat_id:str}", serve_chat, methods=["GET"]),
+    Route("/api/models", get_models, methods=["GET"]),
+    Route("/api/chats", create_chat, methods=["POST"]),
+    Route("/api/chats/{chat_id:str}", history, methods=["GET"]),
+    Route("/api/chats/{chat_id:str}/messages", post_message, methods=["POST"]),
+    Route("/api/chats/{chat_id:str}/stream", chat_stream, methods=["GET"]),
+    Mount("/assets", StaticFiles(directory=os.path.join("frontend", "dist", "assets")), name="assets"),
+    Mount("/icon", StaticFiles(directory=os.path.join("frontend", "dist", "icon")), name="icon"),
 ]
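Neither hunk reaches the bottom of app.py where the ASGI app is built. Based on TESTING.md's `uvicorn app:application` invocation and the Dockerfile's `CMD ["uv", "run", "app.py"]`, the tail of the file presumably looks something like this (a sketch, not part of the diff):

```python
# Sketch only: continues the module above; `routes` and `middleware` are the
# lists defined in app.py. The name `application` is inferred from TESTING.md's
# `uvicorn app:application`; the host/port match EXPOSE 8000 in the Dockerfile.
import uvicorn
from starlette.applications import Starlette

application = Starlette(routes=routes, middleware=middleware)

if __name__ == "__main__":
    uvicorn.run(application, host="0.0.0.0", port=8000)
```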

(modified file)

@@ -2,16 +2,18 @@ from langchain_openai import ChatOpenAI
 from langchain_core.messages import HumanMessage, AIMessage
 from os import getenv
 from dotenv import load_dotenv
+from pydantic import SecretStr

 load_dotenv()

 def get_llm(provider: str):
     """Return a LangChain chat model for the requested provider."""
     return ChatOpenAI(
-        openai_api_key=getenv("OPENROUTER_API_KEY"),
-        openai_api_base=getenv("OPENROUTER_BASE_URL"),
-        model_name=provider,
+        api_key=SecretStr(getenv("OPENROUTER_API_KEY", "")),
+        base_url=getenv("OPENROUTER_BASE_URL"),
+        model=provider,
     )

 def get_messages(chats, chat_id):
-    return [HumanMessage(**m) if m["role"] == "human" else AIMessage(**m) for m in chats[chat_id]]
+    print(chats)
+    return [HumanMessage(**m) if m["role"] == "human" else AIMessage(**m) for m in chats]
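Note that after this change `get_messages` consumes a plain list of message dicts and its `chat_id` parameter goes unused. A minimal streaming sketch against the refactored helpers, assuming valid OpenRouter credentials in `.env`:

```python
# Sketch: stream one completion token by token, mirroring what
# controllers.chat_stream does. The model name is one of the README's examples.
import asyncio
from chatgraph import get_llm, get_messages

async def main():
    msgs = get_messages([{"role": "human", "content": "Say hi"}], chat_id=None)
    llm = get_llm("qwen/qwen3-235b-a22b-2507")
    async for chunk in llm.astream(msgs):
        print(chunk.content, end="", flush=True)

asyncio.run(main())
```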

(deleted file, 3 lines)

@@ -1,3 +0,0 @@
{
  "recommendations": ["svelte.svelte-vscode"]
}

(deleted SVG image, 1.5 KiB)

@@ -1 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="iconify iconify--logos" width="31.88" height="32" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 257"><defs><linearGradient id="IconifyId1813088fe1fbc01fb466" x1="-.828%" x2="57.636%" y1="7.652%" y2="78.411%"><stop offset="0%" stop-color="#41D1FF"></stop><stop offset="100%" stop-color="#BD34FE"></stop></linearGradient><linearGradient id="IconifyId1813088fe1fbc01fb467" x1="43.376%" x2="50.316%" y1="2.242%" y2="89.03%"><stop offset="0%" stop-color="#FFEA83"></stop><stop offset="8.333%" stop-color="#FFDD35"></stop><stop offset="100%" stop-color="#FFA800"></stop></linearGradient></defs><path fill="url(#IconifyId1813088fe1fbc01fb466)" d="M255.153 37.938L134.897 252.976c-2.483 4.44-8.862 4.466-11.382.048L.875 37.958c-2.746-4.814 1.371-10.646 6.827-9.67l120.385 21.517a6.537 6.537 0 0 0 2.322-.004l117.867-21.483c5.438-.991 9.574 4.796 6.877 9.62Z"></path><path fill="url(#IconifyId1813088fe1fbc01fb467)" d="M185.432.063L96.44 17.501a3.268 3.268 0 0 0-2.634 3.014l-5.474 92.456a3.268 3.268 0 0 0 3.997 3.378l24.777-5.718c2.318-.535 4.413 1.507 3.936 3.838l-7.361 36.047c-.495 2.426 1.782 4.5 4.151 3.78l15.304-4.649c2.372-.72 4.652 1.36 4.15 3.788l-11.698 56.621c-.732 3.542 3.979 5.473 5.943 2.437l1.313-2.028l72.516-144.72c1.215-2.423-.88-5.186-3.54-4.672l-25.505 4.922c-2.396.462-4.435-1.77-3.759-4.114l16.646-57.705c.677-2.35-1.37-4.583-3.769-4.113Z"></path></svg>


(deleted SVG image, 1.9 KiB)

@@ -1 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="iconify iconify--logos" width="26.6" height="32" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 308"><path fill="#FF3E00" d="M239.682 40.707C211.113-.182 154.69-12.301 113.895 13.69L42.247 59.356a82.198 82.198 0 0 0-37.135 55.056a86.566 86.566 0 0 0 8.536 55.576a82.425 82.425 0 0 0-12.296 30.719a87.596 87.596 0 0 0 14.964 66.244c28.574 40.893 84.997 53.007 125.787 27.016l71.648-45.664a82.182 82.182 0 0 0 37.135-55.057a86.601 86.601 0 0 0-8.53-55.577a82.409 82.409 0 0 0 12.29-30.718a87.573 87.573 0 0 0-14.963-66.244"></path><path fill="#FFF" d="M106.889 270.841c-23.102 6.007-47.497-3.036-61.103-22.648a52.685 52.685 0 0 1-9.003-39.85a49.978 49.978 0 0 1 1.713-6.693l1.35-4.115l3.671 2.697a92.447 92.447 0 0 0 28.036 14.007l2.663.808l-.245 2.659a16.067 16.067 0 0 0 2.89 10.656a17.143 17.143 0 0 0 18.397 6.828a15.786 15.786 0 0 0 4.403-1.935l71.67-45.672a14.922 14.922 0 0 0 6.734-9.977a15.923 15.923 0 0 0-2.713-12.011a17.156 17.156 0 0 0-18.404-6.832a15.78 15.78 0 0 0-4.396 1.933l-27.35 17.434a52.298 52.298 0 0 1-14.553 6.391c-23.101 6.007-47.497-3.036-61.101-22.649a52.681 52.681 0 0 1-9.004-39.849a49.428 49.428 0 0 1 22.34-33.114l71.664-45.677a52.218 52.218 0 0 1 14.563-6.398c23.101-6.007 47.497 3.036 61.101 22.648a52.685 52.685 0 0 1 9.004 39.85a50.559 50.559 0 0 1-1.713 6.692l-1.35 4.116l-3.67-2.693a92.373 92.373 0 0 0-28.037-14.013l-2.664-.809l.246-2.658a16.099 16.099 0 0 0-2.89-10.656a17.143 17.143 0 0 0-18.398-6.828a15.786 15.786 0 0 0-4.402 1.935l-71.67 45.674a14.898 14.898 0 0 0-6.73 9.975a15.9 15.9 0 0 0 2.709 12.012a17.156 17.156 0 0 0 18.404 6.832a15.841 15.841 0 0 0 4.402-1.935l27.345-17.427a52.147 52.147 0 0 1 14.552-6.397c23.101-6.006 47.497 3.037 61.102 22.65a52.681 52.681 0 0 1 9.003 39.848a49.453 49.453 0 0 1-22.34 33.12l-71.664 45.673a52.218 52.218 0 0 1-14.563 6.398"></path></svg>


(deleted file, 31 lines)

@@ -1,31 +0,0 @@
import { chatStore } from "./chatStore.svelte.js";

// keyed by chat_id → chatStore instance
const cache = $state({});

// which chat is on screen right now
export const activeChatId = $state(null);

export function getStore(chatId) {
  if (!cache[chatId]) {
    cache[chatId] = chatStore(chatId);
  }
  return cache[chatId];
}

export function switchChat(chatId) {
  activeChatId = chatId;
}

export function newChat() {
  const id = "chat_" + crypto.randomUUID();
  switchChat(id);
  return id;
}

// restore last opened chat (or create first one)
(() => {
  const ids = JSON.parse(localStorage.getItem("chat_ids") || "[]");
  if (ids.length) switchChat(ids[0]);
  else newChat();
})();

config/__init__.py (new file, 1 line)

@@ -0,0 +1 @@
# Masonite-orm module

config/database.py (new file, 11 lines)

@@ -0,0 +1,11 @@
from masoniteorm.connections import ConnectionResolver

DATABASES = {
    "default": "sqlite",
    "sqlite": {
        "driver": "sqlite",
        "database": "database.sqlite3",
    }
}

DB = ConnectionResolver().set_connection_details(DATABASES)
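Masonite ORM resolves connection details from a `config/database.py` module by convention, which is why the otherwise empty `config` package above exists; the models need no explicit wiring. A quick smoke-test sketch, assuming the `chats` migration further down has been run:

```python
# Smoke-test sketch: importing a model is enough, since Masonite ORM reads
# config/database.py by convention. Assumes database.sqlite3 exists and the
# chats table has been migrated.
from models.Chat import Chat

print(Chat.first())  # None on a fresh, empty table
```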

(modified file)

@@ -1,12 +1,13 @@
 import uuid
+import json
 from typing import Dict, List, Tuple
 from starlette.responses import JSONResponse
 from starlette.requests import Request
 from sse_starlette.sse import EventSourceResponse
 from chatgraph import get_messages, get_llm
+from models.Chat import Chat

-CHATS: Dict[str, List[dict]] = {} # chat_id -> messages
 PENDING: Dict[str, Tuple[str, str]] = {} # message_id -> (chat_id, provider)

 MODELS = {
@@ -32,16 +33,22 @@ async def create_chat(request: Request):
     provider = body.get("model", "")
     if provider not in MODELS:
         return JSONResponse({"error": "Unknown model"}, status_code=400)
-    chat_id = str(uuid.uuid4())[:8]
-    CHATS[chat_id] = []
+    chat = Chat()
+    chat_id = str(uuid.uuid4())
+    chat.id = chat_id
+    chat.title = "New Chat"
+    chat.messages = json.dumps([])
+    chat.save()
     return JSONResponse({"id": chat_id, "model": provider})

 async def history(request: Request):
     """GET /chats/{chat_id} -> previous messages"""
     chat_id = request.path_params["chat_id"]
-    if chat_id not in CHATS:
+    chat = Chat.find(chat_id)
+    if not chat:
         return JSONResponse({"error": "Not found"}, status_code=404)
-    return JSONResponse({"messages": CHATS[chat_id]})
+    messages = json.loads(chat.messages) if chat.messages else []
+    return JSONResponse({"messages": messages})

 async def post_message(request: Request):
     """POST /chats/{chat_id}/messages
@@ -49,7 +56,8 @@ async def post_message(request: Request):
     Returns: {"message_id": "<chat_id>"}
     """
     chat_id = request.path_params["chat_id"]
-    if chat_id not in CHATS:
+    chat = Chat.find(chat_id)
+    if not chat:
         return JSONResponse({"error": "Chat not found"}, status_code=404)
     body = await request.json()
@@ -58,9 +66,14 @@
     if provider not in MODELS:
         return JSONResponse({"error": "Unknown model"}, status_code=400)

+    # Load existing messages and add the new user message
+    messages = json.loads(chat.messages) if chat.messages else []
+    messages.append({"role": "human", "content": user_text})
+    chat.messages = json.dumps(messages)
+    chat.save()
     message_id = str(uuid.uuid4())
     PENDING[message_id] = (chat_id, provider)
-    CHATS[chat_id].append({"role": "human", "content": user_text})

     return JSONResponse({
         "status": "queued",
@@ -72,13 +85,18 @@ async def chat_stream(request):
     chat_id = request.path_params["chat_id"]
     message_id = request.query_params.get("message_id")

-    if chat_id not in CHATS or message_id not in PENDING:
+    if message_id not in PENDING:
         return JSONResponse({"error": "Not found"}, status_code=404)

     chat_id_from_map, provider = PENDING.pop(message_id)
     assert chat_id == chat_id_from_map

-    msgs = get_messages(CHATS, chat_id)
+    chat = Chat.find(chat_id)
+    if not chat:
+        return JSONResponse({"error": "Chat not found"}, status_code=404)
+    messages = json.loads(chat.messages) if chat.messages else []
+    msgs = get_messages(messages, chat_id)
     llm = get_llm(provider)

     async def event_generator():
@@ -88,7 +106,10 @@ async def chat_stream(request):
             buffer += token
             yield {"data": token}
         # Finished: store assistant reply
-        CHATS[chat_id].append({"role": "assistant", "content": buffer})
+        messages.append({"role": "assistant", "content": buffer})
+        chat.messages = json.dumps(messages)
+        chat.save()
         yield {"event": "done", "data": ""}

     return EventSourceResponse(event_generator())
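The contract is queue-then-stream: POST the message, then open the SSE endpoint with the returned `message_id`. A hypothetical Python client sketch using `httpx` (an assumption, not a project dependency; the Hurl script below exercises the same flow):

```python
# Hypothetical client for the queue-then-stream flow; assumes the server
# from this diff is running on localhost:8000.
import asyncio
import httpx

async def ask(text: str, model: str = "qwen/qwen3-235b-a22b-2507") -> None:
    async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
        chat = (await client.post("/api/chats", json={"model": model})).json()
        queued = (await client.post(
            f"/api/chats/{chat['id']}/messages",
            json={"message": text, "model": model},
        )).json()
        # Tokens arrive as "data: <token>" SSE lines; "event: done" ends the turn.
        async with client.stream(
            "GET",
            f"/api/chats/{chat['id']}/stream",
            params={"message_id": queued["message_id"]},
            timeout=None,
        ) as resp:
            async for line in resp.aiter_lines():
                if line.startswith("data: "):
                    print(line[len("data: "):], end="", flush=True)

asyncio.run(ask("Hello!"))
```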

(new file, 12 lines)

@@ -0,0 +1,12 @@
from masoniteorm.migrations import Migration

class CreateChatsTable(Migration):
    def up(self):
        with self.schema.create("chats") as table:
            table.uuid("id").primary()
            table.string("title")
            table.json("messages")

    def down(self):
        self.schema.drop("chats")

(modified file)

@@ -6,9 +6,9 @@
     "npm:@tailwindcss/vite@^4.1.11": "4.1.11_vite@7.0.5__picomatch@4.0.3",
     "npm:daisyui@^5.0.46": "5.0.46",
     "npm:marked@^16.1.1": "16.1.1",
+    "npm:rolldown-vite@latest": "7.0.12_picomatch@4.0.3",
     "npm:svelte@^5.35.5": "5.36.8_acorn@8.15.0",
-    "npm:tailwindcss@^4.1.11": "4.1.11",
-    "npm:vite@^7.0.4": "7.0.5_picomatch@4.0.3"
+    "npm:tailwindcss@^4.1.11": "4.1.11"
   },
   "npm": {
     "@ampproject/remapping@2.3.0": {
@@ -201,6 +201,95 @@
"@tybys/wasm-util@0.10.0"
]
},
"@napi-rs/wasm-runtime@1.0.1": {
"integrity": "sha512-KVlQ/jgywZpixGCKMNwxStmmbYEMyokZpCf2YuIChhfJA2uqfAKNEM8INz7zzTo55iEXfBhIIs3VqYyqzDLj8g==",
"dependencies": [
"@emnapi/core",
"@emnapi/runtime",
"@tybys/wasm-util@0.10.0"
]
},
"@oxc-project/runtime@0.78.0": {
"integrity": "sha512-jOU7sDFMyq5ShGJC21UobalVzqcdtWGfySVp8ELvKoVLzMpLHb4kv1bs9VKxaP8XC7Z9hlAXwEKVhCTN+j21aQ=="
},
"@oxc-project/types@0.78.0": {
"integrity": "sha512-8FvExh0WRWN1FoSTjah1xa9RlavZcJQ8/yxRbZ7ElmSa2Ij5f5Em7MvRbSthE6FbwC6Wh8iAw0Gpna7QdoqLGg=="
},
"@rolldown/binding-android-arm64@1.0.0-beta.30": {
"integrity": "sha512-4j7QBitb/WMT1fzdJo7BsFvVNaFR5WCQPdf/RPDHEsgQIYwBaHaL47KTZxncGFQDD1UAKN3XScJ0k7LAsZfsvg==",
"os": ["android"],
"cpu": ["arm64"]
},
"@rolldown/binding-darwin-arm64@1.0.0-beta.30": {
"integrity": "sha512-4vWFTe1o5LXeitI2lW8qMGRxxwrH/LhKd2HDLa/QPhdxohvdnfKyDZWN96XUhDyje2bHFCFyhMs3ak2lg2mJFA==",
"os": ["darwin"],
"cpu": ["arm64"]
},
"@rolldown/binding-darwin-x64@1.0.0-beta.30": {
"integrity": "sha512-MxrfodqImbsDFFFU/8LxyFPZjt7s4ht8g2Zb76EmIQ+xlmit46L9IzvWiuMpEaSJ5WbnjO7fCDWwakMGyJJ+Dw==",
"os": ["darwin"],
"cpu": ["x64"]
},
"@rolldown/binding-freebsd-x64@1.0.0-beta.30": {
"integrity": "sha512-c/TQXcATKoO8qE1bCjCOkymZTu7yVUAxBSNLp42Q97XHCb0Cu9v6MjZpB6c7Hq9NQ9NzW44uglak9D/r77JeDw==",
"os": ["freebsd"],
"cpu": ["x64"]
},
"@rolldown/binding-linux-arm-gnueabihf@1.0.0-beta.30": {
"integrity": "sha512-Vxci4xylM11zVqvrmezAaRjGBDyOlMRtlt7TDgxaBmSYLuiokXbZpD8aoSuOyjUAeN0/tmWItkxNGQza8UWGNQ==",
"os": ["linux"],
"cpu": ["arm"]
},
"@rolldown/binding-linux-arm64-gnu@1.0.0-beta.30": {
"integrity": "sha512-iEBEdSs25Ol0lXyVNs763f7YPAIP0t1EAjoXME81oJ94DesJslaLTj71Rn1shoMDVA+dfkYA286w5uYnOs9ZNA==",
"os": ["linux"],
"cpu": ["arm64"]
},
"@rolldown/binding-linux-arm64-musl@1.0.0-beta.30": {
"integrity": "sha512-Ny684Sn1X8c+gGLuDlxkOuwiEE3C7eEOqp1/YVBzQB4HO7U/b4n7alvHvShboOEY5DP1fFUjq6Z+sBLYlCIZbQ==",
"os": ["linux"],
"cpu": ["arm64"]
},
"@rolldown/binding-linux-arm64-ohos@1.0.0-beta.30": {
"integrity": "sha512-6moyULHDPKwt5RDEV72EqYw5n+s46AerTwtEBau5wCsZd1wuHS1L9z6wqhKISXAFTK9sneN0TEjvYKo+sgbbiA==",
"os": ["openharmony"],
"cpu": ["arm64"]
},
"@rolldown/binding-linux-x64-gnu@1.0.0-beta.30": {
"integrity": "sha512-p0yoPdoGg5Ow2YZKKB5Ypbn58i7u4XFk3PvMkriFnEcgtVk40c5u7miaX7jH0JdzahyXVBJ/KT5yEpJrzQn8yg==",
"os": ["linux"],
"cpu": ["x64"]
},
"@rolldown/binding-linux-x64-musl@1.0.0-beta.30": {
"integrity": "sha512-sM/KhCrsT0YdHX10mFSr0cvbfk1+btG6ftepAfqhbcDfhi0s65J4dTOxGmklJnJL9i1LXZ8WA3N4wmnqsfoK8Q==",
"os": ["linux"],
"cpu": ["x64"]
},
"@rolldown/binding-wasm32-wasi@1.0.0-beta.30": {
"integrity": "sha512-i3kD5OWs8PQP0V+JW3TFyCLuyjuNzrB45em0g84Jc+gvnDsGVlzVjMNPo7txE/yT8CfE90HC/lDs3ry9FvaUyw==",
"dependencies": [
"@napi-rs/wasm-runtime@1.0.1"
],
"cpu": ["wasm32"]
},
"@rolldown/binding-win32-arm64-msvc@1.0.0-beta.30": {
"integrity": "sha512-q7mrYln30V35VrCqnBVQQvNPQm8Om9HC59I3kMYiOWogvJobzSPyO+HA1MP363+Qgwe39I2I1nqBKPOtWZ33AQ==",
"os": ["win32"],
"cpu": ["arm64"]
},
"@rolldown/binding-win32-ia32-msvc@1.0.0-beta.30": {
"integrity": "sha512-nUqGBt39XTpbBEREEnyKofdP3uz+SN/x2884BH+N3B2NjSUrP6NXwzltM35C0wKK42hX/nthRrwSgj715m99Jw==",
"os": ["win32"],
"cpu": ["ia32"]
},
"@rolldown/binding-win32-x64-msvc@1.0.0-beta.30": {
"integrity": "sha512-lbnvUwAXIVWSXAeZrCa4b1KvV/DW0rBnMHuX0T7I6ey1IsXZ90J37dEgt3j48Ex1Cw1E+5H7VDNP2gyOX8iu3w==",
"os": ["win32"],
"cpu": ["x64"]
},
"@rolldown/pluginutils@1.0.0-beta.30": {
"integrity": "sha512-whXaSoNUFiyDAjkUF8OBpOm77Szdbk5lGNqFe6CbVbJFrhCCPinCbRA3NjawwlNHla1No7xvXXh+CpSxnPfUEw=="
},
"@rollup/rollup-android-arm-eabi@4.45.1": {
"integrity": "sha512-NEySIFvMY0ZQO+utJkgoMiCAjMrGvnbDLHvcmlA33UXJpYBCvlBEbMMtV837uCkS+plG2umfhn0T5mMAxGrlRA==",
"os": ["android"],
@@ -392,7 +481,7 @@
       "@emnapi/core",
       "@emnapi/runtime",
       "@emnapi/wasi-threads",
-      "@napi-rs/wasm-runtime",
+      "@napi-rs/wasm-runtime@0.2.12",
       "@tybys/wasm-util@0.9.0",
       "tslib"
     ],
@@ -468,6 +557,9 @@
"integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==",
"bin": true
},
"ansis@4.1.0": {
"integrity": "sha512-BGcItUBWSMRgOCe+SVZJ+S7yTRG0eGt9cXAHev72yuGcY23hnLA7Bky5L/xLyPINoSN95geovfBkqoTlNZYa7w=="
},
"aria-query@5.3.2": {
"integrity": "sha512-COROpnaoap1E2F000S62r6A60uHZnmlvomhfyT2DlTcrY1OrBKn2UhH7qn5wTC9zMvD0AY7csdPSNwKP+7WiQw=="
},
@@ -709,6 +801,47 @@
"source-map-js"
]
},
"rolldown-vite@7.0.12_picomatch@4.0.3": {
"integrity": "sha512-Gr40FRnE98FwPJcMwcJgBwP6U7Qxw/VEtDsFdFjvGUTdgI/tTmF7z7dbVo/ajItM54G+Zo9w5BIrUmat6MbuWQ==",
"dependencies": [
"fdir",
"lightningcss",
"picomatch",
"postcss",
"rolldown",
"tinyglobby"
],
"optionalDependencies": [
"fsevents"
],
"bin": true
},
"rolldown@1.0.0-beta.30": {
"integrity": "sha512-H/LmDTUPlm65hWOTjXvd1k0qrGinNi8LrG3JsHVm6Oit7STg0upBmgoG5PZUHbAnGTHr0MLoLyzjmH261lIqSg==",
"dependencies": [
"@oxc-project/runtime",
"@oxc-project/types",
"@rolldown/pluginutils",
"ansis"
],
"optionalDependencies": [
"@rolldown/binding-android-arm64",
"@rolldown/binding-darwin-arm64",
"@rolldown/binding-darwin-x64",
"@rolldown/binding-freebsd-x64",
"@rolldown/binding-linux-arm-gnueabihf",
"@rolldown/binding-linux-arm64-gnu",
"@rolldown/binding-linux-arm64-musl",
"@rolldown/binding-linux-arm64-ohos",
"@rolldown/binding-linux-x64-gnu",
"@rolldown/binding-linux-x64-musl",
"@rolldown/binding-wasm32-wasi",
"@rolldown/binding-win32-arm64-msvc",
"@rolldown/binding-win32-ia32-msvc",
"@rolldown/binding-win32-x64-msvc"
],
"bin": true
},
"rollup@4.45.1": {
"integrity": "sha512-4iya7Jb76fVpQyLoiVpzUrsjQ12r3dM7fIVz+4NwoYvZOShknRmiv+iu9CClZml5ZLGb0XMcYLutK6w9tgxHDw==",
"dependencies": [
@@ -830,9 +963,9 @@
       "npm:@tailwindcss/vite@^4.1.11",
       "npm:daisyui@^5.0.46",
       "npm:marked@^16.1.1",
+      "npm:rolldown-vite@latest",
       "npm:svelte@^5.35.5",
-      "npm:tailwindcss@^4.1.11",
-      "npm:vite@^7.0.4"
+      "npm:tailwindcss@^4.1.11"
     ]
   }
 }

(modified file)

@@ -2,7 +2,7 @@
 <html lang="en">
   <head>
     <meta charset="UTF-8" />
-    <link rel="icon" type="image/svg+xml" href="/vite.svg" />
+    <link rel="icon" type="image/svg+xml" href="/icon/multibot_32.svg" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
     <title>Multi chat LLM</title>
   </head>

(modified file)

@@ -16,6 +16,6 @@
     "marked": "^16.1.1",
     "svelte": "^5.35.5",
     "tailwindcss": "^4.1.11",
-    "vite": "^7.0.4"
+    "vite": "npm:rolldown-vite@latest"
   }
 }

(new image file, 18 KiB; diff suppressed because one or more lines are too long)

(modified file)

@@ -18,7 +18,7 @@
   {#each chatStore.history as c}
     <li>
       <a
-        href="/{c.id}"
+        href="?chat={c.id}"
         class={chatStore.chatId === c.id ? "active" : ""}
         onclick={(e) => {
           e.preventDefault();

(modified file)

@@ -1,7 +1,7 @@
-const API = "http://localhost:8000"; // change if needed
+const API = import.meta.env.CHATSBT_API_URL || "";

 export async function createChat(model = "qwen/qwen3-235b-a22b-2507") {
-  const r = await fetch(`${API}/chats`, {
+  const r = await fetch(`${API}/api/chats`, {
     method: "POST",
     body: JSON.stringify({ model }),
   });
@@ -9,7 +9,7 @@ export async function createChat(model = "qwen/qwen3-235b-a22b-2507") {
 }

 export async function sendUserMessage(chatId, text, model = "") {
-  const r = await fetch(`${API}/chats/${chatId}/messages`, {
+  const r = await fetch(`${API}/api/chats/${chatId}/messages`, {
     method: "POST",
     headers: { "Content-Type": "application/json" },
     body: JSON.stringify({ message: text, model }),
@@ -19,17 +19,17 @@ export async function sendUserMessage(chatId, text, model = "") {

 export function openStream(chatId, messageId) {
   return new EventSource(
-    `${API}/chats/${chatId}/stream?message_id=${messageId}`,
+    `${API}/api/chats/${chatId}/stream?message_id=${messageId}`,
   );
 }

 export async function fetchModels() {
   try {
-    const response = await fetch(`${API}/models`);
+    const response = await fetch(`${API}/api/models`);
     const data = await response.json();
     return data.models || [];
   } catch (error) {
-    console.error('Failed to fetch models:', error);
+    console.error("Failed to fetch models:", error);
     return [];
   }
 }

(modified file)

@@ -40,7 +40,14 @@ export const chatStore = (() => {
     messages = stored?.messages || [];
     loading = true;
     loading = false;
-    window.history.replaceState({}, "", `/${id}`);
+    // Update URL with GET parameter
+    const url = new URL(window.location.href);
+    if (id) {
+      url.searchParams.set('chat', id);
+    } else {
+      url.searchParams.delete('chat');
+    }
+    window.history.replaceState({}, "", url);
   }

   async function createAndSelect() {
@@ -101,13 +108,14 @@ export const chatStore = (() => {
     }
   }

-  // initial route handling
-  const path = window.location.pathname.slice(1);
+  // initial route handling - use GET parameter instead of path
+  const params = new URLSearchParams(window.location.search);
+  const chatIdFromUrl = params.get('chat');
   const storedHistory = loadHistory();
-  if (path && !storedHistory.find((c) => c.id === path)) {
+  if (chatIdFromUrl && !storedHistory.find((c) => c.id === chatIdFromUrl)) {
     createAndSelect();
-  } else if (path) {
-    selectChat(path);
+  } else if (chatIdFromUrl) {
+    selectChat(chatIdFromUrl);
   }

   // Load models on initialization

(new file, 44 lines)

@@ -0,0 +1,44 @@
// Parse chat ID from GET parameter
export function getChatIdFromUrl() {
  const params = new URLSearchParams(window.location.search);
  return params.get('chat');
}

// Update URL with GET parameter
export function updateUrlWithChatId(chatId) {
  const url = new URL(window.location.href);
  if (chatId) {
    url.searchParams.set('chat', chatId);
  } else {
    url.searchParams.delete('chat');
  }
  window.history.replaceState({}, "", url);
}

// which chat is on screen right now
export let activeChatId = $state(null);

export function switchChat(chatId) {
  activeChatId = chatId;
  updateUrlWithChatId(chatId);
}

export function newChat() {
  const id = "chat_" + crypto.randomUUID();
  switchChat(id);
  return id;
}

// restore last opened chat (or create first one)
(() => {
  const ids = JSON.parse(localStorage.getItem("chat_ids") || "[]");
  const urlChatId = getChatIdFromUrl();
  if (urlChatId) {
    switchChat(urlChatId);
  } else if (ids.length) {
    switchChat(ids[0]);
  } else {
    newChat();
  }
})();

models/Chat.py (new file, 10 lines)

@@ -0,0 +1,10 @@
from masoniteorm.models import Model
from masoniteorm.scopes import UUIDPrimaryKeyMixin

class Chat(Model, UUIDPrimaryKeyMixin):
    __table__ = "chats"
    __timestamps__ = False
    __primary_key__ = "id"
    __incrementing__ = False
    __fillable__ = ["id", "title", "messages"]
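A round-trip sketch of how controllers.py drives this model: a UUID string as primary key, with messages serialized to JSON text (assumes the `chats` table exists):

```python
# Sketch of the Chat round-trip used in controllers.py.
import json
import uuid

from models.Chat import Chat

chat = Chat()
chat.id = str(uuid.uuid4())
chat.title = "New Chat"
chat.messages = json.dumps([{"role": "human", "content": "hi"}])
chat.save()

again = Chat.find(chat.id)
assert json.loads(again.messages)[0]["content"] == "hi"
```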

(modified file)

@@ -8,13 +8,11 @@ dependencies = [
     "jinja2>=3.1.6",
     "langchain>=0.3.26",
     "langchain-core>=0.3.68",
-    "langchain-openai>=0.3.28",
     "starlette>=0.47.1",
     "uvicorn>=0.35.0",
     "python-dotenv>=1.1.1",
     "websockets>=15.0.1",
     "sse-starlette>=2.4.1",
+    "langchain-openai>=0.3.28",
-    "langgraph>=0.5.4",
-    "langgraph-checkpoint-sqlite>=2.0.11",
-    "aiosqlite>=0.21.0",
+    "masonite-orm>=3.0.0",
 ]

test-backend.hurl (new file, 165 lines)

@@ -0,0 +1,165 @@
# Hurl Test Script for Chat Backend API
# This script tests the complete flow of the backend API
# Test 1: Get available models
GET http://localhost:8000/api/models
HTTP 200
[Captures]
models: jsonpath "$.models"
# Validate models response
[Asserts]
jsonpath "$.models" count > 0
# Test 2: Create a new chat
POST http://localhost:8000/api/chats
Content-Type: application/json
{
"model": "qwen/qwen3-235b-a22b-2507"
}
HTTP 200
[Captures]
chat_id: jsonpath "$.id"
model: jsonpath "$.model"
# Validate chat creation response
[Asserts]
jsonpath "$.id" != null
jsonpath "$.model" == "qwen/qwen3-235b-a22b-2507"
# Test 3: Get chat history (should be empty initially)
GET http://localhost:8000/api/chats/{{chat_id}}
HTTP 200
[Captures]
messages: jsonpath "$.messages"
# Validate empty history
[Asserts]
jsonpath "$.messages" count == 0
# Test 4: Post a message to the chat
POST http://localhost:8000/api/chats/{{chat_id}}/messages
Content-Type: application/json
{
"message": "Hello, this is a test message",
"model": "{{model}}"
}
HTTP 200
[Captures]
message_id: jsonpath "$.message_id"
status: jsonpath "$.status"
# Validate message posting
[Asserts]
jsonpath "$.status" == "queued"
jsonpath "$.message_id" != null
# Test 5: Stream the response (SSE)
GET http://localhost:8000/api/chats/{{chat_id}}/stream?message_id={{message_id}}
HTTP 200
[Asserts]
header "Content-Type" == "text/event-stream; charset=utf-8"
# Test 6: Verify chat history now contains messages
GET http://localhost:8000/api/chats/{{chat_id}}
HTTP 200
[Captures]
updated_messages: jsonpath "$.messages"
# Validate messages are stored
[Asserts]
jsonpath "$.messages" count == 2
jsonpath "$.messages[0].role" == "human"
jsonpath "$.messages[0].content" == "Hello, this is a test message"
jsonpath "$.messages[1].role" == "assistant"
# Test 7: Post another message to test multi-turn conversation
POST http://localhost:8000/api/chats/{{chat_id}}/messages
Content-Type: application/json
{
"message": "Can you tell me a joke?",
"model": "{{model}}"
}
HTTP 200
[Captures]
message_id2: jsonpath "$.message_id"
# Test 8: Stream second response
GET http://localhost:8000/api/chats/{{chat_id}}/stream?message_id={{message_id2}}
HTTP 200
# Test 9: Verify multi-turn conversation history
GET http://localhost:8000/api/chats/{{chat_id}}
HTTP 200
[Captures]
final_messages: jsonpath "$.messages"
# Validate 4 messages (2 human + 2 assistant)
[Asserts]
jsonpath "$.messages" count == 4
# Test 10: Error handling - Invalid model
POST http://localhost:8000/api/chats
Content-Type: application/json
{
"model": "invalid-model-name"
}
HTTP 400
[Asserts]
jsonpath "$.error" == "Unknown model"
# Test 11: Error handling - Chat not found
GET http://localhost:8000/api/chats/non-existent-chat-id
HTTP 404
[Asserts]
jsonpath "$.error" == "Not found"
# Test 12: Error handling - Invalid chat ID for messages
POST http://localhost:8000/api/chats/non-existent-chat-id/messages
Content-Type: application/json
{
"message": "This should fail",
"model": "qwen/qwen3-235b-a22b-2507"
}
HTTP 404
[Asserts]
jsonpath "$.error" == "Chat not found"
# Test 13: Error handling - Missing message in post
POST http://localhost:8000/api/chats/{{chat_id}}/messages
Content-Type: application/json
{
"model": "{{model}}"
}
HTTP 200
# Note: The backend seems to accept empty messages, so this might not fail
# Test 14: Create another chat with different model
POST http://localhost:8000/api/chats
Content-Type: application/json
{
"model": "openai/gpt-4.1"
}
HTTP 200
[Captures]
chat_id2: jsonpath "$.id"
model2: jsonpath "$.model"
# Test 15: Verify second chat has different ID
[Asserts]
variable "chat_id" != "chat_id2"
variable "model2" == "openai/gpt-4.1"

uv.lock (generated, 1469 changed lines)

File diff suppressed because it is too large.