ChatSBT

ChatSBT is a full-stack chat application that allows users to interact with multiple AI models through a web interface. The application features real-time streaming responses, chat history persistence, and support for various language models.

Features

  • Real-time chat with multiple AI models
  • Server-Sent Events (SSE) for streaming responses
  • Chat history persistence
  • Responsive web interface built with Svelte
  • Docker support for easy deployment

Technology Stack

Backend

  • Python 3.11+
  • Starlette - ASGI framework for building asynchronous web applications
  • Langchain - Framework for developing applications with LLMs
  • Masonite ORM - Database ORM for Python
  • SQLite - Default database (configurable)

Frontend

  • Svelte 5 - Reactive UI framework
  • Vite - Frontend build tool and development server
  • DaisyUI - Tailwind CSS components
  • Marked - Markdown parser

Infrastructure

  • Deno - JavaScript/TypeScript runtime for frontend build
  • uv - Python package installer and resolver

Prerequisites

Before you begin, ensure you have the following installed:

  • Python 3.11 or higher
  • Deno 2.4 or higher
  • uv (Python package installer)
  • Docker (optional, for containerized deployment)

Installation

Backend Setup

  1. Install Python dependencies using uv:

    uv sync
    
  2. Set up environment variables: Create a .env file in the project root with the necessary configuration:

    # Database configuration
    DB_CONNECTION=sqlite
    DB_DATABASE=chatsbt.db
    
    # AI Provider API keys (add as needed)
    OPENAI_API_KEY=your_openai_api_key
    ANTHROPIC_API_KEY=your_anthropic_api_key
    

Frontend Setup

  1. Navigate to the frontend directory:

    cd frontend
    
  2. Install frontend dependencies:

    deno install --allow-scripts
    
  3. Build the frontend:

    deno run build
    

Running the Application

Development Mode

To run the application in development mode with hot reloading:

  1. Start the backend server:

    uv run app.py
    

    The backend will be available at http://localhost:8000

  2. In a separate terminal, start the frontend development server:

    cd frontend
    deno run dev
    

    The frontend will be available at http://localhost:5173

Production Mode

To run the application in production mode:

  1. Build the frontend:

    cd frontend
    deno run build
    cd ..
    
  2. Start the backend server:

    uv run app.py
    

The application will be available at http://localhost:8000

Using Docker

To run the application using Docker:

  1. Build the Docker image:

    docker build -t chatsbt .
    
  2. Run the container:

    docker run -p 8000:8000 chatsbt
    

The application will be available at http://localhost:8000
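Note that the .env file described in the installation section may not be included in the image, so API keys and database settings can be supplied to the container at run time. The flags below are standard Docker options; the variable names are the ones assumed in the configuration section:

```shell
# Pass provider keys into the container explicitly
docker run -p 8000:8000 \
  -e OPENAI_API_KEY=your_openai_api_key \
  -e ANTHROPIC_API_KEY=your_anthropic_api_key \
  chatsbt

# Or load everything from a local .env file
docker run -p 8000:8000 --env-file .env chatsbt
```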

API Endpoints

The backend exposes the following RESTful API endpoints:

Models

  • GET /api/models - Retrieve the list of available AI models
    • Response: {"models": ["model1", "model2", ...]}

Chats

  • POST /api/chats - Create a new chat session

    • Request: {"model": "model_name"}
    • Response: {"id": "chat_id", "model": "model_name"}
  • GET /api/chats/{chat_id} - Retrieve chat history

    • Response: {"messages": [{"role": "human|assistant", "content": "message_text"}, ...]}

Messages

  • POST /api/chats/{chat_id}/messages - Send a message to a chat

    • Request: {"message": "user_message", "model": "model_name"}
    • Response: {"status": "queued", "message_id": "message_id"}
  • GET /api/chats/{chat_id}/stream?message_id={message_id} - Stream AI response

    • Server-Sent Events (SSE) endpoint that streams the AI response token by token
    • Events:
      • data: token_content - Individual tokens from the AI response
      • event: done - Indicates the response is complete
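Putting the message and stream endpoints together, a client POSTs the message and then opens the stream with the returned message_id. The sketch below uses only the standard library; the base URL is the default address from "Running the Application", the field names come from this section, and the helper names are hypothetical, not part of the repository:

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # assumed default backend address

def parse_sse_line(line: str):
    """Classify one SSE line as ('data', payload), ('event', name), or None."""
    line = line.rstrip("\r\n")
    if line.startswith("data:"):
        return ("data", line[len("data:"):].lstrip())
    if line.startswith("event:"):
        return ("event", line[len("event:"):].strip())
    return None  # blank separators, comments, other fields

def stream_reply(chat_id: str, message: str, model: str) -> str:
    """Send a message, then collect streamed tokens until the 'done' event."""
    req = urllib.request.Request(
        f"{BASE}/api/chats/{chat_id}/messages",
        data=json.dumps({"message": message, "model": model}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        message_id = json.load(resp)["message_id"]

    tokens = []
    url = f"{BASE}/api/chats/{chat_id}/stream?message_id={message_id}"
    with urllib.request.urlopen(url) as stream:
        for raw in stream:
            parsed = parse_sse_line(raw.decode())
            if parsed == ("event", "done"):
                break
            if parsed and parsed[0] == "data":
                tokens.append(parsed[1])
    # Joined without separators, assuming tokens carry their own spacing
    return "".join(tokens)
```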