Open WebUI

Open WebUI is the primary user interface for interacting with AI models in the Local AI Cyber Lab. It provides a modern, feature-rich chat interface for model interaction and management.

Architecture Overview

graph TB
    subgraph User_Interface["User Interface"]
        chat["Chat Interface"]
        settings["Settings Panel"]
        templates["Prompt Templates"]
        history["Chat History"]
    end

    subgraph Backend["Backend Services"]
        api["API Server"]
        auth["Authentication"]
        storage["State Management"]

        subgraph Model_Integration["Model Integration"]
            ollama["Ollama Connection"]
            langfuse["Langfuse Analytics"]
            guardian["AI Guardian"]
        end
    end

    subgraph Storage["Persistent Storage"]
        db["Chat Database"]
        files["File Storage"]
        config["Configuration"]
    end

    User_Interface --> Backend
    Backend --> Storage

    classDef primary fill:#f9f,stroke:#333,stroke-width:2px
    classDef secondary fill:#bbf,stroke:#333,stroke-width:1px
    class chat,api primary
    class ollama,guardian secondary

Features

Chat Interface

graph LR
    subgraph Chat_Features["Chat Features"]
        A["Message Input"] --> B["Model Selection"]
        B --> C["Parameter Control"]
        C --> D["Response Generation"]
        D --> E["History Management"]
    end

    subgraph Advanced_Features["Advanced Features"]
        F["File Upload"] --> G["Code Highlighting"]
        G --> H["Markdown Support"]
        H --> I["Export Options"]
    end

    subgraph Integration["Integrations"]
        J["Model APIs"] --> K["Security Checks"]
        K --> L["Analytics"]
        L --> M["Storage"]
    end

    Chat_Features --> Advanced_Features
    Advanced_Features --> Integration

Installation

Open WebUI is deployed automatically as part of the Local AI Cyber Lab. To update it to the latest image and restart the service:

# Update Open WebUI
docker-compose pull openwebui

# Start the service
docker-compose up -d openwebui

Configuration

Environment Variables

# .env file
OPENWEBUI_PORT=3000
WEBUI_AUTH_TOKEN=your-secure-token
OLLAMA_API_BASE_URL=http://ollama:11434

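A minimal sketch (plain Python, standard library only) of how a helper script might read these variables and probe the Ollama endpoint before starting work; the fallback values mirror the `.env` example above, and `/api/version` is Ollama's lightweight version endpoint:

```python
import os
import urllib.request

# Fall back to the defaults shown in the .env example above
OLLAMA_API_BASE_URL = os.environ.get("OLLAMA_API_BASE_URL", "http://ollama:11434")
OPENWEBUI_PORT = int(os.environ.get("OPENWEBUI_PORT", "3000"))

def ollama_reachable(timeout=5):
    """Return True if the Ollama API answers its version endpoint."""
    try:
        with urllib.request.urlopen(f"{OLLAMA_API_BASE_URL}/api/version", timeout=timeout):
            return True
    except OSError:
        return False
```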
Security Settings

# docker-compose.yml
services:
  openwebui:
    environment:
      - WEBUI_AUTH_TOKEN=${WEBUI_AUTH_TOKEN}
      - SESSION_SECRET=${SESSION_SECRET}
      - ENABLE_SECURITY_HEADERS=true

User Interface Features

Chat Management

  1. Model Selection:
     - Choose from available models
     - Configure model parameters
     - Save custom presets

  2. Chat Controls:
     - Message formatting
     - File attachments
     - Code blocks
     - Markdown support

  3. History Management:
     - Save conversations
     - Export chat logs
     - Search history
     - Tag conversations

Advanced Features

  1. Prompt Templates:

    {
      "name": "Code Review",
      "template": "Review this code:\n```{{language}}\n{{code}}\n```\nFocus on:",
      "variables": ["language", "code"]
    }
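A template in this shape can be rendered by substituting the `{{variable}}` placeholders. The following is an illustrative Python sketch (the `render_template` helper is hypothetical, not part of Open WebUI) that also rejects prompts with missing variables:

```python
import re

def render_template(template, variables):
    """Substitute {{name}} placeholders; raise if a variable is missing."""
    def sub(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing template variable: {name}")
        return str(variables[name])
    return re.sub(r"\{\{(\w+)\}\}", sub, template["template"])

code_review = {
    "name": "Code Review",
    "template": "Review this code:\n```{{language}}\n{{code}}\n```\nFocus on:",
    "variables": ["language", "code"],
}

prompt = render_template(code_review, {"language": "python", "code": "print('hi')"})
```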
    

  2. Parameter Controls:

    const modelParams = {
      temperature: 0.7,
      top_p: 0.9,
      max_tokens: 2000,
      presence_penalty: 0.0,
      frequency_penalty: 0.0
    };
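To illustrate what `temperature` and `top_p` control, here is a toy Python sketch (illustrative only, not the actual sampling code) of temperature-scaled softmax and nucleus (top-p) filtering over a small next-token distribution:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: lower T sharpens, higher T flattens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p=0.9):
    """Keep the smallest set of tokens whose cumulative probability >= top_p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    return kept

logits = [2.0, 1.0, 0.5, -1.0]     # toy next-token scores
sharp = softmax(logits, temperature=0.7)  # more peaked than temperature=1.0
flat = softmax(logits, temperature=2.0)   # closer to uniform
```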
    

Integration

Ollama Integration

# Example API integration (Python, using httpx)
import httpx

async def query_model(prompt, model="llama2"):
    async with httpx.AsyncClient() as client:
        response = await client.post(
            f"{OLLAMA_API_BASE_URL}/api/chat",
            json={
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
                "stream": False,  # request a single JSON reply, not a stream
            },
        )
        response.raise_for_status()
        return response.json()

Security Integration

# AI Guardian integration (Python, using httpx)
import httpx

async def validate_prompt(prompt):
    # GUARDIAN_BASE_URL points at the AI Guardian service
    async with httpx.AsyncClient(base_url=GUARDIAN_BASE_URL) as client:
        response = await client.post(
            "/api/security/validate",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"prompt": prompt},
        )
        response.raise_for_status()
        return response.json()

Monitoring

Health Checks

# docker-compose.yml
services:
  openwebui:
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000"]
      interval: 30s
      timeout: 10s
      retries: 3
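The same check can be run outside Docker, e.g. as a smoke test. A sketch in plain Python (assuming the UI listens on localhost:3000) that mirrors the compose healthcheck's retry behaviour:

```python
import time
import urllib.request

def wait_healthy(url="http://localhost:3000", retries=3, interval=30, timeout=10):
    """Mirror the compose healthcheck: up to `retries` tries, `interval` s apart."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True
        except OSError:
            if attempt < retries - 1:
                time.sleep(interval)
    return False
```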

Analytics Integration

// Langfuse integration (sketch; see the Langfuse JS SDK docs for exact options)
const trackModelUsage = async (modelId, prompt, response) => {
    langfuse.generation({
        name: "chat-completion",
        model: modelId,
        input: prompt,
        output: response,
        metadata: {
            temperature: modelParams.temperature,
            maxTokens: modelParams.max_tokens
        }
    });
    await langfuse.flushAsync();
};

Performance Optimization

Caching

// Response caching
const cache = new Map();

const getCachedResponse = async (prompt, model) => {
    const key = `${model}:${prompt}`;
    if (cache.has(key)) {
        return cache.get(key);
    }
    const response = await queryModel(prompt, model);
    cache.set(key, response);
    return response;
};
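The JavaScript cache above grows without bound. A bounded variant (sketched here in Python with `OrderedDict`; the `(model, prompt)` key mirrors the `${model}:${prompt}` key above) evicts the least recently used entry once a size limit is reached:

```python
from collections import OrderedDict

class LRUCache:
    """A small LRU cache keyed on (model, prompt) tuples."""
    def __init__(self, max_entries=128):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict least recently used

cache = LRUCache(max_entries=2)
cache.put(("llama2", "hello"), "hi there")
cache.put(("llama2", "bye"), "goodbye")
cache.get(("llama2", "hello"))         # touch: "hello" is now most recent
cache.put(("llama2", "third"), "3rd")  # evicts ("llama2", "bye")
```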

Resource Management

# docker-compose.yml
services:
  openwebui:
    mem_limit: ${OPENWEBUI_MEMORY_LIMIT:-1g}
    cpus: ${OPENWEBUI_CPU_LIMIT:-1.0}

Troubleshooting

Common Issues

  1. Connection Problems:

    # Check connectivity
    curl -v http://localhost:3000/health
    
    # Check Ollama connection (from the host)
    curl -v http://localhost:11434/api/version
    

  2. Authentication Issues:

    # Verify environment variables
    docker-compose config
    
    # Check logs
    docker-compose logs openwebui
    

Additional Resources

  1. User Guide
  2. API Documentation
  3. Security Guide
  4. Customization Guide

Integration Examples

  1. Custom Templates
  2. API Usage
  3. Plugin Development
  4. Theme Customization