Getting Started Guide

Introduction

This guide will help you get started with Local-AI-Cyber-Lab, a comprehensive environment for AI development and cybersecurity research. We'll cover basic setup, initial configuration, and essential usage patterns.

Prerequisites

Before starting, ensure you have:

  • Docker Engine 24.0+
  • Docker Compose v2.0+
  • Git
  • 16GB+ RAM (32GB recommended)
  • 50GB+ free disk space
  • NVIDIA GPU (optional but recommended):
      • CUDA 11.8+ compatible
      • Minimum 8GB VRAM for basic models
      • 16GB+ VRAM recommended for larger models

Initial Setup

  1. Clone the repository:

    git clone https://github.com/Local-AI-Cyber-Lab/Local-AI-Cyber-Lab.git
    cd Local-AI-Cyber-Lab
    

  2. Configure environment:

    cp .env.example .env
    # Edit .env with your configurations
    

  3. Initialize services:

    ./scripts/install.sh
    

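Once the install script finishes, it is worth confirming the stack actually came up before moving on. A quick sanity check (this assumes the services are defined in the repository's docker-compose.yml; service names may differ):

```shell
# List container status for the stack; services should show "running" or "healthy"
docker compose ps

# Tail the logs of a specific service if something looks off
# (the service name "ollama" is an assumption based on the components listed here)
docker compose logs -f ollama
```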
Basic Configuration

Environment Variables

Key variables to configure in .env:

# Core Settings
TZ=UTC
DOMAIN=localhost

# Security
WEBUI_AUTH_TOKEN=your-secure-token
QDRANT_API_KEY=your-api-key
LANGFUSE_SECRET_KEY=your-secret-key

# Resource Limits
OLLAMA_MEMORY_LIMIT=8g
PORTAINER_MEMORY_LIMIT=1g
N8N_MEMORY_LIMIT=2g
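The secret values above should never be left as the example placeholders. One way to generate strong random tokens is with openssl, which ships with most Linux distributions (the sed lines below assume the variable names shown in the example .env):

```shell
# Generate a 64-character hex token suitable for an API key or auth token
openssl rand -hex 32

# Write generated values straight into .env
# (variable names taken from the example above; adjust to your file)
sed -i "s/^WEBUI_AUTH_TOKEN=.*/WEBUI_AUTH_TOKEN=$(openssl rand -hex 32)/" .env
sed -i "s/^QDRANT_API_KEY=.*/QDRANT_API_KEY=$(openssl rand -hex 32)/" .env
```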

Service Access

After installation, access services through:

Service      URL                       Default Credentials
Portainer    https://localhost:9443    Set during setup
Open WebUI   https://localhost/chat    None required
Flowise      https://localhost/flow    Set in .env

Basic Usage

1. Managing Models

# Pull models
ollama pull mistral
ollama pull codellama

# List available models
ollama list
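Besides the CLI, pulled models can be queried over Ollama's REST API. A minimal sketch, assuming Ollama's default port 11434 is reachable on the host (the lab's reverse proxy may route it differently):

```shell
# Ask mistral a question via Ollama's /api/generate endpoint;
# "stream": false returns a single JSON response instead of a token stream
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Explain what a reverse shell is in one paragraph.",
  "stream": false
}'
```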

2. Using the Chat Interface

  1. Access Open WebUI at https://localhost/chat
  2. Select a model from the dropdown
  3. Start chatting with the AI

3. Creating AI Workflows

  1. Access Flowise at https://localhost/flow
  2. Create a new workflow
  3. Add nodes for:
      • Input processing
      • LLM interaction
      • Output formatting
  4. Connect nodes and deploy
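Once deployed, Flowise exposes each workflow over an HTTP prediction endpoint. A hedged sketch: the `<flow-id>` placeholder is shown in the flow's API dialog inside Flowise, and the base URL assumes the lab's reverse proxy forwards /flow to Flowise:

```shell
# Call a deployed Flowise flow; replace <flow-id> with the ID
# shown in the flow's "API Endpoint" dialog
# (-k skips TLS verification for the local self-signed certificate)
curl -k -X POST https://localhost/flow/api/v1/prediction/<flow-id> \
  -H "Content-Type: application/json" \
  -d '{"question": "Hello from the lab"}'
```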

4. Monitoring

  1. Access Langfuse at https://localhost/trace
  2. View:
      • Request logs
      • Model performance
      • Error rates
      • Usage statistics

Security Best Practices

  1. Change default passwords immediately
  2. Use strong API keys
  3. Enable 2FA where available
  4. Regularly update components
  5. Monitor system logs
  6. Follow the principle of least privilege
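For point 4 above, a routine update of the stack might look like this (a sketch, assuming the services are managed by the repository's docker-compose.yml):

```shell
# Pull newer images, then recreate only containers whose image changed
docker compose pull
docker compose up -d

# Remove superseded images afterwards to reclaim disk space
docker image prune -f
```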

Troubleshooting

Common issues and solutions:

  1. Services not starting
      • Check Docker logs
      • Verify port availability
      • Ensure sufficient resources

  2. Model loading errors
      • Verify GPU drivers
      • Check memory limits
      • Ensure model compatibility

  3. Access denied
      • Verify credentials
      • Check API keys
      • Review firewall settings
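The checks above can all be run from the command line. A few generic diagnostics (tool names may vary by distribution; nvidia-smi only applies to the optional GPU setup):

```shell
# 1. Services not starting: inspect recent logs and port conflicts
docker compose logs --tail=100
ss -tlnp | grep -E ':443|:9443'   # is another process holding the lab's ports?

# 2. Model loading errors: confirm the GPU is visible and memory is free
nvidia-smi
free -h

# 3. Access denied: confirm the env values the services were started with
docker compose config | grep -i -E 'token|key'
```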

Support

For additional help: