# Getting Started Guide

## Introduction
This guide will help you get started with Local-AI-Cyber-Lab, a comprehensive environment for AI development and cybersecurity research. We'll cover basic setup, initial configuration, and essential usage patterns.
## Prerequisites
Before starting, ensure you have:
- Docker Engine 24.0+
- Docker Compose v2.0+
- Git
- 16GB+ RAM (32GB recommended)
- 50GB+ free disk space
- NVIDIA GPU (optional but recommended):
  - CUDA 11.8+ compatible
  - Minimum 8GB VRAM for basic models
  - 16GB+ VRAM recommended for larger models
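You can quickly sanity-check these prerequisites from a terminal; the commands below assume a Linux host and, for the GPU check, an installed NVIDIA driver.

```bash
# Check Docker Engine and Docker Compose versions
docker --version
docker compose version

# Check available memory and free disk space
free -h
df -h .

# If an NVIDIA GPU is present, confirm the driver and CUDA version it reports
nvidia-smi
```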
## Initial Setup

1. Clone the repository.
2. Configure the environment.
3. Initialize the services.
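A minimal sketch of these three steps is shown below. The repository URL, the presence of a `.env.example` file, and the compose invocation are assumptions based on a typical Docker Compose project; adjust them to the actual repository layout.

```bash
# 1. Clone the repository (URL placeholder is an assumption)
git clone https://github.com/<org>/Local-AI-Cyber-Lab.git
cd Local-AI-Cyber-Lab

# 2. Configure the environment (assumes an example env file is provided)
cp .env.example .env
#    ...then edit .env and set the variables described below

# 3. Initialize the services in the background
docker compose up -d
```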
## Basic Configuration

### Environment Variables

Key variables to configure in `.env`:
```env
# Core Settings
TZ=UTC
DOMAIN=localhost

# Security
WEBUI_AUTH_TOKEN=your-secure-token
QDRANT_API_KEY=your-api-key
LANGFUSE_SECRET_KEY=your-secret-key

# Resource Limits
OLLAMA_MEMORY_LIMIT=8g
PORTAINER_MEMORY_LIMIT=1g
N8N_MEMORY_LIMIT=2g
```
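The tokens and keys above should be long random values rather than memorable strings. One way to generate them, assuming `openssl` is installed:

```bash
# Generate random secrets to paste into .env
openssl rand -hex 32   # e.g. for WEBUI_AUTH_TOKEN
openssl rand -hex 32   # e.g. for QDRANT_API_KEY
openssl rand -hex 32   # e.g. for LANGFUSE_SECRET_KEY
```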
### Service Access

After installation, access services through:

| Service | URL | Default Credentials |
|---|---|---|
| Portainer | https://localhost:9443 | Set during setup |
| Open WebUI | https://localhost/chat | None required |
| Flowise | https://localhost/flow | Set in `.env` |
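To confirm the services are reachable after startup, you can probe each URL; the `-k` flag skips certificate verification, which is typically needed with the self-signed certificates used in local deployments.

```bash
# Check that each endpoint responds (self-signed certificates assumed, hence -k)
curl -k -I https://localhost:9443   # Portainer
curl -k -I https://localhost/chat   # Open WebUI
curl -k -I https://localhost/flow   # Flowise
```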
## Basic Usage
### 1. Managing Models
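Models are served by Ollama. A typical workflow is to pull a model into the Ollama container and then list or remove models as needed; the container name `ollama` and the model name `llama3` are assumptions, so substitute the names used in your deployment.

```bash
# Pull a model into the Ollama container (container and model names are assumptions)
docker exec -it ollama ollama pull llama3

# List the models that are currently available
docker exec -it ollama ollama list

# Remove a model you no longer need
docker exec -it ollama ollama rm llama3
```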
### 2. Using the Chat Interface
- Access Open WebUI at https://localhost/chat
- Select a model from the dropdown
- Start chatting with the AI
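If you prefer to query a model programmatically rather than through the UI, the sketch below uses Ollama's HTTP API; it assumes the Ollama port 11434 is exposed on the host, which may not be the case if the service is only reachable through the reverse proxy.

```bash
# Generate a completion via the Ollama API (direct port exposure is an assumption)
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Summarize the purpose of a vector database in one sentence.",
  "stream": false
}'
```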
### 3. Creating AI Workflows
- Access Flowise at https://localhost/flow
- Create a new workflow
- Add nodes for:
  - Input processing
  - LLM interaction
  - Output formatting
- Connect nodes and deploy
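A deployed workflow can also be called over Flowise's prediction API. The chatflow ID below is a placeholder, and the `/flow` path prefix assumes the reverse-proxy routing shown in the table above; an API key header may also be required depending on your Flowise settings.

```bash
# Call a deployed Flowise chatflow (the ID is a placeholder)
curl -k https://localhost/flow/api/v1/prediction/<chatflow-id> \
  -H "Content-Type: application/json" \
  -d '{"question": "What does this workflow do?"}'
```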
### 4. Monitoring
- Access Langfuse at https://localhost/trace
- View:
  - Request logs
  - Model performance
  - Error rates
  - Usage statistics
## Security Best Practices
- Change default passwords immediately
- Use strong API keys
- Enable 2FA where available
- Regularly update components
- Monitor system logs
- Follow the principle of least privilege
## Next Steps
- Explore Advanced Configuration
- Learn about Security Features
- Start Building AI Workflows
- Understand Model Fine-tuning
## Troubleshooting
Common issues and solutions:
- Services not starting
  - Check Docker logs
  - Verify port availability
  - Ensure sufficient resources
- Model loading errors
  - Verify GPU drivers
  - Check memory limits
  - Ensure model compatibility
- Access denied
  - Verify credentials
  - Check API keys
  - Review firewall settings
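For the issues above, the following commands are useful first diagnostics; the service name `ollama` is only an example, so substitute the service names from the compose file.

```bash
# Inspect service status and recent logs
docker compose ps
docker compose logs --tail=100 ollama

# Check whether the ports the stack needs are already in use
ss -tlnp | grep -E '443|9443'

# Check memory, disk, and GPU availability
free -h
df -h
nvidia-smi
```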
## Support
For additional help:
- Issue Tracker
- Discussions
- Contact: support@cyber-ai-agents.com