# Hostinger VPS — Docker services, AI models, security, and maintenance
## Docker services

| Service | Container | Access | Status |
|---|---|---|---|
| Wiki | wiki-wiki-1 | anukul-wiki.duckdns.org | Live |
| OpenClaw | openclaw-47o1 | Port 65123 + Telegram | Live |
| Hermes Agent | hermes-agent | Telegram bot | Live |
| Traefik | traefik-pt36 | Ports 80/443 | Live |
| Ollama | systemd | localhost:11434 | Live |
## AI API providers

| Provider | API Base | Models |
|---|---|---|
| Moonshot/Kimi | api.moonshot.ai/v1 | kimi-k2-0905-preview |
| OpenRouter | openrouter.ai/api/v1 | gpt-oss-120b:free, gemma-4-26b:free |
| Nexos | Hostinger bundled | 21 models |
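Both Moonshot and OpenRouter expose OpenAI-compatible chat-completions endpoints, so one helper can talk to either. A minimal standard-library sketch; the `MOONSHOT_API_KEY` environment variable name is an assumption, not something configured on the server:

```python
import json
import os
import urllib.request

def chat_body(model: str, prompt: str) -> dict:
    """OpenAI-style chat-completions request body for a single user message."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(base_url: str, model: str, prompt: str, api_key: str) -> str:
    """POST one chat completion to an OpenAI-compatible endpoint."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(chat_body(model, prompt)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Example (requires a live key):
# chat("https://api.moonshot.ai/v1", "kimi-k2-0905-preview", "hello",
#      os.environ["MOONSHOT_API_KEY"])
```

Swapping `base_url` to `https://openrouter.ai/api/v1` targets OpenRouter with the same code.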
## Local Ollama models

| Model | Size | Context | Best For |
|---|---|---|---|
| gemma2:2b | 1.6 GB | 32K | Quick small tasks |
| qwen2.5:3b | 1.9 GB | 32K | Local reasoning |
| hermes3:3b | 2.0 GB | 32K | Agent tasks |
## Server access

```shell
ssh -i ~/.ssh/id_ed25519 root@187.77.185.36
```
## Docker maintenance

```shell
# View all containers
docker ps

# Restart a single service
cd /docker/wiki && docker compose restart
cd /docker/openclaw-47o1 && docker compose restart
cd /docker/hermes-agent && docker compose restart

# View logs
docker logs <container-name> --tail 50

# Full restart (recreate containers)
docker compose up -d --force-recreate
```
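The checks above can be wrapped in a small health-check script that flags any expected container that is not running. A sketch assuming `docker ps --format '{{.Names}}'` output (one container name per line); `missing_containers` is pure, so it can be tested without Docker:

```python
import subprocess

# The four Docker-managed services from the table above
EXPECTED = ["wiki-wiki-1", "openclaw-47o1", "hermes-agent", "traefik-pt36"]

def missing_containers(ps_output: str, expected=EXPECTED) -> list:
    """Return expected container names absent from `docker ps` output."""
    running = {line.strip() for line in ps_output.splitlines() if line.strip()}
    return [name for name in expected if name not in running]

def check_live() -> list:
    """Query Docker for running container names and report missing ones."""
    out = subprocess.run(
        ["docker", "ps", "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return missing_containers(out)
```

An empty list from `check_live()` means all four services are up.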
## Ollama

```shell
# List installed models
ollama list

# Pull a new model
ollama pull <model-name>

# Test a model
curl http://localhost:11434/api/generate \
  -d '{"model":"qwen2.5:3b","prompt":"hello","stream":false}'

# Restart the service (runs under systemd, not Docker)
systemctl restart ollama
```
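The same `/api/generate` call can be made from Python using only the standard library. A minimal sketch; `response` is the reply field Ollama returns for non-streaming requests:

```python
import json
import urllib.request

def build_payload(model: str, prompt: str) -> bytes:
    """JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(prompt: str, model: str = "qwen2.5:3b",
             base: str = "http://localhost:11434") -> str:
    """POST a prompt to the local Ollama server and return the generated text."""
    req = urllib.request.Request(
        f"{base}/api/generate",
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

`generate("hello")` mirrors the curl test above; only the payload builder is exercised in isolation here since the call needs a running server.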
## Firewall (UFW)

```shell
# View rules
ufw status verbose
```

Open ports: 22 (SSH), 80/443 (HTTP/S), 65123 (OpenClaw). The Docker subnet 172.16.0.0/12 is allowed to reach Ollama on port 11434.
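The subnet rule above can be sanity-checked offline: 172.16.0.0/12 spans 172.16.0.0–172.31.255.255, which covers Docker's default bridge networks. A quick check with Python's `ipaddress` module:

```python
import ipaddress

# Source range the UFW rule allows to reach Ollama on 11434
DOCKER_NET = ipaddress.ip_network("172.16.0.0/12")

def allowed_to_ollama(client_ip: str) -> bool:
    """True if the firewall rule permits this source IP to reach port 11434."""
    return ipaddress.ip_address(client_ip) in DOCKER_NET
```

Useful when debugging why a container on a non-default network (e.g. a user-defined bridge outside 172.16.0.0/12) cannot reach Ollama.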
## Security measures

| Measure | Status |
|---|---|
| UFW Firewall | Active |
| fail2ban (SSH) | Active |
| Unattended Upgrades | Active |
| SSH Key Auth | ED25519 |
| HTTPS (Let's Encrypt) | TLS 1.3 |
## Troubleshooting

**Kimi API errors.** Use `api.moonshot.ai`, NOT `api.moonshot.cn`. Verify the API key at platform.kimi.ai.

**Ollama unreachable from containers.** Ollama must listen on 0.0.0.0, not just localhost. Check with `ss -tlnp | grep 11434`. Fix: in `/etc/systemd/system/ollama.service.d/override.conf`, set `Environment="OLLAMA_HOST=0.0.0.0"`, then reload systemd and restart Ollama.

**Telegram bot not responding.** Test the token with `curl https://api.telegram.org/bot<TOKEN>/getMe`, then restart the bot container with `docker compose restart`.
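The getMe check above, wrapped as a Python helper. `token_is_valid` makes a live call to Telegram (it returns `{"ok": true, ...}` for a working token), so only the pure URL builder is tested here:

```python
import json
import urllib.request

def getme_url(token: str) -> str:
    """Build the Bot API getMe URL for a token like '123456:ABC...'."""
    return f"https://api.telegram.org/bot{token}/getMe"

def token_is_valid(token: str) -> bool:
    """Call getMe; Telegram answers {"ok": true, ...} for a live token."""
    with urllib.request.urlopen(getme_url(token)) as resp:
        return json.loads(resp.read()).get("ok", False)
```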