# Deployment

## Hosting: Laravel Forge

Postbox runs on a Forge-managed server. Deployments are triggered through Forge (push-to-deploy or manually).
## PostgreSQL prerequisites

The pg_trgm extension must be enabled before the first deployment (it powers the trigram-based fuzzy search):

```bash
# Run once on the production server:
sudo -u postgres psql -d postbox -c "CREATE EXTENSION IF NOT EXISTS pg_trgm;"
```

Without this extension, the migration `2026_02_21_100000_add_trigram_indexes_for_discover` fails. The extension enables GIN trigram indexes for `similarity()` and `ILIKE` queries on `social_profiles` and `watchers`.

Optional manual index (not part of the migration, since `CREATE INDEX CONCURRENTLY` cannot run inside a transaction block):

```sql
CREATE INDEX CONCURRENTLY watchers_name_trgm_index ON watchers USING gin (name gin_trgm_ops);
```
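For reference, this is the kind of query such a trigram index accelerates. A sketch: the table and column come from the index definition above, but the search term is illustrative and `%` uses pg_trgm's default similarity threshold (0.3):

```sql
-- Fuzzy match on watchers.name, served by the GIN trigram index above.
SELECT name, similarity(name, 'postbox') AS score
FROM watchers
WHERE name % 'postbox'          -- true when similarity exceeds the threshold
ORDER BY score DESC
LIMIT 10;
```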
## Deployment Script

The Forge deployment script runs the following steps:

```bash
cd /home/forge/app.postbox.so/current

# Update dependencies
composer install --no-dev --no-interaction --prefer-dist --optimize-autoloader

# Run migrations
php artisan migrate --force

# Optimize caches (config, routes, views)
php artisan optimize

# Publish Log Viewer assets
php artisan vendor:publish --tag=log-viewer-assets --force

# Build Vite assets
npm ci --prefer-offline
npm run build

# Restart queue workers (graceful)
php artisan queue:restart
```

Location: Forge Dashboard > Sites > Deploy Script
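A post-deploy smoke test can optionally be appended to the script. This is a sketch only: the `/health` route is an assumption (the app does have a `HEALTH_TOKEN`, see the .env section), so adjust the URL to whatever health endpoint the app actually exposes:

```bash
# Hypothetical post-deploy check - replace /health with the real route.
code=$(curl -s -o /dev/null -w "%{http_code}" \
  "https://app.postbox.so/health?token=${HEALTH_TOKEN}")
if [ "$code" != "200" ]; then
  echo "Post-deploy smoke test failed (HTTP $code)" >&2
  exit 1
fi
```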
## Queue Workers (Forge Daemons)

All queue workers run as Forge daemons. Each queue has a dedicated worker:

```bash
# YouTube channel updates (--tries=0: the job manages retries itself via $tries/$maxExceptions/retryUntil)
php8.4 artisan queue:work database --sleep=5 --daemon --quiet --timeout=120 --tries=0 --queue=imports-youtube

# YouTube priority updates
php8.4 artisan queue:work database --sleep=5 --daemon --quiet --timeout=120 --tries=0 --queue=imports-youtube-priority

# YouTube video stats
php8.4 artisan queue:work database --sleep=5 --daemon --quiet --timeout=120 --tries=0 --queue=imports-youtube-video

# YouTube video priority
php8.4 artisan queue:work database --sleep=5 --daemon --quiet --timeout=120 --tries=0 --queue=imports-youtube-video-priority

# Related profiles (all platforms; high-priority queues first, for user-triggered jobs)
php8.4 artisan queue:work database --sleep=5 --daemon --quiet --timeout=300 --tries=0 --queue=youtube-related-channels-high,instagram-related-profiles-high,cross-platform-related-high,youtube-related-channels,instagram-related-profiles,cross-platform-related

# AI detection (rate-limited through job middleware)
php8.4 artisan queue:work database --sleep=5 --daemon --quiet --timeout=60 --tries=3 --queue=ai-detection

# Email notifications (feedback, alerts, registration)
php8.4 artisan queue:work database --sleep=5 --daemon --quiet --timeout=60 --tries=5 --queue=emails
```

IMPORTANT: The YouTube queues must use `--tries=0`! Those jobs manage retries themselves via `$tries = 0` + `$maxExceptions` + `retryUntil()`. With `--tries=3`, every `release()` call (quota pause) counts as an attempt, so after 3 releases the job dies with a `MaxAttemptsExceededException`, even though plenty of API quota is still available.
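The job-side counterpart of that pattern looks roughly like this. A sketch, not code from the repository: the class name and the concrete limits are invented for illustration; only the property and method names (`$tries`, `$maxExceptions`, `retryUntil`) are Laravel's:

```php
<?php

// Sketch of a quota-aware job with job-managed retries.
class UpdateYouTubeChannelJob implements \Illuminate\Contracts\Queue\ShouldQueue
{
    // 0 = unlimited attempts, so a release() during a quota pause never
    // counts the job out. Only works if the worker also runs with --tries=0.
    public $tries = 0;

    // ...but repeated real exceptions still fail the job.
    public $maxExceptions = 3;

    // Hard wall-clock deadline, regardless of how often the job was released.
    public function retryUntil(): \DateTimeInterface
    {
        return now()->addHours(6); // illustrative value
    }
}
```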
| Queue | Timeout | Tries | Notes |
|---|---|---|---|
| `imports-youtube` | 120s | 0 | Job-managed retries (quota-aware) |
| `imports-youtube-priority` | 120s | 0 | PRO/leaderboard profiles |
| `imports-youtube-video` | 120s | 0 | Video statistics |
| `imports-youtube-video-priority` | 120s | 0 | Priority video sync |
| `youtube-related-channels-high` | 300s | 0 | User-triggered, highest priority |
| `instagram-related-profiles-high` | 300s | 0 | User-triggered, highest priority |
| `cross-platform-related-high` | 300s | 0 | User-triggered, highest priority |
| `youtube-related-channels` | 300s | 0 | AutoFill, quota-aware |
| `instagram-related-profiles` | 300s | 0 | AutoFill |
| `cross-platform-related` | 300s | 0 | AutoFill |
| `ai-detection` | 60s | 3 | Gemini rate limit: 15/min |
| `emails` | 60s | 5 | Notification mails, feedback, admin alerts |
Location: Forge Dashboard > Daemons
## Collector-based jobs (Instagram)

Instagram daily scrapes do not run through Laravel queues but through the collector system:

- The browser extension leases jobs via `/api/collector/jobs/lease`
- Results are reported back via `/api/collector/jobs/{id}/complete`
- No Laravel queue worker is required

Location: app/Http/Controllers/Api/CollectorJobController.php
## Reverb Daemon

Laravel Reverb runs as a separate Forge daemon:

```bash
php8.4 artisan reverb:start --host=0.0.0.0 --port=8081
```

### Nginx WebSocket proxy

Nginx must forward WebSocket connections to Reverb:

```nginx
location /app {
    proxy_pass http://127.0.0.1:8081;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 60s;
    proxy_send_timeout 60s;
}
```

Location: Forge Dashboard > Sites > Nginx Configuration
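To verify the proxy wiring, a request carrying WebSocket upgrade headers should come back with `101 Switching Protocols` rather than a 200/404 from the Laravel app. A sketch (replace the placeholder with the real `REVERB_APP_KEY`; `--max-time` keeps curl from hanging on the open socket):

```bash
# Handshake smoke test - the first response line should read "HTTP/1.1 101".
curl -si --max-time 3 \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  "https://app.postbox.so/app/<REVERB_APP_KEY>" | head -1
```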
## PHP-FPM configuration

PHP-FPM handles all HTTP requests (web UI, Livewire polling, API). Forge's default configuration is too low for production workloads.

Location: /etc/php/8.4/fpm/pool.d/www.conf

### Recommended configuration (48-core server)

| Setting | Forge default | Recommended | Description |
|---|---|---|---|
| `pm` | dynamic | dynamic | Pool mode |
| `pm.max_children` | 20 | 150 | Max. concurrent workers |
| `pm.start_servers` | 2 | 30 | Workers at startup |
| `pm.min_spare_servers` | 1 | 15 | Minimum idle workers |
| `pm.max_spare_servers` | 3 | 50 | Maximum idle workers |
| `pm.max_requests` | 0 | 1000 | Requests per worker before respawn |
Rules of thumb:

- `max_children` = available RAM (GB) / 0.1 GB (at ~100 MB per worker)
- `pm.max_requests = 1000` prevents memory bloat in long-lived workers
- `active: N, idle: 0` in `systemctl status php8.4-fpm` means `max_children` is too low
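The first rule of thumb as a quick calculation (the RAM figure is an example value; substitute what is actually left for PHP-FPM after the OS, PostgreSQL, and the queue workers):

```shell
avail_ram_mb=16000   # example: ~16 GB available for PHP-FPM
per_worker_mb=100    # typical RSS of one PHP-FPM worker
echo $(( avail_ram_mb / per_worker_mb ))
# → 160
```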
```bash
# Edit and restart
sudo nano /etc/php/8.4/fpm/pool.d/www.conf
sudo systemctl restart php8.4-fpm
sudo systemctl status php8.4-fpm
```

IMPORTANT: After a PHP upgrade (e.g. 8.3 → 8.4), check whether the old FPM master is still running: `ps -eo pid,args | grep "php-fpm: master"`. Stop the old version: `sudo systemctl stop php8.3-fpm && sudo systemctl disable php8.3-fpm`
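What that check looks like when an old master did survive the upgrade (the `ps` output below is simulated; PIDs are invented for the example):

```shell
# Two masters after an 8.3 → 8.4 upgrade - the 8.3 line is the leftover.
ps_output='1234 php-fpm: master process (/etc/php/8.3/fpm/php-fpm.conf)
5678 php-fpm: master process (/etc/php/8.4/fpm/php-fpm.conf)'
printf '%s\n' "$ps_output" | grep -c "php/8.3"
# → 1  (anything above 0 means the old master is still running)
```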
See also: Troubleshooting for diagnosing FPM worker exhaustion.
## Scheduler

The Laravel scheduler must run every minute. Forge sets up the cron entry automatically:

```bash
* * * * * cd /home/forge/app.postbox.so/current && php8.4 artisan schedule:run >> /dev/null 2>&1
```

All scheduled commands are defined in routes/console.php. Important jobs have heartbeat monitoring via CronHeartbeatMonitorService and overlap protection via `.withoutOverlapping()`.

Location: routes/console.php
## Important .env variables (production)

```ini
# App
APP_ENV=production
APP_DEBUG=false
APP_URL=https://app.postbox.so

# Database
DB_CONNECTION=pgsql
DB_HOST=127.0.0.1
DB_PORT=5432
DB_DATABASE=postbox
DB_USERNAME=forge
DB_PASSWORD=<secret>

# Queue & cache
QUEUE_CONNECTION=database
CACHE_STORE=database

# Reverb (production)
REVERB_APP_ID=postbox
REVERB_APP_KEY=<generated-key>
REVERB_APP_SECRET=<generated-secret>
REVERB_HOST=app.postbox.so
REVERB_PORT=443
REVERB_SCHEME=https

# Flare (error tracking)
FLARE_KEY=your-flare-key

# YouTube API
YOUTUBE_API_KEY=<key>
YOUTUBE_API_KEYS=<key1>,<key2>,<key3>

# Health monitoring
HEALTH_TOKEN=<generated-hex-token>

# WorkOS SSO
WORKOS_CLIENT_ID=<client-id>
WORKOS_API_KEY=<api-key>
```

Location: .env.example
## Troubleshooting

### Scheduler not running

```bash
# Clear the schedule cache
php artisan schedule:clear-cache

# Test manually
php artisan schedule:run --verbose

# Check the cron entry
crontab -l
```

### Queue problems

```bash
# Check queue status
php artisan queue:monitor imports-youtube,imports-youtube-priority,ai-detection

# List failed jobs
php artisan queue:failed

# Retry all failed jobs
php artisan queue:retry all
```
### Inspecting pending jobs

```bash
php8.4 artisan tinker --execute="
DB::table('jobs')
    ->selectRaw(\"queue, payload::json->>'displayName' as job_class, COUNT(*) as count\")
    ->groupBy('queue', DB::raw(\"payload::json->>'displayName'\"))
    ->orderByDesc('count')
    ->get()
    ->each(fn(\$r) => print(\"\$r->queue: \$r->job_class (\$r->count)\n\"));
"
```

Including the age of the oldest jobs:

```bash
php8.4 artisan tinker --execute="
DB::table('jobs')
    ->selectRaw(\"queue, payload::json->>'displayName' as job_class, MIN(to_timestamp(available_at)) as oldest, COUNT(*) as count\")
    ->groupBy('queue', DB::raw(\"payload::json->>'displayName'\"))
    ->orderByDesc('count')
    ->get()
    ->each(fn(\$r) => print(\"\$r->queue: \$r->job_class (\$r->count, oldest: \$r->oldest)\n\"));
"
```

Jobs that are several days old point to stopped or crashed Forge workers.
### Pushing jobs through manually

If the Forge workers are stopped:

```bash
# Drain a single queue (stops automatically when empty)
php8.4 artisan queue:work --queue=ai-detection --stop-when-empty

# Several queues in parallel
php8.4 artisan queue:work --queue=ai-detection --stop-when-empty &
php8.4 artisan queue:work --queue=imports-youtube --stop-when-empty &
```

`--stop-when-empty` prevents zombie processes lingering alongside the Forge workers.

### Memory problems

```bash
php -d memory_limit=512M artisan social:queue-daily-instagram
```

The scraper commands use `chunkById(1000)` to avoid memory exhaustion.
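The pattern behind that, sketched. The model, the query scope, and the job name below are illustrative, not taken from the codebase; only `chunkById` itself is the Laravel API in use:

```php
<?php

// chunkById pages through the table by primary key, so only 1000 models
// are hydrated at any point instead of the entire result set.
SocialProfile::query()
    ->where('platform', 'instagram')
    ->chunkById(1000, function ($profiles) {
        foreach ($profiles as $profile) {
            QueueDailyInstagramScrape::dispatch($profile); // hypothetical job
        }
    });
```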