Solution to 413 Payload Too Large:
✅ Same repository: peikarband/landing
✅ Different tags: base, latest, {commit}
Images:
• hub.peikarband.ir/peikarband/landing:base (base image)
• hub.peikarband.ir/peikarband/landing:latest (app)
• hub.peikarband.ir/peikarband/landing:{commit} (app)
No new repo creation, no permission issues!
Before: hub.peikarband.ir/peikarband/base:latest
After: hub.peikarband.ir/peikarband/landing:base
This solves the 413 error because:
✅ Same repository (no new repo creation)
✅ Just different tags
✅ No permission/quota issues
Images:
• hub.peikarband.ir/peikarband/landing:base
• hub.peikarband.ir/peikarband/landing:latest
• hub.peikarband.ir/peikarband/landing:{commit}
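Mapped onto the CI config, the same-repo / different-tags layout looks roughly like this (assuming the Woodpecker docker-buildx plugin; registry credentials omitted):

steps:
  build-and-push-base:
    image: woodpeckerci/plugin-docker-buildx   # assumed plugin
    settings:
      registry: hub.peikarband.ir
      repo: hub.peikarband.ir/peikarband/landing
      dockerfile: Dockerfile.base
      tags: [base]                             # same repo, base tag
  build-and-push-app:
    image: woodpeckerci/plugin-docker-buildx   # assumed plugin
    settings:
      registry: hub.peikarband.ir
      repo: hub.peikarband.ir/peikarband/landing
      dockerfile: Dockerfile
      tags: [latest, "${CI_COMMIT_SHA}"]       # same repo, latest + {commit} tags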
New commands:
• make docker-build-base - Build base image locally
• make docker-push-base - Push base to Harbor
• make docker-build - Build app (updated to use base)
• make docker-push - Push app to Harbor
Usage:
1. make docker-login
2. make docker-build-base
3. make docker-push-base
4. make docker-build
5. make docker-push
Problem:
• 413 Payload Too Large error
• Harbor doesn't handle provenance/sbom metadata well
Solution:
✅ provenance: false (already set)
✅ sbom: false (new - disables SBOM generation)
✅ No cache settings (simpler, more compatible)
This makes images compatible with Harbor registry!
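In the build step settings this amounts to (same plugin assumption as the sketch above):

settings:
  repo: hub.peikarband.ir/peikarband/landing
  provenance: false   # don't push provenance attestation manifests
  sbom: false         # don't generate/push SBOM attestation manifests
  # no cache_from / cache_to settings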
Pipeline now handles base image automatically (see the sketch after this section):
✅ ensure-base-image:
• Checks if Dockerfile.base changed
• Only rebuilds if needed
• Saves ~10 minutes when unchanged
✅ build-and-push-app:
• Uses base image
• Fast build (~3 minutes)
✅ verify-images:
• Confirms both images exist
• Shows available tags
Behavior:
─────────
1️⃣ Dockerfile.base changed:
→ Build base (~10 min)
→ Build app (~3 min)
→ Total: ~13 min
2️⃣ Only code changed:
→ Skip base (path filter)
→ Build app (~3 min)
→ Total: ~3 min ✅
This is the smart solution we wanted!
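A skeleton of that pipeline, with the path filter on the base step (plugin, image choices, and verify commands are illustrative; credentials omitted):

steps:
  ensure-base-image:
    image: woodpeckerci/plugin-docker-buildx        # assumed plugin
    settings:
      registry: hub.peikarband.ir
      repo: hub.peikarband.ir/peikarband/landing
      dockerfile: Dockerfile.base
      tags: [base]
    when:
      - path:
          include: ['Dockerfile.base']              # only rebuild when the base Dockerfile changes

  build-and-push-app:
    image: woodpeckerci/plugin-docker-buildx        # assumed plugin
    settings:
      registry: hub.peikarband.ir
      repo: hub.peikarband.ir/peikarband/landing
      dockerfile: Dockerfile                        # builds FROM ...landing:base
      tags: [latest, "${CI_COMMIT_SHA}"]

  verify-images:
    image: docker:24-dind                           # image choice is illustrative
    commands:                                       # needs docker login for a private registry
      - docker manifest inspect hub.peikarband.ir/peikarband/landing:base > /dev/null
      - docker manifest inspect hub.peikarband.ir/peikarband/landing:latest > /dev/null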
All dependencies now in base image:
✅ Python 3.11
✅ Node.js 20
✅ bun, npm
✅ Build tools (gcc, g++, make)
✅ Runtime essentials (curl, ca-certificates)
✅ tini (init system)
Result:
• Runtime stage needs ZERO installations
• Just copy files from builder
• Pure base image usage 🚀
Problem: Runtime stage was installing Node.js again!
Solution: Use base image for runtime too
- Already has Python 3.11 ✅
- Already has Node.js 20 ✅
- Already has curl, ca-certificates ✅
- Only install tini (tiny)
This is the CORRECT way to use base image!
Changes:
✅ Dockerfile now uses base image
✅ Helper script to build base locally
✅ Complete documentation
Base image contains heavy dependencies:
- Python 3.11
- Node.js 20
- bun, npm
- Build tools (gcc, g++, make)
Build times:
• First time: 10 minutes (build base)
• After that: 3 minutes (code only) 🚀
To build base image:
./build-base-local.sh
Then normal builds are FAST!
Problem: Docker-in-Docker doesn't work in Woodpecker alpine image
Solution:
- Dockerfile now self-contained (installs Node.js, bun directly)
- No dependency on external base image
- Build always works
- Simpler and more reliable
Trade-off:
- Build time: ~8-10 minutes (but reliable)
- No complex base image management
- Easier to maintain
For future optimization:
- Use .woodpecker-base.yml separately to build base
- Then switch back to base image usage
- But for now, this JUST WORKS
- Use 'docker pull' to check if the base image exists (see the step sketch below)
- If it exists: skip the build (saves ~10 minutes) ✅
- If it doesn't exist: build it automatically
- Single stage that handles both the check and the build
- No authentication issues (uses docker login)
Behavior:
✓ Base exists → Skip (~30 seconds check + 3 min app)
✓ Base missing → Build base (~10 min) + app (~3 min)
This is the REAL solution we wanted!
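A hedged sketch of that single stage (secret names and the dind setup are assumptions; the image name follows the registry layout above):

steps:
  ensure-base-image:
    image: docker:24-dind                            # assumes the privileged dind setup used elsewhere
    secrets: [registry_username, registry_password]  # assumed secret names
    commands:
      - echo "$REGISTRY_PASSWORD" | docker login hub.peikarband.ir -u "$REGISTRY_USERNAME" --password-stdin
      - |
        if docker pull hub.peikarband.ir/peikarband/landing:base; then
          echo "base image already in the registry - skipping build"
        else
          docker build -f Dockerfile.base -t hub.peikarband.ir/peikarband/landing:base .
          docker push hub.peikarband.ir/peikarband/landing:base
        fi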
- Add check-base-image stage to verify if base exists
- Build base image only when (see the when-filter sketch below):
1. Dockerfile.base changes (path condition)
2. .woodpecker.yml changes
3. Manual trigger
- Saves ~10 minutes on normal builds
- First time or after base changes: builds base
- Normal commits: skips base, only builds app
Behavior:
✓ Normal push: skip base (~3 min)
✓ Dockerfile.base change: build base (~12 min)
✓ Manual trigger: build base
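Those three triggers can be expressed as an OR'd when list on the base-image step (Woodpecker path-filter syntax, sketched):

when:
  - path:
      include: ['Dockerfile.base']    # 1. base Dockerfile changed
  - path:
      include: ['.woodpecker.yml']    # 2. pipeline config changed
  - event: manual                     # 3. manually triggered build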
- Remove cache_from and cache_to that cause parsing errors
- Keep pull: true for layer caching
- Simpler configuration that works reliably
- Docker will still use local cache automatically
Error was: type required for "ref=..."
Cause: Woodpecker plugin doesn't support complex cache syntax
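Roughly, the step settings end up like this (the removed cache refs shown here are hypothetical examples of the rejected syntax; pull: true stays as before):

settings:
  repo: hub.peikarband.ir/peikarband/landing
  tags: [latest]
  # removed - the plugin could not parse this cache syntax:
  # cache_from: type=registry,ref=hub.peikarband.ir/peikarband/landing:buildcache
  # cache_to: type=registry,ref=hub.peikarband.ir/peikarband/landing:buildcache,mode=max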
- Always build base image first (with cache for speed)
- If base exists in registry, uses cache (~30 sec)
- If base doesn't exist, builds from scratch (~10 min)
- Then builds and pushes application image
- Self-healing: no manual intervention needed
Pipeline flow:
1. build-base-image (always, with cache)
2. build-image (app)
3. push-image (with multi-tags)
4. verify-push
5. notify
First run: ~12 minutes (base + app)
Subsequent: ~3 minutes (cached base + app)
- Add two Ingress: peikarband.ir (frontend) and api.peikarband.ir (backend)
- Add runtime script to update .web/env.json from API_URL env var
- Remove --backend-only flag to enable both frontend and backend
- Configure API_URL from Helm values instead of build-time args (see the values sketch below)
- Update .dockerignore to include update-env-json.sh script
- Pre-install bun with retry mechanism before reflex export
- Add bun to PATH to ensure reflex can find it
- Fixes connection reset errors during Docker build
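A possible shape for the corresponding Helm values (key names and ports are assumptions about the chart, not its actual layout; 3000/8000 are Reflex's defaults):

ingress:
  enabled: true
  frontend:
    host: peikarband.ir          # serves the exported Reflex frontend
    servicePort: 3000            # assumed frontend port
  backend:
    host: api.peikarband.ir      # serves the Reflex backend API / websocket
    servicePort: 8000            # assumed backend port

apiUrl: https://api.peikarband.ir   # injected as API_URL; update-env-json.sh patches .web/env.json from it at startup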
- Split build and push into two separate steps
- Use docker:24-dind for more control
- build step: only builds, with --load
- push step: only pushes
- Benefits: push can be retried on its own, easier debugging
- Remove reflex init, which was overwriting files
- Keep the .web directory for the frontend static files
- Read the service targetPort from values instead of hardcoding it
- Increase readiness/liveness probe timing to 120 seconds (see the sketch below)
- Fix export for production mode
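In the deployment template this looks roughly like the following (whether the 120 s lands on initialDelaySeconds is an assumption; the point is the values-driven port and the larger startup budget):

ports:
  - containerPort: {{ .Values.service.targetPort }}   # was hardcoded before
readinessProbe:
  httpGet:
    path: /
    port: {{ .Values.service.targetPort }}
  initialDelaySeconds: 120
livenessProbe:
  httpGet:
    path: /
    port: {{ .Values.service.targetPort }}
  initialDelaySeconds: 120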
Reflex only accepts 'dev' or 'prod' as valid --env values.
This was causing: Error: Invalid value for '--env': 'production' is not one of 'dev', 'prod'
Changes:
- Dockerfile: REFLEX_ENV=production -> prod
- Dockerfile CMD: --env production -> prod
- docs/handbook.md: updated example command
- Currently using SQLite (not PostgreSQL)
- Redis not implemented yet
- Disabled postgresql.enabled and redis.enabled in production and staging values
- Removed unnecessary database environment variables from deployment
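In values-production.yaml and values-staging.yaml this is just:

postgresql:
  enabled: false   # app currently uses SQLite
redis:
  enabled: false   # not implemented yet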
Changes:
- Add templates/docker-registry.yaml to auto-create imagePullSecret
- Add registrySecret config to values.yaml (disabled by default)
- Enable registrySecret in values-production.yaml with placeholders
- Secret auto-generates from username/password in values
Usage in ArgoCD:
1. Set parameters in UI:
- registrySecret.username: <your-username>
- registrySecret.password: <your-password>
2. Sync the app
3. Secret will be auto-created and used for image pull
No manual kubectl commands needed!
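A sketch of how templates/docker-registry.yaml can generate that secret (the secret name and helper-free form are assumptions; the idea is values username/password -> dockerconfigjson):

{{- if .Values.registrySecret.enabled }}
apiVersion: v1
kind: Secret
metadata:
  name: hub-registry-secret        # name referenced from imagePullSecrets (assumed)
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ printf "{\"auths\":{\"%s\":{\"username\":\"%s\",\"password\":\"%s\",\"auth\":\"%s\"}}}" "hub.peikarband.ir" .Values.registrySecret.username .Values.registrySecret.password (printf "%s:%s" .Values.registrySecret.username .Values.registrySecret.password | b64enc) | b64enc }}
{{- end }}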
Changes:
- Add templates/secret.yaml to automatically create docker-registry secret
- Add imageCredentials config to values.yaml (disabled by default)
- Enable imageCredentials in values-production.yaml
- Auto-generates kubernetes.io/dockerconfigjson secret from username/password
Usage in production:
1. Set credentials via ArgoCD values override:
imageCredentials.username: <from-secret>
imageCredentials.password: <from-secret>
2. Or use external-secrets operator to inject from vault
The secret will be auto-created and referenced in imagePullSecrets.
Changes:
- Disable imagePullSecrets in production (hub-registry-secret doesn't exist yet)
- Add comment with command to create the secret if needed
- Fix typo: 'flase' -> 'false' in autoscaling.enabled
Note: The registry can work without a secret if it's public; otherwise, create it:
kubectl create secret docker-registry hub-registry-secret \
--docker-server=hub.peikarband.ir \
--docker-username=<username> \
--docker-password=<password> \
-n peikarband
This resolves the 'Unable to retrieve some image pull secrets' warning.
Problem: Mixing toYaml output with inline list items broke YAML structure
{{- toYaml .Values.env | nindent 12 }}
- name: API_URL # This caused parse error
Solution: Define all env vars inline and append .Values.env at the end
using a range loop. This creates a valid YAML list structure.
Now helm lint and helm template both pass successfully.
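The fixed template looks roughly like this (assuming .Values.env entries are plain name/value pairs; the apiUrl key is illustrative):

# inline vars first, then .Values.env appended via range - one valid YAML list
env:
  - name: API_URL
    value: {{ .Values.apiUrl | quote }}
  {{- range .Values.env }}
  - name: {{ .name }}
    value: {{ .value | quote }}
  {{- end }}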
Comments between env list items were breaking the YAML parser in ArgoCD:
'error converting YAML to JSON: yaml: line 79: did not find expected key'
Removed inline comments before env var definitions. The YAML structure
is now clean and validates correctly with helm template.