Merged
5 changes: 4 additions & 1 deletion apps/docs/content/guides/choose-queue.mdx
@@ -18,7 +18,9 @@ description: "**Use NATS** for most cases (simple, fast, JetStream persistence).

- Ports: 4222 (client), 8222 (HTTP monitoring)
- Auth: user `zerops` + auto-generated password
- Connection: `nats://${user}:${password}@${hostname}:4222`
- **Connection** — two supported patterns, pick ONE:
- **Separate env vars** (recommended, works with every NATS client library): pass `servers: ${hostname}:${port}` plus `user: ${user}, pass: ${password}` as client-side connect options. The servers list stays credential-free.
- **Opaque connection string**: pass `${connectionString}` directly as the servers option — the platform builds a correctly-formatted URL with embedded auth that the NATS server expects.
- JetStream: Enabled by default (`JET_STREAM_ENABLED=1`)
- Storage: Up to 40GB memory + 250GB file store
- Max message: 8MB default, 64MB max (`MAX_PAYLOAD`)
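A minimal sketch of the first pattern, assuming the env var names above and a NATS client that accepts `servers`, `user`, and `pass`/`password` connect options (nats-py spells it `password`, nats.js spells it `pass`):

```python
import os

def nats_connect_options() -> dict:
    """Build client connect options from the separate Zerops env vars.

    The servers list stays credential-free; user/password travel as
    separate options, so the client makes exactly one auth attempt.
    """
    host = os.environ["hostname"]          # service hostname, e.g. "nats"
    port = os.environ.get("port", "4222")  # client port
    return {
        "servers": [f"nats://{host}:{port}"],
        "user": os.environ["user"],
        "password": os.environ["password"],
    }

# Typical use with nats-py (other clients are analogous):
# nc = await nats.connect(**nats_connect_options())
```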
@@ -40,3 +42,4 @@ description: "**Use NATS** for most cases (simple, fast, JetStream persistence).
2. **Kafka single-node has no replication**: 1 broker = 3 partitions but zero redundancy
3. **NATS JetStream HA sync interval**: 1-minute sync across nodes — brief data lag possible
4. **Kafka SASL only**: No anonymous connections — always use the generated credentials
5. **NATS authorization violation from a hand-composed URL**: do not build a `nats://user:pass@host:4222` URL yourself from the separate env vars. Most NATS client libraries parse the embedded credentials and also attempt a separate auth handshake with the same values; the server rejects this double-auth with `Authorization Violation` on the first CONNECT frame (symptom: startup crash, no successful subscription). Use either the separate env vars passed as connect options (credential-free servers list) or the opaque `${connectionString}` the platform builds for you — both patterns in the Connection section above avoid the double-auth path.
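To see why the hand-composed URL misfires, note that standard URL parsing already extracts the embedded credentials — a sketch with hypothetical values:

```python
from urllib.parse import urlparse

# Hypothetical credentials for illustration only.
url = "nats://zerops:s3cret@nats:4222"

parsed = urlparse(url)
# The client library sees the embedded credentials here...
embedded = (parsed.username, parsed.password)
# ...and a client that ALSO sends user/pass options derived from the
# same values presents auth twice on the CONNECT frame — the double-auth
# the server rejects with Authorization Violation.
```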
5 changes: 4 additions & 1 deletion apps/docs/content/guides/object-storage-integration.mdx
@@ -11,7 +11,8 @@ When you create an Object Storage service, Zerops auto-generates these env vars

| Variable | Description |
|----------|-------------|
| `apiUrl` | S3 endpoint URL (accessible from Zerops and remotely) |
| `apiUrl` | S3 endpoint URL — full `https://...` URL ready for any S3 SDK's `endpoint` option |
| `apiHost` | S3 endpoint host only (no scheme); use only if the client library needs host separately |
| `accessKeyId` | S3 access key |
| `secretAccessKey` | S3 secret key |
| `bucketName` | Auto-generated bucket name (hostname + random prefix, immutable) |
@@ -20,6 +21,8 @@ When you create an Object Storage service, Zerops auto-generates these env vars
| `serviceId` | Service ID (Zerops-generated) |
| `hostname` | Service hostname |

**Use `${storage_apiUrl}` as the S3 endpoint** — it carries the complete `https://` scheme and is what every S3 SDK's `endpoint` option expects. The `apiHost` variant is host-only; if a client library requires host separately, combine `https://${storage_apiHost}` manually — **never `http://`**. The object-storage gateway rejects plaintext HTTP with a 301 redirect to the HTTPS equivalent, and most S3 SDKs don't follow the redirect automatically. The symptom of a misconfigured endpoint is `UnknownError` or connection-refused on the first bucket call.

Reference them in zerops.yml `run.envVariables`:
```yaml
S3_ENDPOINT: ${storage_apiUrl}
```