feat: use postgres
+150 -23
@@ -37,7 +37,7 @@ Core product properties:
 * in-place upgrade of a running game is allowed only as a patch update within the same semver major/minor line;
 * player commands are turn-bound and are accepted only before the next scheduled turn generation cutoff.

-The current v1 platform uses Redis as the main data store and Redis Streams as the internal event bus.
+The platform stores durable business state in PostgreSQL (one shared database, schema per service) and uses Redis with Redis Streams for ephemeral state, caches, and the internal event bus. The backend split, library stack, and staged migration plan live in [`PG_PLAN.md`](PG_PLAN.md) and the [Persistence Backends](#persistence-backends) section below.

 ## Main Principles

@@ -124,7 +124,8 @@ flowchart LR
 Mail["Mail Service"]
 Geo["Geo Profile Service"]
 Billing["Billing Service\nfuture"]
-Redis["Redis\nKV + Streams"]
+Redis["Redis\nCache, Streams, Leases"]
+Postgres["PostgreSQL\nDurable Business State"]
 Telemetry["Telemetry"]

 Client --> Gateway
@@ -162,6 +163,13 @@ flowchart LR
 Notify --> Redis

 Runtime --> Redis
+
+Mail --> Redis
+User --> Postgres
+Mail --> Postgres
+Notify --> Postgres
+Lobby --> Postgres

 Billing --> User
 Telemetry --- Gateway
 Telemetry --- Auth
@@ -332,8 +340,10 @@ For auth callers, a successful result means the request was durably accepted
 into the mail-delivery pipeline or intentionally suppressed; it does not
 require that the external SMTP exchange already completed before the response
 is returned.
-Stable service-local delivery rules, retry semantics, and Redis-backed
-processing details belong in [`mail/README.md`](mail/README.md), not in the
+Stable service-local delivery rules, retry semantics, and storage details
+(PostgreSQL for the durable delivery record, attempt history, dead letters,
+and audit; Redis for the inbound `mail:delivery_commands` stream and its
+consumer offset) belong in [`mail/README.md`](mail/README.md), not in the
 root architecture document.

 ## 5. [Geo Profile Service](geoprofile/README.md)
@@ -490,7 +500,7 @@ service-layer logic.

 RND owns three levels of state per name:

-- **registered** — platform-unique permanent names owned by one regular user.
+* **registered** — platform-unique permanent names owned by one regular user.
   A registered name cannot be transferred, released, or renamed; the only path
   back to availability is `permanent_block` or `DeleteUser` on the owning
   account. The number of registered names a user can hold is bounded by the
@@ -498,13 +508,13 @@ RND owns three levels of state per name:
   snapshot): `free=1`, `paid_monthly=2`, `paid_yearly=6`,
   `paid_lifetime=unlimited`. Tariff downgrade never revokes existing
   registrations; it only constrains new ones.
-- **reservation** — per-game binding created when a participant joins a game
+* **reservation** — per-game binding created when a participant joins a game
   through application approval or invite redeem. The reservation key is
   `(game_id, canonical_key)`. One user may hold the same name simultaneously
   across multiple active games. A reservation survives until the game
   finishes, then either becomes a `pending_registration` (see below) or is
   released.
-- **pending_registration** — a reservation that survived a capable finish and
+* **pending_registration** — a reservation that survived a capable finish and
   is now waiting up to 30 days for the owner to upgrade it into a registered
   name via `lobby.race_name.register`. Expiration releases the binding.

@@ -807,25 +817,143 @@ The main example is `Lobby -> Game Master`:
 * synchronous for critical registration/update after successful start;
 * asynchronous for secondary propagation and denormalized status fan-out.

-## Redis as Data and Event Infrastructure
+## Persistence Backends

-Redis is the initial shared infrastructure for:
+The platform splits durable state across two backends.

-* main persistent data of services where no SQL backend is yet introduced;
-* gateway session cache backing data;
-* replay reservation store for gateway;
-* session lifecycle projection;
-* internal event bus using Redis Streams;
-* notification-intent ingress through `notification:intents`;
-* notification fan-out;
-* runtime job completion events;
-* lobby/game-master propagation events;
-* geo auxiliary events.
-
-Redis Streams are therefore the platform event bus in v1.
+PostgreSQL is the source of truth for table-shaped business state:
+
+* user identity, profile settings, tariffs/entitlements, sanctions, limits,
+  and the blocked-email registry;
+* mail deliveries, attempt history, dead letters, payloads, and
+  malformed-command audit;
+* notification records, route materialisations, dead letters, and
+  malformed-intent audit;
+* lobby games, applications, invites, memberships, and the race-name
+  registry (registered/reservation/pending tiers);
+* idempotency records, expressed as `UNIQUE` constraints on the durable
+  table — not as a separate kv;
+* retry scheduling state, expressed as a `next_attempt_at` column on the
+  durable table and worked off via `SELECT ... FOR UPDATE SKIP LOCKED`.

-This is an accepted trade-off for simpler early infrastructure.
-Service boundaries must still stay storage-agnostic where future SQL migration is expected, especially in `Auth / Session Service`.
+Redis is the source of truth for ephemeral and runtime-coordination state:
+
+* the platform event bus implemented as Redis Streams (`user:domain_events`,
+  `user:lifecycle_events`, `gm:lobby_events`, `runtime:job_results`,
+  `notification:intents`, `gateway:client-events`, `mail:delivery_commands`);
+* stream consumer offsets;
+* gateway session cache, replay reservations, rate-limit counters, and
+  short-lived runtime locks/leases (e.g. notification `route_leases`);
+* `Auth / Session Service` challenges and active session tokens, which are
+  TTL-bounded and where loss is recoverable by re-authentication;
+* lobby per-game runtime aggregates that are deleted at game finish
+  (`game_turn_stats`, `gap_activated_at`, capability evaluation marker).
+
+### Database topology
+
+* Single PostgreSQL database `galaxy`.
+* Schema per service: `user`, `mail`, `notification`, `lobby`. Reserved for
+  future use: `geoprofile`. Not allocated unless needed: `gateway`,
+  `authsession`.
+* Each service connects with its own PostgreSQL role whose grants are
+  restricted to its own schema (defense-in-depth).
+* Authentication is username + password only. `sslmode=disable`. No client
+  certificates and no SCRAM channel binding.
+* Each service connects to one primary plus zero-or-more read-only
+  replicas. Only the primary is used in this iteration; the replica pool
+  is wired but receives no traffic. Future read-routing is a non-breaking
+  change.
+
+### Redis topology
+
+* Each service connects to one master plus zero-or-more replicas.
+* All connections require a password. `USERNAME`/ACL is not used. TLS is
+  off.
+* Only the master is used in this iteration; the replica list is wired but
+  unused. Failover/read routing is added later without a config break.
+* The legacy env vars `*_REDIS_TLS_ENABLED` and `*_REDIS_USERNAME` are
+  removed without a backward-compat shim.
+
+### Library stack and migration discipline
+
+* Driver: `github.com/jackc/pgx/v5`, exposed as `*sql.DB` via
+  `github.com/jackc/pgx/v5/stdlib` so it is consumable by query builders
+  written against `database/sql`.
+* Query layer: `github.com/go-jet/jet/v2` (PostgreSQL dialect). Generated
+  code lives under each service `internal/adapters/postgres/jet/`,
+  regenerated by a per-service `make jet` target (testcontainers + goose +
+  jet) and committed to the repo so consumers don't need Docker just to
+  build.
+* Migrations: `github.com/pressly/goose/v3` library API. Migration files
+  are embedded via `//go:embed *.sql`, applied at service startup before
+  any listener opens; the service exits non-zero on failure. Files are
+  forward-only, sequence-numbered, and use the standard `-- +goose Up` /
+  `-- +goose Down` markers.
+* Single-init policy during pre-launch development: each PG-backed
+  service ships exactly one migration file, `00001_init.sql`, that
+  represents the full current schema. New tables, columns, and indexes
+  are added by editing that file directly rather than by appending
+  `00002_*.sql`, `00003_*.sql`, etc. The trade-off is intentional —
+  schema clarity beats migration-history granularity while no production
+  database exists. Once the platform reaches its first production
+  deploy, future schema evolution switches to additive sequence-numbered
+  migrations.
+* Test infrastructure: `github.com/testcontainers/testcontainers-go` plus
+  the `modules/postgres` submodule for unit tests and for `make jet`.
+
+Per-service decision records that capture schema and adapter choices live
+at `galaxy/<service>/docs/postgres-migration.md`.
+
+### Timestamp handling
+
+Every time-valued column in every Galaxy schema is `timestamptz`. The
+adapter layer is responsible for ensuring that all `time.Time` values
+crossing the SQL boundary carry `time.UTC` as their location.
+
+* **Writes.** Every `time.Time` parameter bound through `database/sql`
+  (`ExecContext`, `QueryContext`, `QueryRowContext`) is normalised with
+  `.UTC()` at the binding site. Optional `*time.Time` columns are bound
+  through a shared helper (`nullableTime` or equivalent per adapter) that
+  returns `value.UTC()` when non-nil and SQL `NULL` otherwise. Helper
+  bindings of `cutoff`, `now`, etc. (retention, schedulers) follow the
+  same rule even when the input was already produced via
+  `clock.Now().UTC()` — defensive `.UTC()` calls are intentional and
+  cheap.
+* **Reads.** Every `time.Time` scanned out of PostgreSQL is re-wrapped
+  with `.UTC()` (directly or via a small helper that mirrors
+  `nullableTime` for the read path) before it leaves the adapter. The
+  domain layer therefore never observes a `time.Time` whose location is
+  anything other than `time.UTC`.
+* **Why.** PostgreSQL stores `timestamptz` as UTC at rest, but the Go
+  driver returns scanned values in `time.Local`. Mixing locations across
+  the boundary produces inequalities in tests, drift in JSON output, and
+  comparison bugs against pointer fields. The defensive `.UTC()` rule on
+  both sides removes that class of bug entirely.

+### Configuration
+
+For each service `<S>` ∈ { `USERSERVICE`, `MAIL`, `NOTIFICATION`,
+`LOBBY`, `GATEWAY`, `AUTHSESSION` }, the Redis connection accepts:
+
+* `<S>_REDIS_MASTER_ADDR` (required)
+* `<S>_REDIS_REPLICA_ADDRS` (optional, comma-separated)
+* `<S>_REDIS_PASSWORD` (required)
+* `<S>_REDIS_DB`, `<S>_REDIS_OPERATION_TIMEOUT`
+
+For PG-backed services (`USERSERVICE`, `MAIL`, `NOTIFICATION`, `LOBBY`)
+the Postgres connection accepts:
+
+* `<S>_POSTGRES_PRIMARY_DSN` (required;
+  `postgres://<role>:<pwd>@<host>:5432/galaxy?search_path=<schema>&sslmode=disable`)
+* `<S>_POSTGRES_REPLICA_DSNS` (optional, comma-separated)
+* `<S>_POSTGRES_OPERATION_TIMEOUT`, `<S>_POSTGRES_MAX_OPEN_CONNS`,
+  `<S>_POSTGRES_MAX_IDLE_CONNS`, `<S>_POSTGRES_CONN_MAX_LIFETIME`
+
+Stream- and key-shape env vars (`*_REDIS_DOMAIN_EVENTS_STREAM`,
+`*_REDIS_LIFECYCLE_EVENTS_STREAM`, `*_REDIS_KEYSPACE_PREFIX`,
+`MAIL_REDIS_COMMAND_STREAM`, `NOTIFICATION_INTENTS_STREAM`, etc.) keep
+their current names and semantics — they describe stream/key shapes, not
+connection topology.

 ## Main End-to-End Flows

@@ -1122,7 +1250,6 @@ The architecture intentionally does not try to solve all future concerns now.

 Current non-goals:

-* a separate global SQL storage layer in v1;
 * a separate policy engine;
 * automatic billing integration in v1;
 * automatic match balancing in v1;
+920

@@ -0,0 +1,920 @@
# PostgreSQL Migration Plan

This plan has already been implemented and is kept for historical reference.

It should NOT be treated as a source of truth for service functionality.

## Context

The Galaxy Game project currently uses Redis as the only persistence backend
across all implemented services (`user`, `mail`, `notification`, `lobby`,
`gateway`, `authsession`). Redis serves both kinds of state: ephemeral and
runtime-coordination state (where it shines — Streams, caches, replay keys,
runtime queues, session caches, leases) and table-shaped business state where
it is a poor fit (durable user accounts, entitlements/sanctions, mail audit
records, notification routes/idempotency, lobby memberships and invites).
Replication and standby for Redis are not configured anywhere. There is no
SQL/migration tooling in the repo at all.

We migrate to a Redis + PostgreSQL split where each backend owns the data it
serves best. PostgreSQL becomes the source of truth for table-shaped business
state, gives us ACID transactions, mature physical/logical replication, and
backup/restore via `pg_dump` and WAL archiving. Redis remains the source of
truth for streams, pub/sub, caches, leases, replay keys, rate limits, session
caches, runtime queues, and stream consumer offsets.

The plan migrates only services already implemented and explicitly excludes
`galaxy/game`. It targets steady-state architecture rules first (one
authoritative document, `ARCHITECTURE.md`), then walks each service end to end
— code, tests, service-local README/docs, and integration suites — so that no
intermediate commit leaves docs and code in conflict.

## Confirmed decisions (with project owner)

1. **Documentation strategy**: `ARCHITECTURE.md` is updated as the very first
   stage with the architecture-wide rules. Each per-service README and
   per-service `docs/` change inside that service's own stage, paired with
   code and tests. This keeps `ARCHITECTURE.md` ≡ policy, README ≡ current
   state, and ensures any commit can be checked out without code/doc
   divergence.
2. **Service scope**: full migration of durable storage to PostgreSQL for
   `user`, `mail`, `notification`, `lobby`. Only Redis configuration refactor
   (master/replica + mandatory password, drop `TLS_ENABLED` / `USERNAME`) for
   `gateway` and `authsession` — these services intentionally stay
   Redis-only. `geoprofile` has no implementation; its `PLAN.md` and
   `README.md` absorb the new persistence rules so future implementation
   follows them.
3. **Idempotency and retry-schedule placement**: idempotency records and
   retry schedule queues live in PostgreSQL on the same table as the durable
   record they protect (`(producer, idempotency_key)` UNIQUE on `records`,
   `next_attempt_at` column on `deliveries` / `routes`). One source of truth,
   no dual-write hazard between PG and Redis ZSETs.
4. **Stack**: `github.com/jackc/pgx/v5` driver, exposed as `*sql.DB` via
   `github.com/jackc/pgx/v5/stdlib`. `github.com/go-jet/jet/v2` for
   type-safe query building + code generation, generated against a
   testcontainers PostgreSQL instance with migrations applied (Makefile
   target per service). `github.com/pressly/goose/v3` library API for
   embedded migrations applied at service startup; the `goose` CLI may be
   used for local development and rollback investigations but is not in the
   service binary path.
5. **Code**: all postgres queries must use pre-generated `jet` code and the
   appropriate builders rather than raw SQL, unless the business scenario
   cannot be achieved that way due to missing `go-jet` functionality.

## Architectural rules (target steady-state)

These rules land in `ARCHITECTURE.md` in Stage 0 and govern every subsequent
service stage.

### Backend assignment

PostgreSQL is the source of truth for:

- Domain entities with table-shaped business state (`accounts`,
  `entitlement_records`, `sanction_records`, `limit_records`,
  `blocked_emails`, `deliveries`, `attempts`, `dead_letters`,
  `malformed_commands`, `notification_records`, `notification_routes`,
  `games`, `applications`, `invites`, `memberships`, `race_names`).
- Idempotency records (UNIQUE constraint on the durable table, not a
  separate kv).
- Retry scheduling state (`next_attempt_at` column + supporting index on the
  durable table).
- Audit history records that must outlive any Redis snapshot.

Redis is the source of truth for:

- Redis Streams used as the event bus (`user:domain_events`,
  `user:lifecycle_events`, `gm:lobby_events`, `runtime:job_results`,
  `notification:intents`, `gateway:client-events`, `mail:delivery_commands`).
- Stream consumer offsets (small runtime coordination state, rebuildable).
- Caches and projections (gateway session cache).
- Replay reservation keys.
- Rate limit counters.
- Runtime coordination locks/leases (e.g. notification `route_leases`).
- Authentication challenge state and active session tokens (TTL-bounded; loss
  is recoverable by re-authentication).
- Ephemeral per-game runtime aggregates that are deleted at game finish
  (lobby `game_turn_stats`, `gap_activated_at`, capability evaluation
  marker).

### Database topology

- Single PostgreSQL database `galaxy`.
- Schema-per-service: `user`, `mail`, `notification`, `lobby`. Reserved for
  later: `geoprofile`. Not allocated unless needed: `gateway`, `authsession`.
- Per-service PostgreSQL role with grants restricted to its own schema
  (defense-in-depth, simple to express in the initial migration).
- Authentication: username + password only. `sslmode=disable`. No client
  certificates, no SCRAM channel binding, no custom auth plugins.
- Each service connects to one primary plus zero-or-more read-only replicas.
  In this iteration only the primary is used; the replica pool is wired but
  receives no traffic. Future read-routing is non-breaking.

### Redis topology

- Each service connects to one master Redis plus zero-or-more replica Redis
  hosts.
- All connections use a mandatory password. `USERNAME`/ACL not used. TLS off.
- In this iteration only the master is used; the replica list is wired but
  unused — non-breaking switch later when the app starts routing reads.
- Existing env vars `*_REDIS_TLS_ENABLED`, `*_REDIS_USERNAME` are removed
  (hard rename; no backward-compat shim — fresh project, no production
  deploys to migrate).

### Library stack

- Driver: `github.com/jackc/pgx/v5` (modern, actively maintained), exposed
  to `database/sql` via `github.com/jackc/pgx/v5/stdlib` so go-jet's
  `qrm.Queryable` interface is satisfied without changes.
- Query layer: `github.com/go-jet/jet/v2` (PostgreSQL dialect). Generated
  code lives under each service `internal/adapters/postgres/jet/`,
  regenerated via a `make jet` target and committed to the repo.
- Migrations: `github.com/pressly/goose/v3` library API; migration files
  embedded via `//go:embed *.sql`; applied at startup, before opening any
  HTTP/gRPC listener; non-zero exit on failure.
- Test infrastructure: `github.com/testcontainers/testcontainers-go` plus
  the `modules/postgres` submodule; the same setup is reused by `make jet`
  to host a transient instance for jet codegen.

### Migration discipline

- Forward-only sequence-numbered files: `00001_init.sql`, `00002_*.sql`, …
- Lowercase snake_case names; goose `-- +goose Up` / `-- +goose Down`
  markers; statements that need transaction-wrapping use
  `-- +goose StatementBegin` / `-- +goose StatementEnd`.
- Migrations apply at service startup; service exits non-zero on failure.
- Per-service decision record at `galaxy/<service>/docs/postgres-migration.md`
  captures schema decisions and any non-trivial deviation from the rules.

### Per-service code organisation

```text
galaxy/<service>/
  internal/
    adapters/
      postgres/
        migrations/   # *.sql files + migrations.go (//go:embed)
        jet/          # generated; commit-checked
        <portname>/   # adapter implementations matching internal/ports
    config/
      config.go       # adds Postgres + new Redis schema
  Makefile            # `jet` target: testcontainers + goose + jet
```

### Test patterns

- Per-service unit tests against a real PostgreSQL via
  `testcontainers-go`; replace the corresponding miniredis test path where
  storage moved to PG.
- Shared port-test suites (e.g. `lobby/internal/ports/racenamedirtest/`)
  gain a Postgres harness; they remain backend-agnostic in shape.
- `integration/internal/harness/postgres_container.go` is added; integration
  suites that need PG declare it next to their existing Redis container.
- Stub adapters (`*stub/`) are kept where the in-memory port is useful for
  tests that don't need a real backend. Redis adapters that previously
  implemented these ports are removed (no dead code).

### Configuration env vars (target)

For each service `<S>` ∈ { `USERSERVICE`, `MAIL`, `NOTIFICATION`, `LOBBY`,
`GATEWAY`, `AUTHSESSION` }:

- `<S>_REDIS_MASTER_ADDR` (required)
- `<S>_REDIS_REPLICA_ADDRS` (optional, comma-separated; default empty)
- `<S>_REDIS_PASSWORD` (required)
- `<S>_REDIS_DB` (default 0)
- `<S>_REDIS_OPERATION_TIMEOUT` (default 250ms)

For PG-backed services (`USERSERVICE`, `MAIL`, `NOTIFICATION`, `LOBBY`):

- `<S>_POSTGRES_PRIMARY_DSN` (required;
  e.g. `postgres://userservice:secret@postgres:5432/galaxy?search_path=user&sslmode=disable`)
- `<S>_POSTGRES_REPLICA_DSNS` (optional, comma-separated)
- `<S>_POSTGRES_OPERATION_TIMEOUT` (default 1s)
- `<S>_POSTGRES_MAX_OPEN_CONNS` (default 25)
- `<S>_POSTGRES_MAX_IDLE_CONNS` (default 5)
- `<S>_POSTGRES_CONN_MAX_LIFETIME` (default 30m)

The DSN sets `search_path=<schema>` so unqualified table references resolve into
the service-owned schema; `sslmode=disable` is set explicitly per the
"no TLS" requirement.

Service-prefix-specific stream/keyspace env vars (`*_REDIS_DOMAIN_EVENTS_STREAM`,
`*_REDIS_LIFECYCLE_EVENTS_STREAM`, `*_REDIS_KEYSPACE_PREFIX`,
`MAIL_REDIS_COMMAND_STREAM`, etc.) keep their current names and semantics —
they describe stream/key shapes, not connection topology.

---

## Stages

Each stage is independently executable and shippable.

### ~~Stage 0~~ — Architecture-wide rules and PG_PLAN.md materialisation

This stage is implemented.

**Goal**: land the steady-state rules in `ARCHITECTURE.md` and place
`PG_PLAN.md` at the project root so subsequent `/stage-implementation`
invocations have an authoritative reference.

**Actions**:

1. Write the contents of this plan file to `/Users/id/src/go/galaxy/PG_PLAN.md`.
2. Add a new section to `ARCHITECTURE.md` (e.g. `§9 Persistence Backends`)
   capturing every rule under the *Architectural rules* heading above:
   backend assignment, database/Redis topology, library stack, migration
   discipline, code organisation, test patterns, env-var conventions.
3. Add a short *Migration Window* sub-section to `ARCHITECTURE.md` noting
   that until all `PG_PLAN.md` stages complete, each service's `README.md`
   continues to describe its actual current state — this caveat is removed
   in Stage 9.
4. Adjust `ARCHITECTURE.md §8` (publisher rules) so cross-references
   distinguish "Redis Stream" (event bus, stays Redis) from "PG-backed
   table" (durable record).

**Files (modified / new)**:

- `/Users/id/src/go/galaxy/PG_PLAN.md` — new
- `/Users/id/src/go/galaxy/ARCHITECTURE.md` — modified

**Out of scope**: zero service code, zero per-service README/docs, zero
`go.mod` changes, zero new dependencies in service modules.

**Verification**:

- `git diff --stat` reports two paths only: `PG_PLAN.md`, `ARCHITECTURE.md`.
- `ARCHITECTURE.md` reads coherently end to end, with the new section
  cross-referenced from §8 and from any other place that today says
  "Redis is the v1 backend".
- Manual: read `PG_PLAN.md` top to bottom, confirm every architectural
  decision matches the section in `ARCHITECTURE.md`.

---

### ~~Stage 1~~ — Shared infrastructure packages (`pkg/postgres`, `pkg/redisconn`)

This stage is implemented.

**Goal**: provide one canonical helper each for Postgres and Redis so
per-service stages don't reinvent connection/migration wiring. No service
consumes them yet.

**Files (new)**:

- `pkg/postgres/config.go` — `Config` struct (PrimaryDSN, ReplicaDSNs,
  OperationTimeout, MaxOpenConns, MaxIdleConns, ConnMaxLifetime); helper
  `LoadFromEnv(prefix string) (Config, error)` that reads
  `<prefix>_POSTGRES_*`.
- `pkg/postgres/open.go` — `OpenPrimary(ctx, cfg) (*sql.DB, error)` and
  `OpenReplicas(ctx, cfg) ([]*sql.DB, error)` using
  `pgx.ConnConfig` → `stdlib.OpenDB(...)`; configures pool sizes and
  per-statement context timeout.
- `pkg/postgres/migrate.go` — `RunMigrations(ctx context.Context, db *sql.DB,
  fs embed.FS) error` wrapping `goose.SetBaseFS(fs)` + `goose.UpContext`.
- `pkg/postgres/otel.go` — `Instrument(db *sql.DB, telemetry telemetry.Runtime)`
  applying `otelsql.RegisterDBStatsMetrics` and statement spans.
- `pkg/postgres/postgres_test.go` — testcontainers-backed smoke test:
  open primary, run a one-line migration, insert/select.
- `pkg/redisconn/config.go` — `Config` struct (MasterAddr, ReplicaAddrs,
  Password, DB, OperationTimeout); helper `LoadFromEnv(prefix string)
  (Config, error)` that reads `<prefix>_REDIS_*` (the new shape only;
  rejects deprecated TLS/USERNAME vars with a clear error).
- `pkg/redisconn/client.go` — `NewMasterClient(cfg) *redis.Client` and
  `NewReplicaClients(cfg) []*redis.Client` (latter returns nil/empty when
  replicas not configured).
- `pkg/redisconn/otel.go` — `Instrument(client *redis.Client,
  telemetry telemetry.Runtime)` applying `redisotel.InstrumentTracing` /
  `InstrumentMetrics`.
- `pkg/redisconn/redisconn_test.go` — miniredis-backed config and master
  client tests.

|
**Files (touched)**:

- `pkg/go.mod` — add `github.com/jackc/pgx/v5`,
  `github.com/jackc/pgx/v5/stdlib`, `github.com/pressly/goose/v3`,
  `github.com/testcontainers/testcontainers-go/modules/postgres`,
  `github.com/XSAM/otelsql` (for db instrumentation; alternative:
  `go.nhat.io/otelsql` — pick one in implementation).
- `go.work` — confirm `pkg/` is registered (already is).

**Verification**:

- `cd /Users/id/src/go/galaxy/pkg && go test ./postgres/... ./redisconn/...`
  passes locally with Docker available.
- `go vet ./...` clean.

---

### ~~Stage 2~~ — Integration test harness extension

This stage is implemented.

**Goal**: extend `integration/internal/harness/` with a Postgres container
helper and a service-bootstrap helper that builds the per-service DSN with
the right `search_path`. All existing integration suites stay green.

**Files (new)**:

- `integration/internal/harness/postgres_container.go` —
  `StartPostgresContainer(t testing.TB) *PostgresRuntime`. The runtime
  exposes `BaseDSN()`, `DSNForSchema(schema, role string) string`, and
  `EnsureRoleAndSchema(ctx, schema, role, password string) error` so each
  test can prepare an isolated schema for the service it is booting.
- `integration/internal/harness/postgres_container_test.go` — smoke test.

**Files (touched)**:

- `integration/internal/harness/binary.go` — extend `Process`/launch
  helpers with `WithPostgres(rt *PostgresRuntime, schema, role string)`
  that injects the right `<S>_POSTGRES_PRIMARY_DSN`. (Existing API already
  takes `env map[string]string`; this is a thin wrapper.)
- `integration/go.mod` — add the testcontainers Postgres module.

**Out of scope**: no integration suite is yet wired to Postgres; each
service stage wires in its suites.

**Verification**:

- `cd integration && go test ./internal/harness/...` passes.
- `cd integration && go test ./...` still green for all existing suites
  (Redis-only services remain Redis-only).

---

### ~~Stage 3~~ — User Service migration (pilot)

**Goal**: replace User Service's Redis durable storage with PostgreSQL. The
two Redis Streams (`user:domain_events`, `user:lifecycle_events`) remain on
Redis. This stage is the pilot; subsequent service stages copy its shape.

**Schema (`user` schema)**:

- `accounts` (user_id PK, email UNIQUE, user_name UNIQUE, display_name,
  preferred_language, time_zone, declared_country, created_at, updated_at,
  deleted_at).
- `blocked_emails` (email PK, reason_code, blocked_at, actor_type, actor_id,
  resolved_user_id).
- `entitlement_records` (record_id PK, user_id FK, plan_code, is_paid,
  starts_at, ends_at, source, actor_type, actor_id, reason_code,
  updated_at).
- `entitlement_snapshots` (user_id PK FK → accounts, …current effective
  values mirroring the Redis snapshot shape).
- `sanction_records` (record_id PK, user_id FK, sanction_code, scope,
  reason_code, actor_type, actor_id, applied_at, expires_at, removed_at,
  removed_by_type, removed_by_id, removed_reason_code).
- `sanction_active` (user_id, sanction_code, record_id), PRIMARY KEY
  (user_id, sanction_code).
- `limit_records`, `limit_active` — analogous to sanctions.
- Indexes: `accounts(created_at DESC, user_id DESC)` for newest-first
  pagination; `accounts(declared_country)`;
  `entitlement_snapshots(plan_code, is_paid)`;
  `entitlement_snapshots(ends_at) WHERE is_paid AND ends_at IS NOT NULL`;
  `sanction_active(sanction_code)`; `limit_active(limit_code)`. Eligibility
  flags become computed predicates on these columns.

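The composite `accounts(created_at DESC, user_id DESC)` index is shaped for keyset pagination; a hypothetical sketch of the query the list store could emit (the function name and column selection are illustrative):

```go
package main

import "fmt"

// nextPageSQL returns a newest-first page over accounts. The row-value
// comparison matches the (created_at DESC, user_id DESC) index order, so
// Postgres walks the index instead of OFFSET-scanning.
func nextPageSQL(hasCursor bool) string {
	base := "SELECT user_id, created_at FROM accounts"
	if hasCursor {
		// $1/$2 = created_at and user_id of the last row on the previous page.
		return base + " WHERE (created_at, user_id) < ($1, $2)" +
			" ORDER BY created_at DESC, user_id DESC LIMIT $3"
	}
	return base + " ORDER BY created_at DESC, user_id DESC LIMIT $1"
}

func main() {
	fmt.Println(nextPageSQL(true))
}
```

The row-value predicate `(created_at, user_id) < (cursor)` selects strictly older rows even when several accounts share a `created_at`, which is why the index carries `user_id` as a tiebreaker.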
**Files (new)**:

- `galaxy/user/internal/adapters/postgres/migrations/00001_init.sql` —
  full schema with grants (`GRANT USAGE ON SCHEMA user TO userservice;
  GRANT … ON ALL TABLES …;`).
- `galaxy/user/internal/adapters/postgres/migrations/migrations.go` —
  `//go:embed *.sql` and a `Migrations() embed.FS` accessor.
- `galaxy/user/internal/adapters/postgres/jet/...` — generated code
  (commit-checked).
- `galaxy/user/internal/adapters/postgres/userstore/store.go` — Postgres
  implementation of `ports.UserAccountStore` and `ports.AuthDirectoryStore`.
- `galaxy/user/internal/adapters/postgres/userstore/entitlement_store.go` —
  Postgres implementation of `EntitlementSnapshotStore` and
  `EntitlementHistoryStore`.
- `galaxy/user/internal/adapters/postgres/userstore/policy_store.go` —
  Postgres implementation of `SanctionStore` and `LimitStore`.
- `galaxy/user/internal/adapters/postgres/userstore/list_store.go` —
  Postgres implementation of `UserListStore` (pagination + filters
  expressed as SQL).
- `galaxy/user/internal/adapters/postgres/userstore/store_test.go` and
  siblings — testcontainers-backed unit tests covering the same matrix the
  current Redis tests cover.
- `galaxy/user/Makefile` — `jet` target.
- `galaxy/user/docs/postgres-migration.md` — decision record (schema
  shape, why we keep `entitlement_snapshots` denormalised, eligibility
  expressed as SQL predicates, schema role grants).

**Files (touched)**:

- `galaxy/user/internal/config/config.go` — add Postgres config; refactor
  Redis config to master/replica/password (drop `TLS_ENABLED`, `USERNAME`).
- `galaxy/user/internal/config/config_test.go` — update to the new env shape.
- `galaxy/user/internal/app/runtime.go` — open the Postgres pool, run
  migrations on startup before listeners open, wire the postgres adapters
  into services. The Redis client now serves only the two stream publishers.
- `galaxy/user/README.md` — replace "Redis-backed user state" with the
  new persistence model, update the env-var section.
- `galaxy/user/docs/runbook.md`, `galaxy/user/docs/runtime.md`,
  `galaxy/user/docs/examples.md` — update storage references and
  config sections.
- `galaxy/user/go.mod` — add `github.com/jackc/pgx/v5{,/stdlib}`,
  `github.com/pressly/goose/v3`, `github.com/go-jet/jet/v2`,
  `github.com/testcontainers/testcontainers-go/modules/postgres`. Use
  `pkg/postgres`, `pkg/redisconn`.

**Files (deleted)**:

- `galaxy/user/internal/adapters/redis/userstore/` — entire directory.
- The portions of `galaxy/user/internal/adapters/redisstate/keyspace.go`
  that defined account/entitlement/sanction/limit/index keys (keep only
  what the `domainevents` and `lifecycleevents` publishers still require — if
  none, delete the file outright).

**Files retained on Redis**:

- `galaxy/user/internal/adapters/redis/domainevents/publisher.go`.
- `galaxy/user/internal/adapters/redis/lifecycleevents/publisher.go`.

**Touched integration suites** (each gets a Postgres container in addition
to the existing Redis one):

- `integration/authsessionuser/`
- `integration/gatewayauthsessionuser/`
- `integration/gatewayauthsessionusermail/`
- `integration/notificationuser/`
- `integration/lobbyuser/`

**Verification**:

- `cd galaxy/user && make jet && go test ./...` (Docker needed).
- `cd integration && go test ./authsessionuser/... ./gatewayauthsessionuser/... ./gatewayauthsessionusermail/... ./notificationuser/... ./lobbyuser/...`
- Manual smoke against a `docker-compose` stack (PG + Redis with
  passwords) using flows from `galaxy/user/docs/examples.md`.

---

### ~~Stage 4~~ — Mail Service migration

This stage is implemented.

**Goal**: move durable mail storage (deliveries, attempts, dead letters,
malformed commands, payloads, idempotency, attempt schedule) into
PostgreSQL. Keep Redis only for the inbound `mail:delivery_commands`
stream and its consumer offset.

**Schema (`mail` schema)**:

- `deliveries` (delivery_id PK, source, status, recipient_envelope JSONB,
  subject, text_body, html_body, payload_mode, template_id,
  idempotency_source, idempotency_key, locale_fallback_used,
  next_attempt_at, attempt_count, max_attempts, created_at, updated_at).
  - INDEX (status, next_attempt_at) for the scheduler.
  - UNIQUE (idempotency_source, idempotency_key) — the idempotency record
    IS this row (no separate kv).
  - INDEX (created_at DESC) for operator listings; INDEX on status, source,
    template_id, recipient as needed.
- `attempts` (delivery_id FK, attempt_no, status, provider_summary,
  scheduled_for_ms, started_at_ms, completed_at_ms, PRIMARY KEY
  (delivery_id, attempt_no)).
- `dead_letters` (delivery_id PK FK, final_attempt_count, max_attempts,
  failure_classification, failure_message, created_at_ms).
- `delivery_payloads` (delivery_id PK FK, template_variables JSONB).
- `malformed_commands` (stream_entry_id PK, failure_code, failure_message,
  raw_fields JSONB, recorded_at_ms; INDEX created_at).

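Because the idempotency record is the `deliveries` row itself, the accept path can collapse into one conditional insert. A sketch, with an abbreviated column list and assumed status literal:

```go
package main

import "fmt"

// acceptDeliverySQL inserts the delivery row; a repeated
// (idempotency_source, idempotency_key) pair hits the UNIQUE constraint
// and inserts nothing. Column list is abbreviated to the schema sketch
// above; 'queued' as the initial status is an assumption.
const acceptDeliverySQL = `
INSERT INTO mail.deliveries
    (delivery_id, source, status, idempotency_source, idempotency_key,
     next_attempt_at, attempt_count, max_attempts, created_at, updated_at)
VALUES ($1, $2, 'queued', $3, $4, now(), 0, $5, now(), now())
ON CONFLICT (idempotency_source, idempotency_key) DO NOTHING
RETURNING delivery_id`

// isDuplicateAccept interprets the RETURNING result: zero rows back
// means the command was already accepted and should be acknowledged
// without side effects.
func isDuplicateAccept(rowsReturned int) bool { return rowsReturned == 0 }

func main() {
	fmt.Println(acceptDeliverySQL)
	fmt.Println(isDuplicateAccept(0))
}
```

`ON CONFLICT … DO NOTHING` plus `RETURNING` gives the consumer a single round trip to both deduplicate and learn whether the row is fresh.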
**Files**: mirror Stage 3 (postgres adapter package, migrations, jet
codegen, Makefile, decision record, removal of the corresponding
`internal/adapters/redisstate/*` files for migrated entities, retention
of stream offset and consumer wiring on Redis).

**Worker change**: the mail attempt scheduler loop replaces
`ZRANGEBYSCORE` over `mail:attempt_schedule` with
`SELECT … FROM deliveries WHERE status IN ('queued','retry_pending') AND next_attempt_at <= now() ORDER BY next_attempt_at LIMIT N FOR UPDATE SKIP LOCKED`.

**Files (deleted)**:

- `galaxy/mail/internal/adapters/redisstate/auth_acceptance_store.go`
- `galaxy/mail/internal/adapters/redisstate/generic_acceptance_store.go`
- `galaxy/mail/internal/adapters/redisstate/attempt_execution_store.go`
- `galaxy/mail/internal/adapters/redisstate/operator_store.go`
- `galaxy/mail/internal/adapters/redisstate/malformed_command_store.go`
- `galaxy/mail/internal/adapters/redisstate/render_store.go`
- The portions of `galaxy/mail/internal/adapters/redisstate/keyspace.go`
  no longer used (`mail:attempt_schedule`, `mail:idempotency:*`, all
  delivery/attempt/dead-letter/index keys).

**Files retained on Redis**:

- `galaxy/mail/internal/adapters/redisstate/stream_offset_store.go` (offset
  for the `mail:delivery_commands` consumer).
- The command stream consumer wiring itself.

**Touched integration suites**:

- `integration/authsessionmail/`
- `integration/gatewayauthsessionmail/`
- `integration/gatewayauthsessionusermail/`
- `integration/notificationmail/`

**Verification**: per the Stage 3 pattern; plus an end-to-end smoke that pushes
a delivery through retry_pending → provider_accepted using the SMTP stub.

---

### ~~Stage 5~~ — Notification Service migration

This stage is implemented.

**Goal**: move durable notification storage (records, routes, idempotency,
dead letters, malformed intents) into PostgreSQL. Keep Redis for the
inbound `notification:intents` stream, the outbound `gateway:client-events`
stream, the outbound `mail:delivery_commands` stream, the corresponding
stream offsets, and the short-lived per-route lease (`route_leases:*`).

**Schema (`notification` schema)**:

- `records` (notification_id PK, notification_type, producer, audience_kind,
  recipient_user_ids JSONB, payload JSONB, idempotency_key,
  request_fingerprint, request_id, trace_id, occurred_at_ms,
  accepted_at_ms, updated_at_ms).
  - UNIQUE (producer, idempotency_key) — the idempotency record IS this row.
- `routes` (notification_id, route_id, channel, recipient_ref, status,
  attempt_count, max_attempts, next_attempt_at_ms, resolved_email,
  resolved_locale, last_error_classification, last_error_message,
  last_error_at_ms, created_at_ms, updated_at_ms, published_at_ms,
  dead_lettered_at_ms, skipped_at_ms, PRIMARY KEY
  (notification_id, route_id)).
  - INDEX (status, next_attempt_at_ms) for the scheduler.
- `dead_letters` (notification_id, route_id PK FK, channel, recipient_ref,
  final_attempt_count, max_attempts, failure_classification,
  failure_message, recovery_hint, created_at_ms).
- `malformed_intents` (stream_entry_id PK, notification_type, producer,
  idempotency_key, failure_code, failure_message, raw_fields JSONB,
  recorded_at_ms).

**Worker change**: the route publisher selects work via the same
`FOR UPDATE SKIP LOCKED` pattern as Mail. The Redis lease is still used
as a short-lived, per-process exclusivity hint atop the SQL claim.

**Files (deleted)**:

- `galaxy/notification/internal/adapters/redisstate/acceptance_store.go`
- `galaxy/notification/internal/adapters/redisstate/route_state_store.go`
- `galaxy/notification/internal/adapters/redisstate/malformed_intent_store.go`
- The portions of
  `galaxy/notification/internal/adapters/redisstate/keyspace.go` no longer
  used (records, routes, idempotency, dead_letters, malformed_intents).

**Files retained on Redis**:

- `galaxy/notification/internal/adapters/redisstate/stream_offset_store.go`.
- The route lease key generator (still under `redisstate/`, narrowed to
  leases only).
- All stream consumer/publisher wiring.

**Touched integration suites**:

- `integration/notificationgateway/`
- `integration/notificationmail/`
- `integration/notificationuser/`

---

### ~~Stage 6A~~ — Lobby Service: core enrollment entities

**Goal**: move `Game`, `Application`, `Invite`, `Membership` records and
their indexes into PostgreSQL. RaceNameDirectory, GameTurnStats,
GapActivation, EvaluationGuard, StreamOffset remain on Redis until later
sub-stages.

**Schema (`lobby` schema, partial)**:

- `games` (game_id PK, owner_id, kind ('public'|'private'), status,
  created_at, updated_at, runtime_snapshot JSONB, runtime_binding JSONB,
  …other denormalised game settings).
  - INDEX (status, created_at).
  - INDEX (owner_id) WHERE kind = 'private'.
- `applications` (application_id PK, game_id FK, user_id, status,
  canonical_key, submitted_at, decided_at).
  - PARTIAL UNIQUE INDEX (user_id, game_id) WHERE status = 'active' —
    enforces the single-active constraint at the DB level (replaces
    `lobby:user_game_application:*:*`).
  - INDEX (game_id), INDEX (user_id).
- `invites` (invite_id PK, game_id FK, inviter_id, invitee_id, race_name,
  status, created_at, expires_at, decided_at).
  - INDEX (game_id), INDEX (invitee_id), INDEX (inviter_id).
  - INDEX (status, expires_at) for any expiration scanner if needed.
- `memberships` (membership_id PK, game_id FK, user_id, status, joined_at,
  canonical_key, …).
  - INDEX (game_id), INDEX (user_id).

**Files (new)**:

- `galaxy/lobby/internal/adapters/postgres/migrations/00001_core_entities.sql`.
- `galaxy/lobby/internal/adapters/postgres/migrations/migrations.go`.
- `galaxy/lobby/internal/adapters/postgres/jet/...`.
- `galaxy/lobby/internal/adapters/postgres/gamestore/store.go`.
- `galaxy/lobby/internal/adapters/postgres/applicationstore/store.go`.
- `galaxy/lobby/internal/adapters/postgres/invitestore/store.go`.
- `galaxy/lobby/internal/adapters/postgres/membershipstore/store.go`.
- Test files for each store using the existing test patterns.
- `galaxy/lobby/Makefile` (`jet` target).
- `galaxy/lobby/docs/postgres-migration.md` (decision record covering
  this sub-stage and what is intentionally left for 6B/6C).

**Files (touched)**:

- `galaxy/lobby/internal/config/config.go` — add Postgres config; refactor
  Redis config to the new shape.
- `galaxy/lobby/internal/app/runtime.go` — open the Postgres pool, run
  migrations on startup, wire core PG-backed stores into services.
  RaceNameDirectory and stats/guard stores stay wired to Redis until 6B/6C.
- `galaxy/lobby/README.md` and `galaxy/lobby/docs/runbook.md` — updated
  to describe core entities on PG, RND/stats still on Redis until 6B/6C.

**Files (deleted)**:

- `galaxy/lobby/internal/adapters/redisstate/gamestore.go`,
  `applicationstore.go`, `invitestore.go`, `membershipstore.go`.
- The corresponding sections of `redisstate/keyspace.go`.

**Stub adapters retained**: `gamestub/`, `applicationstub/`, `invitestub/`,
`membershipstub/` stay — they are pure in-memory ports useful for tests
that don't need real PG.

**Touched integration suites**:

- `integration/lobbyuser/`
- `integration/lobbynotification/`

**Verification**: per the Stage 3 pattern; plus the existing lobby HTTP
contract tests against the public/internal ports.

---

### ~~Stage 6B~~ — Lobby Service: RaceNameDirectory

This stage is implemented.

**Goal**: replace the Lua-backed Redis `RaceNameDirectory` with a PG
implementation that preserves the two-tier model (registered / reserved /
pending_registration) and atomic registration semantics via SQL
transactions and (where required) advisory locks.

**Schema (additions to `lobby` schema)**:

- `race_names` (canonical_key PK, holder_user_id, binding_kind ('registered'
  | 'reserved' | 'pending_registration'), source_game_id, eligible_until_ms,
  registered_at_ms, reserved_at_ms).
  - INDEX (holder_user_id) for `ListRegistered`/`ListReservations`/
    `ListPendingRegistrations` queries.
  - PARTIAL INDEX (eligible_until_ms) WHERE binding_kind =
    'pending_registration' for the expiration scanner.
- The confusable-pair policy is enforced at write time inside
  `BEGIN … COMMIT` transactions; `Reserve`/`Register`/
  `MarkPendingRegistration` use `SELECT … FOR UPDATE` on the canonical
  keys involved (or PG advisory locks keyed by `hashtext(canonical_key)`)
  to serialise concurrent attempts.

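The plan leaves the canonicalisation rule to lobby's domain code; purely as an illustration of why `canonical_key` can serve as the primary key, here is a stand-in that case-folds and collapses whitespace (the real rule additionally handles confusable pairs):

```go
package main

import (
	"fmt"
	"strings"
)

// canonicalKey is an illustrative stand-in for lobby's real
// canonicalisation. The schema above only requires that names which
// must collide map to the same key; here: lower-case and collapse
// whitespace runs to single spaces.
func canonicalKey(raceName string) string {
	return strings.Join(strings.Fields(strings.ToLower(raceName)), " ")
}

func main() {
	fmt.Println(canonicalKey("  The   Silicoids ")) // the silicoids
}
```

Because equal canonical keys collide on the `race_names` primary key, the uniqueness rule is enforced by the table itself; the transactions and advisory locks above only serialise the multi-step checks around it.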
**Files (new)**:

- `galaxy/lobby/internal/adapters/postgres/migrations/00002_race_names.sql`.
- `galaxy/lobby/internal/adapters/postgres/racenamedir/directory.go` —
  Postgres implementation of `ports.RaceNameDirectory`.
- `galaxy/lobby/internal/adapters/postgres/racenamedir/directory_test.go`
  — runs the existing shared suite at
  `galaxy/lobby/internal/ports/racenamedirtest/suite.go`.

**Files (touched)**:

- `galaxy/lobby/internal/app/runtime.go` — wire the PG RND.
- `galaxy/lobby/internal/ports/racenamedirtest/suite.go` — only
  shape-preserving updates if the suite assumed Redis-only behaviour
  (e.g. SCAN-based list ordering).
- `galaxy/lobby/README.md`, `galaxy/lobby/docs/runbook.md` — RND is now
  PG-backed; the canonical_lookup cache is no longer needed (PG indexed
  lookup is fast enough; remove the Redis cache key from
  `redisstate/keyspace.go`).

**Files (deleted)**:

- `galaxy/lobby/internal/adapters/redisstate/racenamedir.go` and the
  embedded Lua scripts.

**Stub adapter retained**: `galaxy/lobby/internal/adapters/racenamestub/`
stays (useful for unit tests that don't need PG).

**Worker change**: the pending-registration expiration worker switches
from `ZRANGEBYSCORE` on `lobby:race_names:pending_index` to
`SELECT … FROM race_names WHERE binding_kind='pending_registration' AND eligible_until_ms <= (extract(epoch FROM now()) * 1000)::bigint`.

**Verification**: shared port suite (`racenamedirtest`) green against the PG
adapter; lobby unit tests green; `integration/lobbyuser/`,
`integration/lobbynotification/` green.

---

### ~~Stage 6C~~ — Lobby Service: workers, ephemeral stores, cleanup

This stage is implemented.

**Goal**: finish the lobby migration. Confirm what stays Redis-only,
update workers that touch both backends, drop dead Redis adapters.

**Stays on Redis (per architectural rules)**:

- `GameTurnStatsStore` — ephemeral per-game aggregate, deleted at game
  finish, rebuildable from GM events.
- `EvaluationGuardStore` — ephemeral marker.
- `GapActivationStore` — short-lived gap-window timestamp cache.
- `StreamOffsetStore` — runtime coordination per the architectural rule.
- All stream consumers and publishers (`gm:lobby_events`,
  `runtime:job_results`, `user:lifecycle_events`, `notification:intents`).

This is documented in `galaxy/lobby/docs/postgres-migration.md`.

**Files (touched)**:

- `galaxy/lobby/internal/worker/gmevents/consumer.go` — write durable
  updates via the PG-backed `GameStore`.
- `galaxy/lobby/internal/worker/runtimejobresult/consumer.go` — same.
- `galaxy/lobby/internal/adapters/userlifecycle/consumer.go` (and the
  worker that drives it) — RND release, membership/application/invite
  cascade all flow through PG.
- `galaxy/lobby/internal/worker/pendingregistration/worker.go` — PG-based
  scan, no Redis ZSET.
- `galaxy/lobby/internal/worker/enrollmentautomation/worker.go` — uses PG
  `GameStore.GetByStatus("enrollment_open")`.
- `galaxy/lobby/internal/adapters/redisstate/keyspace.go` — pruned to the
  remaining Redis keys (turn stats, gap activation, evaluation guard,
  stream offsets, lifecycle stream consumer state).
- `galaxy/lobby/README.md`, `galaxy/lobby/docs/runtime.md`,
  `galaxy/lobby/docs/runbook.md`, `galaxy/lobby/docs/examples.md` —
  finalised storage descriptions.

**Files (deleted)**:

- Anything left in `galaxy/lobby/internal/adapters/redisstate/` whose
  only consumer was a port now PG-backed (see 6A/6B deletions).

**Verification**:

- All previously-green lobby unit tests pass with PG-backed adapters.
- `integration/lobbyuser/`, `integration/lobbynotification/` pass.
- `grep -rn "redisstate" galaxy/lobby/internal/` returns only the keys
  intentionally retained on Redis.

---

### ~~Stage 7~~ — Gateway and Auth/Session: Redis configuration refactor

This stage is implemented.

**Goal**: apply the new Redis configuration shape (master/replica/password,
drop TLS/USERNAME) to Gateway and Auth/Session. No PG migration; these
services intentionally stay Redis-only.

**Files (touched)**:

- `galaxy/gateway/internal/config/config.go` — switch `RedisConfig`
  fields to the `pkg/redisconn.Config` shape; update the three
  prefixes: `GATEWAY_SESSION_CACHE_REDIS_*`, `GATEWAY_REPLAY_REDIS_*`,
  `GATEWAY_SESSION_EVENTS_REDIS_*`. Drop `TLS_ENABLED`, `USERNAME`.
- `galaxy/gateway/internal/session/redis.go`,
  `galaxy/gateway/internal/replay/redis.go`,
  `galaxy/gateway/internal/events/subscriber.go` — adopt the new client
  constructor via `pkg/redisconn`.
- `galaxy/gateway/internal/config/config_test.go`,
  `galaxy/gateway/internal/session/redis_test.go`,
  `galaxy/gateway/internal/replay/redis_test.go` — updated to the new env
  shape.
- `galaxy/authsession/internal/config/config.go` — same pattern; drop
  TLS, USERNAME.
- `galaxy/authsession/internal/adapters/redis/sessionstore/store.go`,
  `challengestore/store.go`, `projectionpublisher/publisher.go`,
  `sendemailcodeabuse/protector.go`, `configprovider/store.go` — adopt
  the new client.
- `galaxy/authsession/internal/config/config_test.go` — updated.
- `galaxy/gateway/README.md`, `galaxy/authsession/README.md`,
  `galaxy/gateway/docs/runbook.md`, `galaxy/authsession/docs/runbook.md`
  — note that Redis-only is intentional and reference the `ARCHITECTURE.md`
  rule on TTL-bounded auth state.

**No deletions of business logic**; only env-var refactor and adapter
plumbing through `pkg/redisconn`.

**Touched integration suites**:

- `integration/gatewayauthsession/`
- `integration/authsession/`
- (every suite that boots gateway or authsession picks up the new env vars
  via the harness; confirm none still pass `*_REDIS_TLS_ENABLED`).

**Verification**:

- `cd galaxy/gateway && go test ./...`
- `cd galaxy/authsession && go test ./...`
- `cd integration && go test ./gatewayauthsession/... ./authsession/...`

---

### ~~Stage 8~~ — GeoProfile: documentation only

**Goal**: ensure the GeoProfile plan and README reflect the new
persistence rules so its future implementation follows them. No code
exists yet.

**Files (touched)**:

- `galaxy/geoprofile/PLAN.md` — add a stage referencing `pkg/postgres`
  and `pkg/redisconn`; specify that observed-country aggregates,
  declared_country history and review records will live in a `geoprofile`
  schema, while ephemeral per-session signals (if any) stay on Redis.
- `galaxy/geoprofile/README.md` — note ownership of the `geoprofile`
  schema and the stack choices.

**No code change**.

---

### ~~Stage 9~~ — Final sweep

**Goal**: confirm no dead Redis adapter code, no orphaned stub, no
broken doc reference. Remove the *Migration Window* caveat from
`ARCHITECTURE.md` once all stages are done.

**Activities**:

- Walk every PG-backed service: `grep -rn "redis" galaxy/<svc>/internal/adapters/`
  and verify every match belongs to a still-active stream/cache/runtime
  use case.
- Walk the integration suites: confirm each one provisions only the
  containers it actually needs; no stale env vars.
- Update `ARCHITECTURE.md` to drop the *Migration Window* sub-section.
- Collapse each service's sequence of migration `.sql` files into a single
  initial file, rewriting the SQL rather than just concatenating. The
  project is still in development, so every schema change can land directly
  in the one and only first migration of the relevant service; record this
  convention in `ARCHITECTURE.md` as well.
- One round of `go test ./...` in every module plus
  `cd integration && go test ./...`.

**Verification**:

- All tests pass in every module.
- No file matches `// TODO.*postgres` or `// TODO.*migrate`.
- `git grep -nE 'REDIS_(TLS_ENABLED|USERNAME)'` returns nothing under
  `galaxy/` (these env vars are fully retired).

---

## Verification strategy (whole project)

After each stage:

- `cd /Users/id/src/go/galaxy/pkg && go test ./...`
- `cd /Users/id/src/go/galaxy/<changed_service> && go test ./...`
  (with Docker available for testcontainers).
- `cd /Users/id/src/go/galaxy/integration && go test ./<affected_suites>/...`
- Manual smoke against a `docker-compose` stack (PG + Redis, both with
  passwords) using the example flows in each service's `docs/examples.md`.

After Stage 9:

- `cd /Users/id/src/go/galaxy/integration && go test ./...` end to end
  against real PG + real Redis.
- Confirm `git grep -nE 'REDIS_(TLS_ENABLED|USERNAME)'` returns nothing
  under `galaxy/`.
- Confirm `git grep -nE 'TODO.*(postgres|migrate)'` returns nothing.

## Out of scope

- `galaxy/game` — explicitly excluded by the project owner.
- Production deployment manifests (Helm/k8s) — local `docker-compose` is
  enough for development.
- Backup/restore tooling configuration — `pg_dump` and WAL archiving are
  available out of the box; operational setup is not part of this plan.
- Sentinel/Cluster Redis topology code paths — config exposes replica
  addresses for future use; no failover routing implemented yet.
- Read-traffic routing to PG replicas — config exposes
  `*_POSTGRES_REPLICA_DSNS` for future use; no routing implemented yet.
- `golangci-lint` config addition — not part of this migration.
- CI pipeline — no `.github/workflows/` exists; not added by this plan.

## Risks and notes
|
||||||
|
|
||||||
|
- **`go-jet` codegen requires a live database**. The `make jet` target
|
||||||
|
per service uses `testcontainers-go` to bring up a transient PG, applies
|
||||||
|
the same goose migrations the service applies at startup, then runs
|
||||||
|
`jet -dsn=… -path=internal/adapters/postgres/jet`. Generated code is
|
||||||
|
committed; consumers don't need Docker just to build.
|
||||||
|
- **Schema-per-service vs single-DB cross-service joins**: there are no
|
||||||
|
cross-schema joins in this plan. Each service reads only its own schema;
|
||||||
|
cross-service data flows go via Redis Streams (event bus) or HTTP
|
||||||
|
contracts (User Service is queried by Lobby for eligibility) — same as
|
||||||
|
today. The DB-level role grants enforce this.
|
||||||
|
- **Pending registration expiration worker**: under Redis it scanned a
|
||||||
|
global ZSET; under PG it does an indexed scan. The partial index on
|
||||||
|
`eligible_until_ms WHERE binding_kind='pending_registration'` keeps the
|
||||||
|
scan cheap.
|
||||||
|
- **Idempotency under crash**: with idempotency expressed as a UNIQUE
|
||||||
|
constraint on the durable record, recovery is "the row either exists or
|
||||||
|
it doesn't" — no Redis-loss window where duplicates can sneak through.
|
||||||
|
- **lib/pq vs pgx (revisit)**: confirmed pgx/v5 + jet via stdlib adapter.
|
||||||
|
The `make jet` target will pass `-source=postgres` to jet (the dialect
|
||||||
|
is independent of which Go driver runs the queries at runtime).
|
||||||
|
- **No backward-compat shim for env vars**: `*_REDIS_TLS_ENABLED` and
|
||||||
|
`*_REDIS_USERNAME` are retired in one cut. Any external dev environment
|
||||||
|
that sets these will start failing fast at startup with a clear error
|
||||||
|
emitted by `pkg/redisconn.LoadFromEnv`.
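The idempotency note above ("the row either exists or it doesn't") can be illustrated with a toy in-memory model. This is not the project's code: `uniqueTable` stands in for a PG table with a UNIQUE constraint, and `process` shows why a retry after a crash cannot apply a command twice.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// uniqueTable is a toy stand-in for a table with a UNIQUE constraint
// on the idempotency key (illustrative only, not the real schema).
type uniqueTable struct {
	mu   sync.Mutex
	rows map[string]bool
}

var errDuplicate = errors.New("unique violation")

func (t *uniqueTable) insert(requestID string) error {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.rows[requestID] {
		return errDuplicate
	}
	t.rows[requestID] = true
	return nil
}

// process applies a command at most once: a duplicate-key error means the
// command was already applied before the crash, so retries are safe.
func process(t *uniqueTable, requestID string) (applied bool, err error) {
	switch err := t.insert(requestID); {
	case err == nil:
		return true, nil
	case errors.Is(err, errDuplicate):
		return false, nil // already applied; treat the retry as a no-op
	default:
		return false, err
	}
}

func main() {
	table := &uniqueTable{rows: map[string]bool{}}
	first, _ := process(table, "req-42")
	retry, _ := process(table, "req-42") // client retries after a crash
	fmt.Println(first, retry)            // true false
}
```

In PG the same shape is an `INSERT … ON CONFLICT DO NOTHING` against the UNIQUE column; the map here only models the "exists or doesn't" recovery property.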
@@ -9,7 +9,11 @@
 
 Startup requires:
 
-- one reachable Redis deployment configured by `AUTHSESSION_REDIS_ADDR`
+- one reachable Redis master configured by `AUTHSESSION_REDIS_MASTER_ADDR`
+  with mandatory `AUTHSESSION_REDIS_PASSWORD`. The connection topology
+  follows the project-wide rules in `ARCHITECTURE.md §Persistence Backends`
+  (one master plus zero-or-more replicas, no TLS, no Redis ACL username);
+  see also `docs/redis-config.md`.
 
 That Redis deployment is used for:
 
@@ -0,0 +1,88 @@
# Decision: Redis configuration shape

PG_PLAN.md §7. Captures the standing rules adopted by Auth/Session Service
when it joined the project-wide Redis topology defined in
`ARCHITECTURE.md §Persistence Backends`.

## Context

Auth/Session Service intentionally stays Redis-only. All authsession state
is TTL-bounded and recoverable from a fresh login flow:

- challenge records expire with the login window;
- device-session records expire with their session TTL;
- gateway projection cache keys are write-through reflections of the
  source-of-truth session record;
- the gateway-session-events stream is consumed lazily by the gateway and
  trimmed by `MAXLEN ~`;
- the resend-throttle protector is purely TTL-driven.

Stage 7 brought authsession in line with the steady-state rules established
in Stage 0: every Galaxy service uses one master plus zero-or-more replicas
with a mandatory password, no TLS, and no Redis ACL username; the connection
is configured by the shared `pkg/redisconn` helper.

## Decisions

### One shared `*redis.Client` owned by the runtime

`internal/app/runtime.go` constructs a single `*redis.Client` via
`internal/adapters/redis.NewClient`, attaches OpenTelemetry tracing and
metrics via `internal/adapters/redis.InstrumentClient`, performs one bounded
`PING` via `internal/adapters/redis.Ping`, and registers `client.Close` for
shutdown. The challenge store, session store, config provider, projection
publisher and resend-throttle protector all receive this same client.

Adapters no longer build or own a Redis client. Their `Config` structs hold
only namespace and per-adapter timeout settings (no Addr/Username/Password/
DB/TLSEnabled). Adapter constructors take `(*redis.Client, Config)`.

### One env-var prefix per service

Connection topology is loaded from a single
`AUTHSESSION_REDIS_*` group via `redisconn.LoadFromEnv("AUTHSESSION")`:

- `AUTHSESSION_REDIS_MASTER_ADDR` (required)
- `AUTHSESSION_REDIS_REPLICA_ADDRS` (optional, comma-separated; currently
  unused, reserved for future read-routing)
- `AUTHSESSION_REDIS_PASSWORD` (required)
- `AUTHSESSION_REDIS_DB` (default `0`)
- `AUTHSESSION_REDIS_OPERATION_TIMEOUT` (default `250ms`)

The per-adapter namespace and stream env vars (`*_KEY_PREFIX`,
`*_STREAM`, `*_STREAM_MAX_LEN`) keep their existing names and semantics —
they describe key shape, not connection topology.

### Retired env vars (hard removal)

- `AUTHSESSION_REDIS_ADDR` — replaced by `AUTHSESSION_REDIS_MASTER_ADDR`.
- `AUTHSESSION_REDIS_USERNAME` — Redis ACL not used.
- `AUTHSESSION_REDIS_TLS_ENABLED` — TLS disabled by policy.
- `AUTHSESSION_REDIS_OPERATION_TIMEOUT` keeps its name (it now lives in
  `redisconn.Config`).

`pkg/redisconn.LoadFromEnv` rejects `AUTHSESSION_REDIS_TLS_ENABLED` and
`AUTHSESSION_REDIS_USERNAME` at startup with a clear error pointing to
`ARCHITECTURE.md §Persistence Backends`. There is no backward-compatibility
shim; this is consistent with the project-wide rule that the migration
window has no production deploys to preserve.

### Telemetry

`redisconn.Instrument` wires `redisotel.InstrumentTracing` (with
`WithDBStatement(false)`) and `redisotel.InstrumentMetrics`. This is the
first authsession release that emits Redis tracing and connection-pool
metrics; downstream dashboards will start populating without further
changes.

## Consequences

- Test code that previously constructed a Redis client per adapter must now
  construct one client and pass it to every adapter under test (see the
  pattern in `internal/adapters/redis/<adapter>/store_test.go`).
- Operators must set `AUTHSESSION_REDIS_PASSWORD`. A passwordless local
  Redis is still acceptable as long as a placeholder password is supplied
  to the binary; Redis without `requirepass` accepts AUTH unconditionally.
- The integration test harness passes `AUTHSESSION_REDIS_PASSWORD =
  "integration"` alongside `AUTHSESSION_REDIS_MASTER_ADDR` (see
  `integration/internal/harness/authsessionservice.go`).
+14
-13
@@ -7,10 +7,16 @@ verification, shutdown, and common authsession incidents.
 
 Before starting the process, confirm:
 
-- `AUTHSESSION_REDIS_ADDR` points to the Redis deployment used for authsession
-  source-of-truth data, resend throttling, and gateway projection
-- the configured Redis ACL, DB, TLS, and key-prefix settings match the target
-  environment
+- `AUTHSESSION_REDIS_MASTER_ADDR` and `AUTHSESSION_REDIS_PASSWORD` point to the
+  Redis deployment used for authsession source-of-truth data, resend
+  throttling, and gateway projection. Optional read replicas may be listed in
+  `AUTHSESSION_REDIS_REPLICA_ADDRS` (currently unused; reserved for future
+  read-routing).
+- the configured Redis DB and key-prefix settings match the target environment.
+  Per `ARCHITECTURE.md §Persistence Backends`, Redis traffic is
+  password-protected and TLS is disabled by policy; the deprecated
+  `AUTHSESSION_REDIS_TLS_ENABLED` and `AUTHSESSION_REDIS_USERNAME` variables
+  are no longer accepted and cause a hard fail at startup.
 - if `AUTHSESSION_USER_SERVICE_MODE=rest`, both
   `AUTHSESSION_USER_SERVICE_BASE_URL` and
   `AUTHSESSION_USER_SERVICE_REQUEST_TIMEOUT` are configured
@@ -21,15 +27,10 @@ Before starting the process, confirm:
 - `gateway:session:` cache key prefix
 - `gateway:session_events` stream name
 
-At startup the process performs bounded `PING` checks for:
-
-- challenge store
-- session store
-- config provider
-- gateway projection publisher
-- resend-throttle protector
-
-Startup fails fast if any of those checks fail.
+At startup the process performs one bounded `PING` against the shared Redis
+client used by every adapter (challenge store, session store, config provider,
+gateway projection publisher, resend-throttle protector). Startup fails fast
+if the ping fails.
 
 Expected listener state after a healthy start:
 
@@ -101,7 +101,8 @@ gateway-facing projection namespaces as a derived integration view.
 
 Required for all process starts:
 
-- `AUTHSESSION_REDIS_ADDR`
+- `AUTHSESSION_REDIS_MASTER_ADDR`
+- `AUTHSESSION_REDIS_PASSWORD`
 
 Core process config:
 
@@ -124,13 +125,23 @@ Internal HTTP config:
 - `AUTHSESSION_INTERNAL_HTTP_IDLE_TIMEOUT`
 - `AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT`
 
-Redis connectivity and namespace config:
+Redis connection topology (managed by `pkg/redisconn`,
+see `ARCHITECTURE.md §Persistence Backends`):
 
-- `AUTHSESSION_REDIS_USERNAME`
-- `AUTHSESSION_REDIS_PASSWORD`
+- `AUTHSESSION_REDIS_MASTER_ADDR` (required)
+- `AUTHSESSION_REDIS_REPLICA_ADDRS` (optional, comma-separated; reserved for
+  future read-routing — currently unused)
+- `AUTHSESSION_REDIS_PASSWORD` (required)
 - `AUTHSESSION_REDIS_DB`
-- `AUTHSESSION_REDIS_TLS_ENABLED`
 - `AUTHSESSION_REDIS_OPERATION_TIMEOUT`
+
+> Removed: `AUTHSESSION_REDIS_ADDR`, `AUTHSESSION_REDIS_USERNAME`,
+> `AUTHSESSION_REDIS_TLS_ENABLED`. `pkg/redisconn.LoadFromEnv` rejects the
+> deprecated `*_REDIS_TLS_ENABLED` and `*_REDIS_USERNAME` variables at
+> startup; see `docs/redis-config.md` for the rationale.
+
+Redis namespace and stream config:
 
 - `AUTHSESSION_REDIS_CHALLENGE_KEY_PREFIX`
 - `AUTHSESSION_REDIS_SESSION_KEY_PREFIX`
 - `AUTHSESSION_REDIS_USER_SESSIONS_KEY_PREFIX`
@@ -292,53 +292,33 @@ func newGatewayCompatibilityHarness(t *testing.T, options gatewayCompatibilityOp
 		redisServer.Set(gatewayCompatibilitySessionLimitKey, strconv.Itoa(*options.SessionLimit))
 	}
 
-	challengeStore, err := challengestore.New(challengestore.Config{
-		Addr:             redisServer.Addr(),
-		DB:               0,
+	challengeStore, err := challengestore.New(redisClient, challengestore.Config{
 		KeyPrefix:        gatewayCompatibilityChallengeKeyPrefix,
 		OperationTimeout: 250 * time.Millisecond,
 	})
 	require.NoError(t, err)
-	t.Cleanup(func() {
-		assert.NoError(t, challengeStore.Close())
-	})
 
-	sessionStore, err := sessionstore.New(sessionstore.Config{
-		Addr:                        redisServer.Addr(),
-		DB:                          0,
+	sessionStore, err := sessionstore.New(redisClient, sessionstore.Config{
 		SessionKeyPrefix:            gatewayCompatibilitySessionKeyPrefix,
 		UserSessionsKeyPrefix:       gatewayCompatibilityUserSessionsKeyPrefix,
 		UserActiveSessionsKeyPrefix: gatewayCompatibilityUserActiveKeyPrefix,
 		OperationTimeout:            250 * time.Millisecond,
 	})
 	require.NoError(t, err)
-	t.Cleanup(func() {
-		assert.NoError(t, sessionStore.Close())
-	})
 
-	configStore, err := configprovider.New(configprovider.Config{
-		Addr:             redisServer.Addr(),
-		DB:               0,
+	configStore, err := configprovider.New(redisClient, configprovider.Config{
 		SessionLimitKey:  gatewayCompatibilitySessionLimitKey,
 		OperationTimeout: 250 * time.Millisecond,
 	})
 	require.NoError(t, err)
-	t.Cleanup(func() {
-		assert.NoError(t, configStore.Close())
-	})
 
-	publisher, err := projectionpublisher.New(projectionpublisher.Config{
-		Addr:                  redisServer.Addr(),
-		DB:                    0,
+	publisher, err := projectionpublisher.New(redisClient, projectionpublisher.Config{
 		SessionCacheKeyPrefix: gatewayCompatibilitySessionCacheKeyPrefix,
 		SessionEventsStream:   gatewayCompatibilitySessionEventsStream,
 		StreamMaxLen:          gatewayCompatibilityStreamMaxLen,
 		OperationTimeout:      250 * time.Millisecond,
 	})
 	require.NoError(t, err)
-	t.Cleanup(func() {
-		assert.NoError(t, publisher.Close())
-	})
 
 	userDirectory := &userservice.StubDirectory{}
 	if options.SeedBlockedEmail {
+15
-7
@@ -1,8 +1,9 @@
 module galaxy/authsession
 
-go 1.26.0
+go 1.26.1
 
 require (
+	galaxy/redisconn v0.0.0-00010101000000-000000000000
 	github.com/alicebob/miniredis/v2 v2.37.0
 	github.com/getkin/kin-openapi v0.135.0
 	github.com/gin-gonic/gin v1.12.0
@@ -21,7 +22,7 @@ require (
 	go.opentelemetry.io/otel/sdk/metric v1.43.0
 	go.opentelemetry.io/otel/trace v1.43.0
 	go.uber.org/zap v1.27.1
-	golang.org/x/crypto v0.49.0
+	golang.org/x/crypto v0.50.0
 	golang.org/x/text v0.36.0
 )
 
@@ -52,7 +53,7 @@ require (
 	github.com/klauspost/cpuid/v2 v2.3.0 // indirect
 	github.com/leodido/go-urn v1.4.0 // indirect
 	github.com/mailru/easyjson v0.7.7 // indirect
-	github.com/mattn/go-isatty v0.0.20 // indirect
+	github.com/mattn/go-isatty v0.0.21 // indirect
 	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
 	github.com/modern-go/reflect2 v1.0.2 // indirect
 	github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect
@@ -72,13 +73,20 @@ require (
 	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 // indirect
 	go.opentelemetry.io/proto/otlp v1.10.0 // indirect
 	go.uber.org/atomic v1.11.0 // indirect
-	go.uber.org/multierr v1.10.0 // indirect
+	go.uber.org/multierr v1.11.0 // indirect
 	golang.org/x/arch v0.25.0 // indirect
-	golang.org/x/net v0.52.0 // indirect
-	golang.org/x/sys v0.42.0 // indirect
+	golang.org/x/net v0.53.0 // indirect
+	golang.org/x/sys v0.43.0 // indirect
 	google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 // indirect
-	google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 // indirect
+	google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529 // indirect
 	google.golang.org/grpc v1.80.0 // indirect
 	google.golang.org/protobuf v1.36.11 // indirect
 	gopkg.in/yaml.v3 v3.0.1 // indirect
 )
 
+require (
+	github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0 // indirect
+	github.com/redis/go-redis/extra/redisotel/v9 v9.18.0 // indirect
+)
+
+replace galaxy/redisconn => ../pkg/redisconn
+43
-13
@@ -5,9 +5,11 @@ github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdb
 github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
 github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
 github.com/bytedance/gopkg v0.1.4 h1:oZnQwnX82KAIWb7033bEwtxvTqXcYMxDBaQxo5JJHWM=
+github.com/bytedance/gopkg v0.1.4/go.mod h1:v1zWfPm21Fb+OsyXN2VAHdL6TBb2L88anLQgdyje6R4=
 github.com/bytedance/sonic v1.15.0 h1:/PXeWFaR5ElNcVE84U0dOHjiMHQOwNIx3K4ymzh/uSE=
 github.com/bytedance/sonic v1.15.0/go.mod h1:tFkWrPz0/CUCLEF4ri4UkHekCIcdnkqXw9VduqpJh0k=
 github.com/bytedance/sonic/loader v0.5.1 h1:Ygpfa9zwRCCKSlrp5bBP/b/Xzc3VxsAW+5NIYXrOOpI=
+github.com/bytedance/sonic/loader v0.5.1/go.mod h1:AR4NYCk5DdzZizZ5djGqQ92eEhCCcdf5x77udYiSJRo=
 github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
 github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
 github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
@@ -17,12 +19,15 @@ github.com/cloudwego/base64x v0.1.6/go.mod h1:OFcloc187FXDaYHvrNIjxSe8ncn0OOM8gE
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
+github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
 github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
 github.com/gabriel-vasile/mimetype v1.4.13 h1:46nXokslUBsAJE/wMsp5gtO500a4F3Nkz9Ufpk2AcUM=
 github.com/gabriel-vasile/mimetype v1.4.13/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
 github.com/getkin/kin-openapi v0.135.0 h1:751SjYfbiwqukYuVjwYEIKNfrSwS5YpA7DZnKSwQgtg=
+github.com/getkin/kin-openapi v0.135.0/go.mod h1:6dd5FJl6RdX4usBtFBaQhk9q62Yb2J0Mk5IhUO/QqFI=
 github.com/gin-contrib/sse v1.1.1 h1:uGYpNwTacv5R68bSGMapo62iLTRa9l5zxGCps4hK6ko=
+github.com/gin-contrib/sse v1.1.1/go.mod h1:QXzuVkA0YO7o/gun03UI1Q+FTI8ZV/n5t03kIQAI89s=
 github.com/gin-gonic/gin v1.12.0 h1:b3YAbrZtnf8N//yjKeU2+MQsh2mY5htkZidOM7O0wG8=
 github.com/gin-gonic/gin v1.12.0/go.mod h1:VxccKfsSllpKshkBWgVgRniFFAzFb9csfngsqANjnLc=
 github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
@@ -41,9 +46,11 @@ github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/Nu
 github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
 github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
 github.com/go-playground/validator/v10 v10.30.2 h1:JiFIMtSSHb2/XBUbWM4i/MpeQm9ZK2xqPNk8vgvu5JQ=
+github.com/go-playground/validator/v10 v10.30.2/go.mod h1:mAf2pIOVXjTEBrwUMGKkCWKKPs9NheYGabeB04txQSc=
 github.com/go-test/deep v1.0.8 h1:TDsG77qcSprGbC6vTN8OuXp5g+J+b5Pcguhf7Zt61VM=
 github.com/go-test/deep v1.0.8/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE=
 github.com/goccy/go-json v0.10.6 h1:p8HrPJzOakx/mn/bQtjgNjdTcN+/S6FcG2CTtQOrHVU=
+github.com/goccy/go-json v0.10.6/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
 github.com/goccy/go-yaml v1.19.2 h1:PmFC1S6h8ljIz6gMRBopkjP1TVT7xuwrButHID66PoM=
 github.com/goccy/go-yaml v1.19.2/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA=
 github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
@@ -69,8 +76,8 @@ github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
 github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
 github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
 github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
-github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
-github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
+github.com/mattn/go-isatty v0.0.21 h1:xYae+lCNBP7QuW4PUnNG61ffM4hVIfm+zUzDuSzYLGs=
+github.com/mattn/go-isatty v0.0.21/go.mod h1:ZXfXG4SQHsB/w3ZeOYbR0PrPwLy+n6xiMrJlRFqopa4=
 github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
 github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
 github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -79,16 +86,24 @@ github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjY
 github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9M+97sNutRR1RKhG96O6jWumTTnw=
 github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
 github.com/oasdiff/yaml v0.0.9 h1:zQOvd2UKoozsSsAknnWoDJlSK4lC0mpmjfDsfqNwX48=
+github.com/oasdiff/yaml v0.0.9/go.mod h1:8lvhgJG4xiKPj3HN5lDow4jZHPlx1i7dIwzkdAo6oAM=
 github.com/oasdiff/yaml3 v0.0.9 h1:rWPrKccrdUm8J0F3sGuU+fuh9+1K/RdJlWF7O/9yw2g=
+github.com/oasdiff/yaml3 v0.0.9/go.mod h1:y5+oSEHCPT/DGrS++Wc/479ERge0zTFxaF8PbGKcg2o=
 github.com/pelletier/go-toml/v2 v2.3.0 h1:k59bC/lIZREW0/iVaQR8nDHxVq8OVlIzYCOJf421CaM=
+github.com/pelletier/go-toml/v2 v2.3.0/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
 github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s=
 github.com/perimeterx/marshmallow v1.1.5/go.mod h1:dsXbUu8CRzfYP5a87xpp0xq9S3u0Vchtcl8we9tYaXw=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
+github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/quic-go/qpack v0.6.0 h1:g7W+BMYynC1LbYLSqRt8PBg5Tgwxn214ZZR34VIOjz8=
 github.com/quic-go/qpack v0.6.0/go.mod h1:lUpLKChi8njB4ty2bFLX2x4gzDqXwUpaO1DP9qMDZII=
 github.com/quic-go/quic-go v0.59.0 h1:OLJkp1Mlm/aS7dpKgTc6cnpynnD2Xg7C1pwL6vy/SAw=
 github.com/quic-go/quic-go v0.59.0/go.mod h1:upnsH4Ju1YkqpLXC305eW3yDZ4NfnNbmQRCMWS58IKU=
+github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0 h1:QY4nmPHLFAJjtT5O4OMUEOxP8WVaRNOFpcbmxT2NLZU=
+github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0/go.mod h1:WH8cY/0fT41Bsf341qzo8v4nx0GCE8FykAA23IVbVmo=
+github.com/redis/go-redis/extra/redisotel/v9 v9.18.0 h1:2dKdoEYBJ0CZCLPiCdvvc7luz3DPwY6hKdzjL6m1eHE=
+github.com/redis/go-redis/extra/redisotel/v9 v9.18.0/go.mod h1:WzkrVG9ro9BwCQD0eJOWn6AGL4Z1CleGflM45w1hu10=
 github.com/redis/go-redis/v9 v9.18.0 h1:pMkxYPkEbMPwRdenAzUNyFNrDgHx9U+DrBabWNfSRQs=
 github.com/redis/go-redis/v9 v9.18.0/go.mod h1:k3ufPphLU5YXwNTUcCRXGxUoF1fqxnhFQmscfkCoDA0=
 github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
@@ -119,19 +134,33 @@ go.mongodb.org/mongo-driver/v2 v2.5.0/go.mod h1:yOI9kBsufol30iFsl1slpdq1I0eHPzyb
 go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
 go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
 go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.68.0 h1:5FXSL2s6afUC1bzNzl1iedZZ8yqR7GOhbCoEXtyeK6Q=
+go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin v0.68.0/go.mod h1:MdHW7tLtkeGJnR4TyOrnd5D0zUGZQB1l84uHCe8hRpE=
 go.opentelemetry.io/contrib/propagators/b3 v1.43.0 h1:CETqV3QLLPTy5yNrqyMr41VnAOOD4lsRved7n4QG00A=
+go.opentelemetry.io/contrib/propagators/b3 v1.43.0/go.mod h1:Q4mCiCdziYzpNR0g+6UqVotAlCDZdzz6L8jwY4knOrw=
 go.opentelemetry.io/otel v1.43.0 h1:mYIM03dnh5zfN7HautFE4ieIig9amkNANT+xcVxAj9I=
+go.opentelemetry.io/otel v1.43.0/go.mod h1:JuG+u74mvjvcm8vj8pI5XiHy1zDeoCS2LB1spIq7Ay0=
 go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0 h1:8UQVDcZxOJLtX6gxtDt3vY2WTgvZqMQRzjsqiIHQdkc=
+go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0/go.mod h1:2lmweYCiHYpEjQ/lSJBYhj9jP1zvCvQW4BqL9dnT7FQ=
 go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.43.0 h1:w1K+pCJoPpQifuVpsKamUdn9U0zM3xUziVOqsGksUrY=
|
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.43.0 h1:w1K+pCJoPpQifuVpsKamUdn9U0zM3xUziVOqsGksUrY=
|
||||||
|
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.43.0/go.mod h1:HBy4BjzgVE8139ieRI75oXm3EcDN+6GhD88JT1Kjvxg=
|
||||||
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 h1:88Y4s2C8oTui1LGM6bTWkw0ICGcOLCAI5l6zsD1j20k=
|
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 h1:88Y4s2C8oTui1LGM6bTWkw0ICGcOLCAI5l6zsD1j20k=
|
||||||
|
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0/go.mod h1:Vl1/iaggsuRlrHf/hfPJPvVag77kKyvrLeD10kpMl+A=
|
||||||
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0 h1:RAE+JPfvEmvy+0LzyUA25/SGawPwIUbZ6u0Wug54sLc=
|
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0 h1:RAE+JPfvEmvy+0LzyUA25/SGawPwIUbZ6u0Wug54sLc=
|
||||||
|
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.43.0/go.mod h1:AGmbycVGEsRx9mXMZ75CsOyhSP6MFIcj/6dnG+vhVjk=
|
||||||
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0 h1:3iZJKlCZufyRzPzlQhUIWVmfltrXuGyfjREgGP3UUjc=
|
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0 h1:3iZJKlCZufyRzPzlQhUIWVmfltrXuGyfjREgGP3UUjc=
|
||||||
|
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0/go.mod h1:/G+nUPfhq2e+qiXMGxMwumDrP5jtzU+mWN7/sjT2rak=
|
||||||
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.43.0 h1:TC+BewnDpeiAmcscXbGMfxkO+mwYUwE/VySwvw88PfA=
|
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.43.0 h1:TC+BewnDpeiAmcscXbGMfxkO+mwYUwE/VySwvw88PfA=
|
||||||
|
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.43.0/go.mod h1:J/ZyF4vfPwsSr9xJSPyQ4LqtcTPULFR64KwTikGLe+A=
|
||||||
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0 h1:mS47AX77OtFfKG4vtp+84kuGSFZHTyxtXIN269vChY0=
|
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0 h1:mS47AX77OtFfKG4vtp+84kuGSFZHTyxtXIN269vChY0=
|
||||||
|
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.43.0/go.mod h1:PJnsC41lAGncJlPUniSwM81gc80GkgWJWr3cu2nKEtU=
|
||||||
go.opentelemetry.io/otel/metric v1.43.0 h1:d7638QeInOnuwOONPp4JAOGfbCEpYb+K6DVWvdxGzgM=
|
go.opentelemetry.io/otel/metric v1.43.0 h1:d7638QeInOnuwOONPp4JAOGfbCEpYb+K6DVWvdxGzgM=
|
||||||
|
go.opentelemetry.io/otel/metric v1.43.0/go.mod h1:RDnPtIxvqlgO8GRW18W6Z/4P462ldprJtfxHxyKd2PY=
|
||||||
go.opentelemetry.io/otel/sdk v1.43.0 h1:pi5mE86i5rTeLXqoF/hhiBtUNcrAGHLKQdhg4h4V9Dg=
|
go.opentelemetry.io/otel/sdk v1.43.0 h1:pi5mE86i5rTeLXqoF/hhiBtUNcrAGHLKQdhg4h4V9Dg=
|
||||||
|
go.opentelemetry.io/otel/sdk v1.43.0/go.mod h1:P+IkVU3iWukmiit/Yf9AWvpyRDlUeBaRg6Y+C58QHzg=
|
||||||
go.opentelemetry.io/otel/sdk/metric v1.43.0 h1:S88dyqXjJkuBNLeMcVPRFXpRw2fuwdvfCGLEo89fDkw=
|
go.opentelemetry.io/otel/sdk/metric v1.43.0 h1:S88dyqXjJkuBNLeMcVPRFXpRw2fuwdvfCGLEo89fDkw=
|
||||||
|
go.opentelemetry.io/otel/sdk/metric v1.43.0/go.mod h1:C/RJtwSEJ5hzTiUz5pXF1kILHStzb9zFlIEe85bhj6A=
|
||||||
go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09nk+3A=
|
go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09nk+3A=
|
||||||
|
go.opentelemetry.io/otel/trace v1.43.0/go.mod h1:/QJhyVBUUswCphDVxq+8mld+AvhXZLhe+8WVFxiFff0=
|
||||||
go.opentelemetry.io/proto/otlp v1.10.0 h1:IQRWgT5srOCYfiWnpqUYz9CVmbO8bFmKcwYxpuCSL2g=
|
go.opentelemetry.io/proto/otlp v1.10.0 h1:IQRWgT5srOCYfiWnpqUYz9CVmbO8bFmKcwYxpuCSL2g=
|
||||||
go.opentelemetry.io/proto/otlp v1.10.0/go.mod h1:/CV4QoCR/S9yaPj8utp3lvQPoqMtxXdzn7ozvvozVqk=
|
go.opentelemetry.io/proto/otlp v1.10.0/go.mod h1:/CV4QoCR/S9yaPj8utp3lvQPoqMtxXdzn7ozvvozVqk=
|
||||||
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
|
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
|
||||||
@@ -140,25 +169,26 @@ go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
|
|||||||
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
|
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
|
||||||
go.uber.org/mock v0.6.0 h1:hyF9dfmbgIX5EfOdasqLsWD6xqpNZlXblLB/Dbnwv3Y=
|
go.uber.org/mock v0.6.0 h1:hyF9dfmbgIX5EfOdasqLsWD6xqpNZlXblLB/Dbnwv3Y=
|
||||||
go.uber.org/mock v0.6.0/go.mod h1:KiVJ4BqZJaMj4svdfmHM0AUx4NJYO8ZNpPnZn1Z+BBU=
|
go.uber.org/mock v0.6.0/go.mod h1:KiVJ4BqZJaMj4svdfmHM0AUx4NJYO8ZNpPnZn1Z+BBU=
|
||||||
go.uber.org/multierr v1.10.0 h1:S0h4aNzvfcFsC3dRF1jLoaov7oRaKqRGC/pUEJ2yvPQ=
|
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
|
||||||
go.uber.org/multierr v1.10.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
|
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
|
||||||
go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=
|
go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=
|
||||||
go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
|
go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
|
||||||
golang.org/x/arch v0.25.0 h1:qnk6Ksugpi5Bz32947rkUgDt9/s5qvqDPl/gBKdMJLE=
|
golang.org/x/arch v0.25.0 h1:qnk6Ksugpi5Bz32947rkUgDt9/s5qvqDPl/gBKdMJLE=
|
||||||
golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4=
|
golang.org/x/arch v0.25.0/go.mod h1:0X+GdSIP+kL5wPmpK7sdkEVTt2XoYP0cSjQSbZBwOi8=
|
||||||
golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA=
|
golang.org/x/crypto v0.50.0 h1:zO47/JPrL6vsNkINmLoo/PH1gcxpls50DNogFvB5ZGI=
|
||||||
golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0=
|
golang.org/x/crypto v0.50.0/go.mod h1:3muZ7vA7PBCE6xgPX7nkzzjiUq87kRItoJQM1Yo8S+Q=
|
||||||
golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw=
|
golang.org/x/net v0.53.0 h1:d+qAbo5L0orcWAr0a9JweQpjXF19LMXJE8Ey7hwOdUA=
|
||||||
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/net v0.53.0/go.mod h1:JvMuJH7rrdiCfbeHoo3fCQU24Lf5JJwT9W3sJFulfgs=
|
||||||
golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo=
|
golang.org/x/sys v0.43.0 h1:Rlag2XtaFTxp19wS8MXlJwTvoh8ArU6ezoyFsMyCTNI=
|
||||||
golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
|
golang.org/x/sys v0.43.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
|
||||||
golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg=
|
golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg=
|
||||||
|
golang.org/x/text v0.36.0/go.mod h1:NIdBknypM8iqVmPiuco0Dh6P5Jcdk8lJL0CUebqK164=
|
||||||
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
|
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
|
||||||
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
|
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
|
||||||
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 h1:VPWxll4HlMw1Vs/qXtN7BvhZqsS9cdAittCNvVENElA=
|
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 h1:VPWxll4HlMw1Vs/qXtN7BvhZqsS9cdAittCNvVENElA=
|
||||||
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:7QBABkRtR8z+TEnmXTqIqwJLlzrZKVfAUm7tY3yGv0M=
|
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:7QBABkRtR8z+TEnmXTqIqwJLlzrZKVfAUm7tY3yGv0M=
|
||||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 h1:m8qni9SQFH0tJc1X0vmnpw/0t+AImlSvp30sEupozUg=
|
google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529 h1:XF8+t6QQiS0o9ArVan/HW8Q7cycNPGsJf6GA2nXxYAg=
|
||||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
|
google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
|
||||||
google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM=
|
google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM=
|
||||||
google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4=
|
google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4=
|
||||||
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
|
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
|
||||||
|
|||||||
@@ -5,7 +5,6 @@ package challengestore
 import (
 	"bytes"
 	"context"
-	"crypto/tls"
 	"encoding/base64"
 	"encoding/json"
 	"errors"
@@ -26,23 +25,10 @@ const expirationGracePeriod = 5 * time.Minute
 
 const defaultPreferredLanguage = "en"
 
-// Config configures one Redis-backed challenge store instance.
+// Config configures one Redis-backed challenge store instance. The store does
+// not own its Redis client; the runtime supplies a shared client constructed
+// via `pkg/redisconn`.
 type Config struct {
-	// Addr is the Redis network address in host:port form.
-	Addr string
-
-	// Username is the optional Redis ACL username.
-	Username string
-
-	// Password is the optional Redis ACL password.
-	Password string
-
-	// DB is the Redis logical database index.
-	DB int
-
-	// TLSEnabled enables TLS with a conservative minimum protocol version.
-	TLSEnabled bool
-
 	// KeyPrefix is the namespace prefix applied to every challenge key.
 	KeyPrefix string
 
@@ -74,13 +60,11 @@ type redisRecord struct {
 	ConfirmedAt *string `json:"confirmed_at,omitempty"`
 }
 
-// New constructs a Redis-backed challenge store from cfg.
-func New(cfg Config) (*Store, error) {
-	if strings.TrimSpace(cfg.Addr) == "" {
-		return nil, errors.New("new redis challenge store: redis addr must not be empty")
-	}
-	if cfg.DB < 0 {
-		return nil, errors.New("new redis challenge store: redis db must not be negative")
+// New constructs a Redis-backed challenge store that uses client and applies
+// the namespace and timeout settings from cfg.
+func New(client *redis.Client, cfg Config) (*Store, error) {
+	if client == nil {
+		return nil, errors.New("new redis challenge store: nil redis client")
 	}
 	if strings.TrimSpace(cfg.KeyPrefix) == "" {
 		return nil, errors.New("new redis challenge store: redis key prefix must not be empty")
@@ -89,50 +73,13 @@ func New(cfg Config) (*Store, error) {
 		return nil, errors.New("new redis challenge store: operation timeout must be positive")
 	}
 
-	options := &redis.Options{
-		Addr:            cfg.Addr,
-		Username:        cfg.Username,
-		Password:        cfg.Password,
-		DB:              cfg.DB,
-		Protocol:        2,
-		DisableIdentity: true,
-	}
-	if cfg.TLSEnabled {
-		options.TLSConfig = &tls.Config{MinVersion: tls.VersionTLS12}
-	}
-
 	return &Store{
-		client:           redis.NewClient(options),
+		client:           client,
 		keyPrefix:        cfg.KeyPrefix,
 		operationTimeout: cfg.OperationTimeout,
 	}, nil
 }
 
-// Close releases the underlying Redis client resources.
-func (s *Store) Close() error {
-	if s == nil || s.client == nil {
-		return nil
-	}
-
-	return s.client.Close()
-}
-
-// Ping verifies that the configured Redis backend is reachable within the
-// adapter operation timeout budget.
-func (s *Store) Ping(ctx context.Context) error {
-	operationCtx, cancel, err := s.operationContext(ctx, "ping redis challenge store")
-	if err != nil {
-		return err
-	}
-	defer cancel()
-
-	if err := s.client.Ping(operationCtx).Err(); err != nil {
-		return fmt.Errorf("ping redis challenge store: %w", err)
-	}
-
-	return nil
-}
-
 // Get returns the stored challenge for challengeID.
 func (s *Store) Get(ctx context.Context, challengeID common.ChallengeID) (challenge.Challenge, error) {
 	if err := challengeID.Validate(); err != nil {
@@ -13,10 +13,26 @@ import (
 	"galaxy/authsession/internal/ports"
 
 	"github.com/alicebob/miniredis/v2"
+	"github.com/redis/go-redis/v9"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
 
+func newRedisClient(t *testing.T, server *miniredis.Miniredis) *redis.Client {
+	t.Helper()
+
+	client := redis.NewClient(&redis.Options{
+		Addr:            server.Addr(),
+		Protocol:        2,
+		DisableIdentity: true,
+	})
+	t.Cleanup(func() {
+		assert.NoError(t, client.Close())
+	})
+
+	return client
+}
+
 func TestStoreContract(t *testing.T) {
 	t.Parallel()
 
@@ -32,64 +48,44 @@ func TestNew(t *testing.T) {
 	t.Parallel()
 
 	server := miniredis.RunT(t)
+	client := newRedisClient(t, server)
 
 	tests := []struct {
 		name    string
+		client  *redis.Client
 		cfg     Config
 		wantErr string
 	}{
 		{
-			name: "valid config",
-			cfg: Config{
-				Addr:             server.Addr(),
-				DB:               2,
-				KeyPrefix:        "authsession:challenge:",
-				OperationTimeout: 250 * time.Millisecond,
-			},
+			name:   "valid config",
+			client: client,
+			cfg:    Config{KeyPrefix: "authsession:challenge:", OperationTimeout: 250 * time.Millisecond},
 		},
 		{
-			name: "empty addr",
-			cfg: Config{
-				KeyPrefix:        "authsession:challenge:",
-				OperationTimeout: 250 * time.Millisecond,
-			},
-			wantErr: "redis addr must not be empty",
-		},
-		{
-			name: "negative db",
-			cfg: Config{
-				Addr:             server.Addr(),
-				DB:               -1,
-				KeyPrefix:        "authsession:challenge:",
-				OperationTimeout: 250 * time.Millisecond,
-			},
-			wantErr: "redis db must not be negative",
-		},
-		{
-			name: "empty key prefix",
-			cfg: Config{
-				Addr:             server.Addr(),
-				OperationTimeout: 250 * time.Millisecond,
-			},
+			name:    "nil client",
+			client:  nil,
+			cfg:     Config{KeyPrefix: "authsession:challenge:", OperationTimeout: 250 * time.Millisecond},
+			wantErr: "nil redis client",
+		},
+		{
+			name:    "empty key prefix",
+			client:  client,
+			cfg:     Config{OperationTimeout: 250 * time.Millisecond},
 			wantErr: "redis key prefix must not be empty",
 		},
 		{
-			name: "non-positive operation timeout",
-			cfg: Config{
-				Addr:      server.Addr(),
-				KeyPrefix: "authsession:challenge:",
-			},
+			name:    "non-positive operation timeout",
+			client:  client,
+			cfg:     Config{KeyPrefix: "authsession:challenge:"},
 			wantErr: "operation timeout must be positive",
 		},
 	}
 
 	for _, tt := range tests {
-		tt := tt
-
 		t.Run(tt.name, func(t *testing.T) {
 			t.Parallel()
 
-			store, err := New(tt.cfg)
+			store, err := New(tt.client, tt.cfg)
 			if tt.wantErr != "" {
 				require.Error(t, err)
 				assert.ErrorContains(t, err, tt.wantErr)
@@ -97,22 +93,11 @@ func TestNew(t *testing.T) {
 			}
 
 			require.NoError(t, err)
-			t.Cleanup(func() {
-				assert.NoError(t, store.Close())
-			})
+			require.NotNil(t, store)
 		})
 	}
 }
 
-func TestStorePing(t *testing.T) {
-	t.Parallel()
-
-	server := miniredis.RunT(t)
-	store := newTestStore(t, server, Config{})
-
-	require.NoError(t, store.Ping(context.Background()))
-}
-
 func TestStoreCreateAndGet(t *testing.T) {
 	t.Parallel()
 
@@ -429,9 +414,6 @@ func TestStoreCompareAndSwap(t *testing.T) {
 func newTestStore(t *testing.T, server *miniredis.Miniredis, cfg Config) *Store {
 	t.Helper()
 
-	if cfg.Addr == "" {
-		cfg.Addr = server.Addr()
-	}
 	if cfg.KeyPrefix == "" {
 		cfg.KeyPrefix = "authsession:challenge:"
 	}
@@ -439,13 +421,9 @@ func newTestStore(t *testing.T, server *miniredis.Miniredis, cfg Config) *Store
 		cfg.OperationTimeout = 250 * time.Millisecond
 	}
 
-	store, err := New(cfg)
+	store, err := New(newRedisClient(t, server), cfg)
 	require.NoError(t, err)
 
-	t.Cleanup(func() {
-		assert.NoError(t, store.Close())
-	})
-
 	return store
}
 
@@ -540,17 +518,6 @@ func mustMarshalJSON(t *testing.T, value any) string {
 	return string(payload)
 }
 
-func TestStorePingNilContext(t *testing.T) {
-	t.Parallel()
-
-	server := miniredis.RunT(t)
-	store := newTestStore(t, server, Config{})
-
-	err := store.Ping(nil)
-	require.Error(t, err)
-	assert.ErrorContains(t, err, "nil context")
-}
-
 func TestStoreGetNilContext(t *testing.T) {
 	t.Parallel()
 
@@ -0,0 +1,56 @@
+// Package redisadapter provides the Redis client helpers used by Auth/Session
+// Service runtime wiring. The helpers wrap `pkg/redisconn` so the runtime
+// keeps the same construction surface as the other Galaxy services.
+package redisadapter
+
+import (
+	"context"
+	"fmt"
+
+	"galaxy/authsession/internal/config"
+	"galaxy/authsession/internal/telemetry"
+	"galaxy/redisconn"
+
+	"github.com/redis/go-redis/v9"
+)
+
+// NewClient constructs one Redis client from cfg using the shared
+// `pkg/redisconn` helper, which enforces the master/replica/password env-var
+// shape.
+func NewClient(cfg config.RedisConfig) *redis.Client {
+	return redisconn.NewMasterClient(cfg.Conn)
+}
+
+// InstrumentClient attaches Redis tracing and metrics exporters to client
+// when telemetryRuntime is available.
+func InstrumentClient(client *redis.Client, telemetryRuntime *telemetry.Runtime) error {
+	if client == nil {
+		return fmt.Errorf("instrument redis client: nil client")
+	}
+	if telemetryRuntime == nil {
+		return nil
+	}
+
+	return redisconn.Instrument(
+		client,
+		redisconn.WithTracerProvider(telemetryRuntime.TracerProvider()),
+		redisconn.WithMeterProvider(telemetryRuntime.MeterProvider()),
+	)
+}
+
+// Ping performs the startup Redis connectivity check bounded by
+// cfg.Conn.OperationTimeout.
+func Ping(ctx context.Context, cfg config.RedisConfig, client *redis.Client) error {
+	if client == nil {
+		return fmt.Errorf("ping redis: nil client")
+	}
+
+	pingCtx, cancel := context.WithTimeout(ctx, cfg.Conn.OperationTimeout)
+	defer cancel()
+
+	if err := client.Ping(pingCtx).Err(); err != nil {
+		return fmt.Errorf("ping redis: %w", err)
+	}
+
+	return nil
+}
@@ -4,7 +4,6 @@ package configprovider
 
 import (
 	"context"
-	"crypto/tls"
 	"errors"
 	"fmt"
 	"strconv"
@@ -16,23 +15,10 @@ import (
 	"github.com/redis/go-redis/v9"
 )
 
-// Config configures one Redis-backed config provider instance.
+// Config configures one Redis-backed config provider instance. The store does
+// not own its Redis client; the runtime supplies a shared client constructed
+// via `pkg/redisconn`.
 type Config struct {
-	// Addr is the Redis network address in host:port form.
-	Addr string
-
-	// Username is the optional Redis ACL username.
-	Username string
-
-	// Password is the optional Redis ACL password.
-	Password string
-
-	// DB is the Redis logical database index.
-	DB int
-
-	// TLSEnabled enables TLS with a conservative minimum protocol version.
-	TLSEnabled bool
-
 	// SessionLimitKey identifies the single Redis string key that stores the
 	// active-session-limit configuration value.
 	SessionLimitKey string
@@ -48,63 +34,25 @@ type Store struct {
 	operationTimeout time.Duration
 }
 
-// New constructs a Redis-backed config provider from cfg.
-func New(cfg Config) (*Store, error) {
+// New constructs a Redis-backed config provider that uses client and applies
+// the namespace and timeout settings from cfg.
+func New(client *redis.Client, cfg Config) (*Store, error) {
 	switch {
-	case strings.TrimSpace(cfg.Addr) == "":
-		return nil, errors.New("new redis config provider: redis addr must not be empty")
-	case cfg.DB < 0:
-		return nil, errors.New("new redis config provider: redis db must not be negative")
+	case client == nil:
+		return nil, errors.New("new redis config provider: nil redis client")
 	case strings.TrimSpace(cfg.SessionLimitKey) == "":
 		return nil, errors.New("new redis config provider: session limit key must not be empty")
 	case cfg.OperationTimeout <= 0:
 		return nil, errors.New("new redis config provider: operation timeout must be positive")
 	}
 
-	options := &redis.Options{
-		Addr:            cfg.Addr,
-		Username:        cfg.Username,
-		Password:        cfg.Password,
-		DB:              cfg.DB,
-		Protocol:        2,
-		DisableIdentity: true,
-	}
-	if cfg.TLSEnabled {
-		options.TLSConfig = &tls.Config{MinVersion: tls.VersionTLS12}
-	}
-
 	return &Store{
-		client:           redis.NewClient(options),
+		client:           client,
 		sessionLimitKey:  cfg.SessionLimitKey,
 		operationTimeout: cfg.OperationTimeout,
 	}, nil
 }
 
-// Close releases the underlying Redis client resources.
-func (s *Store) Close() error {
-	if s == nil || s.client == nil {
-		return nil
-	}
-
-	return s.client.Close()
-}
-
-// Ping verifies that the configured Redis backend is reachable within the
-// adapter operation timeout budget.
-func (s *Store) Ping(ctx context.Context) error {
-	operationCtx, cancel, err := s.operationContext(ctx, "ping redis config provider")
-	if err != nil {
-		return err
-	}
-	defer cancel()
-
-	if err := s.client.Ping(operationCtx).Err(); err != nil {
-		return fmt.Errorf("ping redis config provider: %w", err)
-	}
-
-	return nil
-}
-
 // LoadSessionLimit returns the current active-session-limit configuration.
 // Missing or invalid Redis values are treated as “limit absent” by policy.
 func (s *Store) LoadSessionLimit(ctx context.Context) (ports.SessionLimitConfig, error) {
@@ -10,10 +10,26 @@ import (
 	"galaxy/authsession/internal/ports"
 
 	"github.com/alicebob/miniredis/v2"
+	"github.com/redis/go-redis/v9"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
 
+func newRedisClient(t *testing.T, server *miniredis.Miniredis) *redis.Client {
+	t.Helper()
+
+	client := redis.NewClient(&redis.Options{
+		Addr:            server.Addr(),
+		Protocol:        2,
+		DisableIdentity: true,
+	})
+	t.Cleanup(func() {
+		assert.NoError(t, client.Close())
+	})
+
+	return client
+}
+
 func TestStoreContract(t *testing.T) {
 	t.Parallel()
 
@@ -41,64 +57,40 @@ func TestNew(t *testing.T) {
 	t.Parallel()
 
 	server := miniredis.RunT(t)
+	client := newRedisClient(t, server)
+
+	validCfg := Config{
+		SessionLimitKey:  "authsession:config:active-session-limit",
+		OperationTimeout: 250 * time.Millisecond,
+	}
 
 	tests := []struct {
 		name    string
+		client  *redis.Client
 		cfg     Config
 		wantErr string
 	}{
+		{name: "valid config", client: client, cfg: validCfg},
+		{name: "nil client", client: nil, cfg: validCfg, wantErr: "nil redis client"},
 		{
-			name: "valid config",
-			cfg: Config{
-				Addr:             server.Addr(),
-				DB:               2,
-				SessionLimitKey:  "authsession:config:active-session-limit",
-				OperationTimeout: 250 * time.Millisecond,
-			},
-		},
-		{
-			name: "empty addr",
-			cfg: Config{
-				SessionLimitKey:  "authsession:config:active-session-limit",
-				OperationTimeout: 250 * time.Millisecond,
-			},
-			wantErr: "redis addr must not be empty",
-		},
-		{
-			name: "negative db",
-			cfg: Config{
-				Addr:             server.Addr(),
-				DB:               -1,
-				SessionLimitKey:  "authsession:config:active-session-limit",
-				OperationTimeout: 250 * time.Millisecond,
-			},
-			wantErr: "redis db must not be negative",
-		},
-		{
-			name: "empty session limit key",
-			cfg: Config{
-				Addr:             server.Addr(),
-				OperationTimeout: 250 * time.Millisecond,
-			},
+			name:    "empty session limit key",
+			client:  client,
+			cfg:     Config{OperationTimeout: 250 * time.Millisecond},
 			wantErr: "session limit key must not be empty",
 		},
 		{
-			name: "non positive timeout",
-			cfg: Config{
-				Addr:            server.Addr(),
-				SessionLimitKey: "authsession:config:active-session-limit",
-			},
+			name:    "non positive timeout",
+			client:  client,
+			cfg:     Config{SessionLimitKey: "authsession:config:active-session-limit"},
 			wantErr: "operation timeout must be positive",
 		},
 	}
 
 	for _, tt := range tests {
-		tt := tt
-
 		t.Run(tt.name, func(t *testing.T) {
 			t.Parallel()
 
-			store, err := New(tt.cfg)
+			store, err := New(tt.client, tt.cfg)
 			if tt.wantErr != "" {
 				require.Error(t, err)
 				assert.ErrorContains(t, err, tt.wantErr)
@@ -106,22 +98,11 @@ func TestNew(t *testing.T) {
 			}
 
 			require.NoError(t, err)
-			t.Cleanup(func() {
-				assert.NoError(t, store.Close())
-			})
+			require.NotNil(t, store)
 		})
 	}
 }
 
-func TestStorePing(t *testing.T) {
-	t.Parallel()
-
-	server := miniredis.RunT(t)
-	store := newTestStore(t, server, Config{})
-
-	require.NoError(t, store.Ping(context.Background()))
-}
-
 func TestStoreLoadSessionLimit(t *testing.T) {
 	t.Parallel()
 
|
|
||||||
@@ -201,8 +182,6 @@ func TestStoreLoadSessionLimit(t *testing.T) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
for _, tt := range tests {
|
for _, tt := range tests {
|
||||||
tt := tt
|
|
||||||
|
|
||||||
t.Run(tt.name, func(t *testing.T) {
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
t.Parallel()
|
t.Parallel()
|
||||||
|
|
||||||
@@ -242,23 +221,9 @@ func TestStoreLoadSessionLimitNilContext(t *testing.T) {
|
|||||||
assert.ErrorContains(t, err, "nil context")
|
assert.ErrorContains(t, err, "nil context")
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestStorePingNilContext(t *testing.T) {
|
|
||||||
t.Parallel()
|
|
||||||
|
|
||||||
server := miniredis.RunT(t)
|
|
||||||
store := newTestStore(t, server, Config{})
|
|
||||||
|
|
||||||
err := store.Ping(nil)
|
|
||||||
require.Error(t, err)
|
|
||||||
assert.ErrorContains(t, err, "nil context")
|
|
||||||
}
|
|
||||||
|
|
||||||
func newTestStore(t *testing.T, server *miniredis.Miniredis, cfg Config) *Store {
|
func newTestStore(t *testing.T, server *miniredis.Miniredis, cfg Config) *Store {
|
||||||
t.Helper()
|
t.Helper()
|
||||||
|
|
||||||
if cfg.Addr == "" {
|
|
||||||
cfg.Addr = server.Addr()
|
|
||||||
}
|
|
||||||
if cfg.SessionLimitKey == "" {
|
if cfg.SessionLimitKey == "" {
|
||||||
cfg.SessionLimitKey = "authsession:config:active-session-limit"
|
cfg.SessionLimitKey = "authsession:config:active-session-limit"
|
||||||
}
|
}
|
||||||
@@ -266,13 +231,9 @@ func newTestStore(t *testing.T, server *miniredis.Miniredis, cfg Config) *Store
|
|||||||
cfg.OperationTimeout = 250 * time.Millisecond
|
cfg.OperationTimeout = 250 * time.Millisecond
|
||||||
}
|
}
|
||||||
|
|
||||||
store, err := New(cfg)
|
store, err := New(newRedisClient(t, server), cfg)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
t.Cleanup(func() {
|
|
||||||
assert.NoError(t, store.Close())
|
|
||||||
})
|
|
||||||
|
|
||||||
return store
|
return store
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -5,7 +5,6 @@ package projectionpublisher

 import (
 	"context"
-	"crypto/tls"
 	"encoding/json"
 	"errors"
 	"fmt"
@@ -19,22 +18,9 @@ import (
 )

 // Config configures one Redis-backed gateway session projection publisher.
+// The publisher does not own its Redis client; the runtime supplies a shared
+// client constructed via `pkg/redisconn`.
 type Config struct {
-	// Addr is the Redis network address in host:port form.
-	Addr string
-
-	// Username is the optional Redis ACL username.
-	Username string
-
-	// Password is the optional Redis ACL password.
-	Password string
-
-	// DB is the Redis logical database index.
-	DB int
-
-	// TLSEnabled enables TLS with a conservative minimum protocol version.
-	TLSEnabled bool
-
 	// SessionCacheKeyPrefix is the namespace prefix applied to gateway session
 	// cache keys. The raw device session identifier is appended directly.
 	SessionCacheKeyPrefix string
@@ -68,14 +54,12 @@ type cacheRecord struct {
 	RevokedAtMS *int64 `json:"revoked_at_ms,omitempty"`
 }

-// New constructs a Redis-backed gateway session projection publisher from
-// cfg.
-func New(cfg Config) (*Publisher, error) {
+// New constructs a Redis-backed gateway session projection publisher that
+// uses client and applies the namespace and timeout settings from cfg.
+func New(client *redis.Client, cfg Config) (*Publisher, error) {
 	switch {
-	case strings.TrimSpace(cfg.Addr) == "":
-		return nil, errors.New("new redis projection publisher: redis addr must not be empty")
-	case cfg.DB < 0:
-		return nil, errors.New("new redis projection publisher: redis db must not be negative")
+	case client == nil:
+		return nil, errors.New("new redis projection publisher: nil redis client")
 	case strings.TrimSpace(cfg.SessionCacheKeyPrefix) == "":
 		return nil, errors.New("new redis projection publisher: session cache key prefix must not be empty")
 	case strings.TrimSpace(cfg.SessionEventsStream) == "":
@@ -86,20 +70,8 @@ func New(cfg Config) (*Publisher, error) {
 		return nil, errors.New("new redis projection publisher: operation timeout must be positive")
 	}

-	options := &redis.Options{
-		Addr:            cfg.Addr,
-		Username:        cfg.Username,
-		Password:        cfg.Password,
-		DB:              cfg.DB,
-		Protocol:        2,
-		DisableIdentity: true,
-	}
-	if cfg.TLSEnabled {
-		options.TLSConfig = &tls.Config{MinVersion: tls.VersionTLS12}
-	}
-
 	return &Publisher{
-		client:                redis.NewClient(options),
+		client:                client,
 		sessionCacheKeyPrefix: cfg.SessionCacheKeyPrefix,
 		sessionEventsStream:   cfg.SessionEventsStream,
 		streamMaxLen:          cfg.StreamMaxLen,
@@ -107,31 +79,6 @@ func New(cfg Config) (*Publisher, error) {
 	}, nil
 }

-// Close releases the underlying Redis client resources.
-func (p *Publisher) Close() error {
-	if p == nil || p.client == nil {
-		return nil
-	}
-
-	return p.client.Close()
-}
-
-// Ping verifies that the configured Redis backend is reachable within the
-// adapter operation timeout budget.
-func (p *Publisher) Ping(ctx context.Context) error {
-	operationCtx, cancel, err := p.operationContext(ctx, "ping redis projection publisher")
-	if err != nil {
-		return err
-	}
-	defer cancel()
-
-	if err := p.client.Ping(operationCtx).Err(); err != nil {
-		return fmt.Errorf("ping redis projection publisher: %w", err)
-	}
-
-	return nil
-}
-
 // PublishSession writes one gateway-compatible session snapshot into the
 // gateway cache namespace and appends the same snapshot to the gateway session
 // event stream within one Redis transaction.
@@ -15,57 +15,51 @@ import (
 	"galaxy/authsession/internal/domain/gatewayprojection"

 	"github.com/alicebob/miniredis/v2"
+	"github.com/redis/go-redis/v9"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )

+func newRedisClient(t *testing.T, server *miniredis.Miniredis) *redis.Client {
+	t.Helper()
+
+	client := redis.NewClient(&redis.Options{
+		Addr:            server.Addr(),
+		Protocol:        2,
+		DisableIdentity: true,
+	})
+	t.Cleanup(func() {
+		assert.NoError(t, client.Close())
+	})
+
+	return client
+}
+
 func TestNew(t *testing.T) {
 	t.Parallel()

 	server := miniredis.RunT(t)
+	client := newRedisClient(t, server)

+	validCfg := Config{
+		SessionCacheKeyPrefix: "gateway:session:",
+		SessionEventsStream:   "gateway:session_events",
+		StreamMaxLen:          1024,
+		OperationTimeout:      250 * time.Millisecond,
+	}
+
 	tests := []struct {
 		name    string
+		client  *redis.Client
 		cfg     Config
 		wantErr string
 	}{
+		{name: "valid config", client: client, cfg: validCfg},
+		{name: "nil client", client: nil, cfg: validCfg, wantErr: "nil redis client"},
 		{
-			name: "valid config",
+			name:   "empty session cache key prefix",
+			client: client,
 			cfg: Config{
-				Addr:                  server.Addr(),
-				DB:                    3,
-				SessionCacheKeyPrefix: "gateway:session:",
-				SessionEventsStream:   "gateway:session_events",
-				StreamMaxLen:          1024,
-				OperationTimeout:      250 * time.Millisecond,
-			},
-		},
-		{
-			name: "empty addr",
-			cfg: Config{
-				SessionCacheKeyPrefix: "gateway:session:",
-				SessionEventsStream:   "gateway:session_events",
-				StreamMaxLen:          1024,
-				OperationTimeout:      250 * time.Millisecond,
-			},
-			wantErr: "redis addr must not be empty",
-		},
-		{
-			name: "negative db",
-			cfg: Config{
-				Addr:                  server.Addr(),
-				DB:                    -1,
-				SessionCacheKeyPrefix: "gateway:session:",
-				SessionEventsStream:   "gateway:session_events",
-				StreamMaxLen:          1024,
-				OperationTimeout:      250 * time.Millisecond,
-			},
-			wantErr: "redis db must not be negative",
-		},
-		{
-			name: "empty session cache key prefix",
-			cfg: Config{
-				Addr:                server.Addr(),
 				SessionEventsStream: "gateway:session_events",
 				StreamMaxLen:        1024,
 				OperationTimeout:    250 * time.Millisecond,
@@ -73,9 +67,9 @@ func TestNew(t *testing.T) {
 			wantErr: "session cache key prefix must not be empty",
 		},
 		{
 			name: "empty session events stream",
+			client: client,
 			cfg: Config{
-				Addr:                  server.Addr(),
 				SessionCacheKeyPrefix: "gateway:session:",
 				StreamMaxLen:          1024,
 				OperationTimeout:      250 * time.Millisecond,
@@ -83,9 +77,9 @@ func TestNew(t *testing.T) {
 			wantErr: "session events stream must not be empty",
 		},
 		{
 			name: "non positive stream max len",
+			client: client,
 			cfg: Config{
-				Addr:                  server.Addr(),
 				SessionCacheKeyPrefix: "gateway:session:",
 				SessionEventsStream:   "gateway:session_events",
 				OperationTimeout:      250 * time.Millisecond,
@@ -93,9 +87,9 @@ func TestNew(t *testing.T) {
 			wantErr: "stream max len must be positive",
 		},
 		{
 			name: "non positive timeout",
+			client: client,
 			cfg: Config{
-				Addr:                  server.Addr(),
 				SessionCacheKeyPrefix: "gateway:session:",
 				SessionEventsStream:   "gateway:session_events",
 				StreamMaxLen:          1024,
@@ -105,12 +99,10 @@ func TestNew(t *testing.T) {
 	}

 	for _, tt := range tests {
-		tt := tt
-
 		t.Run(tt.name, func(t *testing.T) {
 			t.Parallel()

-			publisher, err := New(tt.cfg)
+			publisher, err := New(tt.client, tt.cfg)
 			if tt.wantErr != "" {
 				require.Error(t, err)
 				assert.ErrorContains(t, err, tt.wantErr)
@@ -118,22 +110,11 @@ func TestNew(t *testing.T) {
 			}

 			require.NoError(t, err)
-			t.Cleanup(func() {
-				assert.NoError(t, publisher.Close())
-			})
+			require.NotNil(t, publisher)
 		})
 	}
 }

-func TestPublisherPing(t *testing.T) {
-	t.Parallel()
-
-	server := miniredis.RunT(t)
-	publisher := newTestPublisher(t, server, Config{})
-
-	require.NoError(t, publisher.Ping(context.Background()))
-}
-
 func TestPublisherPublishSessionActive(t *testing.T) {
 	t.Parallel()

@@ -331,23 +312,9 @@ func TestPublisherPublishSessionBackendFailure(t *testing.T) {
 	assert.ErrorContains(t, err, "publish session projection")
 }

-func TestPublisherPingNilContext(t *testing.T) {
-	t.Parallel()
-
-	server := miniredis.RunT(t)
-	publisher := newTestPublisher(t, server, Config{})
-
-	err := publisher.Ping(nil)
-	require.Error(t, err)
-	assert.ErrorContains(t, err, "nil context")
-}
-
 func newTestPublisher(t *testing.T, server *miniredis.Miniredis, cfg Config) *Publisher {
 	t.Helper()

-	if cfg.Addr == "" {
-		cfg.Addr = server.Addr()
-	}
 	if cfg.SessionCacheKeyPrefix == "" {
 		cfg.SessionCacheKeyPrefix = "gateway:session:"
 	}
@@ -361,11 +328,8 @@ func newTestPublisher(t *testing.T, server *miniredis.Miniredis, cfg Config) *Pu
 		cfg.OperationTimeout = 250 * time.Millisecond
 	}

-	publisher, err := New(cfg)
+	publisher, err := New(newRedisClient(t, server), cfg)
 	require.NoError(t, err)
-	t.Cleanup(func() {
-		assert.NoError(t, publisher.Close())
-	})

 	return publisher
 }
@@ -4,7 +4,6 @@ package sendemailcodeabuse

 import (
 	"context"
-	"crypto/tls"
 	"encoding/base64"
 	"errors"
 	"fmt"
@@ -18,23 +17,10 @@ import (
 	"github.com/redis/go-redis/v9"
 )

-// Config configures one Redis-backed send-email-code abuse protector.
+// Config configures one Redis-backed send-email-code abuse protector. The
+// protector does not own its Redis client; the runtime supplies a shared
+// client constructed via `pkg/redisconn`.
 type Config struct {
-	// Addr is the Redis network address in host:port form.
-	Addr string
-
-	// Username is the optional Redis ACL username.
-	Username string
-
-	// Password is the optional Redis ACL password.
-	Password string
-
-	// DB is the Redis logical database index.
-	DB int
-
-	// TLSEnabled enables TLS with a conservative minimum protocol version.
-	TLSEnabled bool
-
 	// KeyPrefix is the namespace prefix applied to every resend-throttle key.
 	KeyPrefix string

@@ -50,63 +36,25 @@ type Protector struct {
 	operationTimeout time.Duration
 }

-// New constructs a Redis-backed resend-throttle protector from cfg.
-func New(cfg Config) (*Protector, error) {
+// New constructs a Redis-backed resend-throttle protector that uses client
+// and applies the namespace and timeout settings from cfg.
+func New(client *redis.Client, cfg Config) (*Protector, error) {
 	switch {
-	case strings.TrimSpace(cfg.Addr) == "":
-		return nil, errors.New("new redis send email code abuse protector: redis addr must not be empty")
-	case cfg.DB < 0:
-		return nil, errors.New("new redis send email code abuse protector: redis db must not be negative")
+	case client == nil:
+		return nil, errors.New("new redis send email code abuse protector: nil redis client")
 	case strings.TrimSpace(cfg.KeyPrefix) == "":
 		return nil, errors.New("new redis send email code abuse protector: redis key prefix must not be empty")
 	case cfg.OperationTimeout <= 0:
 		return nil, errors.New("new redis send email code abuse protector: operation timeout must be positive")
 	}

-	options := &redis.Options{
-		Addr:            cfg.Addr,
-		Username:        cfg.Username,
-		Password:        cfg.Password,
-		DB:              cfg.DB,
-		Protocol:        2,
-		DisableIdentity: true,
-	}
-	if cfg.TLSEnabled {
-		options.TLSConfig = &tls.Config{MinVersion: tls.VersionTLS12}
-	}
-
 	return &Protector{
-		client:           redis.NewClient(options),
+		client:           client,
 		keyPrefix:        cfg.KeyPrefix,
 		operationTimeout: cfg.OperationTimeout,
 	}, nil
 }

-// Close releases the underlying Redis client resources.
-func (p *Protector) Close() error {
-	if p == nil || p.client == nil {
-		return nil
-	}
-
-	return p.client.Close()
-}
-
-// Ping verifies that the configured Redis backend is reachable within the
-// adapter operation timeout budget.
-func (p *Protector) Ping(ctx context.Context) error {
-	operationCtx, cancel, err := p.operationContext(ctx, "ping redis send email code abuse protector")
-	if err != nil {
-		return err
-	}
-	defer cancel()
-
-	if err := p.client.Ping(operationCtx).Err(); err != nil {
-		return fmt.Errorf("ping redis send email code abuse protector: %w", err)
-	}
-
-	return nil
-}
-
 // CheckAndReserve applies the fixed resend cooldown using one TTL key per
 // normalized e-mail address.
 func (p *Protector) CheckAndReserve(ctx context.Context, input ports.SendEmailCodeAbuseInput) (ports.SendEmailCodeAbuseResult, error) {
@@ -10,72 +10,64 @@ import (
 	"galaxy/authsession/internal/ports"

 	"github.com/alicebob/miniredis/v2"
+	"github.com/redis/go-redis/v9"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )

+func newRedisClient(t *testing.T, server *miniredis.Miniredis) *redis.Client {
+	t.Helper()
+
+	client := redis.NewClient(&redis.Options{
+		Addr:            server.Addr(),
+		Protocol:        2,
+		DisableIdentity: true,
+	})
+	t.Cleanup(func() {
+		assert.NoError(t, client.Close())
+	})
+
+	return client
+}
+
 func TestNew(t *testing.T) {
 	t.Parallel()

 	server := miniredis.RunT(t)
+	client := newRedisClient(t, server)

+	validCfg := Config{
+		KeyPrefix:        "authsession:send-email-code-throttle:",
+		OperationTimeout: 250 * time.Millisecond,
+	}
+
 	tests := []struct {
 		name    string
+		client  *redis.Client
 		cfg     Config
 		wantErr string
 	}{
+		{name: "valid config", client: client, cfg: validCfg},
+		{name: "nil client", client: nil, cfg: validCfg, wantErr: "nil redis client"},
 		{
-			name: "valid config",
-			cfg: Config{
-				Addr:             server.Addr(),
-				DB:               1,
-				KeyPrefix:        "authsession:send-email-code-throttle:",
-				OperationTimeout: 250 * time.Millisecond,
-			},
-		},
-		{
-			name: "empty addr",
-			cfg: Config{
-				KeyPrefix:        "authsession:send-email-code-throttle:",
-				OperationTimeout: 250 * time.Millisecond,
-			},
-			wantErr: "redis addr must not be empty",
-		},
-		{
-			name: "negative db",
-			cfg: Config{
-				Addr:             server.Addr(),
-				DB:               -1,
-				KeyPrefix:        "authsession:send-email-code-throttle:",
-				OperationTimeout: 250 * time.Millisecond,
-			},
-			wantErr: "redis db must not be negative",
-		},
-		{
-			name: "empty key prefix",
-			cfg: Config{
-				Addr:             server.Addr(),
-				OperationTimeout: 250 * time.Millisecond,
-			},
+			name:    "empty key prefix",
+			client:  client,
+			cfg:     Config{OperationTimeout: 250 * time.Millisecond},
 			wantErr: "redis key prefix must not be empty",
 		},
 		{
 			name:    "non-positive timeout",
-			cfg: Config{
-				Addr:      server.Addr(),
-				KeyPrefix: "authsession:send-email-code-throttle:",
-			},
+			client:  client,
+			cfg:     Config{KeyPrefix: "authsession:send-email-code-throttle:"},
 			wantErr: "operation timeout must be positive",
 		},
 	}

 	for _, tt := range tests {
-		tt := tt
-
 		t.Run(tt.name, func(t *testing.T) {
 			t.Parallel()

-			protector, err := New(tt.cfg)
+			protector, err := New(tt.client, tt.cfg)
 			if tt.wantErr != "" {
 				require.Error(t, err)
 				assert.ErrorContains(t, err, tt.wantErr)
@@ -83,22 +75,11 @@ func TestNew(t *testing.T) {
 			}

 			require.NoError(t, err)
-			t.Cleanup(func() {
-				assert.NoError(t, protector.Close())
-			})
+			require.NotNil(t, protector)
 		})
 	}
 }

-func TestProtectorPing(t *testing.T) {
-	t.Parallel()
-
-	server := miniredis.RunT(t)
-	protector := newTestProtector(t, server, Config{})
-
-	require.NoError(t, protector.Ping(context.Background()))
-}
-
 func TestProtectorCheckAndReserve(t *testing.T) {
 	t.Parallel()

@@ -156,9 +137,6 @@ func TestProtectorNilContext(t *testing.T) {
 func newTestProtector(t *testing.T, server *miniredis.Miniredis, cfg Config) *Protector {
 	t.Helper()

-	if cfg.Addr == "" {
-		cfg.Addr = server.Addr()
-	}
 	if cfg.KeyPrefix == "" {
 		cfg.KeyPrefix = "authsession:send-email-code-throttle:"
 	}
@@ -166,11 +144,8 @@ func newTestProtector(t *testing.T, server *miniredis.Miniredis, cfg Config) *Pr
 		cfg.OperationTimeout = 250 * time.Millisecond
 	}

-	protector, err := New(cfg)
+	protector, err := New(newRedisClient(t, server), cfg)
 	require.NoError(t, err)
-	t.Cleanup(func() {
-		assert.NoError(t, protector.Close())
-	})

 	return protector
 }
@@ -5,7 +5,6 @@ package sessionstore
 import (
 	"bytes"
 	"context"
-	"crypto/tls"
 	"encoding/base64"
 	"encoding/json"
 	"errors"
@@ -24,23 +23,10 @@ import (

 const mutationRetryLimit = 3

-// Config configures one Redis-backed session store instance.
+// Config configures one Redis-backed session store instance. The store does
+// not own its Redis client; the runtime supplies a shared client constructed
+// via `pkg/redisconn`.
 type Config struct {
-	// Addr is the Redis network address in host:port form.
-	Addr string
-
-	// Username is the optional Redis ACL username.
-	Username string
-
-	// Password is the optional Redis ACL password.
-	Password string
-
-	// DB is the Redis logical database index.
-	DB int
-
-	// TLSEnabled enables TLS with a conservative minimum protocol version.
-	TLSEnabled bool
-
 	// SessionKeyPrefix is the namespace prefix applied to primary session keys.
 	SessionKeyPrefix string

@@ -78,13 +64,12 @@ type redisRecord struct {
 	RevokeActorID string `json:"revoke_actor_id,omitempty"`
 }

-// New constructs a Redis-backed session store from cfg.
-func New(cfg Config) (*Store, error) {
+// New constructs a Redis-backed session store that uses client and applies
+// the namespace and timeout settings from cfg.
+func New(client *redis.Client, cfg Config) (*Store, error) {
 	switch {
-	case strings.TrimSpace(cfg.Addr) == "":
-		return nil, errors.New("new redis session store: redis addr must not be empty")
-	case cfg.DB < 0:
-		return nil, errors.New("new redis session store: redis db must not be negative")
+	case client == nil:
+		return nil, errors.New("new redis session store: nil redis client")
 	case strings.TrimSpace(cfg.SessionKeyPrefix) == "":
 		return nil, errors.New("new redis session store: session key prefix must not be empty")
 	case strings.TrimSpace(cfg.UserSessionsKeyPrefix) == "":
@@ -95,20 +80,8 @@ func New(cfg Config) (*Store, error) {
 		return nil, errors.New("new redis session store: operation timeout must be positive")
 	}

-	options := &redis.Options{
-		Addr:            cfg.Addr,
-		Username:        cfg.Username,
-		Password:        cfg.Password,
-		DB:              cfg.DB,
-		Protocol:        2,
-		DisableIdentity: true,
-	}
-	if cfg.TLSEnabled {
-		options.TLSConfig = &tls.Config{MinVersion: tls.VersionTLS12}
-	}
-
 	return &Store{
-		client:                      redis.NewClient(options),
+		client:                      client,
 		sessionKeyPrefix:            cfg.SessionKeyPrefix,
 		userSessionsKeyPrefix:       cfg.UserSessionsKeyPrefix,
 		userActiveSessionsKeyPrefix: cfg.UserActiveSessionsKeyPrefix,
@@ -116,31 +89,6 @@ func New(cfg Config) (*Store, error) {
 	}, nil
 }

-// Close releases the underlying Redis client resources.
-func (s *Store) Close() error {
-	if s == nil || s.client == nil {
-		return nil
-	}
-
-	return s.client.Close()
-}
-
-// Ping verifies that the configured Redis backend is reachable within the
-// adapter operation timeout budget.
-func (s *Store) Ping(ctx context.Context) error {
-	operationCtx, cancel, err := s.operationContext(ctx, "ping redis session store")
-	if err != nil {
-		return err
-	}
-	defer cancel()
-
-	if err := s.client.Ping(operationCtx).Err(); err != nil {
-		return fmt.Errorf("ping redis session store: %w", err)
-	}
-
-	return nil
-}
-
 // Get returns the stored session for deviceSessionID.
 func (s *Store) Get(ctx context.Context, deviceSessionID common.DeviceSessionID) (devicesession.Session, error) {
 	if err := deviceSessionID.Validate(); err != nil {
@@ -13,10 +13,26 @@ import (
 	"galaxy/authsession/internal/ports"
 
 	"github.com/alicebob/miniredis/v2"
+	"github.com/redis/go-redis/v9"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
 
+func newRedisClient(t *testing.T, server *miniredis.Miniredis) *redis.Client {
+	t.Helper()
+
+	client := redis.NewClient(&redis.Options{
+		Addr:            server.Addr(),
+		Protocol:        2,
+		DisableIdentity: true,
+	})
+	t.Cleanup(func() {
+		assert.NoError(t, client.Close())
+	})
+
+	return client
+}
+
 func TestStoreContract(t *testing.T) {
 	t.Parallel()
 
@@ -32,49 +48,27 @@ func TestNew(t *testing.T) {
 	t.Parallel()
 
 	server := miniredis.RunT(t)
+	client := newRedisClient(t, server)
+
+	validCfg := Config{
+		SessionKeyPrefix:            "authsession:session:",
+		UserSessionsKeyPrefix:       "authsession:user-sessions:",
+		UserActiveSessionsKeyPrefix: "authsession:user-active-sessions:",
+		OperationTimeout:            250 * time.Millisecond,
+	}
 
 	tests := []struct {
 		name    string
+		client  *redis.Client
 		cfg     Config
 		wantErr string
 	}{
+		{name: "valid config", client: client, cfg: validCfg},
+		{name: "nil client", client: nil, cfg: validCfg, wantErr: "nil redis client"},
 		{
-			name: "valid config",
+			name:   "empty session prefix",
+			client: client,
 			cfg: Config{
-				Addr:                        server.Addr(),
-				DB:                          1,
-				SessionKeyPrefix:            "authsession:session:",
-				UserSessionsKeyPrefix:       "authsession:user-sessions:",
-				UserActiveSessionsKeyPrefix: "authsession:user-active-sessions:",
-				OperationTimeout:            250 * time.Millisecond,
-			},
-		},
-		{
-			name: "empty addr",
-			cfg: Config{
-				SessionKeyPrefix:            "authsession:session:",
-				UserSessionsKeyPrefix:       "authsession:user-sessions:",
-				UserActiveSessionsKeyPrefix: "authsession:user-active-sessions:",
-				OperationTimeout:            250 * time.Millisecond,
-			},
-			wantErr: "redis addr must not be empty",
-		},
-		{
-			name: "negative db",
-			cfg: Config{
-				Addr:                        server.Addr(),
-				DB:                          -1,
-				SessionKeyPrefix:            "authsession:session:",
-				UserSessionsKeyPrefix:       "authsession:user-sessions:",
-				UserActiveSessionsKeyPrefix: "authsession:user-active-sessions:",
-				OperationTimeout:            250 * time.Millisecond,
-			},
-			wantErr: "redis db must not be negative",
-		},
-		{
-			name: "empty session prefix",
-			cfg: Config{
-				Addr:                        server.Addr(),
 				UserSessionsKeyPrefix:       "authsession:user-sessions:",
 				UserActiveSessionsKeyPrefix: "authsession:user-active-sessions:",
 				OperationTimeout:            250 * time.Millisecond,
@@ -82,9 +76,9 @@ func TestNew(t *testing.T) {
 			wantErr: "session key prefix must not be empty",
 		},
 		{
 			name:   "empty all sessions prefix",
+			client: client,
 			cfg: Config{
-				Addr:                        server.Addr(),
 				SessionKeyPrefix:            "authsession:session:",
 				UserActiveSessionsKeyPrefix: "authsession:user-active-sessions:",
 				OperationTimeout:            250 * time.Millisecond,
@@ -92,9 +86,9 @@ func TestNew(t *testing.T) {
 			wantErr: "user sessions key prefix must not be empty",
 		},
 		{
 			name:   "empty active sessions prefix",
+			client: client,
 			cfg: Config{
-				Addr:                  server.Addr(),
 				SessionKeyPrefix:      "authsession:session:",
 				UserSessionsKeyPrefix: "authsession:user-sessions:",
 				OperationTimeout:      250 * time.Millisecond,
@@ -102,9 +96,9 @@ func TestNew(t *testing.T) {
 			wantErr: "user active sessions key prefix must not be empty",
 		},
 		{
 			name:   "non positive timeout",
+			client: client,
 			cfg: Config{
-				Addr:                        server.Addr(),
 				SessionKeyPrefix:            "authsession:session:",
 				UserSessionsKeyPrefix:       "authsession:user-sessions:",
 				UserActiveSessionsKeyPrefix: "authsession:user-active-sessions:",
@@ -114,12 +108,10 @@ func TestNew(t *testing.T) {
 	}
 
 	for _, tt := range tests {
-		tt := tt
-
 		t.Run(tt.name, func(t *testing.T) {
 			t.Parallel()
 
-			store, err := New(tt.cfg)
+			store, err := New(tt.client, tt.cfg)
 			if tt.wantErr != "" {
 				require.Error(t, err)
 				assert.ErrorContains(t, err, tt.wantErr)
@@ -127,22 +119,11 @@ func TestNew(t *testing.T) {
 			}
 
 			require.NoError(t, err)
-			t.Cleanup(func() {
-				assert.NoError(t, store.Close())
-			})
+			require.NotNil(t, store)
 		})
 	}
 }
 
-func TestStorePing(t *testing.T) {
-	t.Parallel()
-
-	server := miniredis.RunT(t)
-	store := newTestStore(t, server, Config{})
-
-	require.NoError(t, store.Ping(context.Background()))
-}
-
 func TestStoreCreateAndGetActive(t *testing.T) {
 	t.Parallel()
 
@@ -558,9 +539,6 @@ func TestStoreRevokeAllByUserIDDetectsCorruptActiveIndex(t *testing.T) {
 func newTestStore(t *testing.T, server *miniredis.Miniredis, cfg Config) *Store {
 	t.Helper()
 
-	if cfg.Addr == "" {
-		cfg.Addr = server.Addr()
-	}
 	if cfg.SessionKeyPrefix == "" {
 		cfg.SessionKeyPrefix = "authsession:session:"
 	}
@@ -574,13 +552,9 @@ func newTestStore(t *testing.T, server *miniredis.Miniredis, cfg Config) *Store
 		cfg.OperationTimeout = 250 * time.Millisecond
 	}
 
-	store, err := New(cfg)
+	store, err := New(newRedisClient(t, server), cfg)
 	require.NoError(t, err)
-
-	t.Cleanup(func() {
-		assert.NoError(t, store.Close())
-	})
 
 	return store
 }
@@ -7,6 +7,7 @@ import (
 
 	"galaxy/authsession/internal/adapters/local"
 	"galaxy/authsession/internal/adapters/mail"
+	redisadapter "galaxy/authsession/internal/adapters/redis"
 	"galaxy/authsession/internal/adapters/redis/challengestore"
 	"galaxy/authsession/internal/adapters/redis/configprovider"
 	"galaxy/authsession/internal/adapters/redis/projectionpublisher"
@@ -26,17 +27,10 @@ import (
 	"galaxy/authsession/internal/service/sendemailcode"
 	"galaxy/authsession/internal/telemetry"
 
+	"github.com/redis/go-redis/v9"
 	"go.uber.org/zap"
 )
 
-type pinger interface {
-	Ping(context.Context) error
-}
-
-type closer interface {
-	Close() error
-}
-
 // Runtime owns the runnable authsession application plus the adapter cleanup
 // functions that must run after the process stops.
 type Runtime struct {
@@ -65,91 +59,64 @@ func NewRuntime(ctx context.Context, cfg config.Config, logger *zap.Logger, tele
 		return nil, errors.Join(err, runtime.Close())
 	}
 
-	challengeStore, err := challengestore.New(challengestore.Config{
-		Addr:             cfg.Redis.Addr,
-		Username:         cfg.Redis.Username,
-		Password:         cfg.Redis.Password,
-		DB:               cfg.Redis.DB,
-		TLSEnabled:       cfg.Redis.TLSEnabled,
+	redisClient := redisadapter.NewClient(cfg.Redis)
+	if err := redisadapter.InstrumentClient(redisClient, telemetryRuntime); err != nil {
+		return cleanupOnError(fmt.Errorf("new authsession runtime: %w", err))
+	}
+	runtime.cleanupFns = append(runtime.cleanupFns, func() error {
+		err := redisClient.Close()
+		if errors.Is(err, redis.ErrClosed) {
+			return nil
+		}
+		return err
+	})
+	if err := redisadapter.Ping(ctx, cfg.Redis, redisClient); err != nil {
+		return cleanupOnError(fmt.Errorf("new authsession runtime: %w", err))
+	}
+
+	challengeStore, err := challengestore.New(redisClient, challengestore.Config{
 		KeyPrefix:        cfg.Redis.ChallengeKeyPrefix,
-		OperationTimeout: cfg.Redis.OperationTimeout,
+		OperationTimeout: cfg.Redis.Conn.OperationTimeout,
 	})
 	if err != nil {
 		return cleanupOnError(fmt.Errorf("new authsession runtime: challenge store: %w", err))
 	}
-	runtime.cleanupFns = append(runtime.cleanupFns, challengeStore.Close)
 
-	sessionStore, err := sessionstore.New(sessionstore.Config{
-		Addr:                        cfg.Redis.Addr,
-		Username:                    cfg.Redis.Username,
-		Password:                    cfg.Redis.Password,
-		DB:                          cfg.Redis.DB,
-		TLSEnabled:                  cfg.Redis.TLSEnabled,
+	sessionStore, err := sessionstore.New(redisClient, sessionstore.Config{
 		SessionKeyPrefix:            cfg.Redis.SessionKeyPrefix,
 		UserSessionsKeyPrefix:       cfg.Redis.UserSessionsKeyPrefix,
 		UserActiveSessionsKeyPrefix: cfg.Redis.UserActiveSessionsKeyPrefix,
-		OperationTimeout:            cfg.Redis.OperationTimeout,
+		OperationTimeout:            cfg.Redis.Conn.OperationTimeout,
 	})
 	if err != nil {
 		return cleanupOnError(fmt.Errorf("new authsession runtime: session store: %w", err))
 	}
-	runtime.cleanupFns = append(runtime.cleanupFns, sessionStore.Close)
 
-	configStore, err := configprovider.New(configprovider.Config{
-		Addr:             cfg.Redis.Addr,
-		Username:         cfg.Redis.Username,
-		Password:         cfg.Redis.Password,
-		DB:               cfg.Redis.DB,
-		TLSEnabled:       cfg.Redis.TLSEnabled,
+	configStore, err := configprovider.New(redisClient, configprovider.Config{
 		SessionLimitKey:  cfg.Redis.SessionLimitKey,
-		OperationTimeout: cfg.Redis.OperationTimeout,
+		OperationTimeout: cfg.Redis.Conn.OperationTimeout,
 	})
 	if err != nil {
 		return cleanupOnError(fmt.Errorf("new authsession runtime: config provider: %w", err))
 	}
-	runtime.cleanupFns = append(runtime.cleanupFns, configStore.Close)
 
-	publisher, err := projectionpublisher.New(projectionpublisher.Config{
-		Addr:                  cfg.Redis.Addr,
-		Username:              cfg.Redis.Username,
-		Password:              cfg.Redis.Password,
-		DB:                    cfg.Redis.DB,
-		TLSEnabled:            cfg.Redis.TLSEnabled,
+	publisher, err := projectionpublisher.New(redisClient, projectionpublisher.Config{
 		SessionCacheKeyPrefix: cfg.Redis.GatewaySessionCacheKeyPrefix,
 		SessionEventsStream:   cfg.Redis.GatewaySessionEventsStream,
 		StreamMaxLen:          cfg.Redis.GatewaySessionEventsStreamMaxLen,
-		OperationTimeout:      cfg.Redis.OperationTimeout,
+		OperationTimeout:      cfg.Redis.Conn.OperationTimeout,
 	})
 	if err != nil {
 		return cleanupOnError(fmt.Errorf("new authsession runtime: projection publisher: %w", err))
 	}
-	runtime.cleanupFns = append(runtime.cleanupFns, publisher.Close)
 
-	abuseProtector, err := sendemailcodeabuse.New(sendemailcodeabuse.Config{
-		Addr:             cfg.Redis.Addr,
-		Username:         cfg.Redis.Username,
-		Password:         cfg.Redis.Password,
-		DB:               cfg.Redis.DB,
-		TLSEnabled:       cfg.Redis.TLSEnabled,
+	abuseProtector, err := sendemailcodeabuse.New(redisClient, sendemailcodeabuse.Config{
 		KeyPrefix:        cfg.Redis.SendEmailCodeThrottleKeyPrefix,
-		OperationTimeout: cfg.Redis.OperationTimeout,
+		OperationTimeout: cfg.Redis.Conn.OperationTimeout,
 	})
 	if err != nil {
 		return cleanupOnError(fmt.Errorf("new authsession runtime: send email code abuse protector: %w", err))
 	}
-	runtime.cleanupFns = append(runtime.cleanupFns, abuseProtector.Close)
 
-	for name, dependency := range map[string]pinger{
-		"challenge store":                 challengeStore,
-		"session store":                   sessionStore,
-		"config provider":                 configStore,
-		"projection publisher":            publisher,
-		"send email code abuse protector": abuseProtector,
-	} {
-		if err := dependency.Ping(ctx); err != nil {
-			return cleanupOnError(fmt.Errorf("new authsession runtime: ping %s: %w", name, err))
-		}
-	}
-
 	clock := local.Clock{}
 	idGenerator := local.IDGenerator{}
@@ -26,7 +26,8 @@ func TestNewRuntimeStartsAndStopsHTTPServers(t *testing.T) {
 	redisServer := miniredis.RunT(t)
 
 	cfg := config.DefaultConfig()
-	cfg.Redis.Addr = redisServer.Addr()
+	cfg.Redis.Conn.MasterAddr = redisServer.Addr()
+	cfg.Redis.Conn.Password = "integration"
 	cfg.PublicHTTP.Addr = mustFreeAddr(t)
 	cfg.InternalHTTP.Addr = mustFreeAddr(t)
 	cfg.ShutdownTimeout = 10 * time.Second
@@ -69,7 +70,8 @@ func TestNewRuntimeUsesRESTUserDirectoryWhenConfigured(t *testing.T) {
 	defer userService.Close()
 
 	cfg := config.DefaultConfig()
-	cfg.Redis.Addr = redisServer.Addr()
+	cfg.Redis.Conn.MasterAddr = redisServer.Addr()
+	cfg.Redis.Conn.Password = "integration"
 	cfg.PublicHTTP.Addr = mustFreeAddr(t)
 	cfg.InternalHTTP.Addr = mustFreeAddr(t)
 	cfg.UserService.Mode = "rest"
@@ -116,7 +118,8 @@ func TestNewRuntimeUsesRESTMailSenderWhenConfigured(t *testing.T) {
 	defer mailService.Close()
 
 	cfg := config.DefaultConfig()
-	cfg.Redis.Addr = redisServer.Addr()
+	cfg.Redis.Conn.MasterAddr = redisServer.Addr()
+	cfg.Redis.Conn.Password = "integration"
 	cfg.PublicHTTP.Addr = mustFreeAddr(t)
 	cfg.InternalHTTP.Addr = mustFreeAddr(t)
 	cfg.MailService.Mode = "rest"
@@ -152,12 +155,13 @@ func TestNewRuntimeFailsFastWhenRedisPingChecksFail(t *testing.T) {
 	t.Parallel()
 
 	cfg := config.DefaultConfig()
-	cfg.Redis.Addr = mustFreeAddr(t)
+	cfg.Redis.Conn.MasterAddr = mustFreeAddr(t)
+	cfg.Redis.Conn.Password = "integration"
 
 	runtime, err := NewRuntime(context.Background(), cfg, zap.NewNop(), nil)
 	require.Nil(t, runtime)
 	require.Error(t, err)
-	assert.ErrorContains(t, err, "new authsession runtime: ping")
+	assert.ErrorContains(t, err, "ping redis")
 }
 
 func mustFreeAddr(t *testing.T) string {
@@ -11,10 +11,13 @@ import (
 
 	"galaxy/authsession/internal/api/internalhttp"
 	"galaxy/authsession/internal/api/publichttp"
+	"galaxy/redisconn"
 
 	"go.uber.org/zap/zapcore"
 )
 
+const authsessionRedisEnvPrefix = "AUTHSESSION"
+
 const (
 	shutdownTimeoutEnvVar = "AUTHSESSION_SHUTDOWN_TIMEOUT"
 	logLevelEnvVar        = "AUTHSESSION_LOG_LEVEL"
@@ -31,13 +34,6 @@ const (
 	internalHTTPIdleTimeoutEnvVar    = "AUTHSESSION_INTERNAL_HTTP_IDLE_TIMEOUT"
 	internalHTTPRequestTimeoutEnvVar = "AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT"
 
-	redisAddrEnvVar             = "AUTHSESSION_REDIS_ADDR"
-	redisUsernameEnvVar         = "AUTHSESSION_REDIS_USERNAME"
-	redisPasswordEnvVar         = "AUTHSESSION_REDIS_PASSWORD"
-	redisDBEnvVar               = "AUTHSESSION_REDIS_DB"
-	redisTLSEnabledEnvVar       = "AUTHSESSION_REDIS_TLS_ENABLED"
-	redisOperationTimeoutEnvVar = "AUTHSESSION_REDIS_OPERATION_TIMEOUT"
-
 	redisChallengeKeyPrefixEnvVar    = "AUTHSESSION_REDIS_CHALLENGE_KEY_PREFIX"
 	redisSessionKeyPrefixEnvVar      = "AUTHSESSION_REDIS_SESSION_KEY_PREFIX"
 	redisUserSessionsKeyPrefixEnvVar = "AUTHSESSION_REDIS_USER_SESSIONS_KEY_PREFIX"
@@ -67,8 +63,6 @@ const (
 
 	defaultShutdownTimeout = 5 * time.Second
 	defaultLogLevel        = "info"
-	defaultRedisDB               = 0
-	defaultRedisOperationTimeout = 250 * time.Millisecond
 	defaultChallengeKeyPrefix    = "authsession:challenge:"
 	defaultSessionKeyPrefix      = "authsession:session:"
 	defaultUserSessionsKeyPrefix = "authsession:user-sessions:"
@@ -128,23 +122,10 @@ type LoggingConfig struct {
 
 // RedisConfig configures the Redis-backed authsession adapters.
 type RedisConfig struct {
-	// Addr is the shared Redis address used by the authsession adapters.
-	Addr string
-
-	// Username is the optional Redis ACL username.
-	Username string
-
-	// Password is the optional Redis ACL password.
-	Password string
-
-	// DB is the Redis logical database index.
-	DB int
-
-	// TLSEnabled configures whether Redis connections use TLS.
-	TLSEnabled bool
-
-	// OperationTimeout bounds each adapter Redis round trip.
-	OperationTimeout time.Duration
+	// Conn carries the master/replica/password connection topology shared by
+	// every authsession Redis adapter, sourced from the AUTHSESSION_REDIS_*
+	// environment variables managed by `pkg/redisconn`.
+	Conn redisconn.Config
 
 	// ChallengeKeyPrefix namespaces the challenge source-of-truth records.
 	ChallengeKeyPrefix string
@@ -248,8 +229,7 @@ func DefaultConfig() Config {
 		PublicHTTP:   publichttp.DefaultConfig(),
 		InternalHTTP: internalhttp.DefaultConfig(),
 		Redis: RedisConfig{
-			DB:               defaultRedisDB,
-			OperationTimeout: defaultRedisOperationTimeout,
+			Conn:                  redisconn.DefaultConfig(),
 			ChallengeKeyPrefix:    defaultChallengeKeyPrefix,
 			SessionKeyPrefix:      defaultSessionKeyPrefix,
 			UserSessionsKeyPrefix: defaultUserSessionsKeyPrefix,
@@ -329,21 +309,11 @@ func LoadFromEnv() (Config, error) {
 		return Config{}, fmt.Errorf("load authsession config: %w", err)
 	}
 
-	cfg.Redis.Addr = loadStringEnvWithDefault(redisAddrEnvVar, cfg.Redis.Addr)
-	cfg.Redis.Username = os.Getenv(redisUsernameEnvVar)
-	cfg.Redis.Password = os.Getenv(redisPasswordEnvVar)
-	cfg.Redis.DB, err = loadIntEnvWithDefault(redisDBEnvVar, cfg.Redis.DB)
-	if err != nil {
-		return Config{}, fmt.Errorf("load authsession config: %w", err)
-	}
-	cfg.Redis.TLSEnabled, err = loadBoolEnvWithDefault(redisTLSEnabledEnvVar, cfg.Redis.TLSEnabled)
-	if err != nil {
-		return Config{}, fmt.Errorf("load authsession config: %w", err)
-	}
-	cfg.Redis.OperationTimeout, err = loadDurationEnvWithDefault(redisOperationTimeoutEnvVar, cfg.Redis.OperationTimeout)
+	redisConn, err := redisconn.LoadFromEnv(authsessionRedisEnvPrefix)
 	if err != nil {
 		return Config{}, fmt.Errorf("load authsession config: %w", err)
 	}
+	cfg.Redis.Conn = redisConn
 	cfg.Redis.ChallengeKeyPrefix = loadStringEnvWithDefault(redisChallengeKeyPrefixEnvVar, cfg.Redis.ChallengeKeyPrefix)
 	cfg.Redis.SessionKeyPrefix = loadStringEnvWithDefault(redisSessionKeyPrefixEnvVar, cfg.Redis.SessionKeyPrefix)
 	cfg.Redis.UserSessionsKeyPrefix = loadStringEnvWithDefault(redisUserSessionsKeyPrefixEnvVar, cfg.Redis.UserSessionsKeyPrefix)
@@ -404,15 +374,13 @@ func LoadFromEnv() (Config, error) {
 // Validate reports whether cfg contains a consistent authsession process
 // configuration.
 func (cfg Config) Validate() error {
-	switch {
-	case cfg.ShutdownTimeout <= 0:
+	if cfg.ShutdownTimeout <= 0 {
 		return fmt.Errorf("load authsession config: %s must be positive", shutdownTimeoutEnvVar)
-	case strings.TrimSpace(cfg.Redis.Addr) == "":
-		return fmt.Errorf("load authsession config: %s must not be empty", redisAddrEnvVar)
-	case cfg.Redis.DB < 0:
-		return fmt.Errorf("load authsession config: %s must not be negative", redisDBEnvVar)
-	case cfg.Redis.OperationTimeout <= 0:
-		return fmt.Errorf("load authsession config: %s must be positive", redisOperationTimeoutEnvVar)
+	}
+	if err := cfg.Redis.Conn.Validate(); err != nil {
+		return fmt.Errorf("load authsession config: redis: %w", err)
+	}
+	switch {
 	case strings.TrimSpace(cfg.Redis.ChallengeKeyPrefix) == "":
 		return fmt.Errorf("load authsession config: %s must not be empty", redisChallengeKeyPrefixEnvVar)
 	case strings.TrimSpace(cfg.Redis.SessionKeyPrefix) == "":
@@ -8,8 +8,24 @@ import (
|
|||||||
"github.com/stretchr/testify/require"
|
"github.com/stretchr/testify/require"
|
||||||
)
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
testRedisMasterAddrEnvVar = "AUTHSESSION_REDIS_MASTER_ADDR"
|
||||||
|
testRedisPasswordEnvVar = "AUTHSESSION_REDIS_PASSWORD"
|
||||||
|
testRedisReplicaEnvVar = "AUTHSESSION_REDIS_REPLICA_ADDRS"
|
||||||
|
testRedisDBEnvVar = "AUTHSESSION_REDIS_DB"
|
||||||
|
testRedisOpTimeoutEnvVar = "AUTHSESSION_REDIS_OPERATION_TIMEOUT"
|
||||||
|
testRedisTLSEnabledEnvVar = "AUTHSESSION_REDIS_TLS_ENABLED"
|
||||||
|
testRedisUsernameEnvVar = "AUTHSESSION_REDIS_USERNAME"
|
||||||
|
)
|
||||||
|
|
||||||
|
func setRequiredRedisEnv(t *testing.T) {
|
||||||
|
t.Helper()
|
||||||
|
t.Setenv(testRedisMasterAddrEnvVar, "127.0.0.1:6379")
|
||||||
|
t.Setenv(testRedisPasswordEnvVar, "secret")
|
||||||
|
}
|
||||||
|
|
||||||
func TestLoadFromEnvUsesDefaults(t *testing.T) {
|
func TestLoadFromEnvUsesDefaults(t *testing.T) {
|
||||||
t.Setenv(redisAddrEnvVar, "127.0.0.1:6379")
|
setRequiredRedisEnv(t)
|
||||||
|
|
||||||
cfg, err := LoadFromEnv()
|
cfg, err := LoadFromEnv()
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
@@ -19,9 +35,11 @@ func TestLoadFromEnvUsesDefaults(t *testing.T) {
 	assert.Equal(t, defaults.Logging.Level, cfg.Logging.Level)
 	assert.Equal(t, defaults.PublicHTTP, cfg.PublicHTTP)
 	assert.Equal(t, defaults.InternalHTTP, cfg.InternalHTTP)
-	assert.Equal(t, "127.0.0.1:6379", cfg.Redis.Addr)
-	assert.Equal(t, defaults.Redis.DB, cfg.Redis.DB)
-	assert.Equal(t, defaults.Redis.OperationTimeout, cfg.Redis.OperationTimeout)
+	assert.Equal(t, "127.0.0.1:6379", cfg.Redis.Conn.MasterAddr)
+	assert.Equal(t, "secret", cfg.Redis.Conn.Password)
+	assert.Equal(t, defaults.Redis.Conn.DB, cfg.Redis.Conn.DB)
+	assert.Equal(t, defaults.Redis.Conn.OperationTimeout, cfg.Redis.Conn.OperationTimeout)
+	assert.Empty(t, cfg.Redis.Conn.ReplicaAddrs)
 	assert.Equal(t, defaults.UserService, cfg.UserService)
 	assert.Equal(t, defaults.MailService, cfg.MailService)
 	assert.Equal(t, defaults.Telemetry.ServiceName, cfg.Telemetry.ServiceName)
@@ -36,12 +54,11 @@ func TestLoadFromEnvAppliesOverrides(t *testing.T) {
 	t.Setenv(logLevelEnvVar, "debug")
 	t.Setenv(publicHTTPAddrEnvVar, "127.0.0.1:18080")
 	t.Setenv(internalHTTPAddrEnvVar, "127.0.0.1:18081")
-	t.Setenv(redisAddrEnvVar, "127.0.0.1:6380")
-	t.Setenv(redisUsernameEnvVar, "alice")
-	t.Setenv(redisPasswordEnvVar, "secret")
-	t.Setenv(redisDBEnvVar, "3")
-	t.Setenv(redisTLSEnabledEnvVar, "true")
-	t.Setenv(redisOperationTimeoutEnvVar, "750ms")
+	t.Setenv(testRedisMasterAddrEnvVar, "127.0.0.1:6380")
+	t.Setenv(testRedisPasswordEnvVar, "secret")
+	t.Setenv(testRedisReplicaEnvVar, "127.0.0.1:6381,127.0.0.1:6382")
+	t.Setenv(testRedisDBEnvVar, "3")
+	t.Setenv(testRedisOpTimeoutEnvVar, "750ms")
 	t.Setenv(userServiceModeEnvVar, "rest")
 	t.Setenv(userServiceBaseURLEnvVar, "http://127.0.0.1:19090")
 	t.Setenv(userServiceRequestTimeoutEnvVar, "900ms")
@@ -62,12 +79,11 @@ func TestLoadFromEnvAppliesOverrides(t *testing.T) {
 	assert.Equal(t, "debug", cfg.Logging.Level)
 	assert.Equal(t, "127.0.0.1:18080", cfg.PublicHTTP.Addr)
 	assert.Equal(t, "127.0.0.1:18081", cfg.InternalHTTP.Addr)
-	assert.Equal(t, "127.0.0.1:6380", cfg.Redis.Addr)
-	assert.Equal(t, "alice", cfg.Redis.Username)
-	assert.Equal(t, "secret", cfg.Redis.Password)
-	assert.Equal(t, 3, cfg.Redis.DB)
-	assert.True(t, cfg.Redis.TLSEnabled)
-	assert.Equal(t, 750*time.Millisecond, cfg.Redis.OperationTimeout)
+	assert.Equal(t, "127.0.0.1:6380", cfg.Redis.Conn.MasterAddr)
+	assert.Equal(t, "secret", cfg.Redis.Conn.Password)
+	assert.Equal(t, []string{"127.0.0.1:6381", "127.0.0.1:6382"}, cfg.Redis.Conn.ReplicaAddrs)
+	assert.Equal(t, 3, cfg.Redis.Conn.DB)
+	assert.Equal(t, 750*time.Millisecond, cfg.Redis.Conn.OperationTimeout)
 	assert.Equal(t, UserServiceConfig{
 		Mode:    "rest",
 		BaseURL: "http://127.0.0.1:19090",
@@ -104,10 +120,8 @@ func TestLoadFromEnvRejectsInvalidValues(t *testing.T) {
 	}
 
 	for _, tt := range tests {
-		tt := tt
-
 		t.Run(tt.name, func(t *testing.T) {
-			t.Setenv(redisAddrEnvVar, "127.0.0.1:6379")
+			setRequiredRedisEnv(t)
 			t.Setenv(tt.envName, tt.envVal)
 			if tt.envName == otelExporterOTLPTracesProtocolEnvVar {
 				t.Setenv(otelTracesExporterEnvVar, "otlp")
@@ -121,7 +135,7 @@ func TestLoadFromEnvRejectsInvalidValues(t *testing.T) {
 }
 
 func TestLoadFromEnvRejectsInvalidRESTUserServiceConfiguration(t *testing.T) {
-	t.Setenv(redisAddrEnvVar, "127.0.0.1:6379")
+	setRequiredRedisEnv(t)
 	t.Setenv(userServiceModeEnvVar, "rest")
 
 	t.Run("missing base url", func(t *testing.T) {
@@ -141,7 +155,7 @@ func TestLoadFromEnvRejectsInvalidRESTUserServiceConfiguration(t *testing.T) {
 }
 
 func TestLoadFromEnvRejectsInvalidRESTMailServiceConfiguration(t *testing.T) {
-	t.Setenv(redisAddrEnvVar, "127.0.0.1:6379")
+	setRequiredRedisEnv(t)
 	t.Setenv(mailServiceModeEnvVar, "rest")
 
 	t.Run("missing base url", func(t *testing.T) {
@@ -159,3 +173,40 @@ func TestLoadFromEnvRejectsInvalidRESTMailServiceConfiguration(t *testing.T) {
 		assert.Contains(t, err.Error(), mailServiceRequestTimeoutEnvVar)
 	})
 }
+
+func TestLoadFromEnvRejectsDeprecatedRedisVars(t *testing.T) {
+	tests := []struct {
+		name    string
+		envName string
+	}{
+		{name: "tls enabled deprecated", envName: testRedisTLSEnabledEnvVar},
+		{name: "username deprecated", envName: testRedisUsernameEnvVar},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			setRequiredRedisEnv(t)
+			t.Setenv(tt.envName, "true")
+
+			_, err := LoadFromEnv()
+			require.Error(t, err)
+			assert.Contains(t, err.Error(), tt.envName)
+		})
+	}
+}
+
+func TestLoadFromEnvRequiresRedisMasterAddr(t *testing.T) {
+	t.Setenv(testRedisPasswordEnvVar, "secret")
+
+	_, err := LoadFromEnv()
+	require.Error(t, err)
+	assert.Contains(t, err.Error(), testRedisMasterAddrEnvVar)
+}
+
+func TestLoadFromEnvRequiresRedisPassword(t *testing.T) {
+	t.Setenv(testRedisMasterAddrEnvVar, "127.0.0.1:6379")
+
+	_, err := LoadFromEnv()
+	require.Error(t, err)
+	assert.Contains(t, err.Error(), testRedisPasswordEnvVar)
+}
@@ -231,17 +231,13 @@ func newHardeningApp(t *testing.T, env *hardeningEnvironment, options hardeningA
 		env.redisServer.Set(gatewayCompatibilitySessionLimitKey, strconv.Itoa(*options.SessionLimit))
 	}
 
-	challengeStore, err := challengestore.New(challengestore.Config{
-		Addr:             env.redisAddr,
-		DB:               0,
+	challengeStore, err := challengestore.New(env.redisClient, challengestore.Config{
 		KeyPrefix:        gatewayCompatibilityChallengeKeyPrefix,
 		OperationTimeout: 250 * time.Millisecond,
 	})
 	require.NoError(t, err)
 
-	redisSessionStore, err := sessionstore.New(sessionstore.Config{
-		Addr:                        env.redisAddr,
-		DB:                          0,
+	redisSessionStore, err := sessionstore.New(env.redisClient, sessionstore.Config{
 		SessionKeyPrefix:            gatewayCompatibilitySessionKeyPrefix,
 		UserSessionsKeyPrefix:       gatewayCompatibilityUserSessionsKeyPrefix,
 		UserActiveSessionsKeyPrefix: gatewayCompatibilityUserActiveKeyPrefix,
@@ -249,17 +245,13 @@ func newHardeningApp(t *testing.T, env *hardeningEnvironment, options hardeningA
 	})
 	require.NoError(t, err)
 
-	configStore, err := configprovider.New(configprovider.Config{
-		Addr:             env.redisAddr,
-		DB:               0,
+	configStore, err := configprovider.New(env.redisClient, configprovider.Config{
 		SessionLimitKey:  gatewayCompatibilitySessionLimitKey,
 		OperationTimeout: 250 * time.Millisecond,
 	})
 	require.NoError(t, err)
 
-	redisPublisher, err := projectionpublisher.New(projectionpublisher.Config{
-		Addr:                  env.redisAddr,
-		DB:                    0,
+	redisPublisher, err := projectionpublisher.New(env.redisClient, projectionpublisher.Config{
 		SessionCacheKeyPrefix: gatewayCompatibilitySessionCacheKeyPrefix,
 		SessionEventsStream:   gatewayCompatibilitySessionEventsStream,
 		StreamMaxLen:          gatewayCompatibilityStreamMaxLen,
@@ -373,10 +365,6 @@ func newHardeningApp(t *testing.T, env *hardeningEnvironment, options hardeningA
 	app.closeFn = func() {
 		stopPublic()
 		stopInternal()
-		assert.NoError(t, challengeStore.Close())
-		assert.NoError(t, redisSessionStore.Close())
-		assert.NoError(t, configStore.Close())
-		assert.NoError(t, redisPublisher.Close())
 	}
 	t.Cleanup(func() {
 		app.Close()
@@ -678,18 +666,13 @@ func TestProductionHardeningDuplicatePublishKeepsGatewayCacheCanonical(t *testin
 	t.Parallel()
 
 	env := newHardeningEnvironment(t)
-	publisher, err := projectionpublisher.New(projectionpublisher.Config{
-		Addr:                  env.redisAddr,
-		DB:                    0,
+	publisher, err := projectionpublisher.New(env.redisClient, projectionpublisher.Config{
 		SessionCacheKeyPrefix: gatewayCompatibilitySessionCacheKeyPrefix,
 		SessionEventsStream:   gatewayCompatibilitySessionEventsStream,
 		StreamMaxLen:          gatewayCompatibilityStreamMaxLen,
 		OperationTimeout:      250 * time.Millisecond,
 	})
 	require.NoError(t, err)
-	defer func() {
-		assert.NoError(t, publisher.Close())
-	}()
 
 	snapshot := gatewayprojection.Snapshot{
 		DeviceSessionID: common.DeviceSessionID("device-session-1"),
@@ -1,273 +0,0 @@
-package authsession
-
-import (
-	"bytes"
-	"context"
-	"fmt"
-	"io"
-	"net"
-	"os"
-	"os/exec"
-	"path/filepath"
-	"runtime"
-	"strings"
-	"syscall"
-	"testing"
-	"time"
-
-	"galaxy/authsession/internal/adapters/userservice"
-	"galaxy/authsession/internal/domain/common"
-	"galaxy/authsession/internal/domain/userresolution"
-	"galaxy/authsession/internal/ports"
-
-	"github.com/alicebob/miniredis/v2"
-)
-
-func TestUserServiceRESTClientWorksAgainstRealUserServiceRuntime(t *testing.T) {
-	redisServer := miniredis.RunT(t)
-	internalAddr := freeTCPAddress(t)
-	binaryPath := buildUserServiceBinary(t)
-	process := startUserServiceProcess(t, binaryPath, map[string]string{
-		"USERSERVICE_INTERNAL_HTTP_ADDR": internalAddr,
-		"USERSERVICE_REDIS_ADDR":         redisServer.Addr(),
-	})
-	waitForTCP(t, process, internalAddr)
-
-	client, err := userservice.NewRESTClient(userservice.Config{
-		BaseURL:        "http://" + internalAddr,
-		RequestTimeout: 500 * time.Millisecond,
-	})
-	if err != nil {
-		t.Fatalf("NewRESTClient() error = %v, want nil", err)
-	}
-	t.Cleanup(func() {
-		_ = client.Close()
-	})
-
-	creatableEmail := common.Email("pilot@example.com")
-
-	resolution, err := client.ResolveByEmail(context.Background(), creatableEmail)
-	if err != nil {
-		t.Fatalf("ResolveByEmail(creatable) error = %v, want nil", err)
-	}
-	if got, want := resolution.Kind, userresolution.KindCreatable; got != want {
-		t.Fatalf("ResolveByEmail(creatable).Kind = %q, want %q", got, want)
-	}
-
-	created, err := client.EnsureUserByEmail(context.Background(), ports.EnsureUserInput{
-		Email: creatableEmail,
-		RegistrationContext: &ports.RegistrationContext{
-			PreferredLanguage: "en",
-			TimeZone:          "Europe/Kaliningrad",
-		},
-	})
-	if err != nil {
-		t.Fatalf("EnsureUserByEmail(created) error = %v, want nil", err)
-	}
-	if got, want := created.Outcome, ports.EnsureUserOutcomeCreated; got != want {
-		t.Fatalf("EnsureUserByEmail(created).Outcome = %q, want %q", got, want)
-	}
-	if created.UserID.IsZero() {
-		t.Fatalf("EnsureUserByEmail(created).UserID = zero, want non-zero")
-	}
-
-	existing, err := client.ResolveByEmail(context.Background(), creatableEmail)
-	if err != nil {
-		t.Fatalf("ResolveByEmail(existing) error = %v, want nil", err)
-	}
-	if got, want := existing.Kind, userresolution.KindExisting; got != want {
-		t.Fatalf("ResolveByEmail(existing).Kind = %q, want %q", got, want)
-	}
-	if got, want := existing.UserID, created.UserID; got != want {
-		t.Fatalf("ResolveByEmail(existing).UserID = %q, want %q", got, want)
-	}
-
-	exists, err := client.ExistsByUserID(context.Background(), created.UserID)
-	if err != nil {
-		t.Fatalf("ExistsByUserID(existing) error = %v, want nil", err)
-	}
-	if !exists {
-		t.Fatalf("ExistsByUserID(existing) = false, want true")
-	}
-
-	blocked, err := client.BlockByUserID(context.Background(), ports.BlockUserByIDInput{
-		UserID:     created.UserID,
-		ReasonCode: userresolution.BlockReasonCode("policy_blocked"),
-	})
-	if err != nil {
-		t.Fatalf("BlockByUserID() error = %v, want nil", err)
-	}
-	if got, want := blocked.Outcome, ports.BlockUserOutcomeBlocked; got != want {
-		t.Fatalf("BlockByUserID().Outcome = %q, want %q", got, want)
-	}
-	if got, want := blocked.UserID, created.UserID; got != want {
-		t.Fatalf("BlockByUserID().UserID = %q, want %q", got, want)
-	}
-
-	repeated, err := client.BlockByEmail(context.Background(), ports.BlockUserByEmailInput{
-		Email:      creatableEmail,
-		ReasonCode: userresolution.BlockReasonCode("policy_blocked"),
-	})
-	if err != nil {
-		t.Fatalf("BlockByEmail(repeated) error = %v, want nil", err)
-	}
-	if got, want := repeated.Outcome, ports.BlockUserOutcomeAlreadyBlocked; got != want {
-		t.Fatalf("BlockByEmail(repeated).Outcome = %q, want %q", got, want)
-	}
-	if got, want := repeated.UserID, created.UserID; got != want {
-		t.Fatalf("BlockByEmail(repeated).UserID = %q, want %q", got, want)
-	}
-
-	blockedResolution, err := client.ResolveByEmail(context.Background(), creatableEmail)
-	if err != nil {
-		t.Fatalf("ResolveByEmail(blocked) error = %v, want nil", err)
-	}
-	if got, want := blockedResolution.Kind, userresolution.KindBlocked; got != want {
-		t.Fatalf("ResolveByEmail(blocked).Kind = %q, want %q", got, want)
-	}
-	if got, want := blockedResolution.BlockReasonCode, userresolution.BlockReasonCode("policy_blocked"); got != want {
-		t.Fatalf("ResolveByEmail(blocked).BlockReasonCode = %q, want %q", got, want)
-	}
-}
-
-type userServiceProcess struct {
-	cmd    *exec.Cmd
-	doneCh chan struct{}
-	logs   bytes.Buffer
-}
-
-func startUserServiceProcess(t *testing.T, binaryPath string, env map[string]string) *userServiceProcess {
-	t.Helper()
-
-	cmd := exec.Command(binaryPath)
-	cmd.Env = mergeEnvironment(os.Environ(), env)
-
-	process := &userServiceProcess{
-		cmd:    cmd,
-		doneCh: make(chan struct{}),
-	}
-	cmd.Stdout = &process.logs
-	cmd.Stderr = &process.logs
-
-	if err := cmd.Start(); err != nil {
-		t.Fatalf("start user service process: %v", err)
-	}
-
-	go func() {
-		_ = cmd.Wait()
-		close(process.doneCh)
-	}()
-
-	t.Cleanup(func() {
-		stopUserServiceProcess(t, process)
-		if t.Failed() {
-			t.Logf("userservice logs:\n%s", process.logs.String())
-		}
-	})
-
-	return process
-}
-
-func stopUserServiceProcess(t *testing.T, process *userServiceProcess) {
-	t.Helper()
-
-	if process == nil || process.cmd == nil || process.cmd.Process == nil {
-		return
-	}
-
-	select {
-	case <-process.doneCh:
-		return
-	default:
-	}
-
-	_ = process.cmd.Process.Signal(syscall.SIGTERM)
-
-	select {
-	case <-process.doneCh:
-	case <-time.After(5 * time.Second):
-		_ = process.cmd.Process.Kill()
-		<-process.doneCh
-	}
-}
-
-func waitForTCP(t *testing.T, process *userServiceProcess, address string) {
-	t.Helper()
-
-	deadline := time.Now().Add(10 * time.Second)
-	for time.Now().Before(deadline) {
-		select {
-		case <-process.doneCh:
-			t.Fatalf("userservice exited before %s became reachable\n%s", address, process.logs.String())
-		default:
-		}
-
-		conn, err := net.DialTimeout("tcp", address, 100*time.Millisecond)
-		if err == nil {
-			_ = conn.Close()
-			return
-		}
-
-		time.Sleep(25 * time.Millisecond)
-	}
-
-	t.Fatalf("userservice did not become reachable at %s\n%s", address, process.logs.String())
-}
-
-func freeTCPAddress(t *testing.T) string {
-	t.Helper()
-
-	listener, err := net.Listen("tcp", "127.0.0.1:0")
-	if err != nil {
-		t.Fatalf("reserve free TCP address: %v", err)
-	}
-	defer listener.Close()
-
-	return listener.Addr().String()
-}
-
-func buildUserServiceBinary(t *testing.T) string {
-	t.Helper()
-
-	outputPath := filepath.Join(t.TempDir(), "userservice")
-	cmd := exec.Command("go", "build", "-o", outputPath, "./user/cmd/userservice")
-	cmd.Dir = repositoryRoot(t)
-	output, err := cmd.CombinedOutput()
-	if err != nil {
-		t.Fatalf("build userservice binary: %v\n%s", err, output)
-	}
-
-	return outputPath
-}
-
-func repositoryRoot(t *testing.T) string {
-	t.Helper()
-
-	_, file, _, ok := runtime.Caller(0)
-	if !ok {
-		t.Fatal("resolve repository root: runtime caller unavailable")
-	}
-
-	return filepath.Clean(filepath.Join(filepath.Dir(file), ".."))
-}
-
-func mergeEnvironment(base []string, overrides map[string]string) []string {
-	values := make(map[string]string, len(base)+len(overrides))
-	for _, entry := range base {
-		name, value, ok := strings.Cut(entry, "=")
-		if ok {
-			values[name] = value
-		}
-	}
-	for name, value := range overrides {
-		values[name] = value
-	}
-
-	merged := make([]string, 0, len(values))
-	for name, value := range values {
-		merged = append(merged, fmt.Sprintf("%s=%s", name, value))
-	}
-	return merged
-}
-
-var _ io.Writer = (*bytes.Buffer)(nil)
+2
-2
@@ -38,8 +38,8 @@ require (
 	github.com/srwiley/rasterx v0.0.0-20220730225603-2ab79fcdd4ef // indirect
 	github.com/yuin/goldmark v1.7.16 // indirect
 	golang.org/x/image v0.36.0 // indirect
-	golang.org/x/net v0.52.0 // indirect
-	golang.org/x/sys v0.42.0 // indirect
+	golang.org/x/net v0.53.0 // indirect
+	golang.org/x/sys v0.43.0 // indirect
 	golang.org/x/text v0.36.0 // indirect
 	gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
 	gopkg.in/yaml.v3 v3.0.1 // indirect
+2
-2
@@ -72,8 +72,8 @@ go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
 go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
 golang.org/x/image v0.36.0 h1:Iknbfm1afbgtwPTmHnS2gTM/6PPZfH+z2EFuOkSbqwc=
 golang.org/x/image v0.36.0/go.mod h1:YsWD2TyyGKiIX1kZlu9QfKIsQ4nAAK9bdgdrIsE7xy4=
-golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0=
-golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo=
+golang.org/x/net v0.53.0 h1:d+qAbo5L0orcWAr0a9JweQpjXF19LMXJE8Ey7hwOdUA=
+golang.org/x/sys v0.43.0 h1:Rlag2XtaFTxp19wS8MXlJwTvoh8ArU6ezoyFsMyCTNI=
 golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
+4
-4
@@ -24,7 +24,7 @@ require (
 	github.com/json-iterator/go v1.1.12 // indirect
 	github.com/klauspost/cpuid/v2 v2.3.0 // indirect
 	github.com/leodido/go-urn v1.4.0 // indirect
-	github.com/mattn/go-isatty v0.0.20 // indirect
+	github.com/mattn/go-isatty v0.0.21 // indirect
 	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
 	github.com/modern-go/reflect2 v1.0.2 // indirect
 	github.com/pelletier/go-toml/v2 v2.3.0 // indirect
@@ -36,9 +36,9 @@ require (
 	github.com/ugorji/go/codec v1.3.1 // indirect
 	go.mongodb.org/mongo-driver/v2 v2.5.0 // indirect
 	golang.org/x/arch v0.25.0 // indirect
-	golang.org/x/crypto v0.49.0 // indirect
-	golang.org/x/net v0.52.0 // indirect
-	golang.org/x/sys v0.42.0 // indirect
+	golang.org/x/crypto v0.50.0 // indirect
+	golang.org/x/net v0.53.0 // indirect
+	golang.org/x/sys v0.43.0 // indirect
 	golang.org/x/text v0.36.0 // indirect
 	google.golang.org/protobuf v1.36.11 // indirect
 	gopkg.in/yaml.v3 v3.0.1 // indirect
+4
-6
@@ -36,8 +36,7 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
 github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
 github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
 github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
-github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
-github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
+github.com/mattn/go-isatty v0.0.21 h1:xYae+lCNBP7QuW4PUnNG61ffM4hVIfm+zUzDuSzYLGs=
 github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
 github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
 github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -70,10 +69,9 @@ go.mongodb.org/mongo-driver/v2 v2.5.0 h1:yXUhImUjjAInNcpTcAlPHiT7bIXhshCTL3jVBkF
 go.uber.org/mock v0.6.0 h1:hyF9dfmbgIX5EfOdasqLsWD6xqpNZlXblLB/Dbnwv3Y=
 go.uber.org/mock v0.6.0/go.mod h1:KiVJ4BqZJaMj4svdfmHM0AUx4NJYO8ZNpPnZn1Z+BBU=
 golang.org/x/arch v0.25.0 h1:qnk6Ksugpi5Bz32947rkUgDt9/s5qvqDPl/gBKdMJLE=
-golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4=
-golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0=
-golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo=
+golang.org/x/crypto v0.50.0 h1:zO47/JPrL6vsNkINmLoo/PH1gcxpls50DNogFvB5ZGI=
+golang.org/x/net v0.53.0 h1:d+qAbo5L0orcWAr0a9JweQpjXF19LMXJE8Ey7hwOdUA=
+golang.org/x/sys v0.43.0 h1:Rlag2XtaFTxp19wS8MXlJwTvoh8ArU6ezoyFsMyCTNI=
 golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg=
 google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
 google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
@@ -1,5 +1,6 @@
 # Required startup settings.
-GATEWAY_SESSION_CACHE_REDIS_ADDR=127.0.0.1:6379
+GATEWAY_REDIS_MASTER_ADDR=127.0.0.1:6379
+GATEWAY_REDIS_PASSWORD=changeme
 GATEWAY_SESSION_EVENTS_REDIS_STREAM=gateway:session_events
 GATEWAY_CLIENT_EVENTS_REDIS_STREAM=gateway:client_events
 GATEWAY_RESPONSE_SIGNER_PRIVATE_KEY_PEM_PATH=./secrets/response-signer.pem
@@ -11,11 +12,14 @@ GATEWAY_AUTHENTICATED_GRPC_ADDR=127.0.0.1:9090
 # Optional admin listener.
 # GATEWAY_ADMIN_HTTP_ADDR=127.0.0.1:9091
 
-# Optional Redis tuning.
-# GATEWAY_SESSION_CACHE_REDIS_DB=0
+# Optional Redis tuning. The legacy GATEWAY_REDIS_TLS_ENABLED and
+# GATEWAY_REDIS_USERNAME variables are no longer accepted; see
+# docs/redis-config.md.
+# GATEWAY_REDIS_REPLICA_ADDRS=127.0.0.1:6479,127.0.0.1:6480
+# GATEWAY_REDIS_DB=0
+# GATEWAY_REDIS_OPERATION_TIMEOUT=250ms
 # GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX=gateway:session:
 # GATEWAY_REPLAY_REDIS_KEY_PREFIX=gateway:replay:
-# GATEWAY_SESSION_CACHE_REDIS_TLS_ENABLED=false
 
 # Optional public-auth integration. Without a configured Auth / Session Service
 # base URL the routes stay mounted and return 503 service_unavailable.
+35
-12
@@ -13,7 +13,8 @@
 
 Required startup environment variables:
 
-- `GATEWAY_SESSION_CACHE_REDIS_ADDR`
+- `GATEWAY_REDIS_MASTER_ADDR`
+- `GATEWAY_REDIS_PASSWORD`
 - `GATEWAY_SESSION_EVENTS_REDIS_STREAM`
 - `GATEWAY_CLIENT_EVENTS_REDIS_STREAM`
 - `GATEWAY_RESPONSE_SIGNER_PRIVATE_KEY_PEM_PATH`
@@ -609,23 +610,45 @@ eviction policy. Session lifecycle events are the authoritative mechanism for
 keeping the hot path current, while Redis fallback remains the safety net for
 cold misses and process restarts.
 
-The Redis fallback implementation uses `go-redis/v9`.
-`cmd/gateway` requires the Redis fallback backend during startup, issues a
-bounded `PING`, and refuses to start when Redis is misconfigured or
-unavailable.
+The Redis fallback implementation uses `go-redis/v9`. `cmd/gateway` opens one
+shared `*redis.Client` via `pkg/redisconn` (instrumented with OpenTelemetry
+tracing and metrics), issues a single bounded `PING` on startup, and refuses
+to start when Redis is misconfigured or unavailable. The session cache,
+replay store, session-events subscriber, and client-events subscriber all
+use that shared client. See `docs/redis-config.md` for the rationale behind
+the shape and the project-wide rules in
+`ARCHITECTURE.md §Persistence Backends`.
 
-Required environment variable:
+Required Redis connection variables:
 
-- `GATEWAY_SESSION_CACHE_REDIS_ADDR`
+- `GATEWAY_REDIS_MASTER_ADDR`
+- `GATEWAY_REDIS_PASSWORD`
 
-Optional environment variables:
+Optional Redis connection variables:
 
+- `GATEWAY_REDIS_REPLICA_ADDRS` (comma-separated; reserved for future
+  read-routing — currently unused)
+- `GATEWAY_REDIS_DB` with default `0`
+- `GATEWAY_REDIS_OPERATION_TIMEOUT` with default `250ms`
+
+> Removed: `GATEWAY_SESSION_CACHE_REDIS_ADDR`,
+> `GATEWAY_SESSION_CACHE_REDIS_USERNAME`,
+> `GATEWAY_SESSION_CACHE_REDIS_PASSWORD`,
+> `GATEWAY_SESSION_CACHE_REDIS_DB`,
+> `GATEWAY_SESSION_CACHE_REDIS_TLS_ENABLED`. `pkg/redisconn.LoadFromEnv`
+> rejects the deprecated `GATEWAY_REDIS_TLS_ENABLED` and
+> `GATEWAY_REDIS_USERNAME` variables at startup.
+
+Per-subsystem Redis behavior variables (namespace, stream, timeouts):
 
-- `GATEWAY_SESSION_CACHE_REDIS_USERNAME`
-- `GATEWAY_SESSION_CACHE_REDIS_PASSWORD`
-- `GATEWAY_SESSION_CACHE_REDIS_DB` with default `0`
 - `GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX` with default `gateway:session:`
 - `GATEWAY_SESSION_CACHE_REDIS_LOOKUP_TIMEOUT` with default `250ms`
-- `GATEWAY_SESSION_CACHE_REDIS_TLS_ENABLED` with default `false`
+- `GATEWAY_REPLAY_REDIS_KEY_PREFIX` with default `gateway:replay:`
+- `GATEWAY_REPLAY_REDIS_RESERVE_TIMEOUT` with default `250ms`
+- `GATEWAY_SESSION_EVENTS_REDIS_STREAM`
+- `GATEWAY_SESSION_EVENTS_REDIS_READ_BLOCK_TIMEOUT` with default `1s`
+- `GATEWAY_CLIENT_EVENTS_REDIS_STREAM`
+- `GATEWAY_CLIENT_EVENTS_REDIS_READ_BLOCK_TIMEOUT` with default `1s`
 
 The Redis key format is:
 
+42
-69
@@ -18,11 +18,13 @@ import (
     "galaxy/gateway/internal/grpcapi"
     "galaxy/gateway/internal/logging"
     "galaxy/gateway/internal/push"
+    "galaxy/gateway/internal/redisclient"
     "galaxy/gateway/internal/replay"
     "galaxy/gateway/internal/restapi"
     "galaxy/gateway/internal/session"
     "galaxy/gateway/internal/telemetry"
 
+    "github.com/redis/go-redis/v9"
     "go.uber.org/zap"
 )
 
@@ -132,112 +134,83 @@ func newAuthenticatedGRPCDependencies(ctx context.Context, cfg config.Config, lo
         return grpcapi.ServerDependencies{}, nil, nil, fmt.Errorf("build authenticated grpc dependencies: load response signer: %w", err)
     }
 
-    fallbackSessionCache, err := session.NewRedisCache(cfg.SessionCacheRedis)
-    if err != nil {
-        return grpcapi.ServerDependencies{}, nil, nil, fmt.Errorf("build authenticated grpc dependencies: %w", err)
-    }
-
-    replayStore, err := replay.NewRedisStore(cfg.SessionCacheRedis, cfg.ReplayRedis)
-    if err != nil {
-        closeErr := fallbackSessionCache.Close()
+    redisClient := redisclient.NewClient(cfg.Redis)
+    if err := redisclient.InstrumentClient(redisClient, telemetryRuntime); err != nil {
+        closeErr := redisClient.Close()
         return grpcapi.ServerDependencies{}, nil, nil, errors.Join(
             fmt.Errorf("build authenticated grpc dependencies: %w", err),
             closeErr,
         )
     }
 
+    closeRedisClient := func() error {
+        err := redisClient.Close()
+        if errors.Is(err, redis.ErrClosed) {
+            return nil
+        }
+        return err
+    }
+    if err := redisclient.Ping(ctx, cfg.Redis, redisClient); err != nil {
+        closeErr := closeRedisClient()
+        return grpcapi.ServerDependencies{}, nil, nil, errors.Join(
+            fmt.Errorf("build authenticated grpc dependencies: %w", err),
+            closeErr,
+        )
+    }
+
+    fallbackSessionCache, err := session.NewRedisCache(redisClient, cfg.SessionCacheRedis)
+    if err != nil {
+        return grpcapi.ServerDependencies{}, nil, nil, errors.Join(
+            fmt.Errorf("build authenticated grpc dependencies: %w", err),
+            closeRedisClient(),
+        )
+    }
+
+    replayStore, err := replay.NewRedisStore(redisClient, cfg.ReplayRedis)
+    if err != nil {
+        return grpcapi.ServerDependencies{}, nil, nil, errors.Join(
+            fmt.Errorf("build authenticated grpc dependencies: %w", err),
+            closeRedisClient(),
+        )
+    }
+
     localSessionCache := session.NewMemoryCache()
     sessionCache, err := session.NewReadThroughCache(localSessionCache, fallbackSessionCache)
     if err != nil {
-        closeErr := errors.Join(
-            fallbackSessionCache.Close(),
-            replayStore.Close(),
-        )
         return grpcapi.ServerDependencies{}, nil, nil, errors.Join(
             fmt.Errorf("build authenticated grpc dependencies: %w", err),
-            closeErr,
+            closeRedisClient(),
         )
     }
 
     pushHub := push.NewHubWithObserver(0, telemetry.NewPushObserver(telemetryRuntime))
-    sessionSubscriber, err := events.NewRedisSessionSubscriberWithObservability(cfg.SessionCacheRedis, cfg.SessionEventsRedis, localSessionCache, pushHub, logger, telemetryRuntime)
+    sessionSubscriber, err := events.NewRedisSessionSubscriberWithObservability(redisClient, cfg.SessionCacheRedis, cfg.SessionEventsRedis, localSessionCache, pushHub, logger, telemetryRuntime)
     if err != nil {
-        closeErr := errors.Join(
-            fallbackSessionCache.Close(),
-            replayStore.Close(),
-        )
         return grpcapi.ServerDependencies{}, nil, nil, errors.Join(
             fmt.Errorf("build authenticated grpc dependencies: %w", err),
-            closeErr,
+            closeRedisClient(),
         )
     }
 
-    clientEventSubscriber, err := events.NewRedisClientEventSubscriberWithObservability(cfg.SessionCacheRedis, cfg.ClientEventsRedis, pushHub, logger, telemetryRuntime)
+    clientEventSubscriber, err := events.NewRedisClientEventSubscriberWithObservability(redisClient, cfg.SessionCacheRedis, cfg.ClientEventsRedis, pushHub, logger, telemetryRuntime)
     if err != nil {
-        closeErr := errors.Join(
-            fallbackSessionCache.Close(),
-            replayStore.Close(),
-            sessionSubscriber.Close(),
-        )
         return grpcapi.ServerDependencies{}, nil, nil, errors.Join(
             fmt.Errorf("build authenticated grpc dependencies: %w", err),
-            closeErr,
+            closeRedisClient(),
         )
     }
 
     userRoutes, closeUserServiceRoutes, err := userservice.NewRoutes(cfg.UserService.BaseURL)
     if err != nil {
-        closeErr := errors.Join(
-            fallbackSessionCache.Close(),
-            replayStore.Close(),
-            sessionSubscriber.Close(),
-            clientEventSubscriber.Close(),
-        )
         return grpcapi.ServerDependencies{}, nil, nil, errors.Join(
             fmt.Errorf("build authenticated grpc dependencies: user service routes: %w", err),
-            closeErr,
+            closeRedisClient(),
         )
     }
 
     cleanup := func() error {
         return errors.Join(
-            fallbackSessionCache.Close(),
-            replayStore.Close(),
-            sessionSubscriber.Close(),
-            clientEventSubscriber.Close(),
             closeUserServiceRoutes(),
+            closeRedisClient(),
-        )
-    }
-
-    if err := fallbackSessionCache.Ping(ctx); err != nil {
-        closeErr := cleanup()
-        return grpcapi.ServerDependencies{}, nil, nil, errors.Join(
-            fmt.Errorf("build authenticated grpc dependencies: %w", err),
-            closeErr,
-        )
-    }
-
-    if err := replayStore.Ping(ctx); err != nil {
-        closeErr := cleanup()
-        return grpcapi.ServerDependencies{}, nil, nil, errors.Join(
-            fmt.Errorf("build authenticated grpc dependencies: %w", err),
-            closeErr,
-        )
-    }
-
-    if err := sessionSubscriber.Ping(ctx); err != nil {
-        closeErr := cleanup()
-        return grpcapi.ServerDependencies{}, nil, nil, errors.Join(
-            fmt.Errorf("build authenticated grpc dependencies: %w", err),
-            closeErr,
-        )
-    }
-
-    if err := clientEventSubscriber.Ping(ctx); err != nil {
-        closeErr := cleanup()
-        return grpcapi.ServerDependencies{}, nil, nil, errors.Join(
-            fmt.Errorf("build authenticated grpc dependencies: %w", err),
-            closeErr,
         )
     }
 
@@ -15,6 +15,7 @@ import (
 
     "galaxy/gateway/internal/config"
     "galaxy/gateway/internal/restapi"
+    "galaxy/redisconn"
 
     "github.com/alicebob/miniredis/v2"
     "github.com/stretchr/testify/assert"
@@ -22,6 +23,16 @@ import (
     "go.uber.org/zap"
 )
 
+func testRedisConn(masterAddr string, opTimeout time.Duration) redisconn.Config {
+    cfg := redisconn.DefaultConfig()
+    cfg.MasterAddr = masterAddr
+    cfg.Password = "integration"
+    if opTimeout > 0 {
+        cfg.OperationTimeout = opTimeout
+    }
+    return cfg
+}
+
 func TestNewPublicRESTDependencies(t *testing.T) {
     t.Parallel()
 
@@ -102,8 +113,8 @@ func TestNewAuthenticatedGRPCDependencies(t *testing.T) {
         {
             name: "success",
             cfg: config.Config{
+                Redis: testRedisConn(server.Addr(), 250*time.Millisecond),
                 SessionCacheRedis: config.SessionCacheRedisConfig{
-                    Addr:          server.Addr(),
                     KeyPrefix:     "gateway:session:",
                     LookupTimeout: 250 * time.Millisecond,
                 },
@@ -125,8 +136,9 @@ func TestNewAuthenticatedGRPCDependencies(t *testing.T) {
             },
         },
         {
-            name: "invalid redis config",
+            name: "invalid session cache key prefix",
             cfg: config.Config{
+                Redis: testRedisConn(server.Addr(), 250*time.Millisecond),
                 SessionCacheRedis: config.SessionCacheRedisConfig{
                     LookupTimeout: 250 * time.Millisecond,
                 },
@@ -146,13 +158,13 @@ func TestNewAuthenticatedGRPCDependencies(t *testing.T) {
                     PrivateKeyPEMPath: responseSignerPEMPath,
                 },
             },
-            wantErr: "redis addr must not be empty",
+            wantErr: "redis key prefix must not be empty",
         },
         {
             name: "startup ping failure",
             cfg: config.Config{
+                Redis: testRedisConn(unusedTCPAddr(t), 100*time.Millisecond),
                 SessionCacheRedis: config.SessionCacheRedisConfig{
-                    Addr:          unusedTCPAddr(t),
                     KeyPrefix:     "gateway:session:",
                     LookupTimeout: 100 * time.Millisecond,
                 },
@@ -172,13 +184,13 @@ func TestNewAuthenticatedGRPCDependencies(t *testing.T) {
                     PrivateKeyPEMPath: responseSignerPEMPath,
                 },
             },
-            wantErr: "ping redis session cache",
+            wantErr: "ping redis",
         },
         {
             name: "invalid replay config",
             cfg: config.Config{
+                Redis: testRedisConn(server.Addr(), 250*time.Millisecond),
                 SessionCacheRedis: config.SessionCacheRedisConfig{
-                    Addr:          server.Addr(),
                     KeyPrefix:     "gateway:session:",
                     LookupTimeout: 250 * time.Millisecond,
                 },
@@ -202,8 +214,8 @@ func TestNewAuthenticatedGRPCDependencies(t *testing.T) {
         {
             name: "invalid client event config",
             cfg: config.Config{
+                Redis: testRedisConn(server.Addr(), 250*time.Millisecond),
                 SessionCacheRedis: config.SessionCacheRedisConfig{
-                    Addr:          server.Addr(),
                     KeyPrefix:     "gateway:session:",
                     LookupTimeout: 250 * time.Millisecond,
                 },
@@ -227,8 +239,8 @@ func TestNewAuthenticatedGRPCDependencies(t *testing.T) {
         {
             name: "missing response signer path",
             cfg: config.Config{
+                Redis: testRedisConn(server.Addr(), 250*time.Millisecond),
                 SessionCacheRedis: config.SessionCacheRedisConfig{
-                    Addr:          server.Addr(),
                     KeyPrefix:     "gateway:session:",
                     LookupTimeout: 250 * time.Millisecond,
                 },
@@ -250,8 +262,8 @@ func TestNewAuthenticatedGRPCDependencies(t *testing.T) {
         {
             name: "invalid response signer pem",
             cfg: config.Config{
+                Redis: testRedisConn(server.Addr(), 250*time.Millisecond),
                 SessionCacheRedis: config.SessionCacheRedisConfig{
-                    Addr:          server.Addr(),
                     KeyPrefix:     "gateway:session:",
                     LookupTimeout: 250 * time.Millisecond,
                 },
@@ -0,0 +1,109 @@
+# Decision: Redis configuration shape
+
+PG_PLAN.md §7. Captures the standing rules adopted by Edge Gateway when it
+joined the project-wide Redis topology defined in
+`ARCHITECTURE.md §Persistence Backends`.
+
+## Context
+
+Gateway intentionally stays Redis-only. All gateway state Redis serves is
+TTL-bounded or runtime-coordination state:
+
+- the session cache is a read-through projection of authsession's
+  source-of-truth session records (rebuildable via re-authentication);
+- the replay store is a short-lived `SETNX` reservation namespace per
+  authenticated request (`GATEWAY_REPLAY_REDIS_RESERVE_TIMEOUT`);
+- the session-events stream is a runtime fan-out of session lifecycle
+  updates;
+- the client-events stream is a runtime push fan-out.
+
+Stage 7 brought gateway in line with the steady-state rules established in
+Stage 0: every Galaxy service uses one master plus zero-or-more replicas
+with a mandatory password, no TLS, and no Redis ACL username; the connection
+is configured by the shared `pkg/redisconn` helper.
+
+## Decisions
+
+### One shared `*redis.Client` owned by the runtime
+
+`cmd/gateway/main.go` constructs a single `*redis.Client` via
+`internal/redisclient.NewClient`, attaches OpenTelemetry tracing and metrics
+via `internal/redisclient.InstrumentClient`, performs one bounded `PING`
+via `internal/redisclient.Ping`, and registers `client.Close` for shutdown.
+The session cache, replay store, session-events subscriber, and
+client-events subscriber all receive this same client.
+
+Adapters no longer build or own a Redis client. Their `Config` structs hold
+only behavior settings (key prefix, stream name, per-subsystem timeouts).
+Adapter constructors take `(*redis.Client, …)`. The stream subscribers'
+`Close`/`Shutdown` methods became no-ops; the runtime's context cancellation
+unblocks the `XRead` loop and the runtime closes the shared client.
+
+### One env-var prefix for the connection
+
+Connection topology is loaded from a single `GATEWAY_REDIS_*` group via
+`redisconn.LoadFromEnv("GATEWAY")`:
+
+- `GATEWAY_REDIS_MASTER_ADDR` (required)
+- `GATEWAY_REDIS_REPLICA_ADDRS` (optional, comma-separated; currently
+  unused, reserved for future read-routing)
+- `GATEWAY_REDIS_PASSWORD` (required)
+- `GATEWAY_REDIS_DB` (default `0`)
+- `GATEWAY_REDIS_OPERATION_TIMEOUT` (default `250ms`)
+
+Per-subsystem behavior env vars keep their existing prefixes — they do not
+describe connection topology, only namespace and timing:
+
+- `GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX`,
+  `GATEWAY_SESSION_CACHE_REDIS_LOOKUP_TIMEOUT`
+- `GATEWAY_REPLAY_REDIS_KEY_PREFIX`,
+  `GATEWAY_REPLAY_REDIS_RESERVE_TIMEOUT`
+- `GATEWAY_SESSION_EVENTS_REDIS_STREAM`,
+  `GATEWAY_SESSION_EVENTS_REDIS_READ_BLOCK_TIMEOUT`
+- `GATEWAY_CLIENT_EVENTS_REDIS_STREAM`,
+  `GATEWAY_CLIENT_EVENTS_REDIS_READ_BLOCK_TIMEOUT`
+
+### Retired env vars (hard removal)
+
+The following variables are no longer read or honored:
+
+- `GATEWAY_SESSION_CACHE_REDIS_ADDR` — replaced by
+  `GATEWAY_REDIS_MASTER_ADDR`.
+- `GATEWAY_SESSION_CACHE_REDIS_USERNAME` — Redis ACL not used.
+- `GATEWAY_SESSION_CACHE_REDIS_PASSWORD` — replaced by
+  `GATEWAY_REDIS_PASSWORD`.
+- `GATEWAY_SESSION_CACHE_REDIS_DB` — replaced by `GATEWAY_REDIS_DB`.
+- `GATEWAY_SESSION_CACHE_REDIS_TLS_ENABLED` — TLS disabled by policy.
+
+`pkg/redisconn.LoadFromEnv` rejects `GATEWAY_REDIS_TLS_ENABLED` and
+`GATEWAY_REDIS_USERNAME` at startup with a clear error pointing to
+`ARCHITECTURE.md §Persistence Backends`.
+
+> **Compound legacy prefixes (`GATEWAY_SESSION_CACHE_REDIS_USERNAME` etc.)
+> are not actively rejected.** `pkg/redisconn`'s deprecated-env detector
+> only watches the canonical `GATEWAY_REDIS_*` form. The compound legacy
+> vars become silently inert. The architecture rule explicitly accepts this
+> ("no backward-compat shim — fresh project, no production deploys to
+> migrate"); operators upgrading should remove the variables from their
+> deployment manifests.
+
+### Telemetry
+
+`redisconn.Instrument` wires `redisotel.InstrumentTracing` (with
+`WithDBStatement(false)`) and `redisotel.InstrumentMetrics`. This is the
+first gateway release that emits Redis tracing and connection-pool metrics;
+downstream dashboards will start populating without further changes.
+
+## Consequences
+
+- Gateway test code that previously constructed a Redis client per adapter
+  must now construct one client and pass it to every adapter under test
+  (see `internal/session/redis_test.go`, `internal/replay/redis_test.go`,
+  `internal/events/subscriber_test.go`,
+  `internal/events/client_subscriber_test.go`).
+- Operators must set `GATEWAY_REDIS_PASSWORD`. A passwordless local Redis
+  is still acceptable as long as a placeholder password is supplied to the
+  binary; Redis 6+ treats any `AUTH` password as valid when the default
+  user is `nopass`, so the placeholder authenticates successfully.
+- The integration test harness passes `GATEWAY_REDIS_PASSWORD =
+  "integration"` alongside `GATEWAY_REDIS_MASTER_ADDR` (see
+  `integration/internal/harness/gatewayservice.go`).
+17
-14
@@ -7,25 +7,28 @@ readiness, shutdown, and push or revoke incidents.
 
 Before starting the process, confirm:
 
-- `GATEWAY_SESSION_CACHE_REDIS_ADDR` points to the Redis deployment used for
-  session lookup and both internal event streams.
+- `GATEWAY_REDIS_MASTER_ADDR` and `GATEWAY_REDIS_PASSWORD` point to the Redis
+  deployment used for session lookup, replay reservations, session-events
+  consumption, and client-events fan-out. Optional read replicas may be
+  listed in `GATEWAY_REDIS_REPLICA_ADDRS` (currently unused; reserved for
+  future read-routing).
 - `GATEWAY_SESSION_EVENTS_REDIS_STREAM` and
-  `GATEWAY_CLIENT_EVENTS_REDIS_STREAM` reference existing Redis Stream keys or
-  the names publishers will use.
+  `GATEWAY_CLIENT_EVENTS_REDIS_STREAM` reference existing Redis Stream keys
+  or the names publishers will use.
 - `GATEWAY_RESPONSE_SIGNER_PRIVATE_KEY_PEM_PATH` points to a readable PKCS#8
   PEM-encoded Ed25519 private key.
-- the configured Redis ACL, DB, TLS, and key-prefix settings match the target
-  environment.
+- the configured Redis DB and key-prefix settings match the target
+  environment. Per `ARCHITECTURE.md §Persistence Backends`, Redis traffic is
+  password-protected and TLS is disabled by policy; the deprecated
+  `GATEWAY_REDIS_TLS_ENABLED` and `GATEWAY_REDIS_USERNAME` variables are no
+  longer accepted and cause a hard fail at startup.
 
-At startup the process performs bounded `PING` checks for:
-
-- the Redis-backed session cache adapter;
-- the replay store;
-- the session event subscriber;
-- the client event subscriber.
-
-Startup fails fast if any of those checks fail or if the signer key cannot be
-loaded.
+At startup the process opens one shared `*redis.Client` (instrumented via
+OpenTelemetry tracing and metrics) and performs one bounded `PING`. The
+session cache, replay store, session-events subscriber, and client-events
+subscriber all use that client.
+
+Startup fails fast if the ping fails or if the signer key cannot be loaded.
 
 Expected listener state after a healthy start:
 
+13
-8
@@ -1,10 +1,11 @@
 module galaxy/gateway
 
-go 1.26.0
+go 1.26.1
 
 require (
     buf.build/gen/go/bufbuild/protovalidate/protocolbuffers/go v1.36.11-20260209202127-80ab13bee0bf.1
     buf.build/go/protovalidate v1.1.3
+    galaxy/redisconn v0.0.0-00010101000000-000000000000
     github.com/alicebob/miniredis/v2 v2.37.0
     github.com/getkin/kin-openapi v0.135.0
     github.com/gin-gonic/gin v1.12.0
@@ -61,7 +62,7 @@ require (
     github.com/klauspost/cpuid/v2 v2.3.0 // indirect
     github.com/leodido/go-urn v1.4.0 // indirect
     github.com/mailru/easyjson v0.7.7 // indirect
-    github.com/mattn/go-isatty v0.0.20 // indirect
+    github.com/mattn/go-isatty v0.0.21 // indirect
     github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
     github.com/modern-go/reflect2 v1.0.2 // indirect
     github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect
@@ -77,6 +78,8 @@ require (
     github.com/prometheus/procfs v0.20.1 // indirect
     github.com/quic-go/qpack v0.6.0 // indirect
     github.com/quic-go/quic-go v0.59.0 // indirect
+    github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0 // indirect
+    github.com/redis/go-redis/extra/redisotel/v9 v9.18.0 // indirect
     github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
     github.com/ugorji/go/codec v1.3.1 // indirect
     github.com/woodsbury/decimal128 v1.3.0 // indirect
@@ -86,14 +89,16 @@ require (
     go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 // indirect
     go.opentelemetry.io/proto/otlp v1.10.0 // indirect
     go.uber.org/atomic v1.11.0 // indirect
-    go.uber.org/multierr v1.10.0 // indirect
+    go.uber.org/multierr v1.11.0 // indirect
     go.yaml.in/yaml/v2 v2.4.4 // indirect
     golang.org/x/arch v0.25.0 // indirect
-    golang.org/x/crypto v0.49.0 // indirect
-    golang.org/x/exp v0.0.0-20250813145105-42675adae3e6 // indirect
-    golang.org/x/net v0.52.0 // indirect
-    golang.org/x/sys v0.42.0 // indirect
+    golang.org/x/crypto v0.50.0 // indirect
+    golang.org/x/exp v0.0.0-20260410095643-746e56fc9e2f // indirect
+    golang.org/x/net v0.53.0 // indirect
+    golang.org/x/sys v0.43.0 // indirect
     google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 // indirect
-    google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 // indirect
+    google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529 // indirect
     gopkg.in/yaml.v3 v3.0.1 // indirect
 )
 
+replace galaxy/redisconn => ../pkg/redisconn
+20
-15
@@ -83,6 +83,7 @@ github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFF
|
|||||||
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
|
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
|
||||||
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
|
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
|
||||||
github.com/klauspost/compress v1.18.5 h1:/h1gH5Ce+VWNLSWqPzOVn6XBO+vJbCNGvjoaGBFW2IE=
|
github.com/klauspost/compress v1.18.5 h1:/h1gH5Ce+VWNLSWqPzOVn6XBO+vJbCNGvjoaGBFW2IE=
|
||||||
|
github.com/klauspost/compress v1.18.5/go.mod h1:cwPg85FWrGar70rWktvGQj8/hthj3wpl0PGDogxkrSQ=
|
||||||
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
|
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
|
||||||
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
|
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
|
||||||
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
|
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
|
||||||
@@ -95,8 +96,8 @@ github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
 github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
 github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
 github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
-github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
-github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
+github.com/mattn/go-isatty v0.0.21 h1:xYae+lCNBP7QuW4PUnNG61ffM4hVIfm+zUzDuSzYLGs=
+github.com/mattn/go-isatty v0.0.21/go.mod h1:ZXfXG4SQHsB/w3ZeOYbR0PrPwLy+n6xiMrJlRFqopa4=
 github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
 github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
 github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -131,6 +132,10 @@ github.com/quic-go/qpack v0.6.0 h1:g7W+BMYynC1LbYLSqRt8PBg5Tgwxn214ZZR34VIOjz8=
 github.com/quic-go/qpack v0.6.0/go.mod h1:lUpLKChi8njB4ty2bFLX2x4gzDqXwUpaO1DP9qMDZII=
 github.com/quic-go/quic-go v0.59.0 h1:OLJkp1Mlm/aS7dpKgTc6cnpynnD2Xg7C1pwL6vy/SAw=
 github.com/quic-go/quic-go v0.59.0/go.mod h1:upnsH4Ju1YkqpLXC305eW3yDZ4NfnNbmQRCMWS58IKU=
+github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0 h1:QY4nmPHLFAJjtT5O4OMUEOxP8WVaRNOFpcbmxT2NLZU=
+github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0/go.mod h1:WH8cY/0fT41Bsf341qzo8v4nx0GCE8FykAA23IVbVmo=
+github.com/redis/go-redis/extra/redisotel/v9 v9.18.0 h1:2dKdoEYBJ0CZCLPiCdvvc7luz3DPwY6hKdzjL6m1eHE=
+github.com/redis/go-redis/extra/redisotel/v9 v9.18.0/go.mod h1:WzkrVG9ro9BwCQD0eJOWn6AGL4Z1CleGflM45w1hu10=
 github.com/redis/go-redis/v9 v9.18.0 h1:pMkxYPkEbMPwRdenAzUNyFNrDgHx9U+DrBabWNfSRQs=
 github.com/redis/go-redis/v9 v9.18.0/go.mod h1:k3ufPphLU5YXwNTUcCRXGxUoF1fqxnhFQmscfkCoDA0=
 github.com/rodaine/protogofakeit v0.1.1 h1:ZKouljuRM3A+TArppfBqnH8tGZHOwM/pjvtXe9DaXH8=
@@ -196,8 +201,8 @@ go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
 go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
 go.uber.org/mock v0.6.0 h1:hyF9dfmbgIX5EfOdasqLsWD6xqpNZlXblLB/Dbnwv3Y=
 go.uber.org/mock v0.6.0/go.mod h1:KiVJ4BqZJaMj4svdfmHM0AUx4NJYO8ZNpPnZn1Z+BBU=
-go.uber.org/multierr v1.10.0 h1:S0h4aNzvfcFsC3dRF1jLoaov7oRaKqRGC/pUEJ2yvPQ=
-go.uber.org/multierr v1.10.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
+go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
+go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
 go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=
 go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
 go.yaml.in/yaml/v2 v2.4.4 h1:tuyd0P+2Ont/d6e2rl3be67goVK4R6deVxCUX5vyPaQ=
@@ -206,24 +211,24 @@ go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
 go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
 golang.org/x/arch v0.25.0 h1:qnk6Ksugpi5Bz32947rkUgDt9/s5qvqDPl/gBKdMJLE=
 golang.org/x/arch v0.25.0/go.mod h1:0X+GdSIP+kL5wPmpK7sdkEVTt2XoYP0cSjQSbZBwOi8=
-golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4=
-golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA=
-golang.org/x/exp v0.0.0-20250813145105-42675adae3e6 h1:SbTAbRFnd5kjQXbczszQ0hdk3ctwYf3qBNH9jIsGclE=
-golang.org/x/exp v0.0.0-20250813145105-42675adae3e6/go.mod h1:4QTo5u+SEIbbKW1RacMZq1YEfOBqeXa19JeshGi+zc4=
-golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0=
-golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw=
-golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo=
-golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
+golang.org/x/crypto v0.50.0 h1:zO47/JPrL6vsNkINmLoo/PH1gcxpls50DNogFvB5ZGI=
+golang.org/x/crypto v0.50.0/go.mod h1:3muZ7vA7PBCE6xgPX7nkzzjiUq87kRItoJQM1Yo8S+Q=
+golang.org/x/exp v0.0.0-20260410095643-746e56fc9e2f h1:W3F4c+6OLc6H2lb//N1q4WpJkhzJCK5J6kUi1NTVXfM=
+golang.org/x/exp v0.0.0-20260410095643-746e56fc9e2f/go.mod h1:J1xhfL/vlindoeF/aINzNzt2Bket5bjo9sdOYzOsU80=
+golang.org/x/net v0.53.0 h1:d+qAbo5L0orcWAr0a9JweQpjXF19LMXJE8Ey7hwOdUA=
+golang.org/x/net v0.53.0/go.mod h1:JvMuJH7rrdiCfbeHoo3fCQU24Lf5JJwT9W3sJFulfgs=
+golang.org/x/sys v0.43.0 h1:Rlag2XtaFTxp19wS8MXlJwTvoh8ArU6ezoyFsMyCTNI=
+golang.org/x/sys v0.43.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
 golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg=
+golang.org/x/text v0.36.0/go.mod h1:NIdBknypM8iqVmPiuco0Dh6P5Jcdk8lJL0CUebqK164=
 golang.org/x/time v0.15.0 h1:bbrp8t3bGUeFOx08pvsMYRTCVSMk89u4tKbNOZbp88U=
 golang.org/x/time v0.15.0/go.mod h1:Y4YMaQmXwGQZoFaVFk4YpCt4FLQMYKZe9oeV/f4MSno=
 gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
 gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
 google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 h1:VPWxll4HlMw1Vs/qXtN7BvhZqsS9cdAittCNvVENElA=
 google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:7QBABkRtR8z+TEnmXTqIqwJLlzrZKVfAUm7tY3yGv0M=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 h1:m8qni9SQFH0tJc1X0vmnpw/0t+AImlSvp30sEupozUg=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529 h1:XF8+t6QQiS0o9ArVan/HW8Q7cycNPGsJf6GA2nXxYAg=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
 google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM=
 google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4=
 google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
@@ -9,8 +9,12 @@ import (
 	"strconv"
 	"strings"
 	"time"
+
+	"galaxy/redisconn"
 )
 
+const gatewayRedisEnvPrefix = "GATEWAY"
+
 const (
 	// shutdownTimeoutEnvVar names the environment variable that controls the
 	// maximum time granted to each component shutdown call.
@@ -143,35 +147,14 @@ const (
 	// rate-limit burst.
 	authenticatedGRPCMessageClassRateLimitBurstEnvVar = "GATEWAY_AUTHENTICATED_GRPC_ANTI_ABUSE_MESSAGE_CLASS_RATE_LIMIT_BURST"
 
-	// sessionCacheRedisAddrEnvVar names the environment variable that configures
-	// the Redis address used for SessionCache lookups.
-	sessionCacheRedisAddrEnvVar = "GATEWAY_SESSION_CACHE_REDIS_ADDR"
-
-	// sessionCacheRedisUsernameEnvVar names the environment variable that
-	// configures the Redis username used for SessionCache lookups.
-	sessionCacheRedisUsernameEnvVar = "GATEWAY_SESSION_CACHE_REDIS_USERNAME"
-
-	// sessionCacheRedisPasswordEnvVar names the environment variable that
-	// configures the Redis password used for SessionCache lookups.
-	sessionCacheRedisPasswordEnvVar = "GATEWAY_SESSION_CACHE_REDIS_PASSWORD"
-
-	// sessionCacheRedisDBEnvVar names the environment variable that configures
-	// the Redis logical database used for SessionCache lookups.
-	sessionCacheRedisDBEnvVar = "GATEWAY_SESSION_CACHE_REDIS_DB"
-
 	// sessionCacheRedisKeyPrefixEnvVar names the environment variable that
 	// configures the Redis key prefix used for SessionCache records.
 	sessionCacheRedisKeyPrefixEnvVar = "GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX"
 
 	// sessionCacheRedisLookupTimeoutEnvVar names the environment variable that
-	// configures the timeout used for SessionCache Redis lookups and startup
-	// connectivity checks.
+	// configures the timeout used for SessionCache Redis lookups.
 	sessionCacheRedisLookupTimeoutEnvVar = "GATEWAY_SESSION_CACHE_REDIS_LOOKUP_TIMEOUT"
 
-	// sessionCacheRedisTLSEnabledEnvVar names the environment variable that
-	// configures whether SessionCache Redis connections use TLS.
-	sessionCacheRedisTLSEnabledEnvVar = "GATEWAY_SESSION_CACHE_REDIS_TLS_ENABLED"
-
 	// replayRedisKeyPrefixEnvVar names the environment variable that configures
 	// the Redis key prefix used for authenticated replay reservations.
 	replayRedisKeyPrefixEnvVar = "GATEWAY_REPLAY_REDIS_KEY_PREFIX"
@@ -333,7 +316,6 @@ const (
 	defaultAuthenticatedGRPCMessageClassRateLimitRequests = 60
 	defaultAuthenticatedGRPCMessageClassRateLimitBurst    = 20
 
-	defaultSessionCacheRedisDB            = 0
 	defaultSessionCacheRedisKeyPrefix     = "gateway:session:"
 	defaultSessionCacheRedisLookupTimeout = 250 * time.Millisecond
 
@@ -535,29 +517,16 @@ type AuthenticatedGRPCConfig struct {
 	AntiAbuse AuthenticatedGRPCAntiAbuseConfig
 }
 
-// SessionCacheRedisConfig describes the Redis connection used for authenticated
-// SessionCache lookups.
+// SessionCacheRedisConfig describes the namespace and timeout used for
+// authenticated SessionCache lookups. Connection topology is shared with the
+// other Redis-backed gateway components and lives on Config.Redis (see
+// `pkg/redisconn`).
 type SessionCacheRedisConfig struct {
-	// Addr is the Redis endpoint used for SessionCache requests.
-	Addr string
-
-	// Username is the optional Redis ACL username used for authentication.
-	Username string
-
-	// Password is the optional Redis password used for authentication.
-	Password string
-
-	// DB is the Redis logical database number used for SessionCache keys.
-	DB int
-
 	// KeyPrefix is prepended to every SessionCache Redis key.
 	KeyPrefix string
 
 	// LookupTimeout bounds individual SessionCache Redis operations.
 	LookupTimeout time.Duration
-
-	// TLSEnabled reports whether SessionCache Redis connections should use TLS.
-	TLSEnabled bool
 }
 
 // ReplayRedisConfig describes the Redis namespace and timeout used for
@@ -635,6 +604,11 @@ type Config struct {
 	// AuthenticatedGRPC configures the authenticated gRPC listener.
 	AuthenticatedGRPC AuthenticatedGRPCConfig
 
+	// Redis carries the master/replica/password connection topology shared by
+	// every gateway Redis component, sourced from the GATEWAY_REDIS_*
+	// environment variables managed by `pkg/redisconn`.
+	Redis redisconn.Config
+
 	// SessionCacheRedis configures the Redis-backed authenticated SessionCache.
 	SessionCacheRedis SessionCacheRedisConfig
 
@@ -759,12 +733,10 @@ func DefaultLoggingConfig() LoggingConfig {
 	return LoggingConfig{Level: defaultLogLevel}
 }
 
-// DefaultSessionCacheRedisConfig returns the default optional settings for the
-// Redis-backed authenticated SessionCache. Addr remains empty and must be
-// supplied explicitly.
+// DefaultSessionCacheRedisConfig returns the default optional namespace and
+// timeout settings for the Redis-backed authenticated SessionCache.
 func DefaultSessionCacheRedisConfig() SessionCacheRedisConfig {
 	return SessionCacheRedisConfig{
-		DB:            defaultSessionCacheRedisDB,
 		KeyPrefix:     defaultSessionCacheRedisKeyPrefix,
 		LookupTimeout: defaultSessionCacheRedisLookupTimeout,
 	}
@@ -827,6 +799,7 @@ func LoadFromEnv() (Config, error) {
 		UserService:        DefaultUserServiceConfig(),
 		AdminHTTP:          DefaultAdminHTTPConfig(),
 		AuthenticatedGRPC:  DefaultAuthenticatedGRPCConfig(),
+		Redis:              redisconn.DefaultConfig(),
 		SessionCacheRedis:  DefaultSessionCacheRedisConfig(),
 		ReplayRedis:        DefaultReplayRedisConfig(),
 		SessionEventsRedis: DefaultSessionEventsRedisConfig(),
@@ -977,26 +950,11 @@ func LoadFromEnv() (Config, error) {
 	}
 	cfg.AuthenticatedGRPC.AntiAbuse.MessageClass = messageClassRateLimit
 
-	rawSessionCacheRedisAddr, ok := os.LookupEnv(sessionCacheRedisAddrEnvVar)
-	if ok {
-		cfg.SessionCacheRedis.Addr = rawSessionCacheRedisAddr
-	}
-
-	rawSessionCacheRedisUsername, ok := os.LookupEnv(sessionCacheRedisUsernameEnvVar)
-	if ok {
-		cfg.SessionCacheRedis.Username = rawSessionCacheRedisUsername
-	}
-
-	rawSessionCacheRedisPassword, ok := os.LookupEnv(sessionCacheRedisPasswordEnvVar)
-	if ok {
-		cfg.SessionCacheRedis.Password = rawSessionCacheRedisPassword
-	}
-
-	sessionCacheRedisDB, err := loadIntEnvWithDefault(sessionCacheRedisDBEnvVar, cfg.SessionCacheRedis.DB)
+	redisConn, err := redisconn.LoadFromEnv(gatewayRedisEnvPrefix)
 	if err != nil {
 		return Config{}, err
 	}
-	cfg.SessionCacheRedis.DB = sessionCacheRedisDB
+	cfg.Redis = redisConn
 
 	rawSessionCacheRedisKeyPrefix, ok := os.LookupEnv(sessionCacheRedisKeyPrefixEnvVar)
 	if ok {
@@ -1009,12 +967,6 @@ func LoadFromEnv() (Config, error) {
 	}
 	cfg.SessionCacheRedis.LookupTimeout = sessionCacheRedisLookupTimeout
 
-	sessionCacheRedisTLSEnabled, err := loadBoolEnvWithDefault(sessionCacheRedisTLSEnabledEnvVar, cfg.SessionCacheRedis.TLSEnabled)
-	if err != nil {
-		return Config{}, err
-	}
-	cfg.SessionCacheRedis.TLSEnabled = sessionCacheRedisTLSEnabled
-
 	rawReplayRedisKeyPrefix, ok := os.LookupEnv(replayRedisKeyPrefixEnvVar)
 	if ok {
 		cfg.ReplayRedis.KeyPrefix = rawReplayRedisKeyPrefix
@@ -1222,11 +1174,11 @@ func LoadFromEnv() (Config, error) {
 	); err != nil {
 		return Config{}, err
 	}
-	if strings.TrimSpace(cfg.SessionCacheRedis.Addr) == "" {
-		return Config{}, fmt.Errorf("load gateway config: %s must not be empty", sessionCacheRedisAddrEnvVar)
+	if err := cfg.Redis.Validate(); err != nil {
+		return Config{}, fmt.Errorf("load gateway config: redis: %w", err)
 	}
-	if cfg.SessionCacheRedis.DB < 0 {
-		return Config{}, fmt.Errorf("load gateway config: %s must not be negative", sessionCacheRedisDBEnvVar)
+	if strings.TrimSpace(cfg.SessionCacheRedis.KeyPrefix) == "" {
+		return Config{}, fmt.Errorf("load gateway config: %s must not be empty", sessionCacheRedisKeyPrefixEnvVar)
 	}
 	if cfg.SessionCacheRedis.LookupTimeout <= 0 {
 		return Config{}, fmt.Errorf("load gateway config: %s must be positive", sessionCacheRedisLookupTimeoutEnvVar)
@@ -11,12 +11,36 @@ import (
 	"testing"
 	"time"
 
+	"galaxy/redisconn"
+
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
 
 var configEnvMu sync.Mutex
 
+const (
+	gatewayRedisMasterAddrEnvVar   = "GATEWAY_REDIS_MASTER_ADDR"
+	gatewayRedisPasswordEnvVar     = "GATEWAY_REDIS_PASSWORD"
+	gatewayRedisReplicaAddrsEnvVar = "GATEWAY_REDIS_REPLICA_ADDRS"
+	gatewayRedisDBEnvVar           = "GATEWAY_REDIS_DB"
+	gatewayRedisOpTimeoutEnvVar    = "GATEWAY_REDIS_OPERATION_TIMEOUT"
+	gatewayRedisTLSEnabledEnvVar   = "GATEWAY_REDIS_TLS_ENABLED"
+	gatewayRedisUsernameEnvVar     = "GATEWAY_REDIS_USERNAME"
+)
+
+var (
+	defaultTestRedisMasterAddrValue = "127.0.0.1:6379"
+	defaultTestRedisPasswordValue   = "secret"
+)
+
+func defaultRedisConnConfigForTest() redisconn.Config {
+	cfg := redisconn.DefaultConfig()
+	cfg.MasterAddr = defaultTestRedisMasterAddrValue
+	cfg.Password = defaultTestRedisPasswordValue
+	return cfg
+}
+
 func TestLoadFromEnv(t *testing.T) {
 	customResponseSignerPrivateKeyPEMPath := new(string)
 	*customResponseSignerPrivateKeyPEMPath = writeTestResponseSignerPEMFile(t)
@@ -90,6 +114,7 @@ func TestLoadFromEnv(t *testing.T) {
 		authenticatedGRPCAddr            *string
 		authenticatedGRPCFreshnessWindow *string
 		sessionCacheRedisAddr            *string
+		skipRedis                        bool
 		responseSignerPrivateKeyPEMPath  *string
 		want                             Config
 		wantErr                          string
@@ -104,9 +129,8 @@ func TestLoadFromEnv(t *testing.T) {
 				PublicHTTP:        DefaultPublicHTTPConfig(),
 				AdminHTTP:         DefaultAdminHTTPConfig(),
 				AuthenticatedGRPC: DefaultAuthenticatedGRPCConfig(),
+				Redis:             defaultRedisConnConfigForTest(),
 				SessionCacheRedis: SessionCacheRedisConfig{
-					Addr:          "127.0.0.1:6379",
-					DB:            defaultSessionCacheRedisDB,
 					KeyPrefix:     defaultSessionCacheRedisKeyPrefix,
 					LookupTimeout: defaultSessionCacheRedisLookupTimeout,
 				},
@@ -135,9 +159,8 @@ func TestLoadFromEnv(t *testing.T) {
 				PublicHTTP:        DefaultPublicHTTPConfig(),
 				AdminHTTP:         DefaultAdminHTTPConfig(),
 				AuthenticatedGRPC: DefaultAuthenticatedGRPCConfig(),
+				Redis:             defaultRedisConnConfigForTest(),
 				SessionCacheRedis: SessionCacheRedisConfig{
-					Addr:          "127.0.0.1:6379",
-					DB:            defaultSessionCacheRedisDB,
 					KeyPrefix:     defaultSessionCacheRedisKeyPrefix,
 					LookupTimeout: defaultSessionCacheRedisLookupTimeout,
 				},
@@ -170,9 +193,8 @@ func TestLoadFromEnv(t *testing.T) {
 				}(),
 				AdminHTTP:         DefaultAdminHTTPConfig(),
 				AuthenticatedGRPC: DefaultAuthenticatedGRPCConfig(),
+				Redis:             defaultRedisConnConfigForTest(),
 				SessionCacheRedis: SessionCacheRedisConfig{
-					Addr:          "127.0.0.1:6379",
-					DB:            defaultSessionCacheRedisDB,
 					KeyPrefix:     defaultSessionCacheRedisKeyPrefix,
 					LookupTimeout: defaultSessionCacheRedisLookupTimeout,
 				},
@@ -204,9 +226,8 @@ func TestLoadFromEnv(t *testing.T) {
 				},
 				AdminHTTP:         DefaultAdminHTTPConfig(),
 				AuthenticatedGRPC: DefaultAuthenticatedGRPCConfig(),
+				Redis:             defaultRedisConnConfigForTest(),
 				SessionCacheRedis: SessionCacheRedisConfig{
-					Addr:          "127.0.0.1:6379",
-					DB:            defaultSessionCacheRedisDB,
 					KeyPrefix:     defaultSessionCacheRedisKeyPrefix,
 					LookupTimeout: defaultSessionCacheRedisLookupTimeout,
 				},
@@ -238,9 +259,8 @@ func TestLoadFromEnv(t *testing.T) {
 				},
 				AdminHTTP:         DefaultAdminHTTPConfig(),
 				AuthenticatedGRPC: DefaultAuthenticatedGRPCConfig(),
+				Redis:             defaultRedisConnConfigForTest(),
 				SessionCacheRedis: SessionCacheRedisConfig{
-					Addr:          "127.0.0.1:6379",
-					DB:            defaultSessionCacheRedisDB,
 					KeyPrefix:     defaultSessionCacheRedisKeyPrefix,
 					LookupTimeout: defaultSessionCacheRedisLookupTimeout,
 				},
@@ -273,9 +293,8 @@ func TestLoadFromEnv(t *testing.T) {
 					cfg.Addr = "127.0.0.1:9191"
 					return cfg
 				}(),
+				Redis:             defaultRedisConnConfigForTest(),
 				SessionCacheRedis: SessionCacheRedisConfig{
-					Addr:          "127.0.0.1:6379",
-					DB:            defaultSessionCacheRedisDB,
 					KeyPrefix:     defaultSessionCacheRedisKeyPrefix,
 					LookupTimeout: defaultSessionCacheRedisLookupTimeout,
 				},
@@ -308,9 +327,8 @@ func TestLoadFromEnv(t *testing.T) {
 					cfg.FreshnessWindow = 90 * time.Second
 					return cfg
 				}(),
+				Redis:             defaultRedisConnConfigForTest(),
 				SessionCacheRedis: SessionCacheRedisConfig{
-					Addr:          "127.0.0.1:6379",
-					DB:            defaultSessionCacheRedisDB,
 					KeyPrefix:     defaultSessionCacheRedisKeyPrefix,
 					LookupTimeout: defaultSessionCacheRedisLookupTimeout,
 				},
@@ -378,21 +396,10 @@ func TestLoadFromEnv(t *testing.T) {
 			wantErr: "parse " + authenticatedGRPCFreshnessWindowEnvVar,
 		},
 		{
-			name:                            "missing session cache redis address",
+			name:                            "missing redis master addr",
 			responseSignerPrivateKeyPEMPath: customResponseSignerPrivateKeyPEMPath,
-			wantErr:                         "GATEWAY_SESSION_CACHE_REDIS_ADDR must not be empty",
-		},
-		{
-			name:                            "empty session cache redis address",
-			sessionCacheRedisAddr:           emptySessionCacheRedisAddr,
-			responseSignerPrivateKeyPEMPath: customResponseSignerPrivateKeyPEMPath,
-			wantErr:                         "GATEWAY_SESSION_CACHE_REDIS_ADDR must not be empty",
-		},
-		{
-			name:                            "whitespace session cache redis address",
-			sessionCacheRedisAddr:           whitespaceSessionCacheRedisAddr,
-			responseSignerPrivateKeyPEMPath: customResponseSignerPrivateKeyPEMPath,
-			wantErr:                         "GATEWAY_SESSION_CACHE_REDIS_ADDR must not be empty",
+			skipRedis:                       true,
+			wantErr:                         "GATEWAY_REDIS_MASTER_ADDR must be set",
 		},
 		{
 			name: "missing response signer private key path",
@@ -412,7 +419,8 @@ func TestLoadFromEnv(t *testing.T) {
 				userServiceBaseURLEnvVar,
 				authenticatedGRPCAddrEnvVar,
 				authenticatedGRPCFreshnessWindowEnvVar,
-				sessionCacheRedisAddrEnvVar,
+				gatewayRedisMasterAddrEnvVar,
+				gatewayRedisPasswordEnvVar,
 				sessionEventsRedisStreamEnvVar,
 				clientEventsRedisStreamEnvVar,
 				responseSignerPrivateKeyPEMPathEnvVar,
@@ -424,7 +432,14 @@ func TestLoadFromEnv(t *testing.T) {
 			setEnvValue(t, userServiceBaseURLEnvVar, tt.userServiceBaseURL)
 			setEnvValue(t, authenticatedGRPCAddrEnvVar, tt.authenticatedGRPCAddr)
 			setEnvValue(t, authenticatedGRPCFreshnessWindowEnvVar, tt.authenticatedGRPCFreshnessWindow)
-			setEnvValue(t, sessionCacheRedisAddrEnvVar, tt.sessionCacheRedisAddr)
+			redisAddr := tt.sessionCacheRedisAddr
+			if !tt.skipRedis && redisAddr == nil {
+				redisAddr = customSessionCacheRedisAddr
+			}
+			setEnvValue(t, gatewayRedisMasterAddrEnvVar, redisAddr)
+			if !tt.skipRedis {
+				setEnvValue(t, gatewayRedisPasswordEnvVar, &defaultTestRedisPasswordValue)
+			}
 			setEnvValue(t, sessionEventsRedisStreamEnvVar, customSessionEventsRedisStream)
 			setEnvValue(t, clientEventsRedisStreamEnvVar, customClientEventsRedisStream)
 			setEnvValue(t, responseSignerPrivateKeyPEMPathEnvVar, tt.responseSignerPrivateKeyPEMPath)
@@ -490,7 +505,7 @@ func TestLoadFromEnvOperationalSettings(t *testing.T) {
 		{
 			name: "custom operational settings",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar:           customSessionCacheRedisAddr,
+				gatewayRedisMasterAddrEnvVar:          customSessionCacheRedisAddr,
 				sessionEventsRedisStreamEnvVar:        customSessionEventsRedisStream,
 				clientEventsRedisStreamEnvVar:         customClientEventsRedisStream,
 				responseSignerPrivateKeyPEMPathEnvVar: customResponseSignerPrivateKeyPEMPath,
@@ -516,7 +531,7 @@ func TestLoadFromEnvOperationalSettings(t *testing.T) {
|
|||||||
{
|
{
|
||||||
name: "invalid log level",
|
name: "invalid log level",
|
||||||
envs: map[string]*string{
|
envs: map[string]*string{
|
||||||
sessionCacheRedisAddrEnvVar: customSessionCacheRedisAddr,
|
gatewayRedisMasterAddrEnvVar: customSessionCacheRedisAddr,
|
||||||
sessionEventsRedisStreamEnvVar: customSessionEventsRedisStream,
|
sessionEventsRedisStreamEnvVar: customSessionEventsRedisStream,
|
||||||
clientEventsRedisStreamEnvVar: customClientEventsRedisStream,
|
clientEventsRedisStreamEnvVar: customClientEventsRedisStream,
|
||||||
responseSignerPrivateKeyPEMPathEnvVar: customResponseSignerPrivateKeyPEMPath,
|
responseSignerPrivateKeyPEMPathEnvVar: customResponseSignerPrivateKeyPEMPath,
|
||||||
@@ -608,13 +623,14 @@ func TestLoadFromEnvAuthService(t *testing.T) {
|
|||||||
authServiceBaseURLEnvVar,
|
authServiceBaseURLEnvVar,
|
||||||
userServiceBaseURLEnvVar,
|
userServiceBaseURLEnvVar,
|
||||||
logLevelEnvVar,
|
logLevelEnvVar,
|
||||||
sessionCacheRedisAddrEnvVar,
|
gatewayRedisMasterAddrEnvVar,
|
||||||
sessionEventsRedisStreamEnvVar,
|
sessionEventsRedisStreamEnvVar,
|
||||||
clientEventsRedisStreamEnvVar,
|
clientEventsRedisStreamEnvVar,
|
||||||
responseSignerPrivateKeyPEMPathEnvVar,
|
responseSignerPrivateKeyPEMPathEnvVar,
|
||||||
)
|
)
|
||||||
setEnvValue(t, authServiceBaseURLEnvVar, tt.value)
|
setEnvValue(t, authServiceBaseURLEnvVar, tt.value)
|
||||||
setEnvValue(t, sessionCacheRedisAddrEnvVar, customSessionCacheRedisAddr)
|
setEnvValue(t, gatewayRedisMasterAddrEnvVar, customSessionCacheRedisAddr)
|
||||||
|
setEnvValue(t, gatewayRedisPasswordEnvVar, &defaultTestRedisPasswordValue)
|
||||||
setEnvValue(t, sessionEventsRedisStreamEnvVar, customSessionEventsRedisStream)
|
setEnvValue(t, sessionEventsRedisStreamEnvVar, customSessionEventsRedisStream)
|
||||||
setEnvValue(t, clientEventsRedisStreamEnvVar, customClientEventsRedisStream)
|
setEnvValue(t, clientEventsRedisStreamEnvVar, customClientEventsRedisStream)
|
||||||
setEnvValue(t, responseSignerPrivateKeyPEMPathEnvVar, customResponseSignerPrivateKeyPEMPath)
|
setEnvValue(t, responseSignerPrivateKeyPEMPathEnvVar, customResponseSignerPrivateKeyPEMPath)
|
||||||
@@ -674,13 +690,14 @@ func TestLoadFromEnvUserService(t *testing.T) {
|
|||||||
authServiceBaseURLEnvVar,
|
authServiceBaseURLEnvVar,
|
||||||
userServiceBaseURLEnvVar,
|
userServiceBaseURLEnvVar,
|
||||||
logLevelEnvVar,
|
logLevelEnvVar,
|
||||||
sessionCacheRedisAddrEnvVar,
|
gatewayRedisMasterAddrEnvVar,
|
||||||
sessionEventsRedisStreamEnvVar,
|
sessionEventsRedisStreamEnvVar,
|
||||||
clientEventsRedisStreamEnvVar,
|
clientEventsRedisStreamEnvVar,
|
||||||
responseSignerPrivateKeyPEMPathEnvVar,
|
responseSignerPrivateKeyPEMPathEnvVar,
|
||||||
)
|
)
|
||||||
setEnvValue(t, userServiceBaseURLEnvVar, tt.value)
|
setEnvValue(t, userServiceBaseURLEnvVar, tt.value)
|
||||||
setEnvValue(t, sessionCacheRedisAddrEnvVar, customSessionCacheRedisAddr)
|
setEnvValue(t, gatewayRedisMasterAddrEnvVar, customSessionCacheRedisAddr)
|
||||||
|
setEnvValue(t, gatewayRedisPasswordEnvVar, &defaultTestRedisPasswordValue)
|
||||||
setEnvValue(t, sessionEventsRedisStreamEnvVar, customSessionEventsRedisStream)
|
setEnvValue(t, sessionEventsRedisStreamEnvVar, customSessionEventsRedisStream)
|
||||||
setEnvValue(t, clientEventsRedisStreamEnvVar, customClientEventsRedisStream)
|
setEnvValue(t, clientEventsRedisStreamEnvVar, customClientEventsRedisStream)
|
||||||
setEnvValue(t, responseSignerPrivateKeyPEMPathEnvVar, customResponseSignerPrivateKeyPEMPath)
|
setEnvValue(t, responseSignerPrivateKeyPEMPathEnvVar, customResponseSignerPrivateKeyPEMPath)
|
||||||
@@ -811,7 +828,7 @@ func TestLoadFromEnvAuthenticatedGRPCAntiAbuse(t *testing.T) {
|
|||||||
t.Run(tt.name, func(t *testing.T) {
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
restoreEnvs(
|
restoreEnvs(
|
||||||
t,
|
t,
|
||||||
sessionCacheRedisAddrEnvVar,
|
gatewayRedisMasterAddrEnvVar,
|
||||||
authenticatedGRPCIPRateLimitRequestsEnvVar,
|
authenticatedGRPCIPRateLimitRequestsEnvVar,
|
||||||
authenticatedGRPCIPRateLimitWindowEnvVar,
|
authenticatedGRPCIPRateLimitWindowEnvVar,
|
||||||
authenticatedGRPCIPRateLimitBurstEnvVar,
|
authenticatedGRPCIPRateLimitBurstEnvVar,
|
||||||
@@ -829,7 +846,8 @@ func TestLoadFromEnvAuthenticatedGRPCAntiAbuse(t *testing.T) {
|
|||||||
responseSignerPrivateKeyPEMPathEnvVar,
|
responseSignerPrivateKeyPEMPathEnvVar,
|
||||||
)
|
)
|
||||||
|
|
||||||
setEnvValue(t, sessionCacheRedisAddrEnvVar, customSessionCacheRedisAddr)
|
setEnvValue(t, gatewayRedisMasterAddrEnvVar, customSessionCacheRedisAddr)
|
||||||
|
setEnvValue(t, gatewayRedisPasswordEnvVar, &defaultTestRedisPasswordValue)
|
||||||
setEnvValue(t, sessionEventsRedisStreamEnvVar, customSessionEventsRedisStream)
|
setEnvValue(t, sessionEventsRedisStreamEnvVar, customSessionEventsRedisStream)
|
||||||
setEnvValue(t, clientEventsRedisStreamEnvVar, customClientEventsRedisStream)
|
setEnvValue(t, clientEventsRedisStreamEnvVar, customClientEventsRedisStream)
|
||||||
setEnvValue(t, responseSignerPrivateKeyPEMPathEnvVar, customResponseSignerPrivateKeyPEMPath)
|
setEnvValue(t, responseSignerPrivateKeyPEMPathEnvVar, customResponseSignerPrivateKeyPEMPath)
|
||||||
@@ -859,7 +877,7 @@ func TestLoadFromEnvAuthenticatedGRPCAntiAbuse(t *testing.T) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestLoadFromEnvSessionCacheRedis(t *testing.T) {
|
func TestLoadFromEnvRedis(t *testing.T) {
|
||||||
customResponseSignerPrivateKeyPEMPath := new(string)
|
customResponseSignerPrivateKeyPEMPath := new(string)
|
||||||
*customResponseSignerPrivateKeyPEMPath = writeTestResponseSignerPEMFile(t)
|
*customResponseSignerPrivateKeyPEMPath = writeTestResponseSignerPEMFile(t)
|
||||||
|
|
||||||
@@ -872,8 +890,8 @@ func TestLoadFromEnvSessionCacheRedis(t *testing.T) {
|
|||||||
customRedisAddr := new(string)
|
customRedisAddr := new(string)
|
||||||
*customRedisAddr = "127.0.0.1:6380"
|
*customRedisAddr = "127.0.0.1:6380"
|
||||||
|
|
||||||
customRedisUsername := new(string)
|
customRedisReplicas := new(string)
|
||||||
*customRedisUsername = "gateway"
|
*customRedisReplicas = "127.0.0.1:6481,127.0.0.1:6482"
|
||||||
|
|
||||||
customRedisPassword := new(string)
|
customRedisPassword := new(string)
|
||||||
*customRedisPassword = "secret"
|
*customRedisPassword = "secret"
|
||||||
@@ -881,14 +899,14 @@ func TestLoadFromEnvSessionCacheRedis(t *testing.T) {
|
|||||||
customRedisDB := new(string)
|
customRedisDB := new(string)
|
||||||
*customRedisDB = "7"
|
*customRedisDB = "7"
|
||||||
|
|
||||||
|
customRedisOpTimeout := new(string)
|
||||||
|
*customRedisOpTimeout = "750ms"
|
||||||
|
|
||||||
customRedisKeyPrefix := new(string)
|
customRedisKeyPrefix := new(string)
|
||||||
*customRedisKeyPrefix = "edge:session:"
|
*customRedisKeyPrefix = "edge:session:"
|
||||||
|
|
||||||
customRedisLookupTimeout := new(string)
|
customRedisLookupTimeout := new(string)
|
||||||
*customRedisLookupTimeout = "750ms"
|
*customRedisLookupTimeout = "950ms"
|
||||||
|
|
||||||
customRedisTLSEnabled := new(string)
|
|
||||||
*customRedisTLSEnabled = "true"
|
|
||||||
|
|
||||||
negativeRedisDB := new(string)
|
negativeRedisDB := new(string)
|
||||||
*negativeRedisDB = "-1"
|
*negativeRedisDB = "-1"
|
||||||
@@ -896,67 +914,100 @@ func TestLoadFromEnvSessionCacheRedis(t *testing.T) {
 	invalidRedisLookupTimeout := new(string)
 	*invalidRedisLookupTimeout = "later"

-	invalidRedisTLSEnabled := new(string)
-	*invalidRedisTLSEnabled = "maybe"
+	deprecatedTLSEnabled := new(string)
+	*deprecatedTLSEnabled = "true"

+	deprecatedUsername := new(string)
+	*deprecatedUsername = "gateway"
+
+	type want struct {
+		conn         redisconn.Config
+		sessionRedis SessionCacheRedisConfig
+	}
+
 	tests := []struct {
 		name    string
 		envs    map[string]*string
-		want    SessionCacheRedisConfig
+		want    *want
 		wantErr string
 	}{
 		{
 			name: "custom redis config",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar:          customRedisAddr,
-				sessionCacheRedisUsernameEnvVar:      customRedisUsername,
-				sessionCacheRedisPasswordEnvVar:      customRedisPassword,
-				sessionCacheRedisDBEnvVar:            customRedisDB,
+				gatewayRedisMasterAddrEnvVar:         customRedisAddr,
+				gatewayRedisReplicaAddrsEnvVar:       customRedisReplicas,
+				gatewayRedisPasswordEnvVar:           customRedisPassword,
+				gatewayRedisDBEnvVar:                 customRedisDB,
+				gatewayRedisOpTimeoutEnvVar:          customRedisOpTimeout,
 				sessionCacheRedisKeyPrefixEnvVar:     customRedisKeyPrefix,
 				sessionCacheRedisLookupTimeoutEnvVar: customRedisLookupTimeout,
-				sessionCacheRedisTLSEnabledEnvVar:    customRedisTLSEnabled,
 			},
-			want: SessionCacheRedisConfig{
-				Addr:          "127.0.0.1:6380",
-				Username:      "gateway",
-				Password:      "secret",
-				DB:            7,
-				KeyPrefix:     "edge:session:",
-				LookupTimeout: 750 * time.Millisecond,
-				TLSEnabled:    true,
+			want: &want{
+				conn: redisconn.Config{
+					MasterAddr:       "127.0.0.1:6380",
+					ReplicaAddrs:     []string{"127.0.0.1:6481", "127.0.0.1:6482"},
+					Password:         "secret",
+					DB:               7,
+					OperationTimeout: 750 * time.Millisecond,
+				},
+				sessionRedis: SessionCacheRedisConfig{
+					KeyPrefix:     "edge:session:",
+					LookupTimeout: 950 * time.Millisecond,
+				},
 			},
 		},
 		{
-			name: "negative redis db",
+			name: "negative redis db rejected by pkg/redisconn",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar: customRedisAddr,
-				sessionCacheRedisDBEnvVar:   negativeRedisDB,
+				gatewayRedisMasterAddrEnvVar: customRedisAddr,
+				gatewayRedisPasswordEnvVar:   customRedisPassword,
+				gatewayRedisDBEnvVar:         negativeRedisDB,
 			},
-			wantErr: sessionCacheRedisDBEnvVar + " must not be negative",
+			wantErr: "redis db must not be negative",
 		},
 		{
-			name: "invalid redis lookup timeout",
+			name: "invalid session cache lookup timeout",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar:          customRedisAddr,
+				gatewayRedisMasterAddrEnvVar:         customRedisAddr,
+				gatewayRedisPasswordEnvVar:           customRedisPassword,
 				sessionCacheRedisLookupTimeoutEnvVar: invalidRedisLookupTimeout,
 			},
 			wantErr: "parse " + sessionCacheRedisLookupTimeoutEnvVar,
 		},
 		{
-			name: "invalid redis tls flag",
+			name: "missing redis password rejected",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar:       customRedisAddr,
-				sessionCacheRedisTLSEnabledEnvVar: invalidRedisTLSEnabled,
+				gatewayRedisMasterAddrEnvVar: customRedisAddr,
 			},
-			wantErr: "parse " + sessionCacheRedisTLSEnabledEnvVar,
+			wantErr: gatewayRedisPasswordEnvVar + " must be set",
+		},
+		{
+			name: "deprecated tls enabled var rejected",
+			envs: map[string]*string{
+				gatewayRedisMasterAddrEnvVar: customRedisAddr,
+				gatewayRedisPasswordEnvVar:   customRedisPassword,
+				gatewayRedisTLSEnabledEnvVar: deprecatedTLSEnabled,
+			},
+			wantErr: gatewayRedisTLSEnabledEnvVar,
+		},
+		{
+			name: "deprecated username var rejected",
+			envs: map[string]*string{
+				gatewayRedisMasterAddrEnvVar: customRedisAddr,
+				gatewayRedisPasswordEnvVar:   customRedisPassword,
+				gatewayRedisUsernameEnvVar:   deprecatedUsername,
+			},
+			wantErr: gatewayRedisUsernameEnvVar,
 		},
 	}

 	for _, tt := range tests {
-		tt := tt

 		t.Run(tt.name, func(t *testing.T) {
-			restoreEnvs(t, append(append(append(sessionCacheRedisEnvVars(), sessionEventsRedisEnvVars()...), clientEventsRedisEnvVars()...), responseSignerPrivateKeyPEMPathEnvVar)...)
+			redisEnvVars := sessionCacheRedisEnvVars()
+			restoreEnvs(t, append(append(append(redisEnvVars, sessionEventsRedisEnvVars()...), clientEventsRedisEnvVars()...), responseSignerPrivateKeyPEMPathEnvVar)...)
+			for _, envVar := range redisEnvVars {
+				setEnvValue(t, envVar, nil)
+			}
 			setEnvValue(t, responseSignerPrivateKeyPEMPathEnvVar, customResponseSignerPrivateKeyPEMPath)
 			setEnvValue(t, sessionEventsRedisStreamEnvVar, customSessionEventsRedisStream)
 			setEnvValue(t, clientEventsRedisStreamEnvVar, customClientEventsRedisStream)
@@ -973,7 +1024,9 @@ func TestLoadFromEnvSessionCacheRedis(t *testing.T) {
 			}

 			require.NoError(t, err)
-			assert.Equal(t, tt.want, cfg.SessionCacheRedis)
+			require.NotNil(t, tt.want)
+			assert.Equal(t, tt.want.conn, cfg.Redis)
+			assert.Equal(t, tt.want.sessionRedis, cfg.SessionCacheRedis)
 		})
 	}
 }
@@ -1012,7 +1065,7 @@ func TestLoadFromEnvReplayRedis(t *testing.T) {
 		{
 			name: "custom replay redis config",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar:     customSessionCacheRedisAddr,
+				gatewayRedisMasterAddrEnvVar:    customSessionCacheRedisAddr,
 				replayRedisKeyPrefixEnvVar:      customReplayRedisKeyPrefix,
 				replayRedisReserveTimeoutEnvVar: customReplayRedisReserveTimeout,
 			},
@@ -1024,7 +1077,7 @@ func TestLoadFromEnvReplayRedis(t *testing.T) {
 		{
 			name: "empty replay redis key prefix",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar: customSessionCacheRedisAddr,
+				gatewayRedisMasterAddrEnvVar: customSessionCacheRedisAddr,
 				replayRedisKeyPrefixEnvVar:   emptyReplayRedisKeyPrefix,
 			},
 			wantErr: replayRedisKeyPrefixEnvVar + " must not be empty",
@@ -1032,7 +1085,7 @@ func TestLoadFromEnvReplayRedis(t *testing.T) {
 		{
 			name: "invalid replay redis reserve timeout",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar:     customSessionCacheRedisAddr,
+				gatewayRedisMasterAddrEnvVar:    customSessionCacheRedisAddr,
 				replayRedisReserveTimeoutEnvVar: invalidReplayRedisReserveTimeout,
 			},
 			wantErr: "parse " + replayRedisReserveTimeoutEnvVar,
@@ -1096,7 +1149,7 @@ func TestLoadFromEnvSessionEventsRedis(t *testing.T) {
 		{
 			name: "custom session events redis config",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar:              customSessionCacheRedisAddr,
+				gatewayRedisMasterAddrEnvVar:             customSessionCacheRedisAddr,
 				sessionEventsRedisStreamEnvVar:           customStream,
 				sessionEventsRedisReadBlockTimeoutEnvVar: customReadBlockTimeout,
 			},
@@ -1108,14 +1161,14 @@ func TestLoadFromEnvSessionEventsRedis(t *testing.T) {
 		{
 			name: "missing session events redis stream",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar: customSessionCacheRedisAddr,
+				gatewayRedisMasterAddrEnvVar: customSessionCacheRedisAddr,
 			},
 			wantErr: sessionEventsRedisStreamEnvVar + " must not be empty",
 		},
 		{
 			name: "empty session events redis stream",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar:    customSessionCacheRedisAddr,
+				gatewayRedisMasterAddrEnvVar:   customSessionCacheRedisAddr,
 				sessionEventsRedisStreamEnvVar: emptyStream,
 			},
 			wantErr: sessionEventsRedisStreamEnvVar + " must not be empty",
@@ -1123,7 +1176,7 @@ func TestLoadFromEnvSessionEventsRedis(t *testing.T) {
 		{
 			name: "invalid session events read block timeout",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar:              customSessionCacheRedisAddr,
+				gatewayRedisMasterAddrEnvVar:             customSessionCacheRedisAddr,
 				sessionEventsRedisStreamEnvVar:           customStream,
 				sessionEventsRedisReadBlockTimeoutEnvVar: invalidReadBlockTimeout,
 			},
@@ -1187,7 +1240,7 @@ func TestLoadFromEnvClientEventsRedis(t *testing.T) {
 		{
 			name: "custom client events redis config",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar:             customSessionCacheRedisAddr,
+				gatewayRedisMasterAddrEnvVar:            customSessionCacheRedisAddr,
 				clientEventsRedisStreamEnvVar:           customStream,
 				clientEventsRedisReadBlockTimeoutEnvVar: customReadBlockTimeout,
 			},
@@ -1199,14 +1252,14 @@ func TestLoadFromEnvClientEventsRedis(t *testing.T) {
 		{
 			name: "missing client events redis stream",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar: customSessionCacheRedisAddr,
+				gatewayRedisMasterAddrEnvVar: customSessionCacheRedisAddr,
 			},
 			wantErr: clientEventsRedisStreamEnvVar + " must not be empty",
 		},
 		{
 			name: "empty client events redis stream",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar:   customSessionCacheRedisAddr,
+				gatewayRedisMasterAddrEnvVar: customSessionCacheRedisAddr,
 				clientEventsRedisStreamEnvVar: emptyStream,
 			},
 			wantErr: clientEventsRedisStreamEnvVar + " must not be empty",
@@ -1214,7 +1267,7 @@ func TestLoadFromEnvClientEventsRedis(t *testing.T) {
 		{
 			name: "invalid client events read block timeout",
 			envs: map[string]*string{
-				sessionCacheRedisAddrEnvVar:             customSessionCacheRedisAddr,
+				gatewayRedisMasterAddrEnvVar:            customSessionCacheRedisAddr,
 				clientEventsRedisStreamEnvVar:           customStream,
 				clientEventsRedisReadBlockTimeoutEnvVar: invalidReadBlockTimeout,
 			},
@@ -1331,8 +1384,9 @@ func TestLoadFromEnvPublicHTTPAntiAbuse(t *testing.T) {
 		tt := tt

 		t.Run(tt.name, func(t *testing.T) {
-			restoreEnvs(t, append(append(append(append(publicAntiAbuseEnvVars(), sessionCacheRedisAddrEnvVar), sessionEventsRedisEnvVars()...), clientEventsRedisEnvVars()...), responseSignerPrivateKeyPEMPathEnvVar)...)
-			setEnvValue(t, sessionCacheRedisAddrEnvVar, requiredSessionCacheRedisAddr)
+			restoreEnvs(t, append(append(append(append(publicAntiAbuseEnvVars(), gatewayRedisMasterAddrEnvVar), sessionEventsRedisEnvVars()...), clientEventsRedisEnvVars()...), responseSignerPrivateKeyPEMPathEnvVar)...)
+			setEnvValue(t, gatewayRedisMasterAddrEnvVar, requiredSessionCacheRedisAddr)
+			setEnvValue(t, gatewayRedisPasswordEnvVar, &defaultTestRedisPasswordValue)
 			setEnvValue(t, sessionEventsRedisStreamEnvVar, requiredSessionEventsRedisStream)
 			setEnvValue(t, clientEventsRedisStreamEnvVar, requiredClientEventsRedisStream)
 			setEnvValue(t, responseSignerPrivateKeyPEMPathEnvVar, requiredResponseSignerPrivateKeyPEMPath)
@@ -1444,13 +1498,15 @@ func operationalEnvVars() []string {

 func sessionCacheRedisEnvVars() []string {
 	return []string{
-		sessionCacheRedisAddrEnvVar,
-		sessionCacheRedisUsernameEnvVar,
-		sessionCacheRedisPasswordEnvVar,
-		sessionCacheRedisDBEnvVar,
+		gatewayRedisMasterAddrEnvVar,
+		gatewayRedisReplicaAddrsEnvVar,
+		gatewayRedisPasswordEnvVar,
+		gatewayRedisDBEnvVar,
+		gatewayRedisOpTimeoutEnvVar,
+		gatewayRedisTLSEnabledEnvVar,
+		gatewayRedisUsernameEnvVar,
 		sessionCacheRedisKeyPrefixEnvVar,
 		sessionCacheRedisLookupTimeoutEnvVar,
-		sessionCacheRedisTLSEnabledEnvVar,
 	}
 }

@@ -3,7 +3,6 @@ package events
 import (
 	"bytes"
 	"context"
-	"crypto/tls"
 	"errors"
 	"fmt"
 	"strings"
@@ -39,26 +38,23 @@ type RedisClientEventSubscriber struct {
 	logger  *zap.Logger
 	metrics *telemetry.Runtime

-	closeOnce   sync.Once
 	startedOnce sync.Once
 	started     chan struct{}
 }

-// NewRedisClientEventSubscriber constructs a Redis Stream subscriber that
-// reuses the SessionCache Redis connection settings and forwards decoded
-// client-facing events to publisher.
-func NewRedisClientEventSubscriber(sessionCfg config.SessionCacheRedisConfig, eventsCfg config.ClientEventsRedisConfig, publisher ClientEventPublisher) (*RedisClientEventSubscriber, error) {
-	return NewRedisClientEventSubscriberWithObservability(sessionCfg, eventsCfg, publisher, nil, nil)
+// NewRedisClientEventSubscriber constructs a Redis Stream subscriber that uses
+// client and forwards decoded client-facing events to publisher.
+func NewRedisClientEventSubscriber(client *redis.Client, sessionCfg config.SessionCacheRedisConfig, eventsCfg config.ClientEventsRedisConfig, publisher ClientEventPublisher) (*RedisClientEventSubscriber, error) {
+	return NewRedisClientEventSubscriberWithObservability(client, sessionCfg, eventsCfg, publisher, nil, nil)
 }

 // NewRedisClientEventSubscriberWithObservability constructs a Redis Stream
-// subscriber that also records malformed or dropped internal events.
-func NewRedisClientEventSubscriberWithObservability(sessionCfg config.SessionCacheRedisConfig, eventsCfg config.ClientEventsRedisConfig, publisher ClientEventPublisher, logger *zap.Logger, metrics *telemetry.Runtime) (*RedisClientEventSubscriber, error) {
-	if strings.TrimSpace(sessionCfg.Addr) == "" {
-		return nil, errors.New("new redis client event subscriber: redis addr must not be empty")
-	}
-	if sessionCfg.DB < 0 {
-		return nil, errors.New("new redis client event subscriber: redis db must not be negative")
+// subscriber that also records malformed or dropped internal events. The
+// subscriber does not own the client; the runtime supplies a shared
+// *redis.Client.
+func NewRedisClientEventSubscriberWithObservability(client *redis.Client, sessionCfg config.SessionCacheRedisConfig, eventsCfg config.ClientEventsRedisConfig, publisher ClientEventPublisher, logger *zap.Logger, metrics *telemetry.Runtime) (*RedisClientEventSubscriber, error) {
+	if client == nil {
+		return nil, errors.New("new redis client event subscriber: nil redis client")
 	}
 	if sessionCfg.LookupTimeout <= 0 {
 		return nil, errors.New("new redis client event subscriber: lookup timeout must be positive")
@@ -73,23 +69,12 @@ func NewRedisClientEventSubscriberWithObservability(sessionCfg config.SessionCac
 		return nil, errors.New("new redis client event subscriber: nil publisher")
 	}

-	options := &redis.Options{
-		Addr:            sessionCfg.Addr,
-		Username:        sessionCfg.Username,
-		Password:        sessionCfg.Password,
-		DB:              sessionCfg.DB,
-		Protocol:        2,
-		DisableIdentity: true,
-	}
-	if sessionCfg.TLSEnabled {
-		options.TLSConfig = &tls.Config{MinVersion: tls.VersionTLS12}
-	}
 	if logger == nil {
 		logger = zap.NewNop()
 	}

 	return &RedisClientEventSubscriber{
-		client:           redis.NewClient(options),
+		client:           client,
 		stream:           eventsCfg.Stream,
 		pingTimeout:      sessionCfg.LookupTimeout,
 		readBlockTimeout: eventsCfg.ReadBlockTimeout,
@@ -100,26 +85,6 @@ func NewRedisClientEventSubscriberWithObservability(sessionCfg config.SessionCac
 	}, nil
 }

-// Ping verifies that the Redis backend used for client-facing event fan-out is
-// reachable within the configured timeout budget.
-func (s *RedisClientEventSubscriber) Ping(ctx context.Context) error {
-	if s == nil || s.client == nil {
-		return errors.New("ping redis client event subscriber: nil subscriber")
-	}
-	if ctx == nil {
-		return errors.New("ping redis client event subscriber: nil context")
-	}
-
-	pingCtx, cancel := context.WithTimeout(ctx, s.pingTimeout)
-	defer cancel()
-
-	if err := s.client.Ping(pingCtx).Err(); err != nil {
-		return fmt.Errorf("ping redis client event subscriber: %w", err)
-	}
-
-	return nil
-}
-
 // Run consumes client-facing events until ctx is canceled or Redis returns an
 // unexpected error.
 func (s *RedisClientEventSubscriber) Run(ctx context.Context) error {
@@ -184,28 +149,21 @@ func (s *RedisClientEventSubscriber) resolveStartID(ctx context.Context) (string
 		return messages[0].ID, nil
 	}

-// Shutdown closes the Redis client so a blocking stream read can terminate
-// promptly during gateway shutdown.
+// Shutdown is a no-op kept for App framework compatibility. The blocking
+// XRead loop terminates when its context is cancelled by the parent runtime,
+// which also owns and closes the shared Redis client.
 func (s *RedisClientEventSubscriber) Shutdown(ctx context.Context) error {
 	if ctx == nil {
 		return errors.New("shutdown redis client event subscriber: nil context")
 	}

-	return s.Close()
+	return nil
 }

-// Close releases the underlying Redis client resources.
+// Close is a no-op kept for backwards-compatible cleanup wiring; the
+// subscriber does not own the shared Redis client.
 func (s *RedisClientEventSubscriber) Close() error {
-	if s == nil || s.client == nil {
-		return nil
-	}
-
-	var err error
-	s.closeOnce.Do(func() {
-		err = s.client.Close()
-	})
-
-	return err
+	return nil
 }

 func (s *RedisClientEventSubscriber) signalStarted() {

@@ -153,8 +153,9 @@ func TestRedisClientEventSubscriberLogsAndCountsMalformedEvents(t *testing.T) {
 	telemetryRuntime := testutil.NewTelemetryRuntime(t, logger)

 	subscriber, err := NewRedisClientEventSubscriberWithObservability(
+		newTestRedisClient(t, server),
 		config.SessionCacheRedisConfig{
-			Addr:          server.Addr(),
+			KeyPrefix:     "gateway:session:",
 			LookupTimeout: 250 * time.Millisecond,
 		},
 		config.ClientEventsRedisConfig{
@@ -166,9 +167,6 @@ func TestRedisClientEventSubscriberLogsAndCountsMalformedEvents(t *testing.T) {
 		telemetryRuntime,
 	)
 	require.NoError(t, err)
-	t.Cleanup(func() {
-		assert.NoError(t, subscriber.Close())
-	})

 	running := runTestClientEventSubscriber(t, subscriber)
 	defer running.stop(t)
@@ -195,8 +193,9 @@ func newTestRedisClientEventSubscriber(t *testing.T, server *miniredis.Miniredis
 	t.Helper()

 	subscriber, err := NewRedisClientEventSubscriber(
+		newTestRedisClient(t, server),
 		config.SessionCacheRedisConfig{
-			Addr:          server.Addr(),
+			KeyPrefix:     "gateway:session:",
 			LookupTimeout: 250 * time.Millisecond,
 		},
 		config.ClientEventsRedisConfig{
@@ -207,10 +206,6 @@ func newTestRedisClientEventSubscriber(t *testing.T, server *miniredis.Miniredis
 	)
 	require.NoError(t, err)

-	t.Cleanup(func() {
-		assert.NoError(t, subscriber.Close())
-	})
-
 	return subscriber
 }

@@ -5,7 +5,6 @@ package events
 
 import (
 	"context"
-	"crypto/tls"
 	"errors"
 	"fmt"
 	"strconv"
@@ -43,33 +42,30 @@ type RedisSessionSubscriber struct {
 	logger  *zap.Logger
 	metrics *telemetry.Runtime
 
-	closeOnce   sync.Once
 	startedOnce sync.Once
 	started     chan struct{}
 }
 
-// NewRedisSessionSubscriber constructs a Redis Stream subscriber that reuses
-// the SessionCache Redis connection settings and applies updates to store.
-func NewRedisSessionSubscriber(sessionCfg config.SessionCacheRedisConfig, eventsCfg config.SessionEventsRedisConfig, store session.SnapshotStore) (*RedisSessionSubscriber, error) {
-	return NewRedisSessionSubscriberWithObservability(sessionCfg, eventsCfg, store, nil, nil, nil)
+// NewRedisSessionSubscriber constructs a Redis Stream subscriber that uses
+// client and applies updates to store.
+func NewRedisSessionSubscriber(client *redis.Client, sessionCfg config.SessionCacheRedisConfig, eventsCfg config.SessionEventsRedisConfig, store session.SnapshotStore) (*RedisSessionSubscriber, error) {
+	return NewRedisSessionSubscriberWithObservability(client, sessionCfg, eventsCfg, store, nil, nil, nil)
 }
 
 // NewRedisSessionSubscriberWithRevocationHandler constructs a Redis Stream
-// subscriber that reuses the SessionCache Redis connection settings, applies
-// updates to store, and optionally tears down active resources for revoked
-// sessions.
-func NewRedisSessionSubscriberWithRevocationHandler(sessionCfg config.SessionCacheRedisConfig, eventsCfg config.SessionEventsRedisConfig, store session.SnapshotStore, revocationHandler SessionRevocationHandler) (*RedisSessionSubscriber, error) {
-	return NewRedisSessionSubscriberWithObservability(sessionCfg, eventsCfg, store, revocationHandler, nil, nil)
+// subscriber that uses client, applies updates to store, and optionally tears
+// down active resources for revoked sessions.
+func NewRedisSessionSubscriberWithRevocationHandler(client *redis.Client, sessionCfg config.SessionCacheRedisConfig, eventsCfg config.SessionEventsRedisConfig, store session.SnapshotStore, revocationHandler SessionRevocationHandler) (*RedisSessionSubscriber, error) {
+	return NewRedisSessionSubscriberWithObservability(client, sessionCfg, eventsCfg, store, revocationHandler, nil, nil)
 }
 
 // NewRedisSessionSubscriberWithObservability constructs a Redis Stream
-// subscriber that also logs and counts malformed internal session events.
-func NewRedisSessionSubscriberWithObservability(sessionCfg config.SessionCacheRedisConfig, eventsCfg config.SessionEventsRedisConfig, store session.SnapshotStore, revocationHandler SessionRevocationHandler, logger *zap.Logger, metrics *telemetry.Runtime) (*RedisSessionSubscriber, error) {
-	if strings.TrimSpace(sessionCfg.Addr) == "" {
-		return nil, errors.New("new redis session subscriber: redis addr must not be empty")
-	}
-	if sessionCfg.DB < 0 {
-		return nil, errors.New("new redis session subscriber: redis db must not be negative")
-	}
+// subscriber that also logs and counts malformed internal session events. The
+// subscriber does not own the client; the runtime supplies a shared
+// *redis.Client.
+func NewRedisSessionSubscriberWithObservability(client *redis.Client, sessionCfg config.SessionCacheRedisConfig, eventsCfg config.SessionEventsRedisConfig, store session.SnapshotStore, revocationHandler SessionRevocationHandler, logger *zap.Logger, metrics *telemetry.Runtime) (*RedisSessionSubscriber, error) {
+	if client == nil {
+		return nil, errors.New("new redis session subscriber: nil redis client")
+	}
 	if sessionCfg.LookupTimeout <= 0 {
 		return nil, errors.New("new redis session subscriber: lookup timeout must be positive")
@@ -84,23 +80,12 @@ func NewRedisSessionSubscriberWithObservability(sessionCfg config.SessionCacheRe
 		return nil, errors.New("new redis session subscriber: nil session snapshot store")
 	}
 
-	options := &redis.Options{
-		Addr:            sessionCfg.Addr,
-		Username:        sessionCfg.Username,
-		Password:        sessionCfg.Password,
-		DB:              sessionCfg.DB,
-		Protocol:        2,
-		DisableIdentity: true,
-	}
-	if sessionCfg.TLSEnabled {
-		options.TLSConfig = &tls.Config{MinVersion: tls.VersionTLS12}
-	}
 	if logger == nil {
 		logger = zap.NewNop()
 	}
 
 	return &RedisSessionSubscriber{
-		client:           redis.NewClient(options),
+		client:           client,
 		stream:           eventsCfg.Stream,
 		pingTimeout:      sessionCfg.LookupTimeout,
 		readBlockTimeout: eventsCfg.ReadBlockTimeout,
@@ -112,26 +97,6 @@ func NewRedisSessionSubscriberWithObservability(sessionCfg config.SessionCacheRe
 	}, nil
 }
 
-// Ping verifies that the Redis backend used for session lifecycle events is
-// reachable within the configured timeout budget.
-func (s *RedisSessionSubscriber) Ping(ctx context.Context) error {
-	if s == nil || s.client == nil {
-		return errors.New("ping redis session subscriber: nil subscriber")
-	}
-	if ctx == nil {
-		return errors.New("ping redis session subscriber: nil context")
-	}
-
-	pingCtx, cancel := context.WithTimeout(ctx, s.pingTimeout)
-	defer cancel()
-
-	if err := s.client.Ping(pingCtx).Err(); err != nil {
-		return fmt.Errorf("ping redis session subscriber: %w", err)
-	}
-
-	return nil
-}
-
 // Run consumes session lifecycle events until ctx is canceled or Redis returns
 // an unexpected error.
 func (s *RedisSessionSubscriber) Run(ctx context.Context) error {
@@ -196,28 +161,21 @@ func (s *RedisSessionSubscriber) resolveStartID(ctx context.Context) (string, er
 	return messages[0].ID, nil
 }
 
-// Shutdown closes the Redis client so a blocking stream read can terminate
-// promptly during gateway shutdown.
+// Shutdown is a no-op kept for App framework compatibility. The blocking
+// XRead loop terminates when its context is canceled by the parent runtime,
+// which also owns and closes the shared Redis client.
 func (s *RedisSessionSubscriber) Shutdown(ctx context.Context) error {
 	if ctx == nil {
 		return errors.New("shutdown redis session subscriber: nil context")
 	}
 
-	return s.Close()
+	return nil
 }
 
-// Close releases the underlying Redis client resources.
+// Close is a no-op kept for backwards-compatible cleanup wiring; the
+// subscriber does not own the shared Redis client.
 func (s *RedisSessionSubscriber) Close() error {
-	if s == nil || s.client == nil {
-		return nil
-	}
-
-	var err error
-	s.closeOnce.Do(func() {
-		err = s.client.Close()
-	})
-
-	return err
+	return nil
 }
 
 func (s *RedisSessionSubscriber) signalStarted() {
@@ -10,6 +10,7 @@ import (
 	"galaxy/gateway/internal/session"
 
 	"github.com/alicebob/miniredis/v2"
+	"github.com/redis/go-redis/v9"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
@@ -262,9 +263,12 @@ func newTestRedisSessionSubscriber(t *testing.T, server *miniredis.Miniredis, st
 func newTestRedisSessionSubscriberWithRevocationHandler(t *testing.T, server *miniredis.Miniredis, store session.SnapshotStore, revocationHandler SessionRevocationHandler) *RedisSessionSubscriber {
 	t.Helper()
 
+	client := newTestRedisClient(t, server)
+
 	subscriber, err := NewRedisSessionSubscriberWithRevocationHandler(
+		client,
 		config.SessionCacheRedisConfig{
-			Addr:          server.Addr(),
+			KeyPrefix:     "gateway:session:",
 			LookupTimeout: 250 * time.Millisecond,
 		},
 		config.SessionEventsRedisConfig{
@@ -276,11 +280,22 @@ func newTestRedisSessionSubscriberWithRevocationHandler(t *testing.T, server *mi
 	)
 	require.NoError(t, err)
 
+	return subscriber
+}
+
+func newTestRedisClient(t *testing.T, server *miniredis.Miniredis) *redis.Client {
+	t.Helper()
+
+	client := redis.NewClient(&redis.Options{
+		Addr:            server.Addr(),
+		Protocol:        2,
+		DisableIdentity: true,
+	})
 	t.Cleanup(func() {
-		assert.NoError(t, subscriber.Close())
+		assert.NoError(t, client.Close())
 	})
 
-	return subscriber
+	return client
 }
 
 type recordingSessionRevocationHandler struct {
@@ -0,0 +1,55 @@
+// Package redisclient provides the Redis client helpers used by Gateway
+// runtime wiring. The helpers wrap `pkg/redisconn` so the runtime keeps the
+// same construction surface as the other Galaxy services.
+package redisclient
+
+import (
+	"context"
+	"fmt"
+
+	"galaxy/gateway/internal/telemetry"
+	"galaxy/redisconn"
+
+	"github.com/redis/go-redis/v9"
+)
+
+// NewClient constructs one Redis client from cfg using the shared
+// `pkg/redisconn` helper, which enforces the master/replica/password env-var
+// shape.
+func NewClient(cfg redisconn.Config) *redis.Client {
+	return redisconn.NewMasterClient(cfg)
+}
+
+// InstrumentClient attaches Redis tracing and metrics exporters to client
+// when telemetryRuntime is available.
+func InstrumentClient(client *redis.Client, telemetryRuntime *telemetry.Runtime) error {
+	if client == nil {
+		return fmt.Errorf("instrument redis client: nil client")
+	}
+	if telemetryRuntime == nil {
+		return nil
+	}
+
+	return redisconn.Instrument(
+		client,
+		redisconn.WithTracerProvider(telemetryRuntime.TracerProvider()),
+		redisconn.WithMeterProvider(telemetryRuntime.MeterProvider()),
+	)
+}
+
+// Ping performs the startup Redis connectivity check bounded by
+// cfg.OperationTimeout.
+func Ping(ctx context.Context, cfg redisconn.Config, client *redis.Client) error {
+	if client == nil {
+		return fmt.Errorf("ping redis: nil client")
+	}
+
+	pingCtx, cancel := context.WithTimeout(ctx, cfg.OperationTimeout)
+	defer cancel()
+
+	if err := client.Ping(pingCtx).Err(); err != nil {
+		return fmt.Errorf("ping redis: %w", err)
+	}
+
+	return nil
+}
@@ -2,7 +2,6 @@ package replay
 
 import (
 	"context"
-	"crypto/tls"
 	"encoding/base64"
 	"errors"
 	"fmt"
@@ -22,15 +21,13 @@ type RedisStore struct {
 	reserveTimeout time.Duration
 }
 
-// NewRedisStore constructs a Redis-backed replay store that reuses the
-// SessionCache Redis deployment settings and applies the replay-specific key
-// namespace and timeout controls from replayCfg.
-func NewRedisStore(sessionCfg config.SessionCacheRedisConfig, replayCfg config.ReplayRedisConfig) (*RedisStore, error) {
-	if strings.TrimSpace(sessionCfg.Addr) == "" {
-		return nil, errors.New("new redis replay store: redis addr must not be empty")
-	}
-	if sessionCfg.DB < 0 {
-		return nil, errors.New("new redis replay store: redis db must not be negative")
-	}
+// NewRedisStore constructs a Redis-backed replay store that uses client and
+// applies the replay-specific namespace and timeout controls from replayCfg.
+// The store does not own the client; the runtime supplies a shared
+// *redis.Client.
+func NewRedisStore(client *redis.Client, replayCfg config.ReplayRedisConfig) (*RedisStore, error) {
+	if client == nil {
+		return nil, errors.New("new redis replay store: nil redis client")
+	}
 	if strings.TrimSpace(replayCfg.KeyPrefix) == "" {
 		return nil, errors.New("new redis replay store: replay key prefix must not be empty")
@@ -39,54 +36,13 @@ func NewRedisStore(sessionCfg config.SessionCacheRedisConfig, replayCfg config.R
 		return nil, errors.New("new redis replay store: reserve timeout must be positive")
 	}
 
-	options := &redis.Options{
-		Addr:            sessionCfg.Addr,
-		Username:        sessionCfg.Username,
-		Password:        sessionCfg.Password,
-		DB:              sessionCfg.DB,
-		Protocol:        2,
-		DisableIdentity: true,
-	}
-	if sessionCfg.TLSEnabled {
-		options.TLSConfig = &tls.Config{MinVersion: tls.VersionTLS12}
-	}
-
 	return &RedisStore{
-		client:         redis.NewClient(options),
+		client:         client,
 		keyPrefix:      replayCfg.KeyPrefix,
 		reserveTimeout: replayCfg.ReserveTimeout,
 	}, nil
 }
 
-// Close releases the underlying Redis client resources.
-func (s *RedisStore) Close() error {
-	if s == nil || s.client == nil {
-		return nil
-	}
-
-	return s.client.Close()
-}
-
-// Ping verifies that the configured Redis backend is reachable within the
-// replay reserve timeout budget.
-func (s *RedisStore) Ping(ctx context.Context) error {
-	if s == nil || s.client == nil {
-		return errors.New("ping redis replay store: nil store")
-	}
-	if ctx == nil {
-		return errors.New("ping redis replay store: nil context")
-	}
-
-	pingCtx, cancel := context.WithTimeout(ctx, s.reserveTimeout)
-	defer cancel()
-
-	if err := s.client.Ping(pingCtx).Err(); err != nil {
-		return fmt.Errorf("ping redis replay store: %w", err)
-	}
-
-	return nil
-}
-
 // Reserve records the authenticated deviceSessionID and requestID pair for
 // ttl. It rejects duplicates while the reservation remains active.
 func (s *RedisStore) Reserve(ctx context.Context, deviceSessionID string, requestID string, ttl time.Duration) error {
@@ -10,81 +10,64 @@ import (
 	"galaxy/gateway/internal/config"
 
 	"github.com/alicebob/miniredis/v2"
+	"github.com/redis/go-redis/v9"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
 
+func newRedisClient(t *testing.T, addr string) *redis.Client {
+	t.Helper()
+
+	client := redis.NewClient(&redis.Options{
+		Addr:            addr,
+		Protocol:        2,
+		DisableIdentity: true,
+	})
+	t.Cleanup(func() {
+		assert.NoError(t, client.Close())
+	})
+
+	return client
+}
+
 func TestNewRedisStore(t *testing.T) {
 	t.Parallel()
 
 	server := miniredis.RunT(t)
+	client := newRedisClient(t, server.Addr())
+
+	validCfg := config.ReplayRedisConfig{
+		KeyPrefix:      "gateway:replay:",
+		ReserveTimeout: 250 * time.Millisecond,
+	}
 
 	tests := []struct {
 		name    string
-		sessionCfg config.SessionCacheRedisConfig
-		replayCfg  config.ReplayRedisConfig
+		client  *redis.Client
+		cfg     config.ReplayRedisConfig
 		wantErr string
 	}{
+		{name: "valid config", client: client, cfg: validCfg},
+		{name: "nil client", client: nil, cfg: validCfg, wantErr: "nil redis client"},
 		{
-			name: "valid config",
-			sessionCfg: config.SessionCacheRedisConfig{
-				Addr: server.Addr(),
-				DB:   2,
-			},
-			replayCfg: config.ReplayRedisConfig{
-				KeyPrefix:      "gateway:replay:",
-				ReserveTimeout: 250 * time.Millisecond,
-			},
-		},
-		{
-			name: "empty redis addr",
-			replayCfg: config.ReplayRedisConfig{
-				KeyPrefix:      "gateway:replay:",
-				ReserveTimeout: 250 * time.Millisecond,
-			},
-			wantErr: "redis addr must not be empty",
-		},
-		{
-			name: "negative redis db",
-			sessionCfg: config.SessionCacheRedisConfig{
-				Addr: server.Addr(),
-				DB:   -1,
-			},
-			replayCfg: config.ReplayRedisConfig{
-				KeyPrefix:      "gateway:replay:",
-				ReserveTimeout: 250 * time.Millisecond,
-			},
-			wantErr: "redis db must not be negative",
-		},
-		{
-			name: "empty replay key prefix",
-			sessionCfg: config.SessionCacheRedisConfig{
-				Addr: server.Addr(),
-			},
-			replayCfg: config.ReplayRedisConfig{
-				ReserveTimeout: 250 * time.Millisecond,
-			},
+			name:    "empty replay key prefix",
+			client:  client,
+			cfg:     config.ReplayRedisConfig{ReserveTimeout: 250 * time.Millisecond},
 			wantErr: "replay key prefix must not be empty",
 		},
 		{
-			name: "non-positive reserve timeout",
-			sessionCfg: config.SessionCacheRedisConfig{
-				Addr: server.Addr(),
-			},
-			replayCfg: config.ReplayRedisConfig{
-				KeyPrefix: "gateway:replay:",
-			},
+			name:    "non-positive reserve timeout",
+			client:  client,
+			cfg:     config.ReplayRedisConfig{KeyPrefix: "gateway:replay:"},
 			wantErr: "reserve timeout must be positive",
 		},
 	}
 
 	for _, tt := range tests {
-		tt := tt
-
 		t.Run(tt.name, func(t *testing.T) {
 			t.Parallel()
 
-			store, err := NewRedisStore(tt.sessionCfg, tt.replayCfg)
+			store, err := NewRedisStore(tt.client, tt.cfg)
 			if tt.wantErr != "" {
 				require.Error(t, err)
 				require.ErrorContains(t, err, tt.wantErr)
@@ -92,28 +75,16 @@ func TestNewRedisStore(t *testing.T) {
 			}
 
 			require.NoError(t, err)
-			t.Cleanup(func() {
-				assert.NoError(t, store.Close())
-			})
+			require.NotNil(t, store)
 		})
 	}
 }
 
-func TestRedisStorePing(t *testing.T) {
-	t.Parallel()
-
-	server := miniredis.RunT(t)
-	store := newTestRedisStore(t, server, config.SessionCacheRedisConfig{}, config.ReplayRedisConfig{})
-
-	require.NoError(t, store.Ping(context.Background()))
-}
-
 func TestRedisStoreReserve(t *testing.T) {
 	t.Parallel()
 
 	tests := []struct {
 		name            string
-		sessionCfg      config.SessionCacheRedisConfig
 		replayCfg       config.ReplayRedisConfig
 		deviceSessionID string
 		requestID       string
@@ -170,13 +141,11 @@ func TestRedisStoreReserve(t *testing.T) {
 	}
 
 	for _, tt := range tests {
-		tt := tt
-
 		t.Run(tt.name, func(t *testing.T) {
 			t.Parallel()
 
 			server := miniredis.RunT(t)
-			store := newTestRedisStore(t, server, tt.sessionCfg, tt.replayCfg)
+			store := newTestRedisStore(t, server, tt.replayCfg)
 
 			err := store.Reserve(context.Background(), tt.deviceSessionID, tt.requestID, tt.ttl)
 			if tt.wantErrIs != nil || tt.wantErrText != "" {
@@ -201,17 +170,12 @@ func TestRedisStoreReserve(t *testing.T) {
 func TestRedisStoreReserveReturnsBackendError(t *testing.T) {
 	t.Parallel()
 
-	store, err := NewRedisStore(
-		config.SessionCacheRedisConfig{Addr: unusedTCPAddr(t)},
-		config.ReplayRedisConfig{
-			KeyPrefix:      "gateway:replay:",
-			ReserveTimeout: 100 * time.Millisecond,
-		},
-	)
-	require.NoError(t, err)
-	t.Cleanup(func() {
-		assert.NoError(t, store.Close())
-	})
+	client := newRedisClient(t, unusedTCPAddr(t))
+	store, err := NewRedisStore(client, config.ReplayRedisConfig{
+		KeyPrefix:      "gateway:replay:",
+		ReserveTimeout: 100 * time.Millisecond,
+	})
+	require.NoError(t, err)
 
 	err = store.Reserve(context.Background(), "device-session-123", "request-123", 5*time.Second)
 	require.Error(t, err)
@@ -219,12 +183,9 @@ func TestRedisStoreReserveReturnsBackendError(t *testing.T) {
 	assert.ErrorContains(t, err, "reserve replay request in redis")
 }
 
-func newTestRedisStore(t *testing.T, server *miniredis.Miniredis, sessionCfg config.SessionCacheRedisConfig, replayCfg config.ReplayRedisConfig) *RedisStore {
+func newTestRedisStore(t *testing.T, server *miniredis.Miniredis, replayCfg config.ReplayRedisConfig) *RedisStore {
 	t.Helper()
 
-	if sessionCfg.Addr == "" {
-		sessionCfg.Addr = server.Addr()
-	}
 	if replayCfg.KeyPrefix == "" {
 		replayCfg.KeyPrefix = "gateway:replay:"
 	}
@@ -232,11 +193,8 @@ func newTestRedisStore(t *testing.T, server *miniredis.Miniredis, sessionCfg con
 		replayCfg.ReserveTimeout = 250 * time.Millisecond
 	}
 
-	store, err := NewRedisStore(sessionCfg, replayCfg)
+	store, err := NewRedisStore(newRedisClient(t, server.Addr()), replayCfg)
 	require.NoError(t, err)
-	t.Cleanup(func() {
-		assert.NoError(t, store.Close())
-	})
 
 	return store
 }
@@ -3,7 +3,6 @@ package session
 import (
 	"bytes"
 	"context"
-	"crypto/tls"
 	"encoding/json"
 	"errors"
 	"fmt"
@@ -32,68 +31,27 @@ type redisRecord struct {
 	RevokedAtMS *int64 `json:"revoked_at_ms,omitempty"`
 }
 
-// NewRedisCache constructs a Redis-backed SessionCache from cfg. The returned
-// cache is read-only from the gateway perspective and does not write or mutate
-// Redis state.
-func NewRedisCache(cfg config.SessionCacheRedisConfig) (*RedisCache, error) {
-	if strings.TrimSpace(cfg.Addr) == "" {
-		return nil, errors.New("new redis session cache: redis addr must not be empty")
-	}
-	if cfg.DB < 0 {
-		return nil, errors.New("new redis session cache: redis db must not be negative")
-	}
+// NewRedisCache constructs a Redis-backed SessionCache that uses client and
+// applies the namespace and timeout settings from cfg. The cache does not own
+// the client; the runtime supplies a shared *redis.Client.
+func NewRedisCache(client *redis.Client, cfg config.SessionCacheRedisConfig) (*RedisCache, error) {
+	if client == nil {
+		return nil, errors.New("new redis session cache: nil redis client")
+	}
+	if strings.TrimSpace(cfg.KeyPrefix) == "" {
+		return nil, errors.New("new redis session cache: redis key prefix must not be empty")
+	}
 	if cfg.LookupTimeout <= 0 {
 		return nil, errors.New("new redis session cache: lookup timeout must be positive")
 	}
 
-	options := &redis.Options{
-		Addr:            cfg.Addr,
-		Username:        cfg.Username,
-		Password:        cfg.Password,
-		DB:              cfg.DB,
-		Protocol:        2,
-		DisableIdentity: true,
-	}
-	if cfg.TLSEnabled {
-		options.TLSConfig = &tls.Config{MinVersion: tls.VersionTLS12}
-	}
-
 	return &RedisCache{
-		client:        redis.NewClient(options),
+		client:        client,
 		keyPrefix:     cfg.KeyPrefix,
 		lookupTimeout: cfg.LookupTimeout,
 	}, nil
 }
 
-// Close releases the underlying Redis client resources.
-func (c *RedisCache) Close() error {
-	if c == nil || c.client == nil {
-		return nil
-	}
-
-	return c.client.Close()
-}
-
-// Ping verifies that the configured Redis backend is reachable within the
-// cache lookup timeout budget.
-func (c *RedisCache) Ping(ctx context.Context) error {
-	if c == nil || c.client == nil {
-		return errors.New("ping redis session cache: nil cache")
-	}
-	if ctx == nil {
-		return errors.New("ping redis session cache: nil context")
-	}
-
-	pingCtx, cancel := context.WithTimeout(ctx, c.lookupTimeout)
-	defer cancel()
-
-	if err := c.client.Ping(pingCtx).Err(); err != nil {
-		return fmt.Errorf("ping redis session cache: %w", err)
-	}
-
-	return nil
-}
-
 // Lookup resolves deviceSessionID from Redis, validates the cached JSON
 // payload strictly, and returns the decoded session record.
 func (c *RedisCache) Lookup(ctx context.Context, deviceSessionID string) (Record, error) {
@@ -10,61 +10,64 @@ import (
 	"galaxy/gateway/internal/config"
 
 	"github.com/alicebob/miniredis/v2"
+	"github.com/redis/go-redis/v9"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
 
+func newRedisClient(t *testing.T, server *miniredis.Miniredis) *redis.Client {
+	t.Helper()
+
+	client := redis.NewClient(&redis.Options{
+		Addr:            server.Addr(),
+		Protocol:        2,
+		DisableIdentity: true,
+	})
+	t.Cleanup(func() {
+		assert.NoError(t, client.Close())
+	})
+
+	return client
+}
+
 func TestNewRedisCache(t *testing.T) {
 	t.Parallel()
 
 	server := miniredis.RunT(t)
+	client := newRedisClient(t, server)
+
+	validCfg := config.SessionCacheRedisConfig{
+		KeyPrefix:     "gateway:session:",
+		LookupTimeout: 250 * time.Millisecond,
+	}
 
 	tests := []struct {
 		name    string
+		client  *redis.Client
 		cfg     config.SessionCacheRedisConfig
 		wantErr string
 	}{
+		{name: "valid config", client: client, cfg: validCfg},
+		{name: "nil client", client: nil, cfg: validCfg, wantErr: "nil redis client"},
 		{
-			name: "valid config",
-			cfg: config.SessionCacheRedisConfig{
-				Addr:          server.Addr(),
-				DB:            2,
-				KeyPrefix:     "gateway:session:",
-				LookupTimeout: 250 * time.Millisecond,
-			},
+			name:    "empty key prefix",
+			client:  client,
+			cfg:     config.SessionCacheRedisConfig{LookupTimeout: 250 * time.Millisecond},
+			wantErr: "redis key prefix must not be empty",
 		},
 		{
|
||||||
name: "empty addr",
|
name: "non-positive lookup timeout",
|
||||||
cfg: config.SessionCacheRedisConfig{
|
client: client,
|
||||||
LookupTimeout: 250 * time.Millisecond,
|
cfg: config.SessionCacheRedisConfig{KeyPrefix: "gateway:session:"},
|
||||||
},
|
|
||||||
wantErr: "redis addr must not be empty",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "negative db",
|
|
||||||
cfg: config.SessionCacheRedisConfig{
|
|
||||||
Addr: server.Addr(),
|
|
||||||
DB: -1,
|
|
||||||
LookupTimeout: 250 * time.Millisecond,
|
|
||||||
},
|
|
||||||
wantErr: "redis db must not be negative",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "non-positive lookup timeout",
|
|
||||||
cfg: config.SessionCacheRedisConfig{
|
|
||||||
Addr: server.Addr(),
|
|
||||||
},
|
|
||||||
wantErr: "lookup timeout must be positive",
|
wantErr: "lookup timeout must be positive",
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
for _, tt := range tests {
|
for _, tt := range tests {
|
||||||
tt := tt
|
|
||||||
|
|
||||||
t.Run(tt.name, func(t *testing.T) {
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
t.Parallel()
|
t.Parallel()
|
||||||
|
|
||||||
cache, err := NewRedisCache(tt.cfg)
|
cache, err := NewRedisCache(tt.client, tt.cfg)
|
||||||
if tt.wantErr != "" {
|
if tt.wantErr != "" {
|
||||||
require.Error(t, err)
|
require.Error(t, err)
|
||||||
require.ErrorContains(t, err, tt.wantErr)
|
require.ErrorContains(t, err, tt.wantErr)
|
||||||
@@ -72,22 +75,11 @@ func TestNewRedisCache(t *testing.T) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
t.Cleanup(func() {
|
require.NotNil(t, cache)
|
||||||
assert.NoError(t, cache.Close())
|
|
||||||
})
|
|
||||||
})
|
})
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestRedisCachePing(t *testing.T) {
|
|
||||||
t.Parallel()
|
|
||||||
|
|
||||||
server := miniredis.RunT(t)
|
|
||||||
cache := newTestRedisCache(t, server, config.SessionCacheRedisConfig{})
|
|
||||||
|
|
||||||
require.NoError(t, cache.Ping(context.Background()))
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestRedisCacheLookup(t *testing.T) {
|
func TestRedisCacheLookup(t *testing.T) {
|
||||||
t.Parallel()
|
t.Parallel()
|
||||||
|
|
||||||
@@ -259,8 +251,6 @@ func TestRedisCacheLookup(t *testing.T) {
|
|||||||
server := miniredis.RunT(t)
|
server := miniredis.RunT(t)
|
||||||
|
|
||||||
cfg := tt.cfg
|
cfg := tt.cfg
|
||||||
cfg.Addr = server.Addr()
|
|
||||||
cfg.DB = 0
|
|
||||||
cfg.LookupTimeout = 250 * time.Millisecond
|
cfg.LookupTimeout = 250 * time.Millisecond
|
||||||
|
|
||||||
if tt.seed != nil {
|
if tt.seed != nil {
|
||||||
@@ -292,20 +282,16 @@ func TestRedisCacheLookup(t *testing.T) {
|
|||||||
func newTestRedisCache(t *testing.T, server *miniredis.Miniredis, cfg config.SessionCacheRedisConfig) *RedisCache {
|
func newTestRedisCache(t *testing.T, server *miniredis.Miniredis, cfg config.SessionCacheRedisConfig) *RedisCache {
|
||||||
t.Helper()
|
t.Helper()
|
||||||
|
|
||||||
if cfg.Addr == "" {
|
if cfg.KeyPrefix == "" {
|
||||||
cfg.Addr = server.Addr()
|
cfg.KeyPrefix = "gateway:session:"
|
||||||
}
|
}
|
||||||
if cfg.LookupTimeout == 0 {
|
if cfg.LookupTimeout == 0 {
|
||||||
cfg.LookupTimeout = 250 * time.Millisecond
|
cfg.LookupTimeout = 250 * time.Millisecond
|
||||||
}
|
}
|
||||||
|
|
||||||
cache, err := NewRedisCache(cfg)
|
cache, err := NewRedisCache(newRedisClient(t, server), cfg)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
t.Cleanup(func() {
|
|
||||||
assert.NoError(t, cache.Close())
|
|
||||||
})
|
|
||||||
|
|
||||||
return cache
|
return cache
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@@ -20,6 +20,7 @@ import (
 	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
 	"go.opentelemetry.io/otel/sdk/resource"
 	sdktrace "go.opentelemetry.io/otel/sdk/trace"
+	oteltrace "go.opentelemetry.io/otel/trace"
 	"go.uber.org/zap"
 )
 
@@ -149,6 +150,26 @@ func (r *Runtime) Handler() http.Handler {
 	return r.promHandler
 }
 
+// TracerProvider returns the runtime tracer provider, falling back to the
+// global one when r is not initialised.
+func (r *Runtime) TracerProvider() oteltrace.TracerProvider {
+	if r == nil || r.tracerProvider == nil {
+		return otel.GetTracerProvider()
+	}
+
+	return r.tracerProvider
+}
+
+// MeterProvider returns the runtime meter provider, falling back to the
+// global one when r is not initialised.
+func (r *Runtime) MeterProvider() metric.MeterProvider {
+	if r == nil || r.meterProvider == nil {
+		return otel.GetMeterProvider()
+	}
+
+	return r.meterProvider
+}
+
 // Shutdown flushes the configured telemetry providers.
 func (r *Runtime) Shutdown(ctx context.Context) error {
 	if r == nil {
@@ -13,6 +13,149 @@ Execution priorities:
 - Defer threshold tuning until after the basic data model is working.
 - Avoid unnecessary infrastructure on the first iteration.
 
+## Stage 00 — Persistence Stack and Backend Assignment
+
+Goal:
+
+- Pin the platform-wide persistence stack and the per-service backend
+  ownership before any feature stage begins, so that subsequent stages
+  design schemas, queries, and worker loops consistently with the
+  project-wide rules in
+  [`../ARCHITECTURE.md §Persistence Backends`](../ARCHITECTURE.md#persistence-backends)
+  and the staged migration plan in
+  [`../PG_PLAN.md`](../PG_PLAN.md).
+
+This stage is documentation-only: no code exists in this service yet, and
+this stage adds none. It is a prerequisite to every later stage and ships
+as part of `PG_PLAN.md` Stage 8.
+
+Tasks:
+
+- Adopt the shared Postgres helper [`pkg/postgres`](../pkg/postgres) for
+  every durable storage path:
+
+  - driver `github.com/jackc/pgx/v5`, exposed as `*sql.DB` via
+    `github.com/jackc/pgx/v5/stdlib`;
+  - query layer `github.com/go-jet/jet/v2` (PostgreSQL dialect) with
+    generated code under `internal/adapters/postgres/jet/`, regenerated
+    by a per-service `make jet` target and committed to the repo;
+  - migrations via `github.com/pressly/goose/v3` library API embedded
+    with `//go:embed`, applied at service startup before any HTTP
+    listener becomes ready, with non-zero exit on failure;
+  - `github.com/testcontainers/testcontainers-go` (`modules/postgres`)
+    for unit tests and for hosting the transient instance used by
+    `make jet`.
+- Adopt the shared Redis helper [`pkg/redisconn`](../pkg/redisconn) for
+  every Redis client:
+
+  - master/replica/password connection shape;
+  - mandatory password;
+  - no `TLS_ENABLED`, no `USERNAME` (rejected at startup with a clear
+    error from `pkg/redisconn.LoadFromEnv`).
+- Own the `geoprofile` schema in the shared `galaxy` PostgreSQL database.
+  Connect with a dedicated `geoprofile` PG role whose grants are
+  restricted to its own schema (defense-in-depth, expressed in the
+  initial migration).
+- Lay out the postgres-backed adapter directory consistently with the
+  PG-migrated services:
+
+  ```text
+  geoprofile/
+    internal/
+      adapters/
+        postgres/
+          migrations/   # *.sql files + migrations.go (//go:embed)
+          jet/          # generated code, commit-checked
+          <storeName>/  # adapter implementations matching
+                        # internal/ports
+      config/
+        config.go       # Postgres + Redis schemas
+    Makefile            # `jet` target: testcontainers + goose + jet
+  ```
+- Backend assignment for the entities listed in
+  [`README.md §Data Entities`](README.md#data-entities):
+
+  - PostgreSQL (`geoprofile` schema, source of truth):
+
+    - `country_observation` — durable observed-country fact rows.
+    - `device_session_country_score` — per-`device_session_id` weighted
+      country aggregates.
+    - `device_session_geo_state` — current `usual_connection_country`
+      per `device_session_id`.
+    - `user_review_state` — `country_review_recommended` flag and last
+      evaluation timestamp.
+    - `declared_country_version` — immutable history of approved
+      `declared_country` changes (with version status `recorded` /
+      `applied` / `sync_failed`).
+    - `session_block_action` — local audit of block-request outcomes.
+    - Ingest-queue lifecycle from §Stage 05 (`accepted` / `processing` /
+      `processed` / `failed`) is materialised as `status` /
+      `next_attempt_at` columns on the durable observation row, not as a
+      Redis ZSET. Workers select pending work via
+      `SELECT ... FOR UPDATE SKIP LOCKED`, mirroring the pattern already
+      in use by Mail and Notification.
+  - Redis (`pkg/redisconn`):
+
+    - only ephemeral runtime-coordination signals if any appear during
+      implementation — for example, transition-deduplication windows for
+      review-flag notifications, short worker leases on processing
+      claims. No durable business state.
+    - the `notification:intents` Redis Stream is used by this service
+      only as a producer to publish `geo.review_recommended` intents
+      (see §Stage 11 and `README.md §Integration with Notification
+      Service`); that connection is built via `pkg/redisconn`.
+- **Idempotency**, if added for ingest deduplication, is a `UNIQUE`
+  constraint on the durable observation row, never a separate Redis kv.
+  **Retry scheduling**, if added for worker reprocessing or
+  `User Service` sync retries, is a column on the durable record, worked
+  off via `FOR UPDATE SKIP LOCKED`. Both rules align this service with
+  the platform-wide pattern.
+- Time-valued columns are `timestamptz`. Adapters normalise every
+  `time.Time` value crossing the SQL boundary to `time.UTC` on bind and
+  scan, per
+  `../ARCHITECTURE.md §Persistence Backends — Timestamp handling`.
+- Configuration (target):
+
+  - PostgreSQL knobs (loaded via
+    `pkg/postgres.LoadFromEnv("GEOPROFILE")`):
+
+    - `GEOPROFILE_POSTGRES_PRIMARY_DSN` (required;
+      `postgres://geoprofile:<pwd>@<host>:5432/galaxy?search_path=geoprofile&sslmode=disable`);
+    - `GEOPROFILE_POSTGRES_REPLICA_DSNS` (optional, comma-separated;
+      reserved for future read-routing, not consumed yet);
+    - `GEOPROFILE_POSTGRES_OPERATION_TIMEOUT`,
+      `GEOPROFILE_POSTGRES_MAX_OPEN_CONNS`,
+      `GEOPROFILE_POSTGRES_MAX_IDLE_CONNS`,
+      `GEOPROFILE_POSTGRES_CONN_MAX_LIFETIME`.
+  - Redis knobs (loaded via
+    `pkg/redisconn.LoadFromEnv("GEOPROFILE")`):
+
+    - `GEOPROFILE_REDIS_MASTER_ADDR` (required),
+      `GEOPROFILE_REDIS_REPLICA_ADDRS` (optional, comma-separated);
+    - `GEOPROFILE_REDIS_PASSWORD` (required);
+    - `GEOPROFILE_REDIS_DB`,
+      `GEOPROFILE_REDIS_OPERATION_TIMEOUT`.
+- Per-service decision record `geoprofile/docs/postgres-migration.md`
+  is created by the stage that actually implements the service. It must
+  capture: schema and role grants, queue materialisation choice, retry
+  pattern, and any non-trivial deviation from the platform-wide rules
+  (analogous to
+  [`../user/docs/postgres-migration.md`](../user/docs/postgres-migration.md),
+  [`../mail/docs/postgres-migration.md`](../mail/docs/postgres-migration.md),
+  [`../notification/docs/postgres-migration.md`](../notification/docs/postgres-migration.md),
+  and [`../lobby/docs/postgres-migration.md`](../lobby/docs/postgres-migration.md)).
+
+Exit criteria:
+
+- The persistence stack and schema ownership are fixed and visible to
+  implementers.
+- Every later stage (Stage 01+) designs schemas and queries on top of
+  the `geoprofile` Postgres schema, or — for any ephemeral signal — on
+  top of `pkg/redisconn`.
+- `../ARCHITECTURE.md §Persistence Backends` and `../PG_PLAN.md` remain
+  the canonical references; this PLAN points at them rather than
+  duplicating their content.
+
 ## Stage 01 — Freeze Service Vocabulary and Contracts
 
 Goal:
@@ -643,6 +786,7 @@ Exit criteria:
 
 Recommended delivery order:
 
+- Persistence stack and backend assignment
 - Domain vocabulary and ownership
 - Domain model
 - FlatBuffers schema
@@ -137,6 +137,78 @@ To avoid divergence:
 - Geo Profile Service must then synchronously update the current value in `User Service`.
 - A version should become effective only after the `User Service` update succeeds.
 
+## Persistence Backends
+
+The service follows the platform-wide split described in
+[`../ARCHITECTURE.md §Persistence Backends`](../ARCHITECTURE.md#persistence-backends);
+the staged migration plan that established this split is
+[`../PG_PLAN.md`](../PG_PLAN.md). Per-service decisions and any deviation
+from the platform-wide rules will be captured in
+`docs/postgres-migration.md` once implementation begins, in the same
+shape as
+[`../user/docs/postgres-migration.md`](../user/docs/postgres-migration.md),
+[`../mail/docs/postgres-migration.md`](../mail/docs/postgres-migration.md),
+[`../notification/docs/postgres-migration.md`](../notification/docs/postgres-migration.md),
+and [`../lobby/docs/postgres-migration.md`](../lobby/docs/postgres-migration.md).
+
+Geo Profile Service owns the `geoprofile` schema in the shared `galaxy`
+PostgreSQL database. A dedicated `geoprofile` PG role connects with grants
+restricted to its own schema (defense-in-depth, expressed in the initial
+migration).
+
+PostgreSQL is the source of truth for all durable
+[§Data Entities](#data-entities) of the service:
+
+- `country_observation` — durable observed-country fact rows.
+- `device_session_country_score` — per-`device_session_id` weighted
+  ranking.
+- `device_session_geo_state` — current `usual_connection_country` per
+  `device_session_id`.
+- `user_review_state` — `country_review_recommended` plus last evaluation
+  timestamp.
+- `declared_country_version` — immutable history of approved
+  `declared_country` changes (status `recorded` / `applied` /
+  `sync_failed`).
+- `session_block_action` — local audit of block-request outcomes.
+- Ingest-queue lifecycle (`accepted` / `processing` / `processed` /
+  `failed`, see [§Internal Queue and Worker Pipeline](#internal-queue-and-worker-pipeline))
+  is materialised as `status` / `next_attempt_at` columns on the durable
+  observation row and worked off via
+  `SELECT ... FOR UPDATE SKIP LOCKED` — the same pattern Mail and
+  Notification already use for their durable retry schedules.
+
+Redis carries only ephemeral runtime-coordination signals if and when
+they appear during implementation (short worker leases on processing
+claims, transition-deduplication windows for review-flag notifications).
+No durable business state lives on Redis. The `notification:intents`
+Redis Stream is used solely as a producer channel through which this
+service publishes `geo.review_recommended` intents (see
+[§Integration with Notification Service](#integration-with-notification-service));
+that connection is built via `pkg/redisconn`.
+
+Stack:
+
+- driver `github.com/jackc/pgx/v5`, exposed as `*sql.DB` via
+  `github.com/jackc/pgx/v5/stdlib`;
+- query layer `github.com/go-jet/jet/v2` (PostgreSQL dialect) with
+  generated code committed under `internal/adapters/postgres/jet/` and
+  regenerated by `make jet`;
+- migrations via `github.com/pressly/goose/v3` library API embedded with
+  `//go:embed`, applied at service startup before any listener becomes
+  ready (non-zero exit on failure);
+- testcontainers-backed unit tests using
+  `github.com/testcontainers/testcontainers-go/modules/postgres`;
+- all Postgres connections are opened through
+  [`pkg/postgres`](../pkg/postgres); all Redis connections through
+  [`pkg/redisconn`](../pkg/redisconn).
+
+Every `time.Time` value crossing the SQL boundary is normalised to UTC
+on bind and scan, per the platform-wide rule on `timestamptz` handling.
+
+The full target environment-variable matrix
+(`GEOPROFILE_POSTGRES_*`, `GEOPROFILE_REDIS_*`) is fixed in
+[`PLAN.md` Stage 00](PLAN.md#stage-00--persistence-stack-and-backend-assignment).
+
 ## High-Level Architecture
 
 ```mermaid
@@ -15,6 +15,8 @@ use (
	./pkg/geoip
	./pkg/model
	./pkg/notificationintent
+	./pkg/postgres
+	./pkg/redisconn
	./pkg/schema
	./pkg/storage
	./pkg/transcoder
@@ -29,6 +31,8 @@ replace (
	galaxy/geoip v0.0.0 => ./pkg/geoip
	galaxy/model v0.0.0 => ./pkg/model
	galaxy/notificationintent v0.0.0 => ./pkg/notificationintent
+	galaxy/postgres v0.0.0 => ./pkg/postgres
+	galaxy/redisconn v0.0.0 => ./pkg/redisconn
	galaxy/schema v0.0.0 => ./pkg/schema
	galaxy/storage v0.0.0 => ./pkg/storage
	galaxy/transcoder v0.0.0 => ./pkg/transcoder
+62 -1
@@ -1,32 +1,58 @@
 buf.build/go/hyperpb v0.1.3/go.mod h1:IHXAM5qnS0/Fsnd7/HGDghFNvUET646WoHmq1FDZXIE=
 cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k=
 cloud.google.com/go/compute/metadata v0.9.0/go.mod h1:E0bWwX5wTnLPedCKqk3pJmVgCBSM6qQI1yTBdEb3C10=
+filippo.io/edwards25519 v1.2.0/go.mod h1:xzAOLCNug/yB62zG1bQ8uziwrIqIuxhctzJT18Q77mc=
+github.com/ClickHouse/ch-go v0.71.0/go.mod h1:NwbNc+7jaqfY58dmdDUbG4Jl22vThgx1cYjBw0vtgXw=
+github.com/ClickHouse/clickhouse-go/v2 v2.45.0/go.mod h1:giJfUVlMkcfUEPVfRpt51zZaGEx9i17gCos8gBl392c=
 github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.31.0/go.mod h1:P4WPRUkOhJC13W//jWpyfJNDAIpvRbAUIYLX/4jtlE0=
 github.com/akavel/rsrc v0.10.2/go.mod h1:uLoCtb9J+EyAqh+26kdrTgmzRBFPGOolLWKpdxkKq+c=
 github.com/alecthomas/kingpin/v2 v2.4.0/go.mod h1:0gyi0zQnjuFk8xrkNKamJoyUo382HRL7ATRpFZCw6tE=
 github.com/alecthomas/units v0.0.0-20240927000941-0f3dac36c52b/go.mod h1:fvzegU4vN3H1qMT+8wDmzjAcDONcgo2/SZ/TyfdUOFs=
+github.com/andybalholm/brotli v1.2.1/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
 github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
 github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
 github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
 github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
 github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
 github.com/cncf/xds/go v0.0.0-20251210132809-ee656c7534f5/go.mod h1:KdCmV+x/BuvyMxRnYBlmVaq4OLiKW6iRQfvC62cvdkI=
+github.com/coder/websocket v1.8.14/go.mod h1:NX3SzP+inril6yawo5CQXx8+fk145lPDC6pumgx0mVg=
 github.com/containerd/typeurl/v2 v2.2.0/go.mod h1:8XOOxnyatxSWuG8OfsZXVnAF4iZfedjS/8UHSPJnX4g=
 github.com/cpuguy83/go-md2man/v2 v2.0.1/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
+github.com/elastic/go-sysinfo v1.15.4/go.mod h1:ZBVXmqS368dOn/jvijV/zHLfakWTYHBZPk3G244lHrU=
+github.com/elastic/go-windows v1.0.2/go.mod h1:bGcDpBzXgYSqM0Gx3DM4+UxFj300SZLixie9u9ixLM8=
 github.com/envoyproxy/go-control-plane v0.14.0/go.mod h1:NcS5X47pLl/hfqxU70yPwL9ZMkUlwlKxtAohpi2wBEU=
 github.com/envoyproxy/go-control-plane/envoy v1.36.0/go.mod h1:ty89S1YCCVruQAm9OtKeEkQLTb+Lkz0k8v9W0Oxsv98=
 github.com/envoyproxy/go-control-plane/ratelimit v0.1.0/go.mod h1:Wk+tMFAFbCXaJPzVVHnPgRKdUdwW/KdbRt94AzgRee4=
 github.com/envoyproxy/protoc-gen-validate v1.3.0/go.mod h1:HvYl7zwPa5mffgyeTUHA9zHIH36nmrm7oCbo4YKoSWA=
 github.com/francoispqt/gojay v1.2.13/go.mod h1:ehT5mTG4ua4581f1++1WLG0vPdaA9HaiDsoyrBGkyDY=
+github.com/friendsofgo/errors v0.9.2/go.mod h1:yCvFW5AkDIL9qn7suHVLiI/gH228n7PC4Pn44IGoTOI=
+github.com/go-faster/city v1.0.1/go.mod h1:jKcUJId49qdW3L1qKHH/3wPeUstCVpVSXTM6vO3VcTw=
+github.com/go-faster/errors v0.7.1/go.mod h1:5ySTjWFiphBs07IKuiL69nxdfd5+fzh1u7FPGZP2quo=
 github.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
+github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
+github.com/go-sql-driver/mysql v1.9.3/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU=
 github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
+github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
 github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
+github.com/golang-sql/civil v0.0.0-20220223132316-b832511892a9/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0=
+github.com/golang-sql/sqlexp v0.1.0/go.mod h1:J4ad9Vo8ZCWQ2GMrC4UCQy1JpCbwU9m3EOqtpKwwwHI=
 github.com/golang/glog v1.2.5/go.mod h1:6AhwSGph0fcJtXVM/PEHPqZlFeoLxhs7/t5UDAwmO+w=
 github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
 github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
 github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
 github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
+github.com/jackc/chunkreader v1.0.0 h1:4s39bBR8ByfqH+DKm8rQA3E1LHZWB9XWcrz8fqaZbe0=
+github.com/jackc/chunkreader/v2 v2.0.1/go.mod h1:odVSm741yZoC3dpHEUXIqA9tQRhFrgOHwnPIn9lDKlk=
+github.com/jackc/pgconn v1.14.3/go.mod h1:RZbme4uasqzybK2RK5c65VsHxoyaml09lx3tXOcO/VM=
+github.com/jackc/pgio v1.0.0/go.mod h1:oP+2QK2wFfUWgr+gxjoBH9KGBb31Eio69xUb0w5bYf8=
+github.com/jackc/pgproto3 v1.1.0 h1:FYYE4yRw+AgI8wXIinMlNjBbp/UitDJwfj5LqqewP1A=
+github.com/jackc/pgproto3/v2 v2.3.3/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
+github.com/jackc/pgtype v1.14.4/go.mod h1:aKeozOde08iifGosdJpz9MBZonJOUJxqNpPBcMJTlVA=
+github.com/jackc/pgx/v4 v4.18.3/go.mod h1:Ey4Oru5tH5sB6tV7hDmfWFahwF15Eb7DNXlRKx2CkVw=
+github.com/jackc/puddle v1.3.0 h1:eHK/5clGOatcjX3oWGBO/MpxpbHzSwud5EWTSCI+MX0=
 github.com/jackmordaunt/icns/v2 v2.2.6/go.mod h1:DqlVnR5iafSphrId7aSD06r3jg0KRC9V6lEBBp504ZQ=
+github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
+github.com/jonboulle/clockwork v0.5.0/go.mod h1:3mZlmanh0g2NDKO5TWZVJAfofYk64M7XN3SzBPjZF60=
 github.com/jordanlewis/gcassert v0.0.0-20250430164644-389ef753e22e/go.mod h1:ZybsQk6DWyN5t7An1MuPm1gtSZ1xDaTXS9ZjIOxvQrk=
 github.com/josephspurrier/goversioninfo v1.4.0/go.mod h1:JWzv5rKQr+MmW+LvM412ToT/IkYDZjaclF2pKDss8IY=
 github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
@@ -36,13 +62,18 @@ github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfn
 github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
 github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
 github.com/lucor/goinfo v0.9.0/go.mod h1:L6m6tN5Rlova5Z83h1ZaKsMP1iiaoZ9vGTNzu5QKOD4=
+github.com/mattn/go-sqlite3 v1.14.28/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
 github.com/mcuadros/go-version v0.0.0-20190830083331-035f6764e8d2/go.mod h1:76rfSfYPWj01Z85hUf/ituArm797mNKcvINh1OlsZKo=
+github.com/mfridman/xflag v0.1.0/go.mod h1:/483ywM5ZO5SuMVjrIGquYNE5CzLrj5Ux/LxWWnjRaE=
+github.com/microsoft/go-mssqldb v1.9.8/go.mod h1:eGSRSGAW4hKMy5YcAenhCDjIRm2rhqIdmmwgciMzLus=
 github.com/moby/sys/mount v0.3.4/go.mod h1:KcQJMbQdJHPlq5lcYT+/CjatWM4PuxKe+XLSVS4J6Os=
 github.com/moby/sys/mountinfo v0.7.2/go.mod h1:1YOa8w8Ih7uW0wALDUgT1dTTSBrZ+HiBLGws92L2RU4=
 github.com/moby/sys/reexec v0.1.0/go.mod h1:EqjBg8F3X7iZe5pU6nRZnYCMUTXoxsjiIfHup5wYIN8=
 github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
 github.com/natefinch/atomic v1.0.1/go.mod h1:N/D/ELrljoqDyT3rZrsUmtsuzvHkeB/wWjHV22AZRbM=
 github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
+github.com/paulmach/orb v0.13.0/go.mod h1:6scRWINywA2Jf05dcjOfLfxrUIMECvTSG2MVbRLxu/k=
+github.com/pierrec/lz4/v4 v4.1.26/go.mod h1:EoQMVJgeeEOMsCqCzqFm2O0cJvljX2nGZjcRIPL34O4=
 github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
|
||||||
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
|
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
|
||||||
github.com/prometheus/client_golang v1.19.1/go.mod h1:mP78NwGzrVks5S2H6ab8+ZZGJLZUq1hoULYBAYBw1Ho=
|
github.com/prometheus/client_golang v1.19.1/go.mod h1:mP78NwGzrVks5S2H6ab8+ZZGJLZUq1hoULYBAYBw1Ho=
|
||||||
@@ -56,23 +87,41 @@ github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99
|
|||||||
github.com/russross/blackfriday v1.6.0/go.mod h1:ti0ldHuxg49ri4ksnFxlkCfN+hvslNlmVHqNRXXJNAY=
|
github.com/russross/blackfriday v1.6.0/go.mod h1:ti0ldHuxg49ri4ksnFxlkCfN+hvslNlmVHqNRXXJNAY=
|
||||||
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
|
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
|
||||||
github.com/santhosh-tekuri/jsonschema/v5 v5.3.1/go.mod h1:uToXkOrWAZ6/Oc07xWQrPOhJotwFIyu2bBVN41fcDUY=
|
github.com/santhosh-tekuri/jsonschema/v5 v5.3.1/go.mod h1:uToXkOrWAZ6/Oc07xWQrPOhJotwFIyu2bBVN41fcDUY=
|
||||||
|
github.com/segmentio/asm v1.2.1/go.mod h1:BqMnlJP91P8d+4ibuonYZw9mfnzI9HfxselHZr5aAcs=
|
||||||
|
github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
|
||||||
github.com/spiffe/go-spiffe/v2 v2.6.0/go.mod h1:gm2SeUoMZEtpnzPNs2Csc0D/gX33k1xIx7lEzqblHEs=
|
github.com/spiffe/go-spiffe/v2 v2.6.0/go.mod h1:gm2SeUoMZEtpnzPNs2Csc0D/gX33k1xIx7lEzqblHEs=
|
||||||
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||||
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
|
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
|
||||||
github.com/stretchr/testify v1.11.0/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
|
github.com/stretchr/testify v1.11.0/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
|
||||||
github.com/timandy/routine v1.1.6/go.mod h1:kXslgIosdY8LW0byTyPnenDgn4/azt2euufAq9rK51w=
|
github.com/timandy/routine v1.1.6/go.mod h1:kXslgIosdY8LW0byTyPnenDgn4/azt2euufAq9rK51w=
|
||||||
|
github.com/tursodatabase/libsql-client-go v0.0.0-20251219100830-236aa1ff8acc/go.mod h1:08inkKyguB6CGGssc/JzhmQWwBgFQBgjlYFjxjRh7nU=
|
||||||
github.com/urfave/cli/v2 v2.4.0/go.mod h1:NX9W0zmTvedE5oDoOMs2RTC8RvdK98NTYZE5LbaEYPg=
|
github.com/urfave/cli/v2 v2.4.0/go.mod h1:NX9W0zmTvedE5oDoOMs2RTC8RvdK98NTYZE5LbaEYPg=
|
||||||
|
github.com/vertica/vertica-sql-go v1.3.6/go.mod h1:jnn2GFuv+O2Jcjktb7zyc4Utlbu9YVqpHH/lx63+1M4=
|
||||||
|
github.com/volatiletech/inflect v0.0.1/go.mod h1:IBti31tG6phkHitLlr5j7shC5SOo//x0AjDzaJU1PLA=
|
||||||
|
github.com/volatiletech/null/v8 v8.1.2/go.mod h1:98DbwNoKEpRrYtGjWFctievIfm4n4MxG0A6EBUcoS5g=
|
||||||
|
github.com/volatiletech/randomize v0.0.1/go.mod h1:GN3U0QYqfZ9FOJ67bzax1cqZ5q2xuj2mXrXBjWaRTlY=
|
||||||
|
github.com/volatiletech/strmangle v0.0.1/go.mod h1:F6RA6IkB5vq0yTG4GQ0UsbbRcl3ni9P76i+JrTBKFFg=
|
||||||
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
|
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
|
||||||
github.com/xdg-go/scram v1.2.0/go.mod h1:3dlrS0iBaWKYVt2ZfA4cj48umJZ+cAEbR6/SjLA88I8=
|
github.com/xdg-go/scram v1.2.0/go.mod h1:3dlrS0iBaWKYVt2ZfA4cj48umJZ+cAEbR6/SjLA88I8=
|
||||||
github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
|
github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
|
||||||
github.com/xhit/go-str2duration/v2 v2.1.0/go.mod h1:ohY8p+0f07DiV6Em5LKB0s2YpLtXVyJfNt1+BlmyAsU=
|
github.com/xhit/go-str2duration/v2 v2.1.0/go.mod h1:ohY8p+0f07DiV6Em5LKB0s2YpLtXVyJfNt1+BlmyAsU=
|
||||||
|
github.com/ydb-platform/ydb-go-genproto v0.0.0-20260311095541-ebbf792c1180/go.mod h1:Er+FePu1dNUieD+XTMDduGpQuCPssK5Q4BjF+IIXJ3I=
|
||||||
|
github.com/ydb-platform/ydb-go-sdk/v3 v3.135.0/go.mod h1:VYUUkRJkKuQPkIpgtZJj6+58Fa2g8ccAqdmaaK6HP5k=
|
||||||
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78/go.mod h1:aL8wCCfTfSfmXjznFBSZNN13rSJjlIOI1fUNAtF7rmI=
|
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78/go.mod h1:aL8wCCfTfSfmXjznFBSZNN13rSJjlIOI1fUNAtF7rmI=
|
||||||
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
|
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
|
||||||
|
github.com/ziutek/mymysql v1.5.4/go.mod h1:LMSpPZ6DbqWFxNCHW77HeMg9I646SAhApZ/wKdgO/C0=
|
||||||
|
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
|
||||||
go.opentelemetry.io/contrib/detectors/gcp v1.39.0/go.mod h1:t/OGqzHBa5v6RHZwrDBJ2OirWc+4q/w2fTbLZwAKjTk=
|
go.opentelemetry.io/contrib/detectors/gcp v1.39.0/go.mod h1:t/OGqzHBa5v6RHZwrDBJ2OirWc+4q/w2fTbLZwAKjTk=
|
||||||
|
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.60.0/go.mod h1:69uWxva0WgAA/4bu2Yy70SLDBwZXuQ6PbBpbsa5iZrQ=
|
||||||
|
go.opentelemetry.io/otel v1.35.0/go.mod h1:UEqy8Zp11hpkUrL73gSlELM0DupHoiq72dR+Zqel/+Y=
|
||||||
|
go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
|
||||||
go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
|
go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
|
||||||
|
go.opentelemetry.io/otel/metric v1.35.0/go.mod h1:nKVFgxBZ2fReX6IlyW28MgZojkoAkJGaE8CpgeAU3oE=
|
||||||
go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
|
go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
|
||||||
go.opentelemetry.io/otel/sdk v1.39.0/go.mod h1:vDojkC4/jsTJsE+kh+LXYQlbL8CgrEcwmt1ENZszdJE=
|
go.opentelemetry.io/otel/sdk v1.39.0/go.mod h1:vDojkC4/jsTJsE+kh+LXYQlbL8CgrEcwmt1ENZszdJE=
|
||||||
go.opentelemetry.io/otel/sdk/metric v1.39.0/go.mod h1:xq9HEVH7qeX69/JnwEfp6fVq5wosJsY1mt4lLfYdVew=
|
go.opentelemetry.io/otel/sdk/metric v1.39.0/go.mod h1:xq9HEVH7qeX69/JnwEfp6fVq5wosJsY1mt4lLfYdVew=
|
||||||
|
go.opentelemetry.io/otel/trace v1.35.0/go.mod h1:WUk7DtFp1Aw2MkvqGdwiXYDZZNvA/1J8o6xRXLrIkyc=
|
||||||
|
go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
|
||||||
go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
|
go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
|
||||||
go.uber.org/mock v0.5.2/go.mod h1:wLlUxC2vVTPTaE3UD51E0BGOAElKrILxhVSDYQLld5o=
|
go.uber.org/mock v0.5.2/go.mod h1:wLlUxC2vVTPTaE3UD51E0BGOAElKrILxhVSDYQLld5o=
|
||||||
golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
|
golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
|
||||||
@@ -80,8 +129,10 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACk
|
|||||||
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
|
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
|
||||||
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
|
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
|
||||||
golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1mg=
|
golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1mg=
|
||||||
|
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
|
||||||
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
|
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
|
||||||
golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
|
golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
|
||||||
|
golang.org/x/exp v0.0.0-20260410095643-746e56fc9e2f/go.mod h1:J1xhfL/vlindoeF/aINzNzt2Bket5bjo9sdOYzOsU80=
|
||||||
golang.org/x/mobile v0.0.0-20231127183840-76ac6878050a/go.mod h1:Ede7gF0KGoHlj822RtphAHK1jLdrcuRBZg0sF1Q+SPc=
|
golang.org/x/mobile v0.0.0-20231127183840-76ac6878050a/go.mod h1:Ede7gF0KGoHlj822RtphAHK1jLdrcuRBZg0sF1Q+SPc=
|
||||||
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
|
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
|
||||||
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
|
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
|
||||||
@@ -99,6 +150,7 @@ golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
|
|||||||
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
|
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
|
||||||
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
|
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
|
||||||
golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY=
|
golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY=
|
||||||
|
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
|
||||||
golang.org/x/net v0.28.0/go.mod h1:yqtgsTWOOnlGLG9GFRrK3++bGOUEkNBoHZc8MEDWPNg=
|
golang.org/x/net v0.28.0/go.mod h1:yqtgsTWOOnlGLG9GFRrK3++bGOUEkNBoHZc8MEDWPNg=
|
||||||
golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
|
golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
|
||||||
golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
|
golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
|
||||||
@@ -114,7 +166,6 @@ golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
|
|||||||
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
|
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
|
||||||
golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
|
golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
|
||||||
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
|
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
|
||||||
golang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=
|
|
||||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||||
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
@@ -122,12 +173,16 @@ golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBc
|
|||||||
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
|
golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||||
|
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||||
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||||
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||||
|
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||||
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||||
golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
|
golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
|
||||||
|
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
|
||||||
golang.org/x/sys v0.34.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
|
golang.org/x/sys v0.34.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
|
||||||
golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
|
golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
|
||||||
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
||||||
@@ -136,12 +191,14 @@ golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
|||||||
golang.org/x/telemetry v0.0.0-20240521205824-bda55230c457/go.mod h1:pRgIJT+bRLFKnoM1ldnzKoxTIn14Yxz928LQRYYgIN0=
|
golang.org/x/telemetry v0.0.0-20240521205824-bda55230c457/go.mod h1:pRgIJT+bRLFKnoM1ldnzKoxTIn14Yxz928LQRYYgIN0=
|
||||||
golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2/go.mod h1:b7fPSJ0pKZ3ccUh8gnTONJxhn3c/PS6tyzQvyqw4iA8=
|
golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2/go.mod h1:b7fPSJ0pKZ3ccUh8gnTONJxhn3c/PS6tyzQvyqw4iA8=
|
||||||
golang.org/x/telemetry v0.0.0-20260209163413-e7419c687ee4/go.mod h1:g5NllXBEermZrmR51cJDQxmJUHUOfRAaNyWBM+R+548=
|
golang.org/x/telemetry v0.0.0-20260209163413-e7419c687ee4/go.mod h1:g5NllXBEermZrmR51cJDQxmJUHUOfRAaNyWBM+R+548=
|
||||||
|
golang.org/x/telemetry v0.0.0-20260409153401-be6f6cb8b1fa/go.mod h1:kHjTxDEnAu6/Nl9lDkzjWpR+bmKfxeiRuSDlsMb70gE=
|
||||||
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||||
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
|
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
|
||||||
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
|
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
|
||||||
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
|
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
|
||||||
golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
|
golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
|
||||||
golang.org/x/term v0.16.0/go.mod h1:yn7UURbUtPyrVJPGPq404EukNFxcm/foM+bV/bfcDsY=
|
golang.org/x/term v0.16.0/go.mod h1:yn7UURbUtPyrVJPGPq404EukNFxcm/foM+bV/bfcDsY=
|
||||||
|
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
|
||||||
golang.org/x/term v0.33.0/go.mod h1:s18+ql9tYWp1IfpV9DmCtQDDSRBUjKaw9M1eAv5UeF0=
|
golang.org/x/term v0.33.0/go.mod h1:s18+ql9tYWp1IfpV9DmCtQDDSRBUjKaw9M1eAv5UeF0=
|
||||||
golang.org/x/term v0.40.0/go.mod h1:w2P8uVp06p2iyKKuvXIm7N/y0UCRt3UfJTfZ7oOpglM=
|
golang.org/x/term v0.40.0/go.mod h1:w2P8uVp06p2iyKKuvXIm7N/y0UCRt3UfJTfZ7oOpglM=
|
||||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||||
@@ -155,6 +212,7 @@ golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
|
|||||||
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
|
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
|
||||||
golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
|
golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
|
||||||
golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8=
|
golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8=
|
||||||
|
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
|
||||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||||
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||||
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
|
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
|
||||||
@@ -167,6 +225,7 @@ golang.org/x/tools v0.40.0/go.mod h1:Ik/tzLRlbscWpqqMRjyWYDisX8bG13FrdXp3o4Sr9lc
|
|||||||
golang.org/x/tools v0.41.0/go.mod h1:XSY6eDqxVNiYgezAVqqCeihT4j1U2CCsqvH3WhQpnlg=
|
golang.org/x/tools v0.41.0/go.mod h1:XSY6eDqxVNiYgezAVqqCeihT4j1U2CCsqvH3WhQpnlg=
|
||||||
golang.org/x/tools v0.42.0/go.mod h1:Ma6lCIwGZvHK6XtgbswSoWroEkhugApmsXyrUmBhfr0=
|
golang.org/x/tools v0.42.0/go.mod h1:Ma6lCIwGZvHK6XtgbswSoWroEkhugApmsXyrUmBhfr0=
|
||||||
golang.org/x/tools v0.43.0/go.mod h1:uHkMso649BX2cZK6+RpuIPXS3ho2hZo4FVwfoy1vIk0=
|
golang.org/x/tools v0.43.0/go.mod h1:uHkMso649BX2cZK6+RpuIPXS3ho2hZo4FVwfoy1vIk0=
|
||||||
|
golang.org/x/tools v0.44.0/go.mod h1:KA0AfVErSdxRZIsOVipbv3rQhVXTnlU6UhKxHd1seDI=
|
||||||
golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY=
|
golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY=
|
||||||
golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated/go.mod h1:RVAQXBGNv1ib0J382/DPCRS/BPnsGebyM1Gj5VSDpG8=
|
golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated/go.mod h1:RVAQXBGNv1ib0J382/DPCRS/BPnsGebyM1Gj5VSDpG8=
|
||||||
golang.org/x/tools/go/vcs v0.1.0-deprecated/go.mod h1:zUrvATBAvEI9535oC0yWYsLsHIV4Z7g63sNPVMtuBy8=
|
golang.org/x/tools/go/vcs v0.1.0-deprecated/go.mod h1:zUrvATBAvEI9535oC0yWYsLsHIV4Z7g63sNPVMtuBy8=
|
||||||
@@ -181,4 +240,6 @@ google.golang.org/grpc v1.79.2/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhH
|
|||||||
google.golang.org/grpc v1.79.3/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=
|
google.golang.org/grpc v1.79.3/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=
|
||||||
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
|
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
|
||||||
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
|
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
|
||||||
|
gopkg.in/guregu/null.v4 v4.0.0/go.mod h1:YoQhUrADuG3i9WqesrCmpNRwm1ypAgSHYqoOcTu/JrI=
|
||||||
|
howett.net/plist v1.0.1/go.mod h1:lqaXoTrLY4hg8tnEzNru53gicrbv7rrk+2xJA/7hw9g=
|
||||||
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
|
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
|
||||||
|
|||||||
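The `go.sum` additions above (notably `github.com/jackc/puddle`, the connection-pool package behind pgx, alongside a cluster of database-driver checksums) track the commit's move of durable state into PostgreSQL with one shared database and a schema per service. As a hedged sketch of what that layout implies for connection strings — every name here (`buildServiceDSN`, the host, database, and schema values) is illustrative, not taken from this diff:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildServiceDSN sketches a "one shared database, schema per service"
// DSN: all services share host and database, and each pins its own
// schema via the search_path runtime parameter (understood by pgx-style
// drivers). Function and parameter names are assumptions for illustration.
func buildServiceDSN(host, db, user, pass, schema string) string {
	u := url.URL{
		Scheme: "postgres",
		User:   url.UserPassword(user, pass),
		Host:   host,
		Path:   "/" + db,
	}
	q := url.Values{}
	q.Set("search_path", schema)
	q.Set("sslmode", "disable") // local/integration default; an assumption
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	// e.g. the mail service gets its own schema in the shared database.
	fmt.Println(buildServiceDSN("localhost:5432", "platform", "mail", "secret", "mail"))
}
```

The point of the sketch is that switching a service to its schema is a matter of one query parameter, so all services can share a single provisioned database.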
@@ -97,17 +97,15 @@ func newAuthsessionMailHarness(t *testing.T, opts authsessionMailHarnessOptions)
 		opts.mailSMTPMode = "stub"
 	}
 
-	mailEnv := map[string]string{
-		"MAIL_LOG_LEVEL": "info",
-		"MAIL_INTERNAL_HTTP_ADDR": mailInternalAddr,
-		"MAIL_REDIS_ADDR": redisRuntime.Addr,
-		"MAIL_TEMPLATE_DIR": moduleTemplateDir(t),
-		"MAIL_STREAM_BLOCK_TIMEOUT": "100ms",
-		"MAIL_OPERATOR_REQUEST_TIMEOUT": time.Second.String(),
-		"MAIL_SHUTDOWN_TIMEOUT": "2s",
-		"OTEL_TRACES_EXPORTER": "none",
-		"OTEL_METRICS_EXPORTER": "none",
-	}
+	mailEnv := harness.StartMailServicePersistence(t, redisRuntime.Addr).Env
+	mailEnv["MAIL_LOG_LEVEL"] = "info"
+	mailEnv["MAIL_INTERNAL_HTTP_ADDR"] = mailInternalAddr
+	mailEnv["MAIL_TEMPLATE_DIR"] = moduleTemplateDir(t)
+	mailEnv["MAIL_STREAM_BLOCK_TIMEOUT"] = "100ms"
+	mailEnv["MAIL_OPERATOR_REQUEST_TIMEOUT"] = time.Second.String()
+	mailEnv["MAIL_SHUTDOWN_TIMEOUT"] = "2s"
+	mailEnv["OTEL_TRACES_EXPORTER"] = "none"
+	mailEnv["OTEL_METRICS_EXPORTER"] = "none"
 
 	var smtpCapture *harness.SMTPCapture
 	switch opts.mailSMTPMode {
@@ -135,7 +133,9 @@ func newAuthsessionMailHarness(t *testing.T, opts authsessionMailHarnessOptions)
 		"AUTHSESSION_LOG_LEVEL": "info",
 		"AUTHSESSION_PUBLIC_HTTP_ADDR": authsessionPublicAddr,
 		"AUTHSESSION_INTERNAL_HTTP_ADDR": authsessionInternalAddr,
-		"AUTHSESSION_REDIS_ADDR": redisRuntime.Addr,
+		"AUTHSESSION_REDIS_MASTER_ADDR": redisRuntime.Addr,
+		"AUTHSESSION_REDIS_PASSWORD": "integration",
 		"AUTHSESSION_USER_SERVICE_MODE": "rest",
 		"AUTHSESSION_USER_SERVICE_BASE_URL": userStub.BaseURL(),
 		"AUTHSESSION_MAIL_SERVICE_MODE": "rest",
@@ -43,13 +43,11 @@ func newAuthsessionUserHarness(t *testing.T) *authsessionUserHarness {
 	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
 	authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession")
 
-	userServiceEnv := map[string]string{
-		"USERSERVICE_LOG_LEVEL": "info",
-		"USERSERVICE_INTERNAL_HTTP_ADDR": userServiceAddr,
-		"USERSERVICE_REDIS_ADDR": redisServer.Addr(),
-		"OTEL_TRACES_EXPORTER": "none",
-		"OTEL_METRICS_EXPORTER": "none",
-	}
+	userServiceEnv := harness.StartUserServicePersistence(t, redisServer.Addr()).Env
+	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
+	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
+	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
+	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
 	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
 	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)
 
@@ -57,7 +55,9 @@ func newAuthsessionUserHarness(t *testing.T) *authsessionUserHarness {
 		"AUTHSESSION_LOG_LEVEL": "info",
 		"AUTHSESSION_PUBLIC_HTTP_ADDR": authsessionPublicAddr,
 		"AUTHSESSION_INTERNAL_HTTP_ADDR": authsessionInternalAddr,
-		"AUTHSESSION_REDIS_ADDR": redisServer.Addr(),
+		"AUTHSESSION_REDIS_MASTER_ADDR": redisServer.Addr(),
+		"AUTHSESSION_REDIS_PASSWORD": "integration",
 		"AUTHSESSION_USER_SERVICE_MODE": "rest",
 		"AUTHSESSION_USER_SERVICE_BASE_URL": "http://" + userServiceAddr,
 		"AUTHSESSION_MAIL_SERVICE_MODE": "rest",
@@ -98,7 +98,9 @@ func newGatewayAuthSessionHarness(t *testing.T, opts gatewayAuthSessionOptions)
 		"AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT": opts.authsessionPublicHTTPTimeout.String(),
 		"AUTHSESSION_INTERNAL_HTTP_ADDR": authsessionInternalAddr,
 		"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": defaultAuthsessionInternalHTTPTimeout.String(),
-		"AUTHSESSION_REDIS_ADDR": redisServer.Addr(),
+		"AUTHSESSION_REDIS_MASTER_ADDR": redisServer.Addr(),
+		"AUTHSESSION_REDIS_PASSWORD": "integration",
 		"AUTHSESSION_USER_SERVICE_MODE": "rest",
 		"AUTHSESSION_USER_SERVICE_BASE_URL": userStub.BaseURL(),
 		"AUTHSESSION_USER_SERVICE_REQUEST_TIMEOUT": defaultAuthsessionDependencyTimeout.String(),
@@ -118,7 +120,9 @@ func newGatewayAuthSessionHarness(t *testing.T, opts gatewayAuthSessionOptions)
 		"GATEWAY_LOG_LEVEL": "info",
 		"GATEWAY_PUBLIC_HTTP_ADDR": gatewayPublicAddr,
 		"GATEWAY_AUTHENTICATED_GRPC_ADDR": gatewayGRPCAddr,
-		"GATEWAY_SESSION_CACHE_REDIS_ADDR": redisServer.Addr(),
+		"GATEWAY_REDIS_MASTER_ADDR": redisServer.Addr(),
+		"GATEWAY_REDIS_PASSWORD": "integration",
 		"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX": "gateway:session:",
 		"GATEWAY_SESSION_EVENTS_REDIS_STREAM": "gateway:session_events",
 		"GATEWAY_CLIENT_EVENTS_REDIS_STREAM": "gateway:client_events",
@@ -126,18 +126,17 @@ func newGatewayAuthsessionMailHarness(t *testing.T) *gatewayAuthsessionMailHarne
 	authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession")
 	gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")
 
-	mailProcess := harness.StartProcess(t, "mail", mailBinary, map[string]string{
-		"MAIL_LOG_LEVEL": "info",
-		"MAIL_INTERNAL_HTTP_ADDR": mailInternalAddr,
-		"MAIL_REDIS_ADDR": redisRuntime.Addr,
-		"MAIL_TEMPLATE_DIR": moduleTemplateDir(t),
-		"MAIL_SMTP_MODE": "stub",
-		"MAIL_STREAM_BLOCK_TIMEOUT": "100ms",
-		"MAIL_OPERATOR_REQUEST_TIMEOUT": time.Second.String(),
-		"MAIL_SHUTDOWN_TIMEOUT": "2s",
-		"OTEL_TRACES_EXPORTER": "none",
-		"OTEL_METRICS_EXPORTER": "none",
-	})
+	mailEnv := harness.StartMailServicePersistence(t, redisRuntime.Addr).Env
+	mailEnv["MAIL_LOG_LEVEL"] = "info"
+	mailEnv["MAIL_INTERNAL_HTTP_ADDR"] = mailInternalAddr
+	mailEnv["MAIL_TEMPLATE_DIR"] = moduleTemplateDir(t)
+	mailEnv["MAIL_SMTP_MODE"] = "stub"
+	mailEnv["MAIL_STREAM_BLOCK_TIMEOUT"] = "100ms"
+	mailEnv["MAIL_OPERATOR_REQUEST_TIMEOUT"] = time.Second.String()
+	mailEnv["MAIL_SHUTDOWN_TIMEOUT"] = "2s"
+	mailEnv["OTEL_TRACES_EXPORTER"] = "none"
+	mailEnv["OTEL_METRICS_EXPORTER"] = "none"
+	mailProcess := harness.StartProcess(t, "mail", mailBinary, mailEnv)
 	waitForMailReady(t, mailProcess, "http://"+mailInternalAddr)
 
 	authsessionProcess := harness.StartProcess(t, "authsession", authsessionBinary, map[string]string{
@@ -146,7 +145,9 @@ func newGatewayAuthsessionMailHarness(t *testing.T) *gatewayAuthsessionMailHarne
 		"AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT": time.Second.String(),
 		"AUTHSESSION_INTERNAL_HTTP_ADDR": authsessionInternalAddr,
 		"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": time.Second.String(),
-		"AUTHSESSION_REDIS_ADDR": redisRuntime.Addr,
+		"AUTHSESSION_REDIS_MASTER_ADDR": redisRuntime.Addr,
+		"AUTHSESSION_REDIS_PASSWORD": "integration",
 		"AUTHSESSION_USER_SERVICE_MODE": "rest",
 		"AUTHSESSION_USER_SERVICE_BASE_URL": userStub.BaseURL(),
 		"AUTHSESSION_USER_SERVICE_REQUEST_TIMEOUT": time.Second.String(),
@@ -164,7 +165,9 @@ func newGatewayAuthsessionMailHarness(t *testing.T) *gatewayAuthsessionMailHarne
 		"GATEWAY_LOG_LEVEL": "info",
 		"GATEWAY_PUBLIC_HTTP_ADDR": gatewayPublicAddr,
 		"GATEWAY_AUTHENTICATED_GRPC_ADDR": gatewayGRPCAddr,
-		"GATEWAY_SESSION_CACHE_REDIS_ADDR": redisRuntime.Addr,
+		"GATEWAY_REDIS_MASTER_ADDR": redisRuntime.Addr,
+		"GATEWAY_REDIS_PASSWORD": "integration",
 		"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX": "gateway:session:",
 		"GATEWAY_SESSION_EVENTS_REDIS_STREAM": "gateway:session_events",
 		"GATEWAY_CLIENT_EVENTS_REDIS_STREAM": "gateway:client_events",
|
|||||||
@@ -71,13 +71,11 @@ func newGatewayAuthsessionUserHarness(t *testing.T) *gatewayAuthsessionUserHarne
|
|||||||
authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession")
|
authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession")
|
||||||
gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")
|
gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")
|
||||||
|
|
||||||
userServiceEnv := map[string]string{
|
userServiceEnv := harness.StartUserServicePersistence(t, redisServer.Addr()).Env
|
||||||
"USERSERVICE_LOG_LEVEL": "info",
|
userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
|
||||||
"USERSERVICE_INTERNAL_HTTP_ADDR": userServiceAddr,
|
userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
|
||||||
"USERSERVICE_REDIS_ADDR": redisServer.Addr(),
|
userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
|
||||||
"OTEL_TRACES_EXPORTER": "none",
|
userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
|
||||||
"OTEL_METRICS_EXPORTER": "none",
|
|
||||||
}
|
|
||||||
userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
|
userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
|
||||||
harness.WaitForHTTPStatus(t, userServiceProcess, "http://"+userServiceAddr+"/api/v1/internal/users/user-missing/exists", http.StatusOK)
|
harness.WaitForHTTPStatus(t, userServiceProcess, "http://"+userServiceAddr+"/api/v1/internal/users/user-missing/exists", http.StatusOK)
|
||||||
|
|
||||||
@@ -87,7 +85,9 @@ func newGatewayAuthsessionUserHarness(t *testing.T) *gatewayAuthsessionUserHarne
|
|||||||
"AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT": time.Second.String(),
|
"AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT": time.Second.String(),
|
||||||
"AUTHSESSION_INTERNAL_HTTP_ADDR": authsessionInternalAddr,
|
"AUTHSESSION_INTERNAL_HTTP_ADDR": authsessionInternalAddr,
|
||||||
"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": time.Second.String(),
|
"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": time.Second.String(),
|
||||||
"AUTHSESSION_REDIS_ADDR": redisServer.Addr(),
|
"AUTHSESSION_REDIS_MASTER_ADDR": redisServer.Addr(),
|
||||||
|
|
||||||
|
"AUTHSESSION_REDIS_PASSWORD": "integration",
|
||||||
"AUTHSESSION_USER_SERVICE_MODE": "rest",
|
"AUTHSESSION_USER_SERVICE_MODE": "rest",
|
||||||
"AUTHSESSION_USER_SERVICE_BASE_URL": "http://" + userServiceAddr,
|
"AUTHSESSION_USER_SERVICE_BASE_URL": "http://" + userServiceAddr,
|
||||||
"AUTHSESSION_USER_SERVICE_REQUEST_TIMEOUT": time.Second.String(),
|
"AUTHSESSION_USER_SERVICE_REQUEST_TIMEOUT": time.Second.String(),
|
||||||
@@ -109,7 +109,9 @@ func newGatewayAuthsessionUserHarness(t *testing.T) *gatewayAuthsessionUserHarne
|
|||||||
"GATEWAY_AUTH_SERVICE_BASE_URL": "http://" + authsessionPublicAddr,
|
"GATEWAY_AUTH_SERVICE_BASE_URL": "http://" + authsessionPublicAddr,
|
||||||
"GATEWAY_USER_SERVICE_BASE_URL": "http://" + userServiceAddr,
|
"GATEWAY_USER_SERVICE_BASE_URL": "http://" + userServiceAddr,
|
||||||
"GATEWAY_PUBLIC_AUTH_UPSTREAM_TIMEOUT": (500 * time.Millisecond).String(),
|
"GATEWAY_PUBLIC_AUTH_UPSTREAM_TIMEOUT": (500 * time.Millisecond).String(),
|
||||||
"GATEWAY_SESSION_CACHE_REDIS_ADDR": redisServer.Addr(),
|
"GATEWAY_REDIS_MASTER_ADDR": redisServer.Addr(),
|
||||||
|
|
||||||
|
"GATEWAY_REDIS_PASSWORD": "integration",
|
||||||
"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX": "gateway:session:",
|
"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX": "gateway:session:",
|
||||||
"GATEWAY_SESSION_EVENTS_REDIS_STREAM": "gateway:session_events",
|
"GATEWAY_SESSION_EVENTS_REDIS_STREAM": "gateway:session_events",
|
||||||
"GATEWAY_CLIENT_EVENTS_REDIS_STREAM": "gateway:client_events",
|
"GATEWAY_CLIENT_EVENTS_REDIS_STREAM": "gateway:client_events",
|
||||||
|
|||||||
@@ -186,27 +186,25 @@ func newGatewayAuthsessionUserMailHarness(t *testing.T) *gatewayAuthsessionUserM
|
|||||||
authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession")
|
authsessionBinary := harness.BuildBinary(t, "authsession", "./authsession/cmd/authsession")
|
||||||
gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")
|
gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")
|
||||||
|
|
||||||
userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, map[string]string{
|
userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
|
||||||
"USERSERVICE_LOG_LEVEL": "info",
|
userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
|
||||||
"USERSERVICE_INTERNAL_HTTP_ADDR": userServiceAddr,
|
userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
|
||||||
"USERSERVICE_REDIS_ADDR": redisRuntime.Addr,
|
userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
|
||||||
"OTEL_TRACES_EXPORTER": "none",
|
userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
|
||||||
"OTEL_METRICS_EXPORTER": "none",
|
userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
|
||||||
})
|
|
||||||
waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)
|
waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)
|
||||||
|
|
||||||
mailProcess := harness.StartProcess(t, "mail", mailBinary, map[string]string{
|
mailEnv := harness.StartMailServicePersistence(t, redisRuntime.Addr).Env
|
||||||
"MAIL_LOG_LEVEL": "info",
|
mailEnv["MAIL_LOG_LEVEL"] = "info"
|
||||||
"MAIL_INTERNAL_HTTP_ADDR": mailInternalAddr,
|
mailEnv["MAIL_INTERNAL_HTTP_ADDR"] = mailInternalAddr
|
||||||
"MAIL_REDIS_ADDR": redisRuntime.Addr,
|
mailEnv["MAIL_TEMPLATE_DIR"] = moduleTemplateDir(t)
|
||||||
"MAIL_TEMPLATE_DIR": moduleTemplateDir(t),
|
mailEnv["MAIL_SMTP_MODE"] = "stub"
|
||||||
"MAIL_SMTP_MODE": "stub",
|
mailEnv["MAIL_STREAM_BLOCK_TIMEOUT"] = "100ms"
|
||||||
"MAIL_STREAM_BLOCK_TIMEOUT": "100ms",
|
mailEnv["MAIL_OPERATOR_REQUEST_TIMEOUT"] = time.Second.String()
|
||||||
"MAIL_OPERATOR_REQUEST_TIMEOUT": time.Second.String(),
|
mailEnv["MAIL_SHUTDOWN_TIMEOUT"] = "2s"
|
||||||
"MAIL_SHUTDOWN_TIMEOUT": "2s",
|
mailEnv["OTEL_TRACES_EXPORTER"] = "none"
|
||||||
"OTEL_TRACES_EXPORTER": "none",
|
mailEnv["OTEL_METRICS_EXPORTER"] = "none"
|
||||||
"OTEL_METRICS_EXPORTER": "none",
|
mailProcess := harness.StartProcess(t, "mail", mailBinary, mailEnv)
|
||||||
})
|
|
||||||
waitForMailReady(t, mailProcess, "http://"+mailInternalAddr)
|
waitForMailReady(t, mailProcess, "http://"+mailInternalAddr)
|
||||||
|
|
||||||
authsessionProcess := harness.StartProcess(t, "authsession", authsessionBinary, map[string]string{
|
authsessionProcess := harness.StartProcess(t, "authsession", authsessionBinary, map[string]string{
|
||||||
@@ -215,7 +213,9 @@ func newGatewayAuthsessionUserMailHarness(t *testing.T) *gatewayAuthsessionUserM
|
|||||||
"AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT": time.Second.String(),
|
"AUTHSESSION_PUBLIC_HTTP_REQUEST_TIMEOUT": time.Second.String(),
|
||||||
"AUTHSESSION_INTERNAL_HTTP_ADDR": authsessionInternalAddr,
|
"AUTHSESSION_INTERNAL_HTTP_ADDR": authsessionInternalAddr,
|
||||||
"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": time.Second.String(),
|
"AUTHSESSION_INTERNAL_HTTP_REQUEST_TIMEOUT": time.Second.String(),
|
||||||
"AUTHSESSION_REDIS_ADDR": redisRuntime.Addr,
|
"AUTHSESSION_REDIS_MASTER_ADDR": redisRuntime.Addr,
|
||||||
|
|
||||||
|
"AUTHSESSION_REDIS_PASSWORD": "integration",
|
||||||
"AUTHSESSION_USER_SERVICE_MODE": "rest",
|
"AUTHSESSION_USER_SERVICE_MODE": "rest",
|
||||||
"AUTHSESSION_USER_SERVICE_BASE_URL": "http://" + userServiceAddr,
|
"AUTHSESSION_USER_SERVICE_BASE_URL": "http://" + userServiceAddr,
|
||||||
"AUTHSESSION_USER_SERVICE_REQUEST_TIMEOUT": time.Second.String(),
|
"AUTHSESSION_USER_SERVICE_REQUEST_TIMEOUT": time.Second.String(),
|
||||||
@@ -233,7 +233,9 @@ func newGatewayAuthsessionUserMailHarness(t *testing.T) *gatewayAuthsessionUserM
|
|||||||
"GATEWAY_LOG_LEVEL": "info",
|
"GATEWAY_LOG_LEVEL": "info",
|
||||||
"GATEWAY_PUBLIC_HTTP_ADDR": gatewayPublicAddr,
|
"GATEWAY_PUBLIC_HTTP_ADDR": gatewayPublicAddr,
|
||||||
"GATEWAY_AUTHENTICATED_GRPC_ADDR": gatewayGRPCAddr,
|
"GATEWAY_AUTHENTICATED_GRPC_ADDR": gatewayGRPCAddr,
|
||||||
"GATEWAY_SESSION_CACHE_REDIS_ADDR": redisRuntime.Addr,
|
"GATEWAY_REDIS_MASTER_ADDR": redisRuntime.Addr,
|
||||||
|
|
||||||
|
"GATEWAY_REDIS_PASSWORD": "integration",
|
||||||
"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX": "gateway:session:",
|
"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX": "gateway:session:",
|
||||||
"GATEWAY_SESSION_EVENTS_REDIS_STREAM": "gateway:session_events",
|
"GATEWAY_SESSION_EVENTS_REDIS_STREAM": "gateway:session_events",
|
||||||
"GATEWAY_CLIENT_EVENTS_REDIS_STREAM": "gateway:client_events",
|
"GATEWAY_CLIENT_EVENTS_REDIS_STREAM": "gateway:client_events",
|
||||||
|
|||||||
@@ -63,13 +63,11 @@ func newGatewayUserHarness(t *testing.T) *gatewayUserHarness {
|
|||||||
userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
|
userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
|
||||||
gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")
|
gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")
|
||||||
|
|
||||||
userServiceEnv := map[string]string{
|
userServiceEnv := harness.StartUserServicePersistence(t, redisServer.Addr()).Env
|
||||||
"USERSERVICE_LOG_LEVEL": "info",
|
userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
|
||||||
"USERSERVICE_INTERNAL_HTTP_ADDR": userServiceAddr,
|
userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
|
||||||
"USERSERVICE_REDIS_ADDR": redisServer.Addr(),
|
userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
|
||||||
"OTEL_TRACES_EXPORTER": "none",
|
userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
|
||||||
"OTEL_METRICS_EXPORTER": "none",
|
|
||||||
}
|
|
||||||
userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
|
userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
|
||||||
harness.WaitForHTTPStatus(t, userServiceProcess, "http://"+userServiceAddr+"/api/v1/internal/users/user-missing/exists", http.StatusOK)
|
harness.WaitForHTTPStatus(t, userServiceProcess, "http://"+userServiceAddr+"/api/v1/internal/users/user-missing/exists", http.StatusOK)
|
||||||
|
|
||||||
@@ -78,7 +76,9 @@ func newGatewayUserHarness(t *testing.T) *gatewayUserHarness {
|
|||||||
"GATEWAY_PUBLIC_HTTP_ADDR": gatewayPublicAddr,
|
"GATEWAY_PUBLIC_HTTP_ADDR": gatewayPublicAddr,
|
||||||
"GATEWAY_AUTHENTICATED_GRPC_ADDR": gatewayGRPCAddr,
|
"GATEWAY_AUTHENTICATED_GRPC_ADDR": gatewayGRPCAddr,
|
||||||
"GATEWAY_USER_SERVICE_BASE_URL": "http://" + userServiceAddr,
|
"GATEWAY_USER_SERVICE_BASE_URL": "http://" + userServiceAddr,
|
||||||
"GATEWAY_SESSION_CACHE_REDIS_ADDR": redisServer.Addr(),
|
"GATEWAY_REDIS_MASTER_ADDR": redisServer.Addr(),
|
||||||
|
|
||||||
|
"GATEWAY_REDIS_PASSWORD": "integration",
|
||||||
"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX": "gateway:session:",
|
"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX": "gateway:session:",
|
||||||
"GATEWAY_SESSION_EVENTS_REDIS_STREAM": "gateway:session_events",
|
"GATEWAY_SESSION_EVENTS_REDIS_STREAM": "gateway:session_events",
|
||||||
"GATEWAY_CLIENT_EVENTS_REDIS_STREAM": "gateway:client_events",
|
"GATEWAY_CLIENT_EVENTS_REDIS_STREAM": "gateway:client_events",
|
||||||
|
|||||||
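The recurring change across these harness files is mechanical: replace each hand-built `map[string]string` environment with the base env returned by a persistence helper (`StartUserServicePersistence` / `StartMailServicePersistence`), then layer test-specific overrides on top before calling `harness.StartProcess`. A minimal standalone sketch of that layering pattern, where the helper name and env keys are illustrative assumptions rather than the harness's real API:

```go
package main

import "fmt"

// baseServiceEnv stands in for the map a persistence helper like
// harness.StartUserServicePersistence(...).Env would return in the hunks
// above. The concrete keys are hypothetical, for illustration only.
func baseServiceEnv(redisAddr string) map[string]string {
	return map[string]string{
		"USERSERVICE_REDIS_MASTER_ADDR": redisAddr,
		"USERSERVICE_REDIS_PASSWORD":    "integration",
	}
}

func main() {
	// Start from the helper's base env, then apply per-test overrides,
	// mirroring the refactor in the diffs above.
	env := baseServiceEnv("127.0.0.1:6379")
	env["USERSERVICE_LOG_LEVEL"] = "info"
	env["OTEL_TRACES_EXPORTER"] = "none"

	fmt.Println(env["USERSERVICE_REDIS_MASTER_ADDR"])
	fmt.Println(env["USERSERVICE_LOG_LEVEL"])
}
```

Because the helper owns the connection-related keys and the test only mutates the returned map, adding a new backend (here, PostgreSQL DSNs) requires no edits to the individual test harnesses.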
+22
-8
@@ -1,12 +1,15 @@
 module galaxy/integration
 
-go 1.26.0
+go 1.26.1
 
 require (
+galaxy/postgres v0.0.0
 github.com/alicebob/miniredis/v2 v2.37.0
+github.com/jackc/pgx/v5 v5.9.2
 github.com/redis/go-redis/v9 v9.18.0
 github.com/stretchr/testify v1.11.1
 github.com/testcontainers/testcontainers-go v0.42.0
+github.com/testcontainers/testcontainers-go/modules/postgres v0.42.0
 github.com/testcontainers/testcontainers-go/modules/redis v0.42.0
 google.golang.org/grpc v1.80.0
 )
@@ -15,6 +18,7 @@ require (
 dario.cat/mergo v1.0.2 // indirect
 github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
 github.com/Microsoft/go-winio v0.6.2 // indirect
+github.com/XSAM/otelsql v0.42.0 // indirect
 github.com/cenkalti/backoff/v4 v4.3.0 // indirect
 github.com/cespare/xxhash/v2 v2.3.0 // indirect
 github.com/containerd/errdefs v1.0.0 // indirect
@@ -25,7 +29,7 @@ require (
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
 github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
 github.com/distribution/reference v0.6.0 // indirect
-github.com/docker/go-connections v0.6.0 // indirect
+github.com/docker/go-connections v0.7.0 // indirect
 github.com/docker/go-units v0.5.0 // indirect
 github.com/ebitengine/purego v0.10.0 // indirect
 github.com/felixge/httpsnoop v1.0.4 // indirect
@@ -33,15 +37,19 @@ require (
 github.com/go-logr/stdr v1.2.2 // indirect
 github.com/go-ole/go-ole v1.2.6 // indirect
 github.com/google/uuid v1.6.0 // indirect
+github.com/jackc/pgpassfile v1.0.0 // indirect
+github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
+github.com/jackc/puddle/v2 v2.2.2 // indirect
 github.com/klauspost/compress v1.18.5 // indirect
 github.com/klauspost/cpuid/v2 v2.3.0 // indirect
 github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
 github.com/magiconair/properties v1.8.10 // indirect
 github.com/mdelapenya/tlscert v0.2.0 // indirect
+github.com/mfridman/interpolate v0.0.2 // indirect
 github.com/moby/docker-image-spec v1.3.1 // indirect
 github.com/moby/go-archive v0.2.0 // indirect
-github.com/moby/moby/api v1.54.1 // indirect
-github.com/moby/moby/client v0.4.0 // indirect
+github.com/moby/moby/api v1.54.2 // indirect
+github.com/moby/moby/client v0.4.1 // indirect
 github.com/moby/patternmatcher v0.6.1 // indirect
 github.com/moby/sys/sequential v0.6.0 // indirect
 github.com/moby/sys/user v0.4.0 // indirect
@@ -51,6 +59,8 @@ require (
 github.com/opencontainers/image-spec v1.1.1 // indirect
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
 github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
+github.com/pressly/goose/v3 v3.27.1 // indirect
+github.com/sethvargo/go-retry v0.3.0 // indirect
 github.com/shirou/gopsutil/v4 v4.26.3 // indirect
 github.com/sirupsen/logrus v1.9.4 // indirect
 github.com/tklauser/go-sysconf v0.3.16 // indirect
@@ -63,11 +73,15 @@ require (
 go.opentelemetry.io/otel/metric v1.43.0 // indirect
 go.opentelemetry.io/otel/trace v1.43.0 // indirect
 go.uber.org/atomic v1.11.0 // indirect
-golang.org/x/crypto v0.49.0 // indirect
-golang.org/x/net v0.52.0 // indirect
-golang.org/x/sys v0.42.0 // indirect
+go.uber.org/multierr v1.11.0 // indirect
+golang.org/x/crypto v0.50.0 // indirect
+golang.org/x/net v0.53.0 // indirect
+golang.org/x/sync v0.20.0 // indirect
+golang.org/x/sys v0.43.0 // indirect
 golang.org/x/text v0.36.0 // indirect
-google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 // indirect
+google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529 // indirect
 google.golang.org/protobuf v1.36.11 // indirect
 gopkg.in/yaml.v3 v3.0.1 // indirect
 )
+
+replace galaxy/postgres => ../pkg/postgres
+63
-16
@@ -6,6 +6,8 @@ github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEK
|
|||||||
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
|
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
|
||||||
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
|
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
|
||||||
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
|
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
|
||||||
|
github.com/XSAM/otelsql v0.42.0 h1:Li0xF4eJUxG2e0x3D4rvRlys1f27yJKvjTh7ljkUP5o=
|
||||||
|
github.com/XSAM/otelsql v0.42.0/go.mod h1:4mOrEv+cS1KmKzrvTktvJnstr5GtKSAK+QHvFR9OcpI=
|
||||||
github.com/alicebob/miniredis/v2 v2.37.0 h1:RheObYW32G1aiJIj81XVt78ZHJpHonHLHW7OLIshq68=
|
github.com/alicebob/miniredis/v2 v2.37.0 h1:RheObYW32G1aiJIj81XVt78ZHJpHonHLHW7OLIshq68=
|
||||||
github.com/alicebob/miniredis/v2 v2.37.0/go.mod h1:TcL7YfarKPGDAthEtl5NBeHZfeUQj6OXMm/+iu5cLMM=
|
github.com/alicebob/miniredis/v2 v2.37.0/go.mod h1:TcL7YfarKPGDAthEtl5NBeHZfeUQj6OXMm/+iu5cLMM=
|
||||||
github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
|
github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
|
||||||
@@ -28,16 +30,19 @@ github.com/cpuguy83/dockercfg v0.3.2 h1:DlJTyZGBDlXqUZ2Dk2Q3xHs/FtnooJJVaad2S9GK
|
|||||||
github.com/cpuguy83/dockercfg v0.3.2/go.mod h1:sugsbF4//dDlL/i+S+rtpIWp+5h0BHJHfjj5/jFyUJc=
|
github.com/cpuguy83/dockercfg v0.3.2/go.mod h1:sugsbF4//dDlL/i+S+rtpIWp+5h0BHJHfjj5/jFyUJc=
|
||||||
github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
|
github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
|
||||||
github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
|
github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
|
||||||
|
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
|
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
|
||||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||||
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
|
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
|
||||||
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
|
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
|
||||||
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
|
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
|
||||||
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
|
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
|
||||||
github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
|
github.com/docker/go-connections v0.7.0 h1:6SsRfJddP22WMrCkj19x9WKjEDTB+ahsdiGYf0mN39c=
|
||||||
github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE=
|
github.com/docker/go-connections v0.7.0/go.mod h1:no1qkHdjq7kLMGUXYAduOhYPSJxxvgWBh7ogVvptn3Q=
|
||||||
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
|
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
|
||||||
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
|
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
|
||||||
|
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
|
||||||
|
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
|
||||||
github.com/ebitengine/purego v0.10.0 h1:QIw4xfpWT6GWTzaW5XEKy3HXoqrJGx1ijYHzTF0/ISU=
|
github.com/ebitengine/purego v0.10.0 h1:QIw4xfpWT6GWTzaW5XEKy3HXoqrJGx1ijYHzTF0/ISU=
|
||||||
github.com/ebitengine/purego v0.10.0/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
|
github.com/ebitengine/purego v0.10.0/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
|
||||||
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
|
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
|
||||||
@@ -56,6 +61,14 @@ github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
|
|||||||
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
|
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
|
||||||
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
|
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
|
||||||
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||||
|
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
|
||||||
|
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
|
||||||
|
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
|
||||||
|
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
|
||||||
|
github.com/jackc/pgx/v5 v5.9.2 h1:3ZhOzMWnR4yJ+RW1XImIPsD1aNSz4T4fyP7zlQb56hw=
|
||||||
|
github.com/jackc/pgx/v5 v5.9.2/go.mod h1:mal1tBGAFfLHvZzaYh77YS/eC6IX9OWbRV1QIIM0Jn4=
|
||||||
|
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
|
||||||
|
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
|
||||||
github.com/klauspost/compress v1.18.5 h1:/h1gH5Ce+VWNLSWqPzOVn6XBO+vJbCNGvjoaGBFW2IE=
|
github.com/klauspost/compress v1.18.5 h1:/h1gH5Ce+VWNLSWqPzOVn6XBO+vJbCNGvjoaGBFW2IE=
|
||||||
github.com/klauspost/compress v1.18.5/go.mod h1:cwPg85FWrGar70rWktvGQj8/hthj3wpl0PGDogxkrSQ=
|
github.com/klauspost/compress v1.18.5/go.mod h1:cwPg85FWrGar70rWktvGQj8/hthj3wpl0PGDogxkrSQ=
|
||||||
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
|
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
|
||||||
@@ -64,20 +77,26 @@ github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
|
|||||||
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
|
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
|
||||||
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
|
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
|
||||||
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
|
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
|
||||||
|
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
|
||||||
|
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
|
||||||
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
|
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
|
||||||
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
|
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
|
||||||
github.com/magiconair/properties v1.8.10 h1:s31yESBquKXCV9a/ScB3ESkOjUYYv+X0rg8SYxI99mE=
|
github.com/magiconair/properties v1.8.10 h1:s31yESBquKXCV9a/ScB3ESkOjUYYv+X0rg8SYxI99mE=
|
||||||
github.com/magiconair/properties v1.8.10/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
|
github.com/magiconair/properties v1.8.10/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
|
||||||
|
github.com/mattn/go-isatty v0.0.21 h1:xYae+lCNBP7QuW4PUnNG61ffM4hVIfm+zUzDuSzYLGs=
|
||||||
|
github.com/mattn/go-isatty v0.0.21/go.mod h1:ZXfXG4SQHsB/w3ZeOYbR0PrPwLy+n6xiMrJlRFqopa4=
|
||||||
github.com/mdelapenya/tlscert v0.2.0 h1:7H81W6Z/4weDvZBNOfQte5GpIMo0lGYEeWbkGp5LJHI=
|
github.com/mdelapenya/tlscert v0.2.0 h1:7H81W6Z/4weDvZBNOfQte5GpIMo0lGYEeWbkGp5LJHI=
|
||||||
github.com/mdelapenya/tlscert v0.2.0/go.mod h1:O4njj3ELLnJjGdkN7M/vIVCpZ+Cf0L6muqOG4tLSl8o=
|
github.com/mdelapenya/tlscert v0.2.0/go.mod h1:O4njj3ELLnJjGdkN7M/vIVCpZ+Cf0L6muqOG4tLSl8o=
|
||||||
|
github.com/mfridman/interpolate v0.0.2 h1:pnuTK7MQIxxFz1Gr+rjSIx9u7qVjf5VOoM/u6BbAxPY=
|
||||||
|
github.com/mfridman/interpolate v0.0.2/go.mod h1:p+7uk6oE07mpE/Ik1b8EckO0O4ZXiGAfshKBWLUM9Xg=
|
||||||
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
|
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
|
||||||
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
|
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
|
||||||
github.com/moby/go-archive v0.2.0 h1:zg5QDUM2mi0JIM9fdQZWC7U8+2ZfixfTYoHL7rWUcP8=
|
github.com/moby/go-archive v0.2.0 h1:zg5QDUM2mi0JIM9fdQZWC7U8+2ZfixfTYoHL7rWUcP8=
|
||||||
github.com/moby/go-archive v0.2.0/go.mod h1:mNeivT14o8xU+5q1YnNrkQVpK+dnNe/K6fHqnTg4qPU=
|
github.com/moby/go-archive v0.2.0/go.mod h1:mNeivT14o8xU+5q1YnNrkQVpK+dnNe/K6fHqnTg4qPU=
|
||||||
github.com/moby/moby/api v1.54.1 h1:TqVzuJkOLsgLDDwNLmYqACUuTehOHRGKiPhvH8V3Nn4=
|
github.com/moby/moby/api v1.54.2 h1:wiat9QAhnDQjA7wk1kh/TqHz2I1uUA7M7t9SAl/JNXg=
|
||||||
github.com/moby/moby/api v1.54.1/go.mod h1:+RQ6wluLwtYaTd1WnPLykIDPekkuyD/ROWQClE83pzs=
|
github.com/moby/moby/api v1.54.2/go.mod h1:+RQ6wluLwtYaTd1WnPLykIDPekkuyD/ROWQClE83pzs=
|
||||||
github.com/moby/moby/client v0.4.0 h1:S+2XegzHQrrvTCvF6s5HFzcrywWQmuVnhOXe2kiWjIw=
|
github.com/moby/moby/client v0.4.1 h1:DMQgisVoMkmMs7fp3ROSdiBnoAu8+vo3GggFl06M/wY=
|
||||||
github.com/moby/moby/client v0.4.0/go.mod h1:QWPbvWchQbxBNdaLSpoKpCdf5E+WxFAgNHogCWDoa7g=
|
github.com/moby/moby/client v0.4.1/go.mod h1:z52C9O2POPOsnxZAy//WtKcQ32P+jT/NGeXu/7nfjGQ=
|
||||||
github.com/moby/patternmatcher v0.6.1 h1:qlhtafmr6kgMIJjKJMDmMWq7WLkKIo23hsrpR3x084U=
|
github.com/moby/patternmatcher v0.6.1 h1:qlhtafmr6kgMIJjKJMDmMWq7WLkKIo23hsrpR3x084U=
|
||||||
github.com/moby/patternmatcher v0.6.1/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc=
|
github.com/moby/patternmatcher v0.6.1/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc=
|
||||||
github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
|
github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
|
||||||
@@ -88,28 +107,42 @@ github.com/moby/sys/userns v0.1.0 h1:tVLXkFOxVu9A64/yh59slHVv9ahO9UIev4JZusOLG/g
 github.com/moby/sys/userns v0.1.0/go.mod h1:IHUYgu/kao6N8YZlp9Cf444ySSvCmDlmzUcYfDHOl28=
 github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=
 github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=
+github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
+github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
 github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
 github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
 github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
 github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
+github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
 github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
+github.com/pressly/goose/v3 v3.27.1 h1:6uEvcprBybDmW4hcz3gYujhARhye+GoWKhEWyzD5sh4=
+github.com/pressly/goose/v3 v3.27.1/go.mod h1:maruOxsPnIG2yHHyo8UqKWXYKFcH7Q76csUV7+7KYoM=
 github.com/redis/go-redis/v9 v9.18.0 h1:pMkxYPkEbMPwRdenAzUNyFNrDgHx9U+DrBabWNfSRQs=
 github.com/redis/go-redis/v9 v9.18.0/go.mod h1:k3ufPphLU5YXwNTUcCRXGxUoF1fqxnhFQmscfkCoDA0=
+github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
+github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
 github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
 github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
+github.com/sethvargo/go-retry v0.3.0 h1:EEt31A35QhrcRZtrYFDTBg91cqZVnFL2navjDrah2SE=
+github.com/sethvargo/go-retry v0.3.0/go.mod h1:mNX17F0C/HguQMyMyJxcnU471gOZGxCLyYaFyAZraas=
 github.com/shirou/gopsutil/v4 v4.26.3 h1:2ESdQt90yU3oXF/CdOlRCJxrP+Am1aBYubTMTfxJ1qc=
 github.com/shirou/gopsutil/v4 v4.26.3/go.mod h1:LZ6ewCSkBqUpvSOf+LsTGnRinC6iaNUNMGBtDkJBaLQ=
 github.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w=
 github.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g=
+github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/objx v0.5.3 h1:jmXUvGomnU1o3W/V5h2VEradbpJDwGrzugQQvL0POH4=
 github.com/stretchr/objx v0.5.3/go.mod h1:rDQraq+vQZU7Fde9LOZLr8Tax6zZvy4kuNKF+QYS+U0=
+github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
+github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
 github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
 github.com/testcontainers/testcontainers-go v0.42.0 h1:He3IhTzTZOygSXLJPMX7n44XtK+qhjat1nI9cneBbUY=
 github.com/testcontainers/testcontainers-go v0.42.0/go.mod h1:vZjdY1YmUA1qEForxOIOazfsrdyORJAbhi0bp8plN30=
+github.com/testcontainers/testcontainers-go/modules/postgres v0.42.0 h1:GCbb1ndrF7OTDiIvxXyItaDab4qkzTFJ48LKFdM7EIo=
+github.com/testcontainers/testcontainers-go/modules/postgres v0.42.0/go.mod h1:IRPBaI8jXdrNfD0e4Zm7Fbcgaz5shKxOQv4axiL09xs=
 github.com/testcontainers/testcontainers-go/modules/redis v0.42.0 h1:id/6LH8ZeDrtAUVSuNvZUAJ1kVpb82y1pr9yweAWsRg=
 github.com/testcontainers/testcontainers-go/modules/redis v0.42.0/go.mod h1:uF0jI8FITagQpBNOgweGBmPf6rP4K0SeL1XFPbsZSSY=
 github.com/tklauser/go-sysconf v0.3.16 h1:frioLaCQSsF5Cy1jgRBrzr6t502KIIwQ0MArYICU0nA=
@@ -125,6 +158,7 @@ github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaD
 go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
 go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
 go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0 h1:CqXxU8VOmDefoh0+ztfGaymYbhdB/tT3zs79QaZTNGY=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0/go.mod h1:BuhAPThV8PBHBvg8ZzZ/Ok3idOdhWIodywz2xEcRbJo=
 go.opentelemetry.io/otel v1.43.0 h1:mYIM03dnh5zfN7HautFE4ieIig9amkNANT+xcVxAj9I=
 go.opentelemetry.io/otel v1.43.0/go.mod h1:JuG+u74mvjvcm8vj8pI5XiHy1zDeoCS2LB1spIq7Ay0=
 go.opentelemetry.io/otel/metric v1.43.0 h1:d7638QeInOnuwOONPp4JAOGfbCEpYb+K6DVWvdxGzgM=
@@ -137,24 +171,28 @@ go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09
 go.opentelemetry.io/otel/trace v1.43.0/go.mod h1:/QJhyVBUUswCphDVxq+8mld+AvhXZLhe+8WVFxiFff0=
 go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
 go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
+go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
+go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
-golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4=
-golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA=
+golang.org/x/crypto v0.50.0 h1:zO47/JPrL6vsNkINmLoo/PH1gcxpls50DNogFvB5ZGI=
+golang.org/x/crypto v0.50.0/go.mod h1:3muZ7vA7PBCE6xgPX7nkzzjiUq87kRItoJQM1Yo8S+Q=
-golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0=
-golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw=
+golang.org/x/net v0.53.0 h1:d+qAbo5L0orcWAr0a9JweQpjXF19LMXJE8Ey7hwOdUA=
+golang.org/x/net v0.53.0/go.mod h1:JvMuJH7rrdiCfbeHoo3fCQU24Lf5JJwT9W3sJFulfgs=
+golang.org/x/sync v0.20.0 h1:e0PTpb7pjO8GAtTs2dQ6jYa5BWYlMuX047Dco/pItO4=
+golang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=
 golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo=
+golang.org/x/sys v0.43.0 h1:Rlag2XtaFTxp19wS8MXlJwTvoh8ArU6ezoyFsMyCTNI=
-golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
+golang.org/x/sys v0.43.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
-golang.org/x/term v0.41.0 h1:QCgPso/Q3RTJx2Th4bDLqML4W6iJiaXFq2/ftQF13YU=
+golang.org/x/term v0.42.0 h1:UiKe+zDFmJobeJ5ggPwOshJIVt6/Ft0rcfrXZDLWAWY=
-golang.org/x/term v0.41.0/go.mod h1:3pfBgksrReYfZ5lvYM0kSO0LIkAl4Yl2bXOkKP7Ec2A=
+golang.org/x/term v0.42.0/go.mod h1:Dq/D+snpsbazcBG5+F9Q1n2rXV8Ma+71xEjTRufARgY=
 golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg=
 golang.org/x/text v0.36.0/go.mod h1:NIdBknypM8iqVmPiuco0Dh6P5Jcdk8lJL0CUebqK164=
 golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
 gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 h1:m8qni9SQFH0tJc1X0vmnpw/0t+AImlSvp30sEupozUg=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529 h1:XF8+t6QQiS0o9ArVan/HW8Q7cycNPGsJf6GA2nXxYAg=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
 google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM=
 google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4=
 google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
@@ -162,9 +200,18 @@ google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
 gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
+gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
 gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
 gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=
+modernc.org/libc v1.72.1 h1:db1xwJ6u1kE3KHTFTTbe2GCrczHPKzlURP0aDC4NGD0=
+modernc.org/libc v1.72.1/go.mod h1:HRMiC/PhPGLIPM7GzAFCbI+oSgE3dhZ8FWftmRrHVlY=
+modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
+modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
+modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
+modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
+modernc.org/sqlite v1.49.1 h1:dYGHTKcX1sJ+EQDnUzvz4TJ5GbuvhNJa8Fg6ElGx73U=
+modernc.org/sqlite v1.49.1/go.mod h1:m0w8xhwYUVY3H6pSDwc3gkJ/irZT/0YEXwBlhaxQEew=
 pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=
 pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=
@@ -0,0 +1,13 @@
package harness

// AuthsessionRedisEnv returns the env-var map that wires the authsession
// binary to a Redis master at masterAddr using the master/replica/password
// shape required by `pkg/redisconn`. The integration suites pass a fixed
// placeholder password because the test Redis container runs without
// `requirepass`.
func AuthsessionRedisEnv(masterAddr string) map[string]string {
	return map[string]string{
		"AUTHSESSION_REDIS_MASTER_ADDR": masterAddr,
		"AUTHSESSION_REDIS_PASSWORD":    "integration",
	}
}
@@ -0,0 +1,12 @@
package harness

// GatewayRedisEnv returns the env-var map that wires the gateway binary to a
// Redis master at masterAddr using the master/replica/password shape required
// by `pkg/redisconn`. The integration suites pass a fixed placeholder
// password because the test Redis container runs without `requirepass`.
func GatewayRedisEnv(masterAddr string) map[string]string {
	return map[string]string{
		"GATEWAY_REDIS_MASTER_ADDR": masterAddr,
		"GATEWAY_REDIS_PASSWORD":    "integration",
	}
}
@@ -0,0 +1,51 @@
package harness

import (
	"context"
	"testing"
)

// LobbyServicePersistence captures the per-test persistence dependencies of
// the Game Lobby Service binary: a PostgreSQL container hosting the `lobby`
// schema owned by the `lobbyservice` role, plus the Redis credentials that
// point the service at the caller-supplied master address.
type LobbyServicePersistence struct {
	// Postgres exposes the started container so tests that need direct SQL
	// access to the lobby schema (verifying side effects, seeding fixtures)
	// can read or write through it.
	Postgres *PostgresRuntime

	// Env carries the environment entries that must be passed to the
	// lobby-service process. It is safe to merge into the caller's existing
	// env map, or to use as-is and append further LOBBY_* knobs in place.
	Env map[string]string
}

// StartLobbyServicePersistence brings up one isolated PostgreSQL container,
// provisions the `lobby` schema with the `lobbyservice` role, and returns
// the environment entries that wire the lobby-service binary to that
// container plus the supplied Redis master address.
//
// The returned password (`integration`) matches the architectural rule that
// Redis traffic is password-protected; miniredis accepts arbitrary password
// values when its own RequireAuth is not engaged, so the same value works
// against both miniredis and the real `tcredis` runtime.
//
// Cleanup of the container is handled by StartPostgresContainer through
// `t.Cleanup`; callers do not need to defer anything.
func StartLobbyServicePersistence(t testing.TB, redisMasterAddr string) LobbyServicePersistence {
	t.Helper()

	rt := StartPostgresContainer(t)
	if err := rt.EnsureRoleAndSchema(context.Background(), "lobby", "lobbyservice", "lobbyservice"); err != nil {
		t.Fatalf("ensure lobby schema/role: %v", err)
	}

	env := WithPostgres(rt, "LOBBY", "lobby", "lobbyservice")
	env["LOBBY_REDIS_MASTER_ADDR"] = redisMasterAddr
	env["LOBBY_REDIS_PASSWORD"] = "integration"
	return LobbyServicePersistence{
		Postgres: rt,
		Env:      env,
	}
}
@@ -0,0 +1,51 @@
package harness

import (
	"context"
	"testing"
)

// MailServicePersistence captures the per-test persistence dependencies of
// the Mail Service binary: a PostgreSQL container hosting the `mail` schema
// owned by the `mailservice` role, and the Redis credentials that point the
// service at the caller-supplied master address.
type MailServicePersistence struct {
	// Postgres exposes the started container so tests that need direct SQL
	// access to the mail schema (verifying side effects, seeding fixtures)
	// can read or write through it.
	Postgres *PostgresRuntime

	// Env carries the environment entries that must be passed to the
	// mail-service process. It is safe to merge into the caller's existing env
	// map, or to use as-is and append further MAIL_* knobs in place.
	Env map[string]string
}

// StartMailServicePersistence brings up one isolated PostgreSQL container,
// provisions the `mail` schema with the `mailservice` role, and returns the
// environment entries that wire the mail-service binary to that container plus
// the supplied Redis master address.
//
// The returned password (`integration`) matches the architectural rule that
// Redis traffic is password-protected; miniredis accepts arbitrary password
// values when its own RequireAuth is not engaged, so the same value works
// against both miniredis and the real `tcredis` runtime.
//
// Cleanup of the container is handled by the underlying StartPostgresContainer
// through `t.Cleanup`; callers do not need to defer anything.
func StartMailServicePersistence(t testing.TB, redisMasterAddr string) MailServicePersistence {
	t.Helper()

	rt := StartPostgresContainer(t)
	if err := rt.EnsureRoleAndSchema(context.Background(), "mail", "mailservice", "mailservice"); err != nil {
		t.Fatalf("ensure mail schema/role: %v", err)
	}

	env := WithPostgres(rt, "MAIL", "mail", "mailservice")
	env["MAIL_REDIS_MASTER_ADDR"] = redisMasterAddr
	env["MAIL_REDIS_PASSWORD"] = "integration"
	return MailServicePersistence{
		Postgres: rt,
		Env:      env,
	}
}
@@ -0,0 +1,55 @@
package harness

import (
	"context"
	"testing"
)

// NotificationServicePersistence captures the per-test persistence
// dependencies of the Notification Service binary: a PostgreSQL container
// hosting the `notification` schema owned by the `notificationservice` role,
// and the Redis credentials that point the service at the caller-supplied
// master address.
type NotificationServicePersistence struct {
	// Postgres exposes the started container so tests that need direct SQL
	// access to the notification schema (verifying side effects, seeding
	// fixtures) can read or write through it.
	Postgres *PostgresRuntime

	// Env carries the environment entries that must be passed to the
	// notification-service process. It is safe to merge into the caller's
	// existing env map, or to use as-is and append further NOTIFICATION_*
	// knobs in place.
	Env map[string]string
}

// StartNotificationServicePersistence brings up one isolated PostgreSQL
// container, provisions the `notification` schema with the
// `notificationservice` role, and returns the environment entries that wire
// the notification-service binary to that container plus the supplied Redis
// master address.
//
// The returned password (`integration`) matches the architectural rule that
// Redis traffic is password-protected; miniredis accepts arbitrary password
// values when its own RequireAuth is not engaged, so the same value works
// against both miniredis and the real `tcredis` runtime.
//
// Cleanup of the container is handled by the underlying
// StartPostgresContainer through `t.Cleanup`; callers do not need to defer
// anything.
func StartNotificationServicePersistence(t testing.TB, redisMasterAddr string) NotificationServicePersistence {
	t.Helper()

	rt := StartPostgresContainer(t)
	if err := rt.EnsureRoleAndSchema(context.Background(), "notification", "notificationservice", "notificationservice"); err != nil {
		t.Fatalf("ensure notification schema/role: %v", err)
	}

	env := WithPostgres(rt, "NOTIFICATION", "notification", "notificationservice")
	env["NOTIFICATION_REDIS_MASTER_ADDR"] = redisMasterAddr
	env["NOTIFICATION_REDIS_PASSWORD"] = "integration"
	return NotificationServicePersistence{
		Postgres: rt,
		Env:      env,
	}
}
@@ -0,0 +1,241 @@
|
|||||||
|
package harness
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"fmt"
|
||||||
|
"net"
|
||||||
|
"net/url"
|
||||||
|
"strings"
|
||||||
|
"sync"
|
||||||
|
"testing"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"galaxy/postgres"
|
||||||
|
|
||||||
|
testcontainers "github.com/testcontainers/testcontainers-go"
|
||||||
|
tcpostgres "github.com/testcontainers/testcontainers-go/modules/postgres"
|
||||||
|
"github.com/testcontainers/testcontainers-go/wait"
|
||||||
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
defaultPostgresContainerImage = "postgres:16-alpine"
|
||||||
|
defaultPostgresDatabase = "galaxy_integration"
|
||||||
|
defaultPostgresSuperuser = "galaxy_integration"
|
||||||
|
defaultPostgresSuperPassword = "galaxy_integration"
|
||||||
|
|
||||||
|
postgresAdminConnectTimeout = 5 * time.Second
|
||||||
|
postgresStartupTimeout = 60 * time.Second
|
||||||
|
)
|
||||||
|
|
||||||
|
// PostgresRuntime stores one started real PostgreSQL container together with
|
||||||
|
// the parsed connection coordinates and the per-test role credentials issued
|
||||||
|
// by EnsureRoleAndSchema.
|
||||||
|
//
|
||||||
|
// The struct is safe to call from concurrent tests because credential lookups
|
||||||
|
// guard the internal map with a mutex; each test should still keep its own
|
||||||
|
// PostgresRuntime to preserve container-level isolation.
|
||||||
|
type PostgresRuntime struct {
|
||||||
|
Container *tcpostgres.PostgresContainer
|
||||||
|
|
||||||
|
baseDSN string
|
||||||
|
host string
|
||||||
|
port string
|
||||||
|
database string
|
||||||
|
|
||||||
|
mu sync.Mutex
|
||||||
|
creds map[string]string
|
||||||
|
}
|
||||||
|
|
||||||
|
// StartPostgresContainer starts one isolated PostgreSQL container and registers
|
||||||
|
// automatic cleanup for the suite. The container exposes a superuser created
|
||||||
|
// from the package-level constants; per-service roles are issued lazily by
|
||||||
|
// EnsureRoleAndSchema.
|
||||||
|
func StartPostgresContainer(t testing.TB) *PostgresRuntime {
|
||||||
|
t.Helper()
|
||||||
|
|
||||||
|
ctx := context.Background()
|
||||||
|
|
||||||
|
container, err := tcpostgres.Run(ctx,
|
||||||
|
defaultPostgresContainerImage,
|
||||||
|
tcpostgres.WithDatabase(defaultPostgresDatabase),
|
||||||
|
tcpostgres.WithUsername(defaultPostgresSuperuser),
|
||||||
|
tcpostgres.WithPassword(defaultPostgresSuperPassword),
|
||||||
|
// The default Postgres image emits the "ready to accept connections"
|
||||||
|
// log line twice during startup: once during temporary bootstrap, once
|
||||||
|
// after the real listener opens on the mapped port. Waiting for the
|
||||||
|
// second occurrence avoids racing the temporary instance.
|
||||||
|
testcontainers.WithWaitStrategy(
|
||||||
|
wait.ForLog("database system is ready to accept connections").
|
||||||
|
WithOccurrence(2).
|
||||||
|
WithStartupTimeout(postgresStartupTimeout),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("start postgres container: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
t.Cleanup(func() {
|
||||||
|
if err := testcontainers.TerminateContainer(container); err != nil {
|
||||||
|
t.Errorf("terminate postgres container: %v", err)
|
||||||
|
}
|
||||||
|
})
|
||||||
|
|
||||||
|
baseDSN, err := container.ConnectionString(ctx, "sslmode=disable")
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("resolve postgres connection string: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
host, port, err := splitHostPort(baseDSN)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("parse postgres connection string: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return &PostgresRuntime{
|
||||||
|
Container: container,
|
||||||
|
baseDSN: baseDSN,
|
||||||
|
host: host,
|
||||||
|
port: port,
|
||||||
|
database: defaultPostgresDatabase,
|
||||||
|
creds: map[string]string{},
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// BaseDSN returns the superuser DSN exposed by the container, suitable for
|
||||||
|
// administrative tasks such as creating roles or schemas. Callers should
|
||||||
|
// prefer DSNForSchema for service-scoped access.
|
||||||
|
func (rt *PostgresRuntime) BaseDSN() string {
|
||||||
|
return rt.baseDSN
|
||||||
|
}
|
||||||
|
|
||||||
|
// DSNForSchema returns a DSN that connects as role and pins search_path to
|
||||||
|
// schema. EnsureRoleAndSchema must have populated credentials for role first;
|
||||||
|
// otherwise the call panics, signalling a test setup bug.
|
||||||
|
func (rt *PostgresRuntime) DSNForSchema(schema, role string) string {
|
||||||
|
rt.mu.Lock()
|
||||||
|
password, ok := rt.creds[role]
|
||||||
|
rt.mu.Unlock()
|
||||||
|
if !ok {
|
||||||
|
panic(fmt.Sprintf(
|
||||||
|
"harness: DSNForSchema called for role %q with no credentials; call EnsureRoleAndSchema first",
|
||||||
|
role,
|
||||||
|
))
|
||||||
|
}
|
||||||
|
|
||||||
|
values := url.Values{}
|
||||||
|
values.Set("search_path", schema)
|
||||||
|
values.Set("sslmode", "disable")
|
||||||
|
|
||||||
|
dsn := url.URL{
|
||||||
|
Scheme: "postgres",
|
||||||
|
User: url.UserPassword(role, password),
|
||||||
|
Host: net.JoinHostPort(rt.host, rt.port),
|
||||||
|
Path: "/" + rt.database,
|
||||||
|
RawQuery: values.Encode(),
|
||||||
|
}
|
||||||
|
return dsn.String()
|
||||||
|
}
|
||||||
|
|
// EnsureRoleAndSchema creates role with the given password (idempotent) and a
// schema owned by that role (idempotent), then grants USAGE so the role can
// resolve table references inside it. The credentials are cached for later
// DSNForSchema lookups.
//
// The operation runs through a temporary administrative connection opened
// from BaseDSN; the connection is closed before the call returns.
func (rt *PostgresRuntime) EnsureRoleAndSchema(ctx context.Context, schema, role, password string) error {
	if strings.TrimSpace(schema) == "" {
		return fmt.Errorf("ensure role and schema: schema must not be empty")
	}
	if strings.TrimSpace(role) == "" {
		return fmt.Errorf("ensure role and schema: role must not be empty")
	}

	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = rt.baseDSN
	cfg.OperationTimeout = postgresAdminConnectTimeout

	db, err := postgres.OpenPrimary(ctx, cfg)
	if err != nil {
		return fmt.Errorf("ensure role and schema: open admin connection: %w", err)
	}
	defer func() {
		_ = db.Close()
	}()

	createRole := fmt.Sprintf(`DO $$
BEGIN
	IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = %s) THEN
		CREATE ROLE %s LOGIN PASSWORD %s;
	END IF;
END $$;`,
		quoteSQLLiteral(role),
		quoteSQLIdentifier(role),
		quoteSQLLiteral(password),
	)
	if _, err := db.ExecContext(ctx, createRole); err != nil {
		return fmt.Errorf("ensure role and schema: create role %q: %w", role, err)
	}

	createSchema := fmt.Sprintf(`CREATE SCHEMA IF NOT EXISTS %s AUTHORIZATION %s;`,
		quoteSQLIdentifier(schema),
		quoteSQLIdentifier(role),
	)
	if _, err := db.ExecContext(ctx, createSchema); err != nil {
		return fmt.Errorf("ensure role and schema: create schema %q: %w", schema, err)
	}

	grantUsage := fmt.Sprintf(`GRANT USAGE ON SCHEMA %s TO %s;`,
		quoteSQLIdentifier(schema),
		quoteSQLIdentifier(role),
	)
	if _, err := db.ExecContext(ctx, grantUsage); err != nil {
		return fmt.Errorf("ensure role and schema: grant usage on %q to %q: %w", schema, role, err)
	}

	rt.mu.Lock()
	rt.creds[role] = password
	rt.mu.Unlock()

	return nil
}
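The quoting split in the createRole statement matters: `pg_roles.rolname` is compared against a string literal, while `CREATE ROLE` takes an identifier. A standalone sketch (the helpers are inlined copies of the harness's `quoteSQLLiteral`/`quoteSQLIdentifier`; the role and password values are hypothetical) showing what gets generated for the reserved-word role `user`:

```go
package main

import (
	"fmt"
	"strings"
)

// Inlined copies of quoteSQLIdentifier and quoteSQLLiteral from the harness.
func ident(name string) string { return `"` + strings.ReplaceAll(name, `"`, `""`) + `"` }
func lit(value string) string  { return "'" + strings.ReplaceAll(value, "'", "''") + "'" }

func main() {
	// Hypothetical inputs; note the reserved-word role and the embedded quote.
	role, password := "user", "p'w"
	stmt := fmt.Sprintf(`DO $$
BEGIN
	IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = %s) THEN
		CREATE ROLE %s LOGIN PASSWORD %s;
	END IF;
END $$;`, lit(role), ident(role), lit(password))
	fmt.Println(stmt)
	// rolname is compared against the literal 'user', while CREATE ROLE
	// receives the identifier "user", so the reserved word stays legal.
}
```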
// WithPostgres returns env entries pointing the service identified by
// envPrefix at schema/role inside rt. EnsureRoleAndSchema must have populated
// credentials for role first.
//
// The returned map carries only `<envPrefix>_POSTGRES_PRIMARY_DSN`; the other
// per-service Postgres knobs (operation timeout, pool sizes) keep the
// defaults provided by `pkg/postgres.DefaultConfig`.
func WithPostgres(rt *PostgresRuntime, envPrefix, schema, role string) map[string]string {
	return map[string]string{
		envPrefix + "_POSTGRES_PRIMARY_DSN": rt.DSNForSchema(schema, role),
	}
}
// quoteSQLIdentifier wraps name in double quotes and escapes any embedded
// double quote, producing a SQL identifier that survives reserved words such
// as `user`.
func quoteSQLIdentifier(name string) string {
	return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`
}

// quoteSQLLiteral wraps value in single quotes and escapes any embedded single
// quote, producing a SQL literal usable in DDL statements where parameter
// binding is not available.
func quoteSQLLiteral(value string) string {
	return "'" + strings.ReplaceAll(value, "'", "''") + "'"
}
// splitHostPort extracts host and port from a postgres:// DSN.
func splitHostPort(dsn string) (string, string, error) {
	parsed, err := url.Parse(dsn)
	if err != nil {
		return "", "", fmt.Errorf("parse dsn: %w", err)
	}
	host := parsed.Hostname()
	port := parsed.Port()
	if host == "" || port == "" {
		return "", "", fmt.Errorf("dsn %q missing host or port", dsn)
	}
	return host, port, nil
}
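splitHostPort leans entirely on `net/url`: `Hostname()` drops the port (and any IPv6 brackets), and `Port()` reports an empty string when no port is present, which is exactly the case rejected above. A quick standalone check with an illustrative DSN:

```go
package main

import (
	"fmt"
	"net/url"
)

// hostPort mirrors splitHostPort above, minus the error wrapping: it relies
// on net/url alone and reports empty strings when a component is absent.
func hostPort(dsn string) (string, string) {
	parsed, err := url.Parse(dsn)
	if err != nil {
		return "", ""
	}
	return parsed.Hostname(), parsed.Port()
}

func main() {
	// The DSN here is illustrative.
	host, port := hostPort("postgres://admin:admin@10.0.0.5:5432/galaxy")
	fmt.Println(host, port) // → 10.0.0.5 5432

	// A DSN without an explicit port yields Port() == "", which is the
	// case splitHostPort rejects with an error.
	_, noPort := hostPort("postgres://admin@10.0.0.5/galaxy")
	fmt.Println(noPort == "") // → true
}
```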
@@ -0,0 +1,138 @@
package harness

import (
	"context"
	"net/url"
	"testing"
	"time"

	"galaxy/postgres"

	"github.com/stretchr/testify/require"
)

func TestPostgresContainerRoundTrip(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	t.Cleanup(cancel)

	rt := StartPostgresContainer(t)

	require.NoError(t, rt.EnsureRoleAndSchema(ctx, "smoke_schema", "smoke_role", "smoke_pass"))

	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = rt.DSNForSchema("smoke_schema", "smoke_role")
	cfg.OperationTimeout = 5 * time.Second

	db, err := postgres.OpenPrimary(ctx, cfg)
	require.NoError(t, err)
	t.Cleanup(func() {
		require.NoError(t, db.Close())
	})

	require.NoError(t, postgres.Ping(ctx, db, cfg.OperationTimeout))

	_, err = db.ExecContext(ctx, `CREATE TABLE notes (id serial PRIMARY KEY, body text NOT NULL)`)
	require.NoError(t, err)

	var insertedID int64
	require.NoError(t, db.QueryRowContext(ctx,
		`INSERT INTO notes (body) VALUES ($1) RETURNING id`, "hello").Scan(&insertedID))
	require.Greater(t, insertedID, int64(0))

	var body string
	require.NoError(t, db.QueryRowContext(ctx,
		`SELECT body FROM notes WHERE id = $1`, insertedID).Scan(&body))
	require.Equal(t, "hello", body)

	// search_path is honoured: the unqualified table created above resolved
	// inside smoke_schema.
	var schemaName string
	require.NoError(t, db.QueryRowContext(ctx,
		`SELECT table_schema FROM information_schema.tables WHERE table_name = 'notes'`,
	).Scan(&schemaName))
	require.Equal(t, "smoke_schema", schemaName)
}

func TestEnsureRoleAndSchemaIsIdempotent(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	t.Cleanup(cancel)

	rt := StartPostgresContainer(t)

	require.NoError(t, rt.EnsureRoleAndSchema(ctx, "schema_x", "role_x", "pass_x"))
	require.NoError(t, rt.EnsureRoleAndSchema(ctx, "schema_x", "role_x", "pass_x"))
}

func TestEnsureRoleAndSchemaSupportsReservedWordIdentifiers(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	t.Cleanup(cancel)

	rt := StartPostgresContainer(t)

	// `user` is a SQL reserved word; identifier quoting must keep this working.
	require.NoError(t, rt.EnsureRoleAndSchema(ctx, "user", "userservice", "secret"))

	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = rt.DSNForSchema("user", "userservice")
	cfg.OperationTimeout = 5 * time.Second

	db, err := postgres.OpenPrimary(ctx, cfg)
	require.NoError(t, err)
	t.Cleanup(func() {
		require.NoError(t, db.Close())
	})

	require.NoError(t, postgres.Ping(ctx, db, cfg.OperationTimeout))
}

func TestWithPostgresBuildsPrimaryDSNEnv(t *testing.T) {
	t.Parallel()

	rt := newRuntimeForTest("127.0.0.1", "55432", "galaxy_integration", "userservice", "s3cr3t!")

	env := WithPostgres(rt, "USERSERVICE", "user", "userservice")

	require.Len(t, env, 1)

	dsn, ok := env["USERSERVICE_POSTGRES_PRIMARY_DSN"]
	require.True(t, ok, "missing USERSERVICE_POSTGRES_PRIMARY_DSN entry")

	parsed, err := url.Parse(dsn)
	require.NoError(t, err)
	require.Equal(t, "postgres", parsed.Scheme)
	require.Equal(t, "127.0.0.1:55432", parsed.Host)
	require.Equal(t, "/galaxy_integration", parsed.Path)
	require.Equal(t, "userservice", parsed.User.Username())

	password, hasPassword := parsed.User.Password()
	require.True(t, hasPassword)
	require.Equal(t, "s3cr3t!", password)

	query := parsed.Query()
	require.Equal(t, "user", query.Get("search_path"))
	require.Equal(t, "disable", query.Get("sslmode"))
}

func TestDSNForSchemaPanicsWithoutCredentials(t *testing.T) {
	t.Parallel()

	rt := newRuntimeForTest("127.0.0.1", "55432", "galaxy_integration", "userservice", "secret")

	require.PanicsWithValue(t,
		`harness: DSNForSchema called for role "unknown" with no credentials; call EnsureRoleAndSchema first`,
		func() {
			_ = rt.DSNForSchema("user", "unknown")
		},
	)
}

// newRuntimeForTest builds a PostgresRuntime without spinning a container.
// It exists only to exercise the pure DSN/env-builder paths.
func newRuntimeForTest(host, port, database, role, password string) *PostgresRuntime {
	return &PostgresRuntime{
		host:     host,
		port:     port,
		database: database,
		creds:    map[string]string{role: password},
	}
}
@@ -0,0 +1,51 @@
package harness

import (
	"context"
	"testing"
)

// UserServicePersistence captures the per-test persistence dependencies of
// the User Service binary: a PostgreSQL container hosting the `user` schema
// owned by the `userservice` role, and the Redis credentials that point the
// service at the caller-supplied master address.
type UserServicePersistence struct {
	// Postgres exposes the started container so tests that need direct SQL
	// access to the user schema (verifying side effects, seeding fixtures)
	// can read or write through it.
	Postgres *PostgresRuntime

	// Env carries the environment entries that must be passed to the
	// userservice process. It is safe to merge into the caller's existing env
	// map, or to use as-is and append further USERSERVICE_* knobs in place.
	Env map[string]string
}

// StartUserServicePersistence brings up one isolated PostgreSQL container,
// provisions the `user` schema with the `userservice` role, and returns the
// environment entries that wire the userservice binary to that container plus
// the supplied Redis master address.
//
// The returned password (`integration`) matches the architectural rule that
// Redis traffic is password-protected; miniredis accepts arbitrary password
// values when its own RequireAuth is not engaged, so the same value works
// against both miniredis and the real `tcredis` runtime.
//
// Cleanup of the container is handled by the underlying StartPostgresContainer
// through `t.Cleanup`; callers do not need to defer anything.
func StartUserServicePersistence(t testing.TB, redisMasterAddr string) UserServicePersistence {
	t.Helper()

	rt := StartPostgresContainer(t)
	if err := rt.EnsureRoleAndSchema(context.Background(), "user", "userservice", "userservice"); err != nil {
		t.Fatalf("ensure user schema/role: %v", err)
	}

	env := WithPostgres(rt, "USERSERVICE", "user", "userservice")
	env["USERSERVICE_REDIS_MASTER_ADDR"] = redisMasterAddr
	env["USERSERVICE_REDIS_PASSWORD"] = "integration"
	return UserServicePersistence{
		Postgres: rt,
		Env:      env,
	}
}
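The Env contract described above (safe to merge, or to extend in place) is what the integration harnesses below rely on before calling StartProcess. A minimal sketch of the extend-in-place pattern; the DSN, addresses, and extra knobs here are hypothetical stand-ins:

```go
package main

import "fmt"

// baseEnv stands in for StartUserServicePersistence(...).Env; the DSN and
// addresses are hypothetical.
func baseEnv() map[string]string {
	return map[string]string{
		"USERSERVICE_POSTGRES_PRIMARY_DSN": "postgres://userservice:secret@127.0.0.1:55432/it?search_path=user&sslmode=disable",
		"USERSERVICE_REDIS_MASTER_ADDR":    "127.0.0.1:6379",
		"USERSERVICE_REDIS_PASSWORD":       "integration",
	}
}

func main() {
	env := baseEnv()
	// Append further USERSERVICE_* knobs in place, exactly as the
	// integration harnesses do before handing the map to StartProcess.
	env["USERSERVICE_LOG_LEVEL"] = "info"
	env["USERSERVICE_INTERNAL_HTTP_ADDR"] = "127.0.0.1:18080"
	fmt.Println(len(env)) // → 5
}
```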
@@ -218,13 +218,12 @@ func newLobbyNotificationHarness(t *testing.T, gmHandler http.HandlerFunc) *lobb
 	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
 	lobbyBinary := harness.BuildBinary(t, "lobby", "./lobby/cmd/lobby")

-	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, map[string]string{
-		"USERSERVICE_LOG_LEVEL":          "info",
-		"USERSERVICE_INTERNAL_HTTP_ADDR": userServiceAddr,
-		"USERSERVICE_REDIS_ADDR":         redisRuntime.Addr,
-		"OTEL_TRACES_EXPORTER":           "none",
-		"OTEL_METRICS_EXPORTER":          "none",
-	})
+	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
+	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
+	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
+	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
+	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
+	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
 	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

 	// Use unique stream prefixes per test so concurrent runs do not bleed.
@@ -234,23 +233,22 @@ func newLobbyNotificationHarness(t *testing.T, gmHandler http.HandlerFunc) *lobb
 	jobResultsStream := runtimeJobResultsStream + ":" + suffix
 	gmEventsStream := gmLobbyEventsStream + ":" + suffix

-	lobbyProcess := harness.StartProcess(t, "lobby", lobbyBinary, map[string]string{
-		"LOBBY_LOG_LEVEL":                              "info",
-		"LOBBY_PUBLIC_HTTP_ADDR":                       lobbyPublicAddr,
-		"LOBBY_INTERNAL_HTTP_ADDR":                     lobbyInternalAddr,
-		"LOBBY_REDIS_ADDR":                             redisRuntime.Addr,
-		"LOBBY_USER_SERVICE_BASE_URL":                  "http://" + userServiceAddr,
-		"LOBBY_GM_BASE_URL":                            gmStub.URL,
-		"LOBBY_NOTIFICATION_INTENTS_STREAM":            intentsStream,
-		"LOBBY_USER_LIFECYCLE_STREAM":                  lifecycleStream,
-		"LOBBY_RUNTIME_JOB_RESULTS_STREAM":             jobResultsStream,
-		"LOBBY_GM_EVENTS_STREAM":                       gmEventsStream,
-		"LOBBY_RUNTIME_JOB_RESULTS_READ_BLOCK_TIMEOUT": "200ms",
-		"LOBBY_USER_LIFECYCLE_READ_BLOCK_TIMEOUT":      "200ms",
-		"LOBBY_GM_EVENTS_READ_BLOCK_TIMEOUT":           "200ms",
-		"OTEL_TRACES_EXPORTER":                         "none",
-		"OTEL_METRICS_EXPORTER":                        "none",
-	})
+	lobbyEnv := harness.StartLobbyServicePersistence(t, redisRuntime.Addr).Env
+	lobbyEnv["LOBBY_LOG_LEVEL"] = "info"
+	lobbyEnv["LOBBY_PUBLIC_HTTP_ADDR"] = lobbyPublicAddr
+	lobbyEnv["LOBBY_INTERNAL_HTTP_ADDR"] = lobbyInternalAddr
+	lobbyEnv["LOBBY_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
+	lobbyEnv["LOBBY_GM_BASE_URL"] = gmStub.URL
+	lobbyEnv["LOBBY_NOTIFICATION_INTENTS_STREAM"] = intentsStream
+	lobbyEnv["LOBBY_USER_LIFECYCLE_STREAM"] = lifecycleStream
+	lobbyEnv["LOBBY_RUNTIME_JOB_RESULTS_STREAM"] = jobResultsStream
+	lobbyEnv["LOBBY_GM_EVENTS_STREAM"] = gmEventsStream
+	lobbyEnv["LOBBY_RUNTIME_JOB_RESULTS_READ_BLOCK_TIMEOUT"] = "200ms"
+	lobbyEnv["LOBBY_USER_LIFECYCLE_READ_BLOCK_TIMEOUT"] = "200ms"
+	lobbyEnv["LOBBY_GM_EVENTS_READ_BLOCK_TIMEOUT"] = "200ms"
+	lobbyEnv["OTEL_TRACES_EXPORTER"] = "none"
+	lobbyEnv["OTEL_METRICS_EXPORTER"] = "none"
+	lobbyProcess := harness.StartProcess(t, "lobby", lobbyBinary, lobbyEnv)
 	harness.WaitForHTTPStatus(t, lobbyProcess, "http://"+lobbyInternalAddr+"/readyz", http.StatusOK)

 	return &lobbyNotificationHarness{
@@ -106,25 +106,23 @@ func newLobbyUserHarness(t *testing.T) *lobbyUserHarness {
 	userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
 	lobbyBinary := harness.BuildBinary(t, "lobby", "./lobby/cmd/lobby")

-	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, map[string]string{
-		"USERSERVICE_LOG_LEVEL":          "info",
-		"USERSERVICE_INTERNAL_HTTP_ADDR": userServiceAddr,
-		"USERSERVICE_REDIS_ADDR":         redisRuntime.Addr,
-		"OTEL_TRACES_EXPORTER":           "none",
-		"OTEL_METRICS_EXPORTER":          "none",
-	})
+	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
+	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
+	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
+	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
+	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
+	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
 	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

-	lobbyProcess := harness.StartProcess(t, "lobby", lobbyBinary, map[string]string{
-		"LOBBY_LOG_LEVEL":             "info",
-		"LOBBY_PUBLIC_HTTP_ADDR":      lobbyPublicAddr,
-		"LOBBY_INTERNAL_HTTP_ADDR":    lobbyInternalAddr,
-		"LOBBY_REDIS_ADDR":            redisRuntime.Addr,
-		"LOBBY_USER_SERVICE_BASE_URL": "http://" + userServiceAddr,
-		"LOBBY_GM_BASE_URL":           gmStub.URL,
-		"OTEL_TRACES_EXPORTER":        "none",
-		"OTEL_METRICS_EXPORTER":       "none",
-	})
+	lobbyEnv := harness.StartLobbyServicePersistence(t, redisRuntime.Addr).Env
+	lobbyEnv["LOBBY_LOG_LEVEL"] = "info"
+	lobbyEnv["LOBBY_PUBLIC_HTTP_ADDR"] = lobbyPublicAddr
+	lobbyEnv["LOBBY_INTERNAL_HTTP_ADDR"] = lobbyInternalAddr
+	lobbyEnv["LOBBY_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
+	lobbyEnv["LOBBY_GM_BASE_URL"] = gmStub.URL
+	lobbyEnv["OTEL_TRACES_EXPORTER"] = "none"
+	lobbyEnv["OTEL_METRICS_EXPORTER"] = "none"
+	lobbyProcess := harness.StartProcess(t, "lobby", lobbyBinary, lobbyEnv)
 	harness.WaitForHTTPStatus(t, lobbyProcess, "http://"+lobbyInternalAddr+"/readyz", http.StatusOK)

 	return &lobbyUserHarness{
@@ -167,35 +167,35 @@ func newNotificationGatewayHarness(t *testing.T) *notificationGatewayHarness {
 	notificationBinary := harness.BuildBinary(t, "notification", "./notification/cmd/notification")
 	gatewayBinary := harness.BuildBinary(t, "gateway", "./gateway/cmd/gateway")

-	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, map[string]string{
-		"USERSERVICE_LOG_LEVEL":          "info",
-		"USERSERVICE_INTERNAL_HTTP_ADDR": userServiceAddr,
-		"USERSERVICE_REDIS_ADDR":         redisRuntime.Addr,
-		"OTEL_TRACES_EXPORTER":           "none",
-		"OTEL_METRICS_EXPORTER":          "none",
-	})
+	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
+	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
+	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
+	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
+	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
+	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
 	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

-	notificationProcess := harness.StartProcess(t, "notification", notificationBinary, map[string]string{
-		"NOTIFICATION_LOG_LEVEL":                    "info",
-		"NOTIFICATION_INTERNAL_HTTP_ADDR":           notificationInternalAddr,
-		"NOTIFICATION_REDIS_ADDR":                   redisRuntime.Addr,
-		"NOTIFICATION_USER_SERVICE_BASE_URL":        "http://" + userServiceAddr,
-		"NOTIFICATION_USER_SERVICE_TIMEOUT":         time.Second.String(),
-		"NOTIFICATION_INTENTS_READ_BLOCK_TIMEOUT":   "100ms",
-		"NOTIFICATION_ROUTE_BACKOFF_MIN":            "100ms",
-		"NOTIFICATION_ROUTE_BACKOFF_MAX":            "100ms",
-		"NOTIFICATION_GATEWAY_CLIENT_EVENTS_STREAM": notificationGatewayClientEventsStream,
-		"OTEL_TRACES_EXPORTER":                      "none",
-		"OTEL_METRICS_EXPORTER":                     "none",
-	})
+	notificationEnv := harness.StartNotificationServicePersistence(t, redisRuntime.Addr).Env
+	notificationEnv["NOTIFICATION_LOG_LEVEL"] = "info"
+	notificationEnv["NOTIFICATION_INTERNAL_HTTP_ADDR"] = notificationInternalAddr
+	notificationEnv["NOTIFICATION_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
+	notificationEnv["NOTIFICATION_USER_SERVICE_TIMEOUT"] = time.Second.String()
+	notificationEnv["NOTIFICATION_INTENTS_READ_BLOCK_TIMEOUT"] = "100ms"
+	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MIN"] = "100ms"
+	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MAX"] = "100ms"
+	notificationEnv["NOTIFICATION_GATEWAY_CLIENT_EVENTS_STREAM"] = notificationGatewayClientEventsStream
+	notificationEnv["OTEL_TRACES_EXPORTER"] = "none"
+	notificationEnv["OTEL_METRICS_EXPORTER"] = "none"
+	notificationProcess := harness.StartProcess(t, "notification", notificationBinary, notificationEnv)
 	harness.WaitForHTTPStatus(t, notificationProcess, "http://"+notificationInternalAddr+"/readyz", http.StatusOK)

 	gatewayProcess := harness.StartProcess(t, "gateway", gatewayBinary, map[string]string{
 		"GATEWAY_LOG_LEVEL":               "info",
 		"GATEWAY_PUBLIC_HTTP_ADDR":        gatewayPublicAddr,
 		"GATEWAY_AUTHENTICATED_GRPC_ADDR": gatewayGRPCAddr,
-		"GATEWAY_SESSION_CACHE_REDIS_ADDR":       redisRuntime.Addr,
+		"GATEWAY_REDIS_MASTER_ADDR":              redisRuntime.Addr,
+		"GATEWAY_REDIS_PASSWORD":                 "integration",
 		"GATEWAY_SESSION_CACHE_REDIS_KEY_PREFIX": "gateway:session:",
 		"GATEWAY_SESSION_EVENTS_REDIS_STREAM":    "gateway:session_events",
 		"GATEWAY_CLIENT_EVENTS_REDIS_STREAM":     notificationGatewayClientEventsStream,
@@ -332,45 +332,42 @@ func newNotificationMailHarness(t *testing.T) *notificationMailHarness {
 	mailBinary := harness.BuildBinary(t, "mail", "./mail/cmd/mail")
 	notificationBinary := harness.BuildBinary(t, "notification", "./notification/cmd/notification")

-	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, map[string]string{
-		"USERSERVICE_LOG_LEVEL":          "info",
-		"USERSERVICE_INTERNAL_HTTP_ADDR": userServiceAddr,
-		"USERSERVICE_REDIS_ADDR":         redisRuntime.Addr,
-		"OTEL_TRACES_EXPORTER":           "none",
-		"OTEL_METRICS_EXPORTER":          "none",
-	})
+	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
+	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
+	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
+	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
+	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
+	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)
 	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

-	mailProcess := harness.StartProcess(t, "mail", mailBinary, map[string]string{
-		"MAIL_LOG_LEVEL":                "info",
-		"MAIL_INTERNAL_HTTP_ADDR":       mailInternalAddr,
-		"MAIL_REDIS_ADDR":               redisRuntime.Addr,
-		"MAIL_TEMPLATE_DIR":             mailTemplateDir(t),
-		"MAIL_SMTP_MODE":                "stub",
-		"MAIL_STREAM_BLOCK_TIMEOUT":     "100ms",
-		"MAIL_OPERATOR_REQUEST_TIMEOUT": time.Second.String(),
-		"MAIL_SHUTDOWN_TIMEOUT":         "2s",
-		"OTEL_TRACES_EXPORTER":          "none",
-		"OTEL_METRICS_EXPORTER":         "none",
-	})
+	mailEnv := harness.StartMailServicePersistence(t, redisRuntime.Addr).Env
+	mailEnv["MAIL_LOG_LEVEL"] = "info"
+	mailEnv["MAIL_INTERNAL_HTTP_ADDR"] = mailInternalAddr
+	mailEnv["MAIL_TEMPLATE_DIR"] = mailTemplateDir(t)
+	mailEnv["MAIL_SMTP_MODE"] = "stub"
+	mailEnv["MAIL_STREAM_BLOCK_TIMEOUT"] = "100ms"
+	mailEnv["MAIL_OPERATOR_REQUEST_TIMEOUT"] = time.Second.String()
+	mailEnv["MAIL_SHUTDOWN_TIMEOUT"] = "2s"
+	mailEnv["OTEL_TRACES_EXPORTER"] = "none"
+	mailEnv["OTEL_METRICS_EXPORTER"] = "none"
+	mailProcess := harness.StartProcess(t, "mail", mailBinary, mailEnv)
 	waitForMailReady(t, mailProcess, "http://"+mailInternalAddr)

-	notificationProcess := harness.StartProcess(t, "notification", notificationBinary, map[string]string{
-		"NOTIFICATION_LOG_LEVEL":                                     "info",
-		"NOTIFICATION_INTERNAL_HTTP_ADDR":                            notificationInternalAddr,
-		"NOTIFICATION_REDIS_ADDR":                                    redisRuntime.Addr,
-		"NOTIFICATION_USER_SERVICE_BASE_URL":                         "http://" + userServiceAddr,
-		"NOTIFICATION_USER_SERVICE_TIMEOUT":                          time.Second.String(),
-		"NOTIFICATION_INTENTS_READ_BLOCK_TIMEOUT":                    "100ms",
-		"NOTIFICATION_ROUTE_BACKOFF_MIN":                             "100ms",
-		"NOTIFICATION_ROUTE_BACKOFF_MAX":                             "100ms",
-		"NOTIFICATION_ADMIN_EMAILS_GEO_REVIEW_RECOMMENDED":           "geo-admin@example.com",
-		"NOTIFICATION_ADMIN_EMAILS_GAME_GENERATION_FAILED":           "game-admin@example.com",
-		"NOTIFICATION_ADMIN_EMAILS_LOBBY_RUNTIME_PAUSED_AFTER_START": "lobby-ops@example.com",
-		"NOTIFICATION_ADMIN_EMAILS_LOBBY_APPLICATION_SUBMITTED":      "lobby-admin@example.com",
-		"OTEL_TRACES_EXPORTER":                                       "none",
-		"OTEL_METRICS_EXPORTER":                                      "none",
-	})
+	notificationEnv := harness.StartNotificationServicePersistence(t, redisRuntime.Addr).Env
+	notificationEnv["NOTIFICATION_LOG_LEVEL"] = "info"
+	notificationEnv["NOTIFICATION_INTERNAL_HTTP_ADDR"] = notificationInternalAddr
+	notificationEnv["NOTIFICATION_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
+	notificationEnv["NOTIFICATION_USER_SERVICE_TIMEOUT"] = time.Second.String()
+	notificationEnv["NOTIFICATION_INTENTS_READ_BLOCK_TIMEOUT"] = "100ms"
+	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MIN"] = "100ms"
+	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MAX"] = "100ms"
+	notificationEnv["NOTIFICATION_ADMIN_EMAILS_GEO_REVIEW_RECOMMENDED"] = "geo-admin@example.com"
+	notificationEnv["NOTIFICATION_ADMIN_EMAILS_GAME_GENERATION_FAILED"] = "game-admin@example.com"
+	notificationEnv["NOTIFICATION_ADMIN_EMAILS_LOBBY_RUNTIME_PAUSED_AFTER_START"] = "lobby-ops@example.com"
+	notificationEnv["NOTIFICATION_ADMIN_EMAILS_LOBBY_APPLICATION_SUBMITTED"] = "lobby-admin@example.com"
+	notificationEnv["OTEL_TRACES_EXPORTER"] = "none"
+	notificationEnv["OTEL_METRICS_EXPORTER"] = "none"
+	notificationProcess := harness.StartProcess(t, "notification", notificationBinary, notificationEnv)
 	harness.WaitForHTTPStatus(t, notificationProcess, "http://"+notificationInternalAddr+"/readyz", http.StatusOK)

 	return &notificationMailHarness{
@@ -3,6 +3,7 @@ package notificationuser_test
|
|||||||
import (
|
import (
|
||||||
"bytes"
|
"bytes"
|
||||||
"context"
|
"context"
|
||||||
|
"database/sql"
|
||||||
"encoding/base64"
|
"encoding/base64"
|
||||||
"encoding/json"
|
"encoding/json"
|
||||||
"errors"
|
"errors"
|
||||||
@@ -13,6 +14,7 @@ import (
|
|||||||
|
|
||||||
"galaxy/integration/internal/harness"
|
"galaxy/integration/internal/harness"
|
||||||
|
|
||||||
|
_ "github.com/jackc/pgx/v5/stdlib"
|
||||||
"github.com/redis/go-redis/v9"
|
"github.com/redis/go-redis/v9"
|
||||||
"github.com/stretchr/testify/require"
|
"github.com/stretchr/testify/require"
|
||||||
)
|
)
|
||||||
@@ -66,17 +68,13 @@ func TestNotificationUserTemporaryUnavailabilityDoesNotAdvanceOffset(t *testing.
|
|||||||
return ok && offset.LastProcessedEntryID == messageID
|
return ok && offset.LastProcessedEntryID == messageID
|
||||||
}, time.Second, 50*time.Millisecond)
|
}, time.Second, 50*time.Millisecond)
|
||||||
|
|
||||||
exists, err := h.redis.Exists(context.Background(), notificationMalformedIntentKey(messageID)).Result()
|
require.False(t, h.malformedIntentExists(t, messageID))
|
||||||
require.NoError(t, err)
|
require.False(t, h.routeExists(t, messageID, "email:user:"+recipient.UserID))
|
||||||
require.Zero(t, exists)
|
|
||||||
|
|
||||||
exists, err = h.redis.Exists(context.Background(), notificationRouteKey(messageID, "email:user:"+recipient.UserID)).Result()
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.Zero(t, exists)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
type notificationUserHarness struct {
|
type notificationUserHarness struct {
|
||||||
redis *redis.Client
|
redis *redis.Client
|
||||||
|
pg *sql.DB
|
||||||
|
|
||||||
userServiceURL string
|
userServiceURL string
|
||||||
|
|
||||||
@@ -141,31 +139,34 @@ func newNotificationUserHarness(t *testing.T) *notificationUserHarness {
|
|||||||
userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
|
userServiceBinary := harness.BuildBinary(t, "userservice", "./user/cmd/userservice")
|
||||||
notificationBinary := harness.BuildBinary(t, "notification", "./notification/cmd/notification")
|
notificationBinary := harness.BuildBinary(t, "notification", "./notification/cmd/notification")
|
||||||
|
|
||||||
-	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, map[string]string{
-		"USERSERVICE_LOG_LEVEL":          "info",
-		"USERSERVICE_INTERNAL_HTTP_ADDR": userServiceAddr,
-		"USERSERVICE_REDIS_ADDR":         redisRuntime.Addr,
-		"OTEL_TRACES_EXPORTER":           "none",
-		"OTEL_METRICS_EXPORTER":          "none",
-	})
+	userServiceEnv := harness.StartUserServicePersistence(t, redisRuntime.Addr).Env
+	userServiceEnv["USERSERVICE_LOG_LEVEL"] = "info"
+	userServiceEnv["USERSERVICE_INTERNAL_HTTP_ADDR"] = userServiceAddr
+	userServiceEnv["OTEL_TRACES_EXPORTER"] = "none"
+	userServiceEnv["OTEL_METRICS_EXPORTER"] = "none"
+	userServiceProcess := harness.StartProcess(t, "userservice", userServiceBinary, userServiceEnv)

 	waitForUserServiceReady(t, userServiceProcess, "http://"+userServiceAddr)

-	notificationProcess := harness.StartProcess(t, "notification", notificationBinary, map[string]string{
-		"NOTIFICATION_LOG_LEVEL":                  "info",
-		"NOTIFICATION_INTERNAL_HTTP_ADDR":         notificationInternalAddr,
-		"NOTIFICATION_REDIS_ADDR":                 redisRuntime.Addr,
-		"NOTIFICATION_USER_SERVICE_BASE_URL":      "http://" + userServiceAddr,
-		"NOTIFICATION_USER_SERVICE_TIMEOUT":       "250ms",
-		"NOTIFICATION_INTENTS_READ_BLOCK_TIMEOUT": "100ms",
-		"NOTIFICATION_ROUTE_BACKOFF_MIN":          "100ms",
-		"NOTIFICATION_ROUTE_BACKOFF_MAX":          "100ms",
-		"OTEL_TRACES_EXPORTER":                    "none",
-		"OTEL_METRICS_EXPORTER":                   "none",
-	})
+	notificationPersistence := harness.StartNotificationServicePersistence(t, redisRuntime.Addr)
+	notificationEnv := notificationPersistence.Env
+	notificationPG, err := sql.Open("pgx", notificationPersistence.Postgres.DSNForSchema("notification", "notificationservice"))
+	require.NoError(t, err)
+	t.Cleanup(func() { _ = notificationPG.Close() })
+	notificationEnv["NOTIFICATION_LOG_LEVEL"] = "info"
+	notificationEnv["NOTIFICATION_INTERNAL_HTTP_ADDR"] = notificationInternalAddr
+	notificationEnv["NOTIFICATION_USER_SERVICE_BASE_URL"] = "http://" + userServiceAddr
+	notificationEnv["NOTIFICATION_USER_SERVICE_TIMEOUT"] = "250ms"
+	notificationEnv["NOTIFICATION_INTENTS_READ_BLOCK_TIMEOUT"] = "100ms"
+	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MIN"] = "100ms"
+	notificationEnv["NOTIFICATION_ROUTE_BACKOFF_MAX"] = "100ms"
+	notificationEnv["OTEL_TRACES_EXPORTER"] = "none"
+	notificationEnv["OTEL_METRICS_EXPORTER"] = "none"
+	notificationProcess := harness.StartProcess(t, "notification", notificationBinary, notificationEnv)

 	harness.WaitForHTTPStatus(t, notificationProcess, "http://"+notificationInternalAddr+"/readyz", http.StatusOK)

 	return &notificationUserHarness{
 		redis:               redisClient,
+		pg:                  notificationPG,
 		userServiceURL:      "http://" + userServiceAddr,
 		notificationProcess: notificationProcess,
 		userServiceProcess:  userServiceProcess,
@@ -213,14 +214,27 @@ func (h *notificationUserHarness) publishUserIntent(t *testing.T, recipientUserI
 func (h *notificationUserHarness) waitForRoute(t *testing.T, notificationID string, routeID string) notificationRouteRecord {
 	t.Helper()

-	key := notificationRouteKey(notificationID, routeID)
 	var route notificationRouteRecord
 	require.Eventually(t, func() bool {
-		payload, err := h.redis.Get(context.Background(), key).Bytes()
-		if err != nil {
-			return false
+		row := h.pg.QueryRowContext(context.Background(),
+			`SELECT notification_id, route_id, channel, recipient_ref, status, resolved_email, resolved_locale
+			 FROM routes WHERE notification_id = $1 AND route_id = $2`,
+			notificationID, routeID,
+		)
+		if err := row.Scan(
+			&route.NotificationID,
+			&route.RouteID,
+			&route.Channel,
+			&route.RecipientRef,
+			&route.Status,
+			&route.ResolvedEmail,
+			&route.ResolvedLocale,
+		); err != nil {
+			if errors.Is(err, sql.ErrNoRows) {
+				return false
+			}
+			require.NoError(t, err)
 		}
-		require.NoError(t, decodeJSONPayload(payload, &route))
 		return true
 	}, 10*time.Second, 50*time.Millisecond)

@@ -230,14 +244,30 @@ func (h *notificationUserHarness) waitForRoute(t *testing.T, notificationID stri
 func (h *notificationUserHarness) waitForMalformedIntent(t *testing.T, streamEntryID string) malformedIntentRecord {
 	t.Helper()

-	key := notificationMalformedIntentKey(streamEntryID)
 	var record malformedIntentRecord
 	require.Eventually(t, func() bool {
-		payload, err := h.redis.Get(context.Background(), key).Bytes()
-		if err != nil {
-			return false
+		row := h.pg.QueryRowContext(context.Background(),
+			`SELECT stream_entry_id, notification_type, producer, idempotency_key,
+			        failure_code, failure_message, recorded_at
+			 FROM malformed_intents WHERE stream_entry_id = $1`,
+			streamEntryID,
+		)
+		var recordedAt time.Time
+		if err := row.Scan(
+			&record.StreamEntryID,
+			&record.NotificationType,
+			&record.Producer,
+			&record.IdempotencyKey,
+			&record.FailureCode,
+			&record.FailureMessage,
+			&recordedAt,
+		); err != nil {
+			if errors.Is(err, sql.ErrNoRows) {
+				return false
+			}
+			require.NoError(t, err)
 		}
-		require.NoError(t, decodeStrictJSONPayload(payload, &record))
+		record.RecordedAtMS = recordedAt.UTC().UnixMilli()
 		return true
 	}, 10*time.Second, 50*time.Millisecond)

@@ -374,12 +404,26 @@ func decodeJSONPayload(payload []byte, target any) error {
 	return nil
 }

-func notificationRouteKey(notificationID string, routeID string) string {
-	return "notification:routes:" + encodeKeyComponent(notificationID) + ":" + encodeKeyComponent(routeID)
+func (h *notificationUserHarness) routeExists(t *testing.T, notificationID string, routeID string) bool {
+	t.Helper()
+	var exists bool
+	err := h.pg.QueryRowContext(context.Background(),
+		`SELECT EXISTS(SELECT 1 FROM routes WHERE notification_id = $1 AND route_id = $2)`,
+		notificationID, routeID,
+	).Scan(&exists)
+	require.NoError(t, err)
+	return exists
 }

-func notificationMalformedIntentKey(streamEntryID string) string {
-	return "notification:malformed_intents:" + encodeKeyComponent(streamEntryID)
+func (h *notificationUserHarness) malformedIntentExists(t *testing.T, streamEntryID string) bool {
+	t.Helper()
+	var exists bool
+	err := h.pg.QueryRowContext(context.Background(),
+		`SELECT EXISTS(SELECT 1 FROM malformed_intents WHERE stream_entry_id = $1)`,
+		streamEntryID,
+	).Scan(&exists)
+	require.NoError(t, err)
+	return exists
 }

 func notificationStreamOffsetKey() string {
@@ -0,0 +1,10 @@
+# Makefile for galaxy/lobby.
+#
+# The `jet` target regenerates the go-jet/v2 query-builder code under
+# internal/adapters/postgres/jet/ against a transient PostgreSQL container
+# brought up by cmd/jetgen. Generated code is committed.
+
+.PHONY: jet
+
+jet:
+	go run ./cmd/jetgen
+73
-38
@@ -137,7 +137,16 @@ The service starts two HTTP listeners and one Redis Stream consumer pipeline.
 
 ### Startup dependencies
 
-- one reachable Redis deployment at `LOBBY_REDIS_ADDR`
+- one reachable Redis deployment at `LOBBY_REDIS_MASTER_ADDR` (mandatory
+  password via `LOBBY_REDIS_PASSWORD`; replicas optional via
+  `LOBBY_REDIS_REPLICA_ADDRS`). Used for streams, race-name directory,
+  per-game runtime aggregates, and stream offsets.
+- one reachable PostgreSQL primary at `LOBBY_POSTGRES_PRIMARY_DSN` (DSN
+  must include `search_path=lobby&sslmode=disable`). Embedded goose
+  migrations apply at startup before any listener opens; on migration or
+  ping failure the service exits non-zero. The four core enrollment
+  entities (game / application / invite / membership) live here after
+  PG_PLAN.md §6A; `docs/postgres-migration.md` is the decision record.
 - `User Service` reachable at `LOBBY_USER_SERVICE_BASE_URL` (startup check only;
   runtime failures are surfaced as request errors, not boot failures)
 - `Game Master` at `LOBBY_GM_BASE_URL` (same policy — startup check omitted;
@@ -147,7 +156,7 @@ The service starts two HTTP listeners and one Redis Stream consumer pipeline.
 
 - `GET /healthz` on both ports returns `{"status":"ok"}`
 - `GET /readyz` on both ports returns `{"status":"ready"}` after successful
-  startup; no live Redis ping per request
+  startup; no live Redis or PostgreSQL ping per request
 
 ## Game Record Model
 
@@ -576,10 +585,14 @@ Sentinel errors: `ErrNameTaken`, `ErrInvalidName`, `ErrPendingMissing`,
 
 ### v1 backends
 
-- **Redis** (`lobby/internal/adapters/redisstate/racenamedir.go`) — the
-  production adapter using the key layout in §Redis Logical Model.
+- **PostgreSQL** (`lobby/internal/adapters/postgres/racenamedir/directory.go`)
+  — the production adapter; one row per binding under
+  `lobby.race_names`, transactional writes guarded by
+  `pg_advisory_xact_lock(hashtextextended(canonical_key, 0))`. See
+  `docs/postgres-migration.md` §6B for the full schema and decision
+  record.
 - **Stub** (`lobby/internal/adapters/racenamestub/directory.go`) — in-process
-  implementation for unit tests that do not need Redis. Chosen by
+  implementation for unit tests that do not need PostgreSQL. Chosen by
   `LOBBY_RACE_NAME_DIRECTORY_BACKEND=stub`.
 
 A future dedicated `Race Name Service` replaces the adapter without changing
@@ -1060,7 +1073,9 @@ Stable error codes:
 
 ### Required
 
-- `LOBBY_REDIS_ADDR`
+- `LOBBY_REDIS_MASTER_ADDR`
+- `LOBBY_REDIS_PASSWORD`
+- `LOBBY_POSTGRES_PRIMARY_DSN`
 - `LOBBY_USER_SERVICE_BASE_URL`
 - `LOBBY_GM_BASE_URL`
 
@@ -1087,11 +1102,28 @@ Internal HTTP:
 
 Redis connectivity:
 
-- `LOBBY_REDIS_USERNAME`
-- `LOBBY_REDIS_PASSWORD`
-- `LOBBY_REDIS_DB`
-- `LOBBY_REDIS_TLS_ENABLED`
-- `LOBBY_REDIS_OPERATION_TIMEOUT` with default `2s`
+- `LOBBY_REDIS_MASTER_ADDR` (required)
+- `LOBBY_REDIS_REPLICA_ADDRS` (optional, comma-separated; not consumed yet)
+- `LOBBY_REDIS_PASSWORD` (required)
+- `LOBBY_REDIS_DB` (default 0)
+- `LOBBY_REDIS_OPERATION_TIMEOUT` (default 250ms)
+
+The legacy `LOBBY_REDIS_ADDR`, `LOBBY_REDIS_USERNAME`, and
+`LOBBY_REDIS_TLS_ENABLED` env vars were retired in PG_PLAN.md §6A; setting
+either of the latter two now fails fast at startup. See
+`ARCHITECTURE.md §Persistence Backends` for the architectural rules.
+
+PostgreSQL connectivity (PG_PLAN.md §6A and §6B; durable game /
+application / invite / membership records and the Race Name Directory
+live here):
+
+- `LOBBY_POSTGRES_PRIMARY_DSN` (required;
+  e.g. `postgres://lobbyservice:secret@postgres:5432/galaxy?search_path=lobby&sslmode=disable`)
+- `LOBBY_POSTGRES_REPLICA_DSNS` (optional, comma-separated; not consumed yet)
+- `LOBBY_POSTGRES_OPERATION_TIMEOUT` (default 1s)
+- `LOBBY_POSTGRES_MAX_OPEN_CONNS` (default 25)
+- `LOBBY_POSTGRES_MAX_IDLE_CONNS` (default 5)
+- `LOBBY_POSTGRES_CONN_MAX_LIFETIME` (default 30m)
 
 Stream names:
 
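Since the DSN requirements are only checked at startup, a guard along these lines keeps misconfiguration from surfacing later as query errors. `validateLobbyDSN` is a hypothetical helper for illustration, not part of the actual `pkg/postgres` API:

```go
package main

import (
	"fmt"
	"net/url"
)

// validateLobbyDSN sketches the documented rule: the primary DSN must
// carry search_path=lobby and sslmode=disable as query parameters.
func validateLobbyDSN(dsn string) error {
	u, err := url.Parse(dsn)
	if err != nil {
		return fmt.Errorf("parse dsn: %w", err)
	}
	q := u.Query()
	if q.Get("search_path") != "lobby" {
		return fmt.Errorf("dsn must set search_path=lobby, got %q", q.Get("search_path"))
	}
	if q.Get("sslmode") != "disable" {
		return fmt.Errorf("dsn must set sslmode=disable, got %q", q.Get("sslmode"))
	}
	return nil
}

func main() {
	dsn := "postgres://lobbyservice:secret@postgres:5432/galaxy?search_path=lobby&sslmode=disable"
	fmt.Println(validateLobbyDSN(dsn)) // <nil>
}
```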
@@ -1114,8 +1146,9 @@ Enrollment automation:
 
 Race Name Directory:
 
-- `LOBBY_RACE_NAME_DIRECTORY_BACKEND` with default `redis`
-  (alternate: `stub` for in-process tests)
+- `LOBBY_RACE_NAME_DIRECTORY_BACKEND` with default `postgres`
+  (alternate: `stub` for in-process tests; PG_PLAN.md §6B retired the
+  `redis` backend)
 - `LOBBY_RACE_NAME_EXPIRATION_INTERVAL` with default `1h` — pending
   registration expiration worker tick
 
@@ -1135,39 +1168,35 @@ OpenTelemetry:
 - `LOBBY_OTEL_STDOUT_TRACES_ENABLED`
 - `LOBBY_OTEL_STDOUT_METRICS_ENABLED`
 
-## Redis Logical Model
+## Persistence Layout
 
-Storage rules:
+Game / application / invite / membership records live in PostgreSQL after
+PG_PLAN.md §6A; the Race Name Directory followed in §6B. See
+`docs/postgres-migration.md` for the schema and decision records. The
+`lobby` schema owns five tables — `games`, `applications`, `invites`,
+`memberships`, `race_names` — plus the partial UNIQUE index on
+`applications(applicant_user_id, game_id) WHERE status <> 'rejected'` that
+enforces the single-active-application invariant and the partial UNIQUE
+index on `race_names(canonical_key) WHERE binding_kind = 'registered'`
+that enforces single-registered-per-canonical.
+
+The Redis-backed keys below survive both stages. Redis owns the
+runtime-coordination state — per-game runtime aggregates, gap activation,
+capability-evaluation guards, and stream consumer offsets — plus the
+event-bus streams themselves.
+
+### Redis key table
+
+Storage rules for Redis:
 
-- durable records are stored as strict JSON blobs
 - timestamps are stored in Unix milliseconds unless noted otherwise
 - dynamic key segments are base64url-encoded
 
-### Key table
-
 | Logical artifact | Redis key |
 | --- | --- |
-| game record | `lobby:games:<game_id>` |
-| game index by status | `lobby:games_by_status:<status>` (sorted set; score = created_at) |
-| games by owner | `lobby:games_by_owner:<user_id>` (set of game_ids; populated for private games on Save) |
-| application record | `lobby:applications:<application_id>` |
-| applications by game | `lobby:game_applications:<game_id>` (set of application_ids) |
-| applications by user | `lobby:user_applications:<user_id>` (set of application_ids) |
-| active application per (user, game) | `lobby:user_game_application:<user_id>:<game_id>` → `application_id` |
-| invite record | `lobby:invites:<invite_id>` |
-| invites by game | `lobby:game_invites:<game_id>` (set of invite_ids) |
-| invites by user (invitee) | `lobby:user_invites:<user_id>` (set of invite_ids) |
-| invites by inviter | `lobby:user_inviter_invites:<user_id>` (set of invite_ids) |
-| membership record | `lobby:memberships:<membership_id>` |
-| memberships by game | `lobby:game_memberships:<game_id>` (set of membership_ids) |
-| memberships by user | `lobby:user_memberships:<user_id>` (set of membership_ids) |
-| registered race name | `lobby:race_names:registered:<canonical_key>` → JSON `{user_id, race_name, source_game_id, registered_at}` |
-| user → registered canonical keys | `lobby:race_names:user_registered:<user_id>` (set of `canonical_key`) |
-| per-game race name reservation | `lobby:race_names:reservations:<game_id>:<canonical_key>` → JSON `{user_id, race_name, reserved_at, status ∈ reserved/pending_registration, eligible_until_ms?}` |
-| user → reservations index | `lobby:race_names:user_reservations:<user_id>` (set of `game_id:canonical_key`) |
-| pending-registration expiry index | `lobby:race_names:pending_index` (sorted set; score = `eligible_until_ms`) |
-| canonical-key lookup cache | `lobby:race_names:canonical_lookup:<canonical_key>` → JSON `{kind, holder_user_id, game_id?}` |
 | per-game per-user stats aggregate | `lobby:game_turn_stats:<game_id>:<user_id>` → JSON aggregate |
+| per-game stats user index | `lobby:game_turn_stats_by_game:<game_id>` (set of `user_id`) |
+| capability-evaluation guard | `lobby:capability_evaluation:done:<game_id>` (sentinel string) |
 | GM event stream offset | `lobby:stream_offsets:gm_events` |
 | runtime job result offset | `lobby:stream_offsets:runtime_results` |
 | user lifecycle stream offset | `lobby:stream_offsets:user_lifecycle` |
@@ -1175,12 +1204,18 @@ Storage rules:
 
 ### Frozen record fields
 
+The five durable records are stored in PostgreSQL columns; the field set
+per record is unchanged from the previous Redis JSON shape and is
+documented inline with the migration scripts under
+`internal/adapters/postgres/migrations/`.
+
 | Record | Frozen fields |
 | --- | --- |
 | game record | all game fields listed in Game Record Model section |
 | application record | `application_id`, `game_id`, `applicant_user_id`, `race_name`, `status`, `created_at`, `decided_at` |
 | invite record | `invite_id`, `game_id`, `inviter_user_id`, `invitee_user_id`, `race_name` (set at redeem), `status`, `created_at`, `expires_at`, `decided_at` |
 | membership record | all membership fields listed in Membership Model section |
+| race_names row | `canonical_key`, `game_id`, `holder_user_id`, `race_name`, `binding_kind`, `source_game_id`, `reserved_at_ms`, `eligible_until_ms` (pending only), `registered_at_ms` (registered only) |
 
 ## Observability
 
@@ -0,0 +1,236 @@
+// Command jetgen regenerates the go-jet/v2 query-builder code under
+// galaxy/lobby/internal/adapters/postgres/jet/ against a transient
+// PostgreSQL instance.
+//
+// The program is intended to be invoked as `go run ./cmd/jetgen` (or via the
+// `make jet` Makefile target) from within `galaxy/lobby`. It is not part of
+// the runtime binary.
+//
+// Steps:
+//
+//  1. start a postgres:16-alpine container via testcontainers-go
+//  2. open it through pkg/postgres as the superuser
+//  3. CREATE ROLE lobbyservice and CREATE SCHEMA "lobby"
+//     AUTHORIZATION lobbyservice
+//  4. open a second pool as lobbyservice with search_path=lobby and apply
+//     the embedded goose migrations
+//  5. run jet's PostgreSQL generator against schema=lobby, writing into
+//     ../internal/adapters/postgres/jet
+package main
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"log"
+	"net/url"
+	"os"
+	"path/filepath"
+	"runtime"
+	"time"
+
+	"galaxy/lobby/internal/adapters/postgres/migrations"
+	"galaxy/postgres"
+
+	jetpostgres "github.com/go-jet/jet/v2/generator/postgres"
+	testcontainers "github.com/testcontainers/testcontainers-go"
+	tcpostgres "github.com/testcontainers/testcontainers-go/modules/postgres"
+	"github.com/testcontainers/testcontainers-go/wait"
+)
+
+const (
+	postgresImage      = "postgres:16-alpine"
+	superuserName      = "galaxy"
+	superuserPassword  = "galaxy"
+	superuserDatabase  = "galaxy_lobby"
+	serviceRole        = "lobbyservice"
+	servicePassword    = "lobbyservice"
+	serviceSchema      = "lobby"
+	containerStartup   = 90 * time.Second
+	defaultOpTimeout   = 10 * time.Second
+	jetOutputDirSuffix = "internal/adapters/postgres/jet"
+)
+
+func main() {
+	if err := run(context.Background()); err != nil {
+		log.Fatalf("jetgen: %v", err)
+	}
+}
+
+func run(ctx context.Context) error {
+	outputDir, err := jetOutputDir()
+	if err != nil {
+		return err
+	}
+
+	container, err := tcpostgres.Run(ctx, postgresImage,
+		tcpostgres.WithDatabase(superuserDatabase),
+		tcpostgres.WithUsername(superuserName),
+		tcpostgres.WithPassword(superuserPassword),
+		testcontainers.WithWaitStrategy(
+			wait.ForLog("database system is ready to accept connections").
+				WithOccurrence(2).
+				WithStartupTimeout(containerStartup),
+		),
+	)
+	if err != nil {
+		return fmt.Errorf("start postgres container: %w", err)
+	}
+	defer func() {
+		if termErr := testcontainers.TerminateContainer(container); termErr != nil {
+			log.Printf("jetgen: terminate container: %v", termErr)
+		}
+	}()
+
+	baseDSN, err := container.ConnectionString(ctx, "sslmode=disable")
+	if err != nil {
+		return fmt.Errorf("resolve container dsn: %w", err)
+	}
+
+	if err := provisionRoleAndSchema(ctx, baseDSN); err != nil {
+		return err
+	}
+
+	scopedDSN, err := dsnForServiceRole(baseDSN)
+	if err != nil {
+		return err
+	}
+	if err := applyMigrations(ctx, scopedDSN); err != nil {
+		return err
+	}
+
+	if err := os.RemoveAll(outputDir); err != nil {
+		return fmt.Errorf("remove existing jet output %q: %w", outputDir, err)
+	}
+	if err := os.MkdirAll(filepath.Dir(outputDir), 0o755); err != nil {
+		return fmt.Errorf("ensure jet output parent: %w", err)
+	}
+
+	jetCfg := postgres.DefaultConfig()
+	jetCfg.PrimaryDSN = scopedDSN
+	jetCfg.OperationTimeout = defaultOpTimeout
+	jetDB, err := postgres.OpenPrimary(ctx, jetCfg)
+	if err != nil {
+		return fmt.Errorf("open scoped pool for jet generation: %w", err)
+	}
+	defer func() { _ = jetDB.Close() }()
+
+	if err := jetpostgres.GenerateDB(jetDB, serviceSchema, outputDir); err != nil {
+		return fmt.Errorf("jet generate: %w", err)
+	}
+
+	log.Printf("jetgen: generated jet code into %s (schema=%s)", outputDir, serviceSchema)
+	return nil
+}
+
+func provisionRoleAndSchema(ctx context.Context, baseDSN string) error {
+	cfg := postgres.DefaultConfig()
+	cfg.PrimaryDSN = baseDSN
+	cfg.OperationTimeout = defaultOpTimeout
+	db, err := postgres.OpenPrimary(ctx, cfg)
+	if err != nil {
+		return fmt.Errorf("open admin pool: %w", err)
+	}
+	defer func() { _ = db.Close() }()
+
+	statements := []string{
+		fmt.Sprintf(`DO $$ BEGIN
+	IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = %s) THEN
+		CREATE ROLE %s LOGIN PASSWORD %s;
+	END IF;
+END $$;`, sqlLiteral(serviceRole), sqlIdentifier(serviceRole), sqlLiteral(servicePassword)),
+		fmt.Sprintf(`CREATE SCHEMA IF NOT EXISTS %s AUTHORIZATION %s;`,
+			sqlIdentifier(serviceSchema), sqlIdentifier(serviceRole)),
+		fmt.Sprintf(`GRANT USAGE ON SCHEMA %s TO %s;`,
+			sqlIdentifier(serviceSchema), sqlIdentifier(serviceRole)),
+	}
+	for _, statement := range statements {
+		if _, err := db.ExecContext(ctx, statement); err != nil {
+			return fmt.Errorf("provision %q/%q: %w", serviceSchema, serviceRole, err)
+		}
+	}
+	return nil
+}
+
+func dsnForServiceRole(baseDSN string) (string, error) {
+	parsed, err := url.Parse(baseDSN)
+	if err != nil {
+		return "", fmt.Errorf("parse base dsn: %w", err)
+	}
+	values := url.Values{}
+	values.Set("search_path", serviceSchema)
+	values.Set("sslmode", "disable")
+	scoped := url.URL{
+		Scheme:   parsed.Scheme,
+		User:     url.UserPassword(serviceRole, servicePassword),
+		Host:     parsed.Host,
+		Path:     parsed.Path,
+		RawQuery: values.Encode(),
+	}
+	return scoped.String(), nil
+}
+
+func applyMigrations(ctx context.Context, dsn string) error {
+	cfg := postgres.DefaultConfig()
+	cfg.PrimaryDSN = dsn
+	cfg.OperationTimeout = defaultOpTimeout
+	db, err := postgres.OpenPrimary(ctx, cfg)
+	if err != nil {
+		return fmt.Errorf("open scoped pool: %w", err)
+	}
+	defer func() { _ = db.Close() }()
+
+	if err := postgres.Ping(ctx, db, defaultOpTimeout); err != nil {
+		return err
+	}
+	if err := postgres.RunMigrations(ctx, db, migrations.FS(), "."); err != nil {
+		return fmt.Errorf("run migrations: %w", err)
+	}
+	return nil
+}
+
+// jetOutputDir returns the absolute path that jet should write into. We rely
+// on the runtime caller info to anchor it to galaxy/lobby regardless of the
+// invoking working directory.
+func jetOutputDir() (string, error) {
+	_, file, _, ok := runtime.Caller(0)
+	if !ok {
+		return "", errors.New("resolve runtime caller for jet output path")
+	}
+	dir := filepath.Dir(file)
+	// dir = .../galaxy/lobby/cmd/jetgen
+	moduleRoot := filepath.Clean(filepath.Join(dir, "..", ".."))
+	return filepath.Join(moduleRoot, jetOutputDirSuffix), nil
+}
+
+func sqlIdentifier(name string) string {
+	return `"` + escapeDoubleQuotes(name) + `"`
+}
+
+func sqlLiteral(value string) string {
+	return "'" + escapeSingleQuotes(value) + "'"
+}
+
+func escapeDoubleQuotes(value string) string {
+	out := make([]byte, 0, len(value))
+	for index := 0; index < len(value); index++ {
+		if value[index] == '"' {
+			out = append(out, '"', '"')
+			continue
+		}
+		out = append(out, value[index])
+	}
+	return string(out)
+}
+
+func escapeSingleQuotes(value string) string {
+	out := make([]byte, 0, len(value))
+	for index := 0; index < len(value); index++ {
+		if value[index] == '\'' {
+			out = append(out, '\'', '\'')
+			continue
+		}
+		out = append(out, value[index])
+	}
+	return string(out)
+}
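The two escape helpers above implement standard SQL quote-doubling. An illustrative check of that behavior, with the helpers re-declared via `strings.ReplaceAll` so the snippet is self-contained:

```go
package main

import (
	"fmt"
	"strings"
)

// sqlIdentifier wraps an identifier in double quotes, doubling any embedded
// double quotes; sqlLiteral does the same for single-quoted string literals.
func sqlIdentifier(name string) string {
	return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`
}

func sqlLiteral(value string) string {
	return "'" + strings.ReplaceAll(value, "'", "''") + "'"
}

func main() {
	fmt.Println(sqlIdentifier(`weird"schema`)) // "weird""schema"
	fmt.Println(sqlLiteral("o'brien"))         // 'o''brien'
}
```

Quote-doubling is sufficient here because jetgen only interpolates its own compile-time constants; user-supplied input would instead call for parameterized statements.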
+32
-14
@@ -6,10 +6,14 @@ and timestamps with values that match the deployment under inspection.
 ## Example `.env`
 
 A minimum-viable `LOBBY_*` set for a local run against a single Redis
-container. The full list with defaults lives in `../README.md` §Configuration.
+container plus a PostgreSQL container with the `lobby` schema and the
+`lobbyservice` role provisioned. The full list with defaults lives in
+`../README.md` §Configuration.
 
 ```bash
-LOBBY_REDIS_ADDR=127.0.0.1:6379
+LOBBY_REDIS_MASTER_ADDR=127.0.0.1:6379
+LOBBY_REDIS_PASSWORD=local
+LOBBY_POSTGRES_PRIMARY_DSN=postgres://lobbyservice:lobbyservice@127.0.0.1:5432/galaxy?search_path=lobby&sslmode=disable
 LOBBY_USER_SERVICE_BASE_URL=http://127.0.0.1:8083
 LOBBY_GM_BASE_URL=http://127.0.0.1:8096
 
@@ -19,7 +23,7 @@ LOBBY_INTERNAL_HTTP_ADDR=:8095
 LOBBY_LOG_LEVEL=info
 LOBBY_SHUTDOWN_TIMEOUT=30s
 
-LOBBY_RACE_NAME_DIRECTORY_BACKEND=redis
+LOBBY_RACE_NAME_DIRECTORY_BACKEND=postgres
 LOBBY_ENROLLMENT_AUTOMATION_INTERVAL=30s
 LOBBY_RACE_NAME_EXPIRATION_INTERVAL=1h
 
@@ -115,16 +119,36 @@ curl -s http://localhost:8095/api/v1/internal/games/game-01HZ...
 curl -s http://localhost:8095/api/v1/internal/games/game-01HZ.../memberships
 ```
 
-## Redis Examples
+## Storage Inspection Examples
 
-### Inspect a game record
+### Inspect a game record (PostgreSQL)
 
 ```bash
-redis-cli GET lobby:games:game-01HZ...
+psql "$LOBBY_POSTGRES_PRIMARY_DSN" -c \
+  "SELECT * FROM lobby.games WHERE game_id = 'game-01HZ...'"
 ```
 
-The value is a strict JSON blob with the fields documented in
-`../README.md` §Game Record Model.
+The columns mirror the fields documented in `../README.md` §Game Record Model.
+
+### Inspect open enrollment games (sorted by created_at)
+
+```bash
+psql "$LOBBY_POSTGRES_PRIMARY_DSN" -c \
+  "SELECT game_id, game_name, created_at FROM lobby.games
+   WHERE status = 'enrollment_open'
+   ORDER BY created_at DESC"
+```
+
+### Inspect a Race Name Directory binding
+
+```bash
+psql "$LOBBY_POSTGRES_PRIMARY_DSN" -c \
+  "SELECT canonical_key, game_id, holder_user_id, race_name, binding_kind,
+          source_game_id, eligible_until_ms, registered_at_ms
+   FROM lobby.race_names WHERE race_name = 'Aurora'"
+```
+
+## Redis Examples
 
 ### Publish a runtime job result (Runtime Manager simulation)
@@ -162,12 +186,6 @@ redis-cli XADD gm:lobby_events '*' \
 	finished_at_ms 1714123456789
 ```
 
-### Inspect open enrollment games (sorted by created_at)
-
-```bash
-redis-cli ZRANGE lobby:games_by_status:enrollment_open 0 -1 WITHSCORES
-```
-
 ## Notification Intent Format
 
 Lobby produces every notification through `pkg/notificationintent` and
|
|||||||
@@ -0,0 +1,386 @@

# PostgreSQL Migration

PG_PLAN.md §6A migrated the four core enrollment entities of Game Lobby
Service — `Game`, `Application`, `Invite`, `Membership` — from Redis-only
durable storage to the steady-state Redis + PostgreSQL split codified in
`ARCHITECTURE.md §Persistence Backends`. PG_PLAN.md §6B then moved the
Race Name Directory onto PostgreSQL, retiring the Redis Lua scripts and
canonical-lookup cache that backed it. PG_PLAN.md §6C confirmed which
runtime-coordination state intentionally stays on Redis (per-game
`game_turn_stats`, `gap_activated_at`, `capability_evaluation:done:*`,
`stream_offsets:*`, plus the event-bus streams themselves) and pruned the
remaining redisstate keyspace.

This document records the schema decisions and the non-obvious agreements
behind them. Use it together with the migration scripts under
`internal/adapters/postgres/migrations/` and the runtime wiring
(`internal/app/runtime.go`).

## Outcomes

- Schema `lobby` (provisioned externally) holds four tables: `games`,
  `applications`, `invites`, `memberships`. A partial UNIQUE index on
  `applications(applicant_user_id, game_id) WHERE status <> 'rejected'`
  enforces the single-active-application constraint at the database
  level.
- The runtime opens one PostgreSQL pool via `pkg/postgres.OpenPrimary`,
  applies embedded goose migrations strictly before any HTTP listener
  becomes ready, and exits non-zero when migration or ping fails.
- The runtime opens one shared `*redis.Client` via
  `pkg/redisconn.NewMasterClient` and passes it to the Race Name
  Directory adapter, the per-game stats / gap-activation /
  evaluation-guard / stream-offset stores, the consumer pipelines, and
  the notification-intent publisher.
- The Redis adapter package (`internal/adapters/redisstate/`) keeps the
  surviving stores (`racenamedir`, `gameturnstatsstore`,
  `gapactivationstore`, `evaluationguardstore`, `streamoffsetstore`,
  `streamlagprobe`) and the keyspace methods that back them; the
  game/application/invite/membership stores, codecs, tests, and
  per-record TTL constants are gone.
- Configuration drops `LOBBY_REDIS_ADDR`, `LOBBY_REDIS_USERNAME`, and
  `LOBBY_REDIS_TLS_ENABLED` and introduces `LOBBY_REDIS_MASTER_ADDR`,
  `LOBBY_REDIS_REPLICA_ADDRS`, `LOBBY_REDIS_PASSWORD`,
  `LOBBY_POSTGRES_PRIMARY_DSN`, `LOBBY_POSTGRES_REPLICA_DSNS`, plus
  the standard `LOBBY_POSTGRES_*` pool tuning knobs. Setting either of
  the retired `LOBBY_REDIS_USERNAME` or `LOBBY_REDIS_TLS_ENABLED` now
  fails fast at startup via the shared `pkg/redisconn.LoadFromEnv`
  rejection path.
## Decisions

### 1. One schema, externally-provisioned role

**Decision.** The `lobby` schema and the matching `lobbyservice` role
are created outside the migration sequence (in tests, by
`integration/internal/harness/postgres_container.go::EnsureRoleAndSchema`;
in production, by an ops init script not in scope for this stage). The
embedded migration `00001_init.sql` contains only DDL for tables and
indexes and assumes it runs as the schema owner with
`search_path=lobby`.

**Why.** Mirrors the precedent set by Notification Stage 5 and Mail
Stage 4 and matches the schema-per-service architectural rule
(`ARCHITECTURE.md §Persistence Backends`). Mixing role + schema + table
DDL into one script would force every consumer of the migration to run
as a superuser; splitting them lines up with the operational split
(ops provisions roles and schemas, the service applies schema-scoped
migrations).

### 2. Single-active application = partial UNIQUE on `applications`

**Decision.** `applications` carries a partial UNIQUE index on
`(applicant_user_id, game_id) WHERE status <> 'rejected'`. INSERT
attempts that violate the constraint are surfaced to the service layer
as `application.ErrConflict` via the shared
`sqlx.IsUniqueViolation` helper.

**Why.** Replaces the Redis lookup key `lobby:user_game_application:*:*`
with a deterministic database-level invariant. Multiple `rejected`
rows are intentionally allowed (one applicant may submit, get rejected,
and resubmit), and the UNIQUE fires only on a second concurrent
submitted/approved row for the same `(user, game)`. The constraint is
race-safe: under concurrent submission attempts one INSERT wins and the
others fail with a conflict.
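Sketched as DDL, the invariant looks like this (the shipped definition lives in `00001_init.sql`; the index name here is an assumption for illustration):

```sql
-- Illustrative sketch, not the shipped migration. Rejected rows fall
-- outside the predicate, so resubmission after rejection inserts cleanly;
-- two concurrent active submissions collide on this index and surface as
-- the unique violation mapped to application.ErrConflict.
CREATE UNIQUE INDEX applications_active_uniq
    ON applications (applicant_user_id, game_id)
    WHERE status <> 'rejected';
```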
### 3. Public games carry an empty `owner_user_id`; partial index excludes them

**Decision.** `games.owner_user_id` is `text NOT NULL DEFAULT ''`, and
the secondary `games_owner_idx` is partial: `WHERE game_type = 'private'`.
Public games (admin-owned) carry an empty owner string and are excluded
from the index entirely.

**Why.** Mirrors the previous Redis behaviour where `games_by_owner:*`
sets were created only for private games. The partial index keeps the
owner lookup tight (only private-game rows participate) while letting
the column stay non-nullable and consistent with the domain model.
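As a sketch (the actual DDL lives in `00001_init.sql`):

```sql
-- Only private games enter the index; public games ('' owner) cost nothing.
CREATE INDEX games_owner_idx
    ON games (owner_user_id)
    WHERE game_type = 'private';
```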
### 4. JSONB columns for runtime snapshot and runtime binding

**Decision.** `games.runtime_snapshot` is `jsonb NOT NULL DEFAULT
'{}'::jsonb`; `games.runtime_binding` is `jsonb NULL`. The JSON shapes
used inside both columns are stable and live in
`internal/adapters/postgres/gamestore/codecs.go`. `runtime_binding`
binds NULL when the domain pointer is nil, otherwise an object with
`container_id`, `engine_endpoint`, `runtime_job_id`, `bound_at_ms`
fields.

**Why.** Both fields are opaque to queries — Lobby never element-filters
on their internals. JSONB matches the "everything outside primary
fields is JSON" pattern Notification Stage 5 already established and
allows a future GIN index without a schema rewrite. The `bound_at_ms`
field inside the binding stays in Unix milliseconds so the encoded
payload is directly comparable across Redis and PostgreSQL audits during
the transition window.

### 5. Optimistic concurrency via current-status compare-and-swap

**Decision.** `UpdateStatus` on every store is implemented as `UPDATE …
WHERE id = $X AND status = $expected`. A zero-rows result is
disambiguated with a follow-up `SELECT status` probe — missing rows map
to the per-domain `ErrNotFound`, mismatches map to `ErrConflict`.
Snapshot/binding overrides on `games` use the same pattern but only
guard on the primary key (no expected-status gate).

**Why.** Mirrors the previous Redis WATCH/TxPipelined behaviour without
holding a `SELECT … FOR UPDATE` lock across application logic. The
compare-and-swap is local to one statement, never spans more than one
network round trip, and produces the same observable error semantics
the service layer already depends on.
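A minimal sketch of the compare-and-swap shape, with `$1`..`$3` as placeholder parameters (the shipped stores build the equivalent statements through jet, not raw SQL):

```sql
UPDATE games
   SET status = $3              -- next status
 WHERE game_id = $1
   AND status  = $2;            -- expected current status

-- Zero rows affected? One follow-up probe disambiguates:
--   no row            -> per-domain ErrNotFound
--   row, other status -> ErrConflict
SELECT status FROM games WHERE game_id = $1;
```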
### 6. Memberships store `race_name` and `canonical_key` side by side

**Decision.** `memberships` carries both `race_name` (original casing)
and `canonical_key` (policy-derived form) as separate `text NOT NULL`
columns. There is no UNIQUE constraint on `canonical_key`.

**Why.** Downstream consumers — capability evaluation and the
user-lifecycle cascade — read the canonical form directly without
re-deriving it from `race_name`, which is the same arrangement the
Redis JSON record had. Race-name uniqueness across the platform
remains the responsibility of the Race Name Directory; enforcing a
UNIQUE on memberships' canonical_key now would duplicate the RND
invariant and create deadlock potential between the two stores.

### 7. ON DELETE CASCADE from games to children

**Decision.** Each child table (`applications`, `invites`,
`memberships`) declares its `game_id` as `REFERENCES games(game_id) ON
DELETE CASCADE`.

**Why.** Lobby code never deletes games today — every terminal status
is a soft state — so the cascade has no live trigger. It exists for
two future paths: scheduled cleanup of `cancelled` games far past
retention, and explicit operator/test resets. CASCADE keeps those paths
trivial and free of dangling references.
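A sketch of the future cleanup path the cascade enables; the statement and its retention window are hypothetical, not shipped code:

```sql
-- Deleting a game row removes its applications, invites, and memberships
-- in the same statement via ON DELETE CASCADE.
DELETE FROM games
 WHERE status = 'cancelled'
   AND created_at < now() - interval '180 days';  -- retention window is an assumption
```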
### 8. Listing order: most-recent-first for games, oldest-first for child tables

**Decision.** `GetByStatus` and `GetByOwner` on `games` order by
`created_at DESC, game_id DESC`. The per-game/per-user listings on
`applications`, `invites`, `memberships` order by `created_at ASC,
<id> ASC` (memberships order by `joined_at ASC`).

**Why.** Game listings serve user-facing feeds where most-recent-first
is the natural expectation, matching the previous Redis sorted-set
score and the `accounts.created_at DESC` convention from User Stage 3.
Child-table listings serve administrative and cascade flows where the
chronological order helps operators reason about the sequence of
events. The ports doc explicitly says "order is adapter-defined", so
either convention is contract-compatible.
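The two ordering conventions, sketched as plain SQL (the stores emit equivalent jet-built queries; column names beyond those documented elsewhere in this file are assumptions):

```sql
-- Game feeds: most-recent-first, game_id as a deterministic tiebreaker.
SELECT game_id, game_name, created_at
  FROM games
 WHERE status = $1
 ORDER BY created_at DESC, game_id DESC;

-- Child listings: oldest-first, e.g. a game's applications.
SELECT application_id, applicant_user_id, status, created_at
  FROM applications
 WHERE game_id = $1
 ORDER BY created_at ASC, application_id ASC;
```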
### 9. Heavy `runtime_test.go` / `runtime_smoke_test.go` deleted; integration coverage

**Decision.** The service-local `internal/app/runtime_test.go` and
`runtime_smoke_test.go` were removed. Black-box runtime coverage moves
to the `integration/lobbyuser` and `integration/lobbynotification`
suites, which now spin up both a PostgreSQL container (via
`harness.StartLobbyServicePersistence`) and the existing Redis
container.

**Why.** Mirrors the Mail Stage 4 / Notification Stage 5 precedent.
Booting a full Lobby runtime now requires both PostgreSQL and Redis,
which is the integration-suite shape; duplicating that bootstrap
inside `internal/app/` would be heavy and fragile. The remaining
service-local tests cover units that do not require the full runtime.

### 10. Query layer is `go-jet/jet/v2`

**Decision.** All four PG-store packages build SQL through the jet
builder API (`pgtable.<Table>.INSERT/SELECT/UPDATE/DELETE` plus the
`pg.AND/OR/SET/COALESCE/...` DSL). Generated table models live under
`internal/adapters/postgres/jet/lobby/{model,table}/` and are
regenerated by `make jet` (which spins up a transient PostgreSQL via
testcontainers, applies the embedded goose migrations, and runs jet's
generator). Generated code is committed.

**Why.** Aligns with `PG_PLAN.md` §Library stack ("Query layer:
`github.com/go-jet/jet/v2` (PostgreSQL dialect). Generated code lives
under each service `internal/adapters/postgres/jet/`, regenerated via
a `make jet` target and committed to the repo"). PostgreSQL constructs
beyond the plain builder calls (`FOR UPDATE`, `COALESCE`, `LOWER` on
subselects, JSONB params) are expressed through the dialect helpers
(`.FOR(pg.UPDATE())`, `pg.COALESCE`, `pg.LOWER`, direct
`[]byte`/string params for JSONB columns). Manual `rowScanner`
helpers (`scanGame`, `scanApplication`, `scanInvite`,
`scanMembership`) preserve the codecs.go boundary translations and
domain-type mapping; jet only owns SQL construction.

## Out of scope for §6A

- Read routing through `LOBBY_POSTGRES_REPLICA_DSNS` — config exposes
  the field, runtime ignores it.
- Production provisioning of the `lobby` schema and `lobbyservice`
  role — operational concern handled outside the service binary.

## §6B — Race Name Directory on PostgreSQL

§6B replaces the Redis-backed Race Name Directory (one Lua script + a
canonical-lookup cache + a pending-index ZSET + per-binding string keys)
with a single PostgreSQL table `race_names` whose rows back all three
binding kinds (`registered`, `reservation`, `pending_registration`).
The `race_names` DDL lives in `00001_init.sql` next to the four core
enrollment tables (it was originally introduced as a separate
`00002_race_names.sql`; PG_PLAN.md §9 collapsed the two files into one
init migration during the pre-launch development window). The adapter
`internal/adapters/postgres/racenamedir/directory.go` is the canonical
reference; the architecture rule is unchanged from §6A.

### 11. One table, composite primary key `(canonical_key, game_id)`

**Decision.** `race_names` carries one row per binding under the
composite primary key `(canonical_key, game_id)`. Reservations and
pending_registrations write the actual game id; registered rows write
`game_id = ''` and keep the source game in `source_game_id`. A partial
UNIQUE index on `(canonical_key)` filtered to `binding_kind =
'registered'` enforces the single-registered-per-canonical rule.

**Why.** PG_PLAN.md §6B sketched the table as `(canonical_key PK, …)`,
but the existing port semantics (`testReserveCrossGame`,
`testReleaseReservationKeepsCrossGame` in
`internal/ports/racenamedirtest/suite.go`) require the same user to hold
several per-game reservations on one canonical key concurrently. A flat
single-PK table cannot model that without losing the per-game
identity. The composite PK matches both invariants — at most one row per
(canonical, game) and at most one registered row per canonical — without
splitting the data into two tables (which would force every write
operation to touch two unrelated indexes and reproduce the old
canonical-lookup cache invariant manually).
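The table shape described above, sketched as DDL (the authoritative definition is `00001_init.sql`; exact types, nullability outside the documented fields, and the index name are assumptions):

```sql
CREATE TABLE race_names (
    canonical_key     text   NOT NULL,
    game_id           text   NOT NULL,   -- '' for registered rows
    holder_user_id    text   NOT NULL,
    race_name         text   NOT NULL,
    binding_kind      text   NOT NULL
        CHECK (binding_kind IN ('registered', 'reservation', 'pending_registration')),
    source_game_id    text,
    eligible_until_ms bigint,
    registered_at_ms  bigint,
    PRIMARY KEY (canonical_key, game_id)
);

-- At most one registered binding per canonical key.
CREATE UNIQUE INDEX race_names_registered_uniq
    ON race_names (canonical_key)
    WHERE binding_kind = 'registered';
```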
### 12. Concurrency: PostgreSQL transactional advisory locks

**Decision.** Every write operation (`Reserve`, `MarkPendingRegistration`,
`Register`, `ReleaseReservation`, the per-row branch of
`ExpirePendingRegistrations`) opens a `BEGIN; …; COMMIT` and acquires
`pg_advisory_xact_lock(hashtextextended($canonical_key, 0))` as the very
first statement. The lock auto-releases on commit or rollback.
`ReleaseAllByUser` is a single `DELETE WHERE holder_user_id = $1` and
takes no advisory lock — it runs on permanent_blocked / deleted
lifecycle events, so the user being deleted cannot be a concurrent
writer on those bindings.

**Why.** PG_PLAN.md §6B explicitly authorised either `SELECT … FOR
UPDATE` or advisory locks. `SELECT … FOR UPDATE` cannot serialize
against not-yet-existing rows (e.g. concurrent first-time `Reserve`s for
the same canonical), so advisory locks are required for race-free
INSERTs. Hashing through `hashtextextended` produces a 64-bit lock key
covering arbitrary canonical strings, sidestepping the `bigint` truncation
that older `hashtext` exposes. Holding the lock for one transaction
keeps the contention surface tight and matches the Notification §5
"narrow CAS, no application-logic-bound row locks" precedent.
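The transaction shape, sketched for a write such as `Reserve` (placeholder parameters; the adapter builds the real statements in `directory.go`):

```sql
BEGIN;

-- Serializes every writer on this canonical key, including INSERTs of
-- rows that do not exist yet; released automatically at COMMIT/ROLLBACK.
SELECT pg_advisory_xact_lock(hashtextextended($1, 0));   -- $1 = canonical_key

-- ...read current bindings, then INSERT / UPDATE / DELETE as the
-- operation requires...

COMMIT;
```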
### 13. `binding_kind` values match `ports.Kind*` verbatim

**Decision.** `race_names.binding_kind` stores `"registered"`,
`"reservation"`, or `"pending_registration"` — the same string literals
exported by `ports.KindRegistered`, `ports.KindReservation`,
`ports.KindPendingRegistration`. The adapter returns the raw value
directly through `Availability.Kind` without translation. A `CHECK`
constraint on the column rejects anything else.

**Why.** Avoids one boundary translation and one synonym ("reserved" vs
"reservation") that the Redis adapter carried internally as
`reservationStatusReserved = "reserved"`. With the port-equivalent
literals on disk, future operator-side queries (`SELECT … WHERE
binding_kind = 'reservation'`) match the Go-level constants 1:1, and
the adapter saves a `switch` per `Check` call.

### 14. `Check` returns the strongest binding via in-process priority

**Decision.** `Check` issues `SELECT holder_user_id, binding_kind FROM
race_names WHERE canonical_key = $1` and picks the strongest binding in
Go using a priority rank `registered > pending_registration >
reservation`. There is no SQL `CASE` expression in the ORDER BY.

**Why.** The dataset per canonical is bounded (at most one registered +
one row per active game) and is read frequently by every `Check`. The
Go-side rank avoids a SQL DSL detour that go-jet/v2 would express via
raw SQL anyway, and it keeps the query plan a single index scan on
`canonical_key`.
### 15. `ExpirePendingRegistrations` scans then locks per row

**Decision.** The expirer first runs an indexed scan
`WHERE binding_kind = 'pending_registration' AND eligible_until_ms <=
$cutoff` (served by `race_names_pending_eligible_idx`), then re-reads
each candidate inside its own advisory-locked transaction, asserts the
binding is still pending and still expired, and DELETEs it. Concurrent
`Register` or `ReleaseReservation` simply causes the per-row branch to
skip without error.

**Why.** Mirrors the Redis adapter's two-phase `ZRANGEBYSCORE` +
per-member release loop. A bulk `DELETE … WHERE eligible_until_ms <= …`
would not produce the per-entry `ports.ExpiredPending` slice the worker
needs for telemetry, and would race with `Register` (which targets the
same row).
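The two phases, sketched as SQL (parameters are placeholders; the worker drives this loop in Go):

```sql
-- Phase 1: cheap candidate scan on race_names_pending_eligible_idx.
SELECT canonical_key, game_id
  FROM race_names
 WHERE binding_kind = 'pending_registration'
   AND eligible_until_ms <= $1;                          -- $1 = cutoff (Unix ms)

-- Phase 2, per candidate, inside its own advisory-locked transaction:
BEGIN;
SELECT pg_advisory_xact_lock(hashtextextended($1, 0));   -- $1 = canonical_key
DELETE FROM race_names
 WHERE canonical_key = $1
   AND game_id = $2
   AND binding_kind = 'pending_registration'
   AND eligible_until_ms <= $3;   -- skips rows a concurrent Register already claimed
COMMIT;
```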
### 16. Shared port test suite stays on PostgreSQL via a serial harness

**Decision.** The shared `racenamedirtest` suite no longer calls
`t.Parallel()` from its subtests. Every subtest goes through the
factory, which truncates the lobby tables and constructs a fresh
adapter against the package-shared testcontainers PostgreSQL.

**Why.** The PostgreSQL adapter relies on `pgtest.TruncateAll` between
factory invocations; running subtests in parallel against one shared
container would race truncate against other subtests' INSERTs. Spinning
up a per-subtest schema would multiply container provisioning cost
significantly (the PG generation step alone takes minutes per fresh
container), and the suite is fast enough serially. The Redis-only
backend retired in §6B no longer needs the parallelism either; only the
in-process stub remains in scope, and it has trivial setup cost.

## §6C — Workers, ephemeral stores, cleanup

§6C closes the Lobby migration: it confirms what intentionally stays on
Redis, prunes the dead Redis adapter code, and finalises the
service-layer documentation.

### 17. Workers stayed on ports — no functional change

**Decision.** The four Lobby workers (`pendingregistration`,
`gmevents`, `runtimejobresult`, `userlifecycle`) and the
`enrollmentautomation` worker shipped in §6A already consume their
storage through ports. After §6B the `RaceNameDirectory` port resolves
to the PostgreSQL adapter; no worker required code changes.

**Why.** §6A established the port-on-storage seam for `GameStore`,
`ApplicationStore`, `InviteStore`, `MembershipStore`. §6B kept the same
contract for `RaceNameDirectory`. Worker logic depends on the contract,
not the backend, so the migration completes via a wiring switch in
`internal/app/wiring.go::buildRaceNameDirectory` without re-touching
worker code.

### 18. `redisstate` retains only runtime-coordination adapters

**Decision.** After §6C the `internal/adapters/redisstate/` package
implements only `GameTurnStatsStore`, `GapActivationStore`,
`EvaluationGuardStore`, `StreamOffsetStore`, and the `StreamLagProbe`.
The legacy `racenamedir.go`, `racenamedir_lua.go`,
`racenamedir_test.go`, `codecs_racename.go`, and the dead game
codecs (`codecs.go`'s `MarshalGame`/`UnmarshalGame`) are removed. The
`Keyspace` type only builds keys for the surviving adapters
(`GapActivatedAt`, `StreamOffset`, `GameTurnStat`,
`GameTurnStatsByGame`, `CapabilityEvaluationGuard`).

**Why.** Architectural rule (`ARCHITECTURE.md §Persistence Backends`):
Redis owns runtime-coordination state, PostgreSQL owns durable business
state. The retained Redis stores back ephemeral per-game aggregates
(`game_turn_stats`), short-lived sentinels (`gap_activated_at`,
`capability_evaluation:done:*`), and the consumer-offset coordination
state (`stream_offsets:*`) — all rebuildable or losable without
durability impact. Streams stay on Redis because they *are* the event
bus.

### 19. Default Race Name Directory backend is `postgres`

**Decision.** `LOBBY_RACE_NAME_DIRECTORY_BACKEND` defaults to
`"postgres"`. The accepted values are `postgres` (production) and
`stub` (in-process, for unit tests that do not need a real PostgreSQL).
The `redis` value, the corresponding `RaceNameDirectoryBackendRedis`
constant, and the wiring branch are removed.

**Why.** The Redis adapter is gone; keeping the value in the validator
would produce a misleading "configuration accepted, but startup fails
when wiring resolves the directory" path. Leaving `stub` as a valid
backend lets per-service unit tests run against a small, fast
in-process directory; integration suites use `postgres` via the
testcontainers harness.
+47 -18

@@ -7,8 +7,23 @@ readiness, shutdown, and the handful of recovery paths specific to Lobby.

Before starting the process, confirm:

- `LOBBY_REDIS_MASTER_ADDR` and `LOBBY_REDIS_PASSWORD` point to the Redis
  deployment used for the runtime-coordination state that intentionally
  stays on Redis: stream consumers/publishers, stream offsets, per-game
  turn-stats aggregates, gap-activation timestamps, and the
  capability-evaluation guard. The deprecated `LOBBY_REDIS_ADDR`,
  `LOBBY_REDIS_USERNAME`, and `LOBBY_REDIS_TLS_ENABLED` env vars were
  retired in PG_PLAN.md §6A; setting either of the latter two now fails
  fast at startup.
- `LOBBY_POSTGRES_PRIMARY_DSN` points to the PostgreSQL primary that
  hosts the `lobby` schema. The DSN must include `search_path=lobby` and
  `sslmode=disable`. Embedded goose migrations apply at startup before
  any HTTP listener opens; a migration or ping failure terminates the
  process with a non-zero exit. After PG_PLAN.md §6A the schema holds
  `games`, `applications`, `invites`, `memberships`; after §6B it also
  holds `race_names`. The schema and the `lobbyservice` role are
  provisioned externally (operator init script in production, the
  testcontainers harness in tests).
- `LOBBY_USER_SERVICE_BASE_URL` and `LOBBY_GM_BASE_URL` are reachable from
  the network the Lobby pods run in. Lobby does not ping these at boot,
  but transport failures against them will surface as request errors.

@@ -19,11 +34,13 @@ Before starting the process, confirm:

- `LOBBY_RUNTIME_JOB_RESULTS_STREAM` (default `runtime:job_results`)
- `LOBBY_USER_LIFECYCLE_STREAM` (default `user:lifecycle_events`)
- `LOBBY_NOTIFICATION_INTENTS_STREAM` (default `notification:intents`)
- `LOBBY_RACE_NAME_DIRECTORY_BACKEND` is `postgres` for production
  (the default after PG_PLAN.md §6B); the `stub` value is only for
  unit tests that do not need a real PostgreSQL.

At startup the process opens the PostgreSQL pool, applies migrations,
pings PostgreSQL, then opens the Redis client and pings Redis. Startup
fails fast if any step fails. There are no liveness checks against User
Service or Game Master at boot; those are surfaced at request time.

Expected listener state after a healthy start:

@@ -160,11 +177,15 @@ is reachable again.

To inspect the backlog:

```bash
psql "$LOBBY_POSTGRES_PRIMARY_DSN" -c \
  "SELECT canonical_key, game_id, holder_user_id, eligible_until_ms
   FROM lobby.race_names
   WHERE binding_kind = 'pending_registration'
   ORDER BY eligible_until_ms ASC"
```

Rows whose `eligible_until_ms` is at or below `extract(epoch from now()) * 1000`
are expirable on the next tick. The partial index
`race_names_pending_eligible_idx` keeps this scan cheap.

## Cascade Release Operator Notes

@@ -195,26 +216,34 @@ out-of-band.

## Diagnostic Queries

Durable enrollment state and Race Name Directory bindings live in
PostgreSQL; runtime-coordination state stays in Redis. A handful of CLI
snippets help during incidents:

```bash
# Live game count by status (PostgreSQL)
psql "$LOBBY_POSTGRES_PRIMARY_DSN" -c \
  "SELECT status, COUNT(*) FROM lobby.games GROUP BY status"

# Inspect a specific game record
psql "$LOBBY_POSTGRES_PRIMARY_DSN" -c \
  "SELECT * FROM lobby.games WHERE game_id = '<game_id>'"

# Member roster for a game
psql "$LOBBY_POSTGRES_PRIMARY_DSN" -c \
  "SELECT user_id, race_name, status, joined_at
   FROM lobby.memberships
   WHERE game_id = '<game_id>'
   ORDER BY joined_at"

# Race name pending entries (oldest first)
psql "$LOBBY_POSTGRES_PRIMARY_DSN" -c \
  "SELECT canonical_key, game_id, holder_user_id, eligible_until_ms
   FROM lobby.race_names
   WHERE binding_kind = 'pending_registration'
   ORDER BY eligible_until_ms ASC"

# Stream lag inspection (Redis)
redis-cli XINFO STREAM gm:lobby_events
redis-cli GET lobby:stream_offsets:gm_events
```

The gauges and counters surfaced through OpenTelemetry are the primary
observability surface; raw PostgreSQL and Redis access is for last-resort
triage.
|||||||
+19
-11
@@ -56,9 +56,10 @@ flowchart LR
|
|||||||
|
|
||||||
Notes:
|
Notes:
|
||||||
|
|
||||||
- `cmd/lobby` refuses startup when Redis connectivity is misconfigured. User
|
- `cmd/lobby` refuses startup when Redis connectivity is misconfigured, when
|
||||||
Service and Game Master reachability are not verified at boot; transport
|
PostgreSQL is unreachable, or when the embedded goose migrations fail to
|
||||||
failures surface as request errors.
|
apply. User Service and Game Master reachability are not verified at boot;
|
||||||
|
transport failures surface as request errors.
|
||||||
- Both HTTP listeners expose `/healthz` and `/readyz` independently so health
|
- Both HTTP listeners expose `/healthz` and `/readyz` independently so health
|
||||||
checks can target either port.
|
checks can target either port.
|
||||||
- `register-runtime` is an outgoing call from Lobby to Game Master after the
|
- `register-runtime` is an outgoing call from Lobby to Game Master after the
|
||||||
@@ -85,7 +86,7 @@ Probe routes:
 
 - `GET /healthz` returns `{"status":"ok"}`
 - `GET /readyz` returns `{"status":"ready"}` once startup wiring completes.
-- Neither probe performs a live Redis ping per request.
+- Neither probe performs a live Redis or PostgreSQL ping per request.
 - There is no `/metrics` route. Metrics flow through OpenTelemetry exporters.
 
 ## Background Workers
@@ -130,13 +131,20 @@ lags or stalls, the gauge climbs and stays high.
 The full env-var list with defaults lives in `../README.md` §Configuration.
 The groups below summarize the structure:
 
-- **Required** — `LOBBY_REDIS_ADDR`, `LOBBY_USER_SERVICE_BASE_URL`,
+- **Required** — `LOBBY_REDIS_MASTER_ADDR`, `LOBBY_REDIS_PASSWORD`,
+  `LOBBY_POSTGRES_PRIMARY_DSN`, `LOBBY_USER_SERVICE_BASE_URL`,
   `LOBBY_GM_BASE_URL`.
 - **Process and logging** — `LOBBY_SHUTDOWN_TIMEOUT`, `LOBBY_LOG_LEVEL`.
 - **HTTP listeners** — `LOBBY_PUBLIC_HTTP_*`, `LOBBY_INTERNAL_HTTP_*`.
-- **Redis connectivity** — `LOBBY_REDIS_USERNAME`, `LOBBY_REDIS_PASSWORD`,
-  `LOBBY_REDIS_DB`, `LOBBY_REDIS_TLS_ENABLED`,
-  `LOBBY_REDIS_OPERATION_TIMEOUT`.
+- **Redis connectivity** — `LOBBY_REDIS_MASTER_ADDR`,
+  `LOBBY_REDIS_REPLICA_ADDRS`, `LOBBY_REDIS_PASSWORD`, `LOBBY_REDIS_DB`,
+  `LOBBY_REDIS_OPERATION_TIMEOUT` (legacy `LOBBY_REDIS_ADDR`,
+  `LOBBY_REDIS_TLS_ENABLED`, `LOBBY_REDIS_USERNAME` removed in PG_PLAN.md
+  §6A).
+- **PostgreSQL connectivity** — `LOBBY_POSTGRES_PRIMARY_DSN`,
+  `LOBBY_POSTGRES_REPLICA_DSNS`, `LOBBY_POSTGRES_OPERATION_TIMEOUT`,
+  `LOBBY_POSTGRES_MAX_OPEN_CONNS`, `LOBBY_POSTGRES_MAX_IDLE_CONNS`,
+  `LOBBY_POSTGRES_CONN_MAX_LIFETIME`.
 - **Streams** — `LOBBY_GM_EVENTS_STREAM`, `LOBBY_RUNTIME_START_JOBS_STREAM`,
   `LOBBY_RUNTIME_STOP_JOBS_STREAM`, `LOBBY_RUNTIME_JOB_RESULTS_STREAM`,
   `LOBBY_NOTIFICATION_INTENTS_STREAM`, `LOBBY_USER_LIFECYCLE_STREAM`.
@@ -152,9 +160,9 @@ The groups below summarize the structure:
 
 - `Game Lobby` owns platform game state. Game Master may cache snapshots but
   is not the source of truth.
-- The Race Name Directory ships a Redis adapter and an in-process stub; the
-  stub is intended for unit tests and is selected via
-  `LOBBY_RACE_NAME_DIRECTORY_BACKEND=stub`.
+- The Race Name Directory ships a PostgreSQL adapter (default after
+  PG_PLAN.md §6B) and an in-process stub. The stub is intended for unit
+  tests and is selected via `LOBBY_RACE_NAME_DIRECTORY_BACKEND=stub`.
 - A `permanent_block` or `deleted` event from User Service fans out
   asynchronously through the `user:lifecycle_events` consumer; in-flight
   games owned by the affected user receive a stop-job and transition to

+33 -11
@@ -3,15 +3,17 @@ module galaxy/lobby
 go 1.26.1
 
 require (
+	galaxy/postgres v0.0.0-00010101000000-000000000000
 	github.com/alicebob/miniredis/v2 v2.37.0
 	github.com/disciplinedware/go-confusables v0.1.1
 	github.com/getkin/kin-openapi v0.135.0
-	github.com/redis/go-redis/extra/redisotel/v9 v9.18.0
+	github.com/go-jet/jet/v2 v2.14.1
+	github.com/jackc/pgx/v5 v5.9.2
 	github.com/redis/go-redis/v9 v9.18.0
 	github.com/robfig/cron/v3 v3.0.1
 	github.com/stretchr/testify v1.11.1
 	github.com/testcontainers/testcontainers-go v0.42.0
-	github.com/testcontainers/testcontainers-go/modules/redis v0.42.0
+	github.com/testcontainers/testcontainers-go/modules/postgres v0.42.0
 	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0
 	go.opentelemetry.io/otel v1.43.0
 	go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.43.0
@@ -28,6 +30,24 @@ require (
 	golang.org/x/text v0.36.0
 )
 
+require (
+	github.com/XSAM/otelsql v0.42.0 // indirect
+	github.com/jackc/chunkreader/v2 v2.0.1 // indirect
+	github.com/jackc/pgconn v1.14.3 // indirect
+	github.com/jackc/pgio v1.0.0 // indirect
+	github.com/jackc/pgpassfile v1.0.0 // indirect
+	github.com/jackc/pgproto3/v2 v2.3.3 // indirect
+	github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
+	github.com/jackc/pgtype v1.14.4 // indirect
+	github.com/jackc/puddle/v2 v2.2.2 // indirect
+	github.com/lib/pq v1.10.9 // indirect
+	github.com/mfridman/interpolate v0.0.2 // indirect
+	github.com/pressly/goose/v3 v3.27.1 // indirect
+	github.com/sethvargo/go-retry v0.3.0 // indirect
+	go.uber.org/multierr v1.11.0 // indirect
+	golang.org/x/sync v0.20.0 // indirect
+)
+
 require (
 	dario.cat/mergo v1.0.2 // indirect
 	galaxy/notificationintent v0.0.0
@@ -44,7 +64,7 @@ require (
 	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
 	github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
 	github.com/distribution/reference v0.6.0 // indirect
-	github.com/docker/go-connections v0.6.0 // indirect
+	github.com/docker/go-connections v0.7.0 // indirect
 	github.com/docker/go-units v0.5.0 // indirect
 	github.com/ebitengine/purego v0.10.0 // indirect
 	github.com/felixge/httpsnoop v1.0.4 // indirect
@@ -60,11 +80,10 @@ require (
 	github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
 	github.com/magiconair/properties v1.8.10 // indirect
 	github.com/mailru/easyjson v0.7.7 // indirect
-	github.com/mdelapenya/tlscert v0.2.0 // indirect
 	github.com/moby/docker-image-spec v1.3.1 // indirect
 	github.com/moby/go-archive v0.2.0 // indirect
-	github.com/moby/moby/api v1.54.1 // indirect
-	github.com/moby/moby/client v0.4.0 // indirect
+	github.com/moby/moby/api v1.54.2 // indirect
+	github.com/moby/moby/client v0.4.1 // indirect
 	github.com/moby/patternmatcher v0.6.1 // indirect
 	github.com/moby/sys/sequential v0.6.0 // indirect
 	github.com/moby/sys/user v0.4.0 // indirect
@@ -78,7 +97,6 @@ require (
 	github.com/perimeterx/marshmallow v1.1.5 // indirect
 	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
 	github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
-	github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0 // indirect
 	github.com/shirou/gopsutil/v4 v4.26.3 // indirect
 	github.com/sirupsen/logrus v1.9.4 // indirect
 	github.com/tklauser/go-sysconf v0.3.16 // indirect
@@ -91,14 +109,18 @@ require (
 	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 // indirect
 	go.opentelemetry.io/proto/otlp v1.10.0 // indirect
 	go.uber.org/atomic v1.11.0 // indirect
-	golang.org/x/crypto v0.49.0 // indirect
-	golang.org/x/net v0.52.0 // indirect
-	golang.org/x/sys v0.42.0 // indirect
+	golang.org/x/crypto v0.50.0 // indirect
+	golang.org/x/net v0.53.0 // indirect
+	golang.org/x/sys v0.43.0 // indirect
 	google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 // indirect
-	google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 // indirect
+	google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529 // indirect
 	google.golang.org/grpc v1.80.0 // indirect
 	google.golang.org/protobuf v1.36.11 // indirect
 	gopkg.in/yaml.v3 v3.0.1 // indirect
 )
 
 replace galaxy/notificationintent => ../pkg/notificationintent
+
+replace galaxy/postgres => ../pkg/postgres
+
+replace galaxy/redisconn => ../pkg/redisconn

+254 -22
@@ -4,8 +4,12 @@ github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6 h1:He8af
 github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=
 github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg=
 github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
+github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
+github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs=
 github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
 github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
+github.com/XSAM/otelsql v0.42.0 h1:Li0xF4eJUxG2e0x3D4rvRlys1f27yJKvjTh7ljkUP5o=
+github.com/XSAM/otelsql v0.42.0/go.mod h1:4mOrEv+cS1KmKzrvTktvJnstr5GtKSAK+QHvFR9OcpI=
 github.com/alicebob/miniredis/v2 v2.37.0 h1:RheObYW32G1aiJIj81XVt78ZHJpHonHLHW7OLIshq68=
 github.com/alicebob/miniredis/v2 v2.37.0/go.mod h1:TcL7YfarKPGDAthEtl5NBeHZfeUQj6OXMm/+iu5cLMM=
 github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
@@ -18,6 +22,7 @@ github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1x
 github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
 github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
 github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
+github.com/cockroachdb/apd v1.1.0/go.mod h1:8Sl8LxpKi29FqWXR16WEFZRNSz3SoPzUzeMeY4+DwBQ=
 github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
 github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
 github.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE=
@@ -26,10 +31,15 @@ github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
 github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
 github.com/containerd/platforms v0.2.1 h1:zvwtM3rz2YHPQsF2CHYM8+KtB5dvhISiXh5ZpSBQv6A=
 github.com/containerd/platforms v0.2.1/go.mod h1:XHCb+2/hzowdiut9rkudds9bE5yJ7npe7dG/wG+uFPw=
+github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/coreos/go-systemd v0.0.0-20190719114852-fd7a80b32e1f/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
 github.com/cpuguy83/dockercfg v0.3.2 h1:DlJTyZGBDlXqUZ2Dk2Q3xHs/FtnooJJVaad2S9GKorA=
 github.com/cpuguy83/dockercfg v0.3.2/go.mod h1:sugsbF4//dDlL/i+S+rtpIWp+5h0BHJHfjj5/jFyUJc=
+github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
 github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
 github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
+github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
@@ -38,16 +48,22 @@ github.com/disciplinedware/go-confusables v0.1.1 h1:l/JVOsdrEDHo7nvL+tQfRO1F14Uy
 github.com/disciplinedware/go-confusables v0.1.1/go.mod h1:2hAXIAtpSqx+tMKdCzgRNv4J/kmz/oGfSHTBGJjVgfc=
 github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
 github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
-github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
-github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE=
+github.com/docker/go-connections v0.7.0 h1:6SsRfJddP22WMrCkj19x9WKjEDTB+ahsdiGYf0mN39c=
+github.com/docker/go-connections v0.7.0/go.mod h1:no1qkHdjq7kLMGUXYAduOhYPSJxxvgWBh7ogVvptn3Q=
 github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
 github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
+github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
+github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
 github.com/ebitengine/purego v0.10.0 h1:QIw4xfpWT6GWTzaW5XEKy3HXoqrJGx1ijYHzTF0/ISU=
 github.com/ebitengine/purego v0.10.0/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
 github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
 github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
 github.com/getkin/kin-openapi v0.135.0 h1:751SjYfbiwqukYuVjwYEIKNfrSwS5YpA7DZnKSwQgtg=
 github.com/getkin/kin-openapi v0.135.0/go.mod h1:6dd5FJl6RdX4usBtFBaQhk9q62Yb2J0Mk5IhUO/QqFI=
+github.com/go-jet/jet/v2 v2.14.1 h1:wsfD9e7CGP9h46+IFNlftfncBcmVnKddikbTtapQM3M=
+github.com/go-jet/jet/v2 v2.14.1/go.mod h1:dqTAECV2Mo3S2NFjbm4vJ1aDruZjhaJ1RAAR8rGUkkc=
+github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
+github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
 github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
 github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
 github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
@@ -59,43 +75,123 @@ github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1
 github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
 github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
 github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
+github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
 github.com/go-test/deep v1.0.8 h1:TDsG77qcSprGbC6vTN8OuXp5g+J+b5Pcguhf7Zt61VM=
 github.com/go-test/deep v1.0.8/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE=
+github.com/gofrs/uuid v4.0.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
 github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
 github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
 github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
 github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
 github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
+github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
 github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
 github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
 github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 h1:HWRh5R2+9EifMyIHV7ZV+MIZqgz+PMpZ14Jynv3O2Zs=
 github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0/go.mod h1:JfhWUomR1baixubs02l85lZYYOm7LV6om4ceouMv45c=
+github.com/jackc/chunkreader v1.0.0/go.mod h1:RT6O25fNZIuasFJRyZ4R/Y2BbhasbmZXF9QQ7T3kePo=
+github.com/jackc/chunkreader/v2 v2.0.0/go.mod h1:odVSm741yZoC3dpHEUXIqA9tQRhFrgOHwnPIn9lDKlk=
+github.com/jackc/chunkreader/v2 v2.0.1 h1:i+RDz65UE+mmpjTfyz0MoVTnzeYxroil2G82ki7MGG8=
+github.com/jackc/chunkreader/v2 v2.0.1/go.mod h1:odVSm741yZoC3dpHEUXIqA9tQRhFrgOHwnPIn9lDKlk=
+github.com/jackc/pgconn v0.0.0-20190420214824-7e0022ef6ba3/go.mod h1:jkELnwuX+w9qN5YIfX0fl88Ehu4XC3keFuOJJk9pcnA=
+github.com/jackc/pgconn v0.0.0-20190824142844-760dd75542eb/go.mod h1:lLjNuW/+OfW9/pnVKPazfWOgNfH2aPem8YQ7ilXGvJE=
+github.com/jackc/pgconn v0.0.0-20190831204454-2fabfa3c18b7/go.mod h1:ZJKsE/KZfsUgOEh9hBm+xYTstcNHg7UPMVJqRfQxq4s=
+github.com/jackc/pgconn v1.8.0/go.mod h1:1C2Pb36bGIP9QHGBYCjnyhqu7Rv3sGshaQUvmfGIB/o=
+github.com/jackc/pgconn v1.9.0/go.mod h1:YctiPyvzfU11JFxoXokUOOKQXQmDMoJL9vJzHH8/2JY=
+github.com/jackc/pgconn v1.9.1-0.20210724152538-d89c8390a530/go.mod h1:4z2w8XhRbP1hYxkpTuBjTS3ne3J48K83+u0zoyvg2pI=
+github.com/jackc/pgconn v1.14.3 h1:bVoTr12EGANZz66nZPkMInAV/KHD2TxH9npjXXgiB3w=
+github.com/jackc/pgconn v1.14.3/go.mod h1:RZbme4uasqzybK2RK5c65VsHxoyaml09lx3tXOcO/VM=
+github.com/jackc/pgio v1.0.0 h1:g12B9UwVnzGhueNavwioyEEpAmqMe1E/BN9ES+8ovkE=
+github.com/jackc/pgio v1.0.0/go.mod h1:oP+2QK2wFfUWgr+gxjoBH9KGBb31Eio69xUb0w5bYf8=
+github.com/jackc/pgmock v0.0.0-20190831213851-13a1b77aafa2/go.mod h1:fGZlG77KXmcq05nJLRkk0+p82V8B8Dw8KN2/V9c/OAE=
+github.com/jackc/pgmock v0.0.0-20201204152224-4fe30f7445fd/go.mod h1:hrBW0Enj2AZTNpt/7Y5rr2xe/9Mn757Wtb2xeBzPv2c=
+github.com/jackc/pgmock v0.0.0-20210724152146-4ad1a8207f65 h1:DadwsjnMwFjfWc9y5Wi/+Zz7xoE5ALHsRQlOctkOiHc=
+github.com/jackc/pgmock v0.0.0-20210724152146-4ad1a8207f65/go.mod h1:5R2h2EEX+qri8jOWMbJCtaPWkrrNc7OHwsp2TCqp7ak=
+github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
+github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
+github.com/jackc/pgproto3 v1.1.0/go.mod h1:eR5FA3leWg7p9aeAqi37XOTgTIbkABlvcPB3E5rlc78=
+github.com/jackc/pgproto3/v2 v2.0.0-alpha1.0.20190420180111-c116219b62db/go.mod h1:bhq50y+xrl9n5mRYyCBFKkpRVTLYJVWeCc+mEAI3yXA=
+github.com/jackc/pgproto3/v2 v2.0.0-alpha1.0.20190609003834-432c2951c711/go.mod h1:uH0AWtUmuShn0bcesswc4aBTWGvw0cAxIJp+6OB//Wg=
+github.com/jackc/pgproto3/v2 v2.0.0-rc3/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvWKnT95C46ckYeM=
+github.com/jackc/pgproto3/v2 v2.0.0-rc3.0.20190831210041-4c03ce451f29/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvWKnT95C46ckYeM=
+github.com/jackc/pgproto3/v2 v2.0.6/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
+github.com/jackc/pgproto3/v2 v2.1.1/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
+github.com/jackc/pgproto3/v2 v2.3.3 h1:1HLSx5H+tXR9pW3in3zaztoEwQYRC9SQaYUHjTSUOag=
+github.com/jackc/pgproto3/v2 v2.3.3/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
+github.com/jackc/pgservicefile v0.0.0-20200714003250-2b9c44734f2b/go.mod h1:vsD4gTJCa9TptPL8sPkXrLZ+hDuNrZCnj29CQpr4X1E=
+github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
+github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
+github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
+github.com/jackc/pgtype v0.0.0-20190421001408-4ed0de4755e0/go.mod h1:hdSHsc1V01CGwFsrv11mJRHWJ6aifDLfdV3aVjFF0zg=
+github.com/jackc/pgtype v0.0.0-20190824184912-ab885b375b90/go.mod h1:KcahbBH1nCMSo2DXpzsoWOAfFkdEtEJpPbVLq8eE+mc=
+github.com/jackc/pgtype v0.0.0-20190828014616-a8802b16cc59/go.mod h1:MWlu30kVJrUS8lot6TQqcg7mtthZ9T0EoIBFiJcmcyw=
+github.com/jackc/pgtype v1.8.1-0.20210724151600-32e20a603178/go.mod h1:C516IlIV9NKqfsMCXTdChteoXmwgUceqaLfjg2e3NlM=
+github.com/jackc/pgtype v1.14.0/go.mod h1:LUMuVrfsFfdKGLw+AFFVv6KtHOFMwRgDDzBt76IqCA4=
+github.com/jackc/pgtype v1.14.4 h1:fKuNiCumbKTAIxQwXfB/nsrnkEI6bPJrrSiMKgbJ2j8=
+github.com/jackc/pgtype v1.14.4/go.mod h1:aKeozOde08iifGosdJpz9MBZonJOUJxqNpPBcMJTlVA=
+github.com/jackc/pgx/v4 v4.0.0-20190420224344-cc3461e65d96/go.mod h1:mdxmSJJuR08CZQyj1PVQBHy9XOp5p8/SHH6a0psbY9Y=
+github.com/jackc/pgx/v4 v4.0.0-20190421002000-1b8f0016e912/go.mod h1:no/Y67Jkk/9WuGR0JG/JseM9irFbnEPbuWV2EELPNuM=
+github.com/jackc/pgx/v4 v4.0.0-pre1.0.20190824185557-6972a5742186/go.mod h1:X+GQnOEnf1dqHGpw7JmHqHc1NxDoalibchSk9/RWuDc=
+github.com/jackc/pgx/v4 v4.12.1-0.20210724153913-640aa07df17c/go.mod h1:1QD0+tgSXP7iUjYm9C1NxKhny7lq6ee99u/z+IHFcgs=
+github.com/jackc/pgx/v4 v4.18.2/go.mod h1:Ey4Oru5tH5sB6tV7hDmfWFahwF15Eb7DNXlRKx2CkVw=
+github.com/jackc/pgx/v4 v4.18.3 h1:dE2/TrEsGX3RBprb3qryqSV9Y60iZN1C6i8IrmW9/BA=
+github.com/jackc/pgx/v4 v4.18.3/go.mod h1:Ey4Oru5tH5sB6tV7hDmfWFahwF15Eb7DNXlRKx2CkVw=
+github.com/jackc/pgx/v5 v5.9.2 h1:3ZhOzMWnR4yJ+RW1XImIPsD1aNSz4T4fyP7zlQb56hw=
+github.com/jackc/pgx/v5 v5.9.2/go.mod h1:mal1tBGAFfLHvZzaYh77YS/eC6IX9OWbRV1QIIM0Jn4=
+github.com/jackc/puddle v0.0.0-20190413234325-e4ced69a3a2b/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
+github.com/jackc/puddle v0.0.0-20190608224051-11cab39313c9/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
+github.com/jackc/puddle v1.1.3/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
+github.com/jackc/puddle v1.3.0/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
+github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
+github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
 github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
 github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
+github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
 github.com/klauspost/compress v1.18.5 h1:/h1gH5Ce+VWNLSWqPzOVn6XBO+vJbCNGvjoaGBFW2IE=
 github.com/klauspost/compress v1.18.5/go.mod h1:cwPg85FWrGar70rWktvGQj8/hthj3wpl0PGDogxkrSQ=
 github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
 github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
+github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
 github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
 github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
+github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
+github.com/kr/pty v1.1.8/go.mod h1:O1sed60cT9XZ5uDucP5qwvh+TE3NnUj51EiZO/lmSfw=
+github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
 github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
 github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
+github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
+github.com/lib/pq v1.1.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
+github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
+github.com/lib/pq v1.10.2/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
+github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
+github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
 github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
 github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
 github.com/magiconair/properties v1.8.10 h1:s31yESBquKXCV9a/ScB3ESkOjUYYv+X0rg8SYxI99mE=
github.com/magiconair/properties v1.8.10/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
|
github.com/magiconair/properties v1.8.10/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
|
||||||
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
|
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
|
||||||
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
|
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
|
||||||
|
github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcnceauSikq3lYCQ=
|
||||||
|
github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
|
||||||
|
github.com/mattn/go-isatty v0.0.5/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
|
||||||
|
github.com/mattn/go-isatty v0.0.7/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
|
||||||
|
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
|
||||||
|
github.com/mattn/go-isatty v0.0.21 h1:xYae+lCNBP7QuW4PUnNG61ffM4hVIfm+zUzDuSzYLGs=
|
||||||
|
github.com/mattn/go-isatty v0.0.21/go.mod h1:ZXfXG4SQHsB/w3ZeOYbR0PrPwLy+n6xiMrJlRFqopa4=
|
||||||
github.com/mdelapenya/tlscert v0.2.0 h1:7H81W6Z/4weDvZBNOfQte5GpIMo0lGYEeWbkGp5LJHI=
|
github.com/mdelapenya/tlscert v0.2.0 h1:7H81W6Z/4weDvZBNOfQte5GpIMo0lGYEeWbkGp5LJHI=
|
||||||
github.com/mdelapenya/tlscert v0.2.0/go.mod h1:O4njj3ELLnJjGdkN7M/vIVCpZ+Cf0L6muqOG4tLSl8o=
|
github.com/mdelapenya/tlscert v0.2.0/go.mod h1:O4njj3ELLnJjGdkN7M/vIVCpZ+Cf0L6muqOG4tLSl8o=
|
||||||
|
github.com/mfridman/interpolate v0.0.2 h1:pnuTK7MQIxxFz1Gr+rjSIx9u7qVjf5VOoM/u6BbAxPY=
|
||||||
|
github.com/mfridman/interpolate v0.0.2/go.mod h1:p+7uk6oE07mpE/Ik1b8EckO0O4ZXiGAfshKBWLUM9Xg=
|
||||||
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
|
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
|
||||||
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
|
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
|
||||||
github.com/moby/go-archive v0.2.0 h1:zg5QDUM2mi0JIM9fdQZWC7U8+2ZfixfTYoHL7rWUcP8=
|
github.com/moby/go-archive v0.2.0 h1:zg5QDUM2mi0JIM9fdQZWC7U8+2ZfixfTYoHL7rWUcP8=
|
||||||
github.com/moby/go-archive v0.2.0/go.mod h1:mNeivT14o8xU+5q1YnNrkQVpK+dnNe/K6fHqnTg4qPU=
|
github.com/moby/go-archive v0.2.0/go.mod h1:mNeivT14o8xU+5q1YnNrkQVpK+dnNe/K6fHqnTg4qPU=
|
||||||
github.com/moby/moby/api v1.54.1 h1:TqVzuJkOLsgLDDwNLmYqACUuTehOHRGKiPhvH8V3Nn4=
|
github.com/moby/moby/api v1.54.2 h1:wiat9QAhnDQjA7wk1kh/TqHz2I1uUA7M7t9SAl/JNXg=
|
||||||
github.com/moby/moby/api v1.54.1/go.mod h1:+RQ6wluLwtYaTd1WnPLykIDPekkuyD/ROWQClE83pzs=
|
github.com/moby/moby/api v1.54.2/go.mod h1:+RQ6wluLwtYaTd1WnPLykIDPekkuyD/ROWQClE83pzs=
|
||||||
github.com/moby/moby/client v0.4.0 h1:S+2XegzHQrrvTCvF6s5HFzcrywWQmuVnhOXe2kiWjIw=
|
github.com/moby/moby/client v0.4.1 h1:DMQgisVoMkmMs7fp3ROSdiBnoAu8+vo3GggFl06M/wY=
|
||||||
github.com/moby/moby/client v0.4.0/go.mod h1:QWPbvWchQbxBNdaLSpoKpCdf5E+WxFAgNHogCWDoa7g=
|
github.com/moby/moby/client v0.4.1/go.mod h1:z52C9O2POPOsnxZAy//WtKcQ32P+jT/NGeXu/7nfjGQ=
|
||||||
github.com/moby/patternmatcher v0.6.1 h1:qlhtafmr6kgMIJjKJMDmMWq7WLkKIo23hsrpR3x084U=
|
github.com/moby/patternmatcher v0.6.1 h1:qlhtafmr6kgMIJjKJMDmMWq7WLkKIo23hsrpR3x084U=
|
||||||
github.com/moby/patternmatcher v0.6.1/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc=
|
github.com/moby/patternmatcher v0.6.1/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc=
|
||||||
github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
|
github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
|
||||||
@@ -108,6 +204,8 @@ github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=
github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9M+97sNutRR1RKhG96O6jWumTTnw=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/oasdiff/yaml v0.0.9 h1:zQOvd2UKoozsSsAknnWoDJlSK4lC0mpmjfDsfqNwX48=
github.com/oasdiff/yaml v0.0.9/go.mod h1:8lvhgJG4xiKPj3HN5lDow4jZHPlx1i7dIwzkdAo6oAM=
github.com/oasdiff/yaml3 v0.0.9 h1:rWPrKccrdUm8J0F3sGuU+fuh9+1K/RdJlWF7O/9yw2g=
@@ -118,32 +216,58 @@ github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJw
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s=
github.com/perimeterx/marshmallow v1.1.5/go.mod h1:dsXbUu8CRzfYP5a87xpp0xq9S3u0Vchtcl8we9tYaXw=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/pressly/goose/v3 v3.27.1 h1:6uEvcprBybDmW4hcz3gYujhARhye+GoWKhEWyzD5sh4=
github.com/pressly/goose/v3 v3.27.1/go.mod h1:maruOxsPnIG2yHHyo8UqKWXYKFcH7Q76csUV7+7KYoM=
github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0 h1:QY4nmPHLFAJjtT5O4OMUEOxP8WVaRNOFpcbmxT2NLZU=
github.com/redis/go-redis/extra/rediscmd/v9 v9.18.0/go.mod h1:WH8cY/0fT41Bsf341qzo8v4nx0GCE8FykAA23IVbVmo=
github.com/redis/go-redis/extra/redisotel/v9 v9.18.0 h1:2dKdoEYBJ0CZCLPiCdvvc7luz3DPwY6hKdzjL6m1eHE=
github.com/redis/go-redis/extra/redisotel/v9 v9.18.0/go.mod h1:WzkrVG9ro9BwCQD0eJOWn6AGL4Z1CleGflM45w1hu10=
github.com/redis/go-redis/v9 v9.18.0 h1:pMkxYPkEbMPwRdenAzUNyFNrDgHx9U+DrBabWNfSRQs=
github.com/redis/go-redis/v9 v9.18.0/go.mod h1:k3ufPphLU5YXwNTUcCRXGxUoF1fqxnhFQmscfkCoDA0=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/rs/zerolog v1.13.0/go.mod h1:YbFCdg8HfsridGWAh22vktObvhZbQsZXe4/zB0OKkWU=
github.com/rs/zerolog v1.15.0/go.mod h1:xYTKnLHcpfU2225ny5qZjxnj9NvkumZYjJHlAThCjNc=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/sethvargo/go-retry v0.3.0 h1:EEt31A35QhrcRZtrYFDTBg91cqZVnFL2navjDrah2SE=
github.com/sethvargo/go-retry v0.3.0/go.mod h1:mNX17F0C/HguQMyMyJxcnU471gOZGxCLyYaFyAZraas=
github.com/shirou/gopsutil/v4 v4.26.3 h1:2ESdQt90yU3oXF/CdOlRCJxrP+Am1aBYubTMTfxJ1qc=
github.com/shirou/gopsutil/v4 v4.26.3/go.mod h1:LZ6ewCSkBqUpvSOf+LsTGnRinC6iaNUNMGBtDkJBaLQ=
github.com/shopspring/decimal v0.0.0-20180709203117-cd690d0c9e24/go.mod h1:M+9NzErvs504Cn4c5DxATwIqPbtswREoFCre64PpcG4=
github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w=
github.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.3 h1:jmXUvGomnU1o3W/V5h2VEradbpJDwGrzugQQvL0POH4=
github.com/stretchr/objx v0.5.3/go.mod h1:rDQraq+vQZU7Fde9LOZLr8Tax6zZvy4kuNKF+QYS+U0=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/testcontainers/testcontainers-go v0.42.0 h1:He3IhTzTZOygSXLJPMX7n44XtK+qhjat1nI9cneBbUY=
github.com/testcontainers/testcontainers-go v0.42.0/go.mod h1:vZjdY1YmUA1qEForxOIOazfsrdyORJAbhi0bp8plN30=
github.com/testcontainers/testcontainers-go/modules/postgres v0.42.0 h1:GCbb1ndrF7OTDiIvxXyItaDab4qkzTFJ48LKFdM7EIo=
github.com/testcontainers/testcontainers-go/modules/postgres v0.42.0/go.mod h1:IRPBaI8jXdrNfD0e4Zm7Fbcgaz5shKxOQv4axiL09xs=
github.com/testcontainers/testcontainers-go/modules/redis v0.42.0 h1:id/6LH8ZeDrtAUVSuNvZUAJ1kVpb82y1pr9yweAWsRg=
github.com/testcontainers/testcontainers-go/modules/redis v0.42.0/go.mod h1:uF0jI8FITagQpBNOgweGBmPf6rP4K0SeL1XFPbsZSSY=
github.com/tklauser/go-sysconf v0.3.16 h1:frioLaCQSsF5Cy1jgRBrzr6t502KIIwQ0MArYICU0nA=
github.com/tklauser/go-sysconf v0.3.16/go.mod h1:/qNL9xxDhc7tx3HSRsLWNnuzbVfh3e7gh/BmM179nYI=
github.com/tklauser/numcpus v0.11.0 h1:nSTwhKH5e1dMNsCdVBukSZrURJRoHbSEQjdEbY+9RXw=
@@ -152,12 +276,14 @@ github.com/ugorji/go/codec v1.3.1 h1:waO7eEiFDwidsBN6agj1vJQ4AG7lh2yqXyOXqhgQuyY
github.com/ugorji/go/codec v1.3.1/go.mod h1:pRBVtBSKl77K30Bv8R2P+cLSGaTtex6fsA2Wjqmfxj4=
github.com/woodsbury/decimal128 v1.3.0 h1:8pffMNWIlC0O5vbyHWFZAt5yWvWcrHA+3ovIIjVWss0=
github.com/woodsbury/decimal128 v1.3.0/go.mod h1:C5UTmyTjW3JftjUFzOVhC20BEQa2a4ZKOB5I6Zjb+ds=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/yuin/gopher-lua v1.1.1 h1:kYKnWBjvbNP4XLT3+bPEwAXJx262OhaHDWDVOPjL46M=
github.com/yuin/gopher-lua v1.1.1/go.mod h1:GBR0iDaNXjAgGg9zfCvksxSRnQx76gclCIb7kdAd1Pw=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0=
github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA=
github.com/zenazn/goji v0.9.0/go.mod h1:7S9M489iMyHBNxwZnk9/EHS098H4/F6TATF2mIxtB1Q=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.68.0 h1:CqXxU8VOmDefoh0+ztfGaymYbhdB/tT3zs79QaZTNGY=
@@ -188,42 +314,148 @@ go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09
go.opentelemetry.io/otel/trace v1.43.0/go.mod h1:/QJhyVBUUswCphDVxq+8mld+AvhXZLhe+8WVFxiFff0=
go.opentelemetry.io/proto/otlp v1.10.0 h1:IQRWgT5srOCYfiWnpqUYz9CVmbO8bFmKcwYxpuCSL2g=
go.opentelemetry.io/proto/otlp v1.10.0/go.mod h1:/CV4QoCR/S9yaPj8utp3lvQPoqMtxXdzn7ozvvozVqk=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
go.uber.org/multierr v1.5.0/go.mod h1:FeouvMocqHpRaaGuG9EjoKcStLC43Zu/fmqdUMPcKYU=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
go.uber.org/zap v1.9.1/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190411191339-88737f569e3a/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201203163018-be400aefbc4c/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.20.0/go.mod h1:Xwo95rrVNIoSMx9wa1JroENMToLWn3RNVrTBpLHgZPQ=
golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4=
golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA=
golang.org/x/crypto v0.50.0 h1:zO47/JPrL6vsNkINmLoo/PH1gcxpls50DNogFvB5ZGI=
golang.org/x/crypto v0.50.0/go.mod h1:3muZ7vA7PBCE6xgPX7nkzzjiUq87kRItoJQM1Yo8S+Q=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.35.0 h1:Ww1D637e6Pg+Zb2KrWfHQUnH2dQRLBQyAtpr/haaJeM=
golang.org/x/mod v0.35.0/go.mod h1:+GwiRhIInF8wPm+4AoT6L0FA1QWAad3OMdTRx4tFYlU=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0=
golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw=
golang.org/x/net v0.53.0 h1:d+qAbo5L0orcWAr0a9JweQpjXF19LMXJE8Ey7hwOdUA=
golang.org/x/net v0.53.0/go.mod h1:JvMuJH7rrdiCfbeHoo3fCQU24Lf5JJwT9W3sJFulfgs=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.20.0 h1:e0PTpb7pjO8GAtTs2dQ6jYa5BWYlMuX047Dco/pItO4=
golang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo=
golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
golang.org/x/sys v0.43.0 h1:Rlag2XtaFTxp19wS8MXlJwTvoh8ArU6ezoyFsMyCTNI=
golang.org/x/sys v0.43.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.41.0 h1:QCgPso/Q3RTJx2Th4bDLqML4W6iJiaXFq2/ftQF13YU=
golang.org/x/term v0.41.0/go.mod h1:3pfBgksrReYfZ5lvYM0kSO0LIkAl4Yl2bXOkKP7Ec2A=
golang.org/x/term v0.42.0 h1:UiKe+zDFmJobeJ5ggPwOshJIVt6/Ft0rcfrXZDLWAWY=
golang.org/x/term v0.42.0/go.mod h1:Dq/D+snpsbazcBG5+F9Q1n2rXV8Ma+71xEjTRufARgY=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg=
golang.org/x/text v0.36.0/go.mod h1:NIdBknypM8iqVmPiuco0Dh6P5Jcdk8lJL0CUebqK164=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425163242-31fd60d6bfdc/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190823170909-c4a336ef6a2f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200103221440-774c71fcf114/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/xerrors v0.0.0-20190410155217-1f06c39b4373/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 h1:VPWxll4HlMw1Vs/qXtN7BvhZqsS9cdAittCNvVENElA=
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:7QBABkRtR8z+TEnmXTqIqwJLlzrZKVfAUm7tY3yGv0M=
|
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:7QBABkRtR8z+TEnmXTqIqwJLlzrZKVfAUm7tY3yGv0M=
|
||||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 h1:m8qni9SQFH0tJc1X0vmnpw/0t+AImlSvp30sEupozUg=
|
google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529 h1:XF8+t6QQiS0o9ArVan/HW8Q7cycNPGsJf6GA2nXxYAg=
|
||||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
|
google.golang.org/genproto/googleapis/rpc v0.0.0-20260420184626-e10c466a9529/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
|
||||||
google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM=
|
google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM=
|
||||||
google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4=
|
google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4=
|
||||||
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
|
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
|
||||||
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
|
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
|
||||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||||
|
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
||||||
|
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
|
||||||
|
gopkg.in/inconshreveable/log15.v2 v2.0.0-20180818164646-67afb5ed74ec/go.mod h1:aPpfJ7XW+gOuirDoZ8gHhLh3kZ1B08FtV2bbmy7Jv3s=
|
||||||
|
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||||
|
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||||
gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
|
gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
|
||||||
gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=
|
gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=
|
||||||
|
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
|
||||||
|
modernc.org/libc v1.72.1 h1:db1xwJ6u1kE3KHTFTTbe2GCrczHPKzlURP0aDC4NGD0=
|
||||||
|
modernc.org/libc v1.72.1/go.mod h1:HRMiC/PhPGLIPM7GzAFCbI+oSgE3dhZ8FWftmRrHVlY=
|
||||||
|
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
|
||||||
|
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
|
||||||
|
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
|
||||||
|
modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
|
||||||
|
modernc.org/sqlite v1.49.1 h1:dYGHTKcX1sJ+EQDnUzvz4TJ5GbuvhNJa8Fg6ElGx73U=
|
||||||
|
modernc.org/sqlite v1.49.1/go.mod h1:m0w8xhwYUVY3H6pSDwc3gkJ/irZT/0YEXwBlhaxQEew=
|
||||||
pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=
|
pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=
|
||||||
pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=
|
pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=
|
||||||
|
|||||||
@@ -0,0 +1,310 @@
// Package applicationstore implements the PostgreSQL-backed adapter for
// `ports.ApplicationStore`.
//
// PG_PLAN.md §6A migrates Game Lobby Service away from Redis-backed durable
// application records; see `galaxy/lobby/docs/postgres-migration.md` for
// the full decision record.
package applicationstore

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"strings"
	"time"

	"galaxy/lobby/internal/adapters/postgres/internal/sqlx"
	pgtable "galaxy/lobby/internal/adapters/postgres/jet/lobby/table"
	"galaxy/lobby/internal/domain/application"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/ports"

	pg "github.com/go-jet/jet/v2/postgres"
)

// Config configures one PostgreSQL-backed application store instance.
type Config struct {
	// DB stores the connection pool the store uses for every query.
	DB *sql.DB

	// OperationTimeout bounds one round trip.
	OperationTimeout time.Duration
}

// Store persists Game Lobby application records in PostgreSQL.
type Store struct {
	db               *sql.DB
	operationTimeout time.Duration
}

// New constructs one PostgreSQL-backed application store from cfg.
func New(cfg Config) (*Store, error) {
	if cfg.DB == nil {
		return nil, errors.New("new postgres application store: db must not be nil")
	}
	if cfg.OperationTimeout <= 0 {
		return nil, errors.New("new postgres application store: operation timeout must be positive")
	}
	return &Store{
		db:               cfg.DB,
		operationTimeout: cfg.OperationTimeout,
	}, nil
}

// applicationSelectColumns is the canonical SELECT list for the applications
// table, matching scanApplication's column order.
var applicationSelectColumns = pg.ColumnList{
	pgtable.Applications.ApplicationID,
	pgtable.Applications.GameID,
	pgtable.Applications.ApplicantUserID,
	pgtable.Applications.RaceName,
	pgtable.Applications.Status,
	pgtable.Applications.CreatedAt,
	pgtable.Applications.DecidedAt,
}

// Save persists a new submitted application record. The single-active
// constraint is enforced by the partial unique index
// `applications_active_per_user_game_uidx`.
func (store *Store) Save(ctx context.Context, record application.Application) error {
	if store == nil || store.db == nil {
		return errors.New("save application: nil store")
	}
	if err := record.Validate(); err != nil {
		return fmt.Errorf("save application: %w", err)
	}
	if record.Status != application.StatusSubmitted {
		return fmt.Errorf(
			"save application: status must be %q, got %q",
			application.StatusSubmitted, record.Status,
		)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "save application", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	stmt := pgtable.Applications.INSERT(
		pgtable.Applications.ApplicationID,
		pgtable.Applications.GameID,
		pgtable.Applications.ApplicantUserID,
		pgtable.Applications.RaceName,
		pgtable.Applications.Status,
		pgtable.Applications.CreatedAt,
		pgtable.Applications.DecidedAt,
	).VALUES(
		record.ApplicationID.String(),
		record.GameID.String(),
		record.ApplicantUserID,
		record.RaceName,
		string(record.Status),
		record.CreatedAt.UTC(),
		sqlx.NullableTimePtr(record.DecidedAt),
	)

	query, args := stmt.Sql()
	if _, err := store.db.ExecContext(operationCtx, query, args...); err != nil {
		if sqlx.IsUniqueViolation(err) {
			return fmt.Errorf("save application: %w", application.ErrConflict)
		}
		return fmt.Errorf("save application: %w", err)
	}
	return nil
}

// Get returns the record identified by applicationID. It returns
// application.ErrNotFound when no record exists.
func (store *Store) Get(ctx context.Context, applicationID common.ApplicationID) (application.Application, error) {
	if store == nil || store.db == nil {
		return application.Application{}, errors.New("get application: nil store")
	}
	if err := applicationID.Validate(); err != nil {
		return application.Application{}, fmt.Errorf("get application: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "get application", store.operationTimeout)
	if err != nil {
		return application.Application{}, err
	}
	defer cancel()

	stmt := pg.SELECT(applicationSelectColumns).
		FROM(pgtable.Applications).
		WHERE(pgtable.Applications.ApplicationID.EQ(pg.String(applicationID.String())))

	query, args := stmt.Sql()
	row := store.db.QueryRowContext(operationCtx, query, args...)
	record, err := scanApplication(row)
	if sqlx.IsNoRows(err) {
		return application.Application{}, application.ErrNotFound
	}
	if err != nil {
		return application.Application{}, fmt.Errorf("get application: %w", err)
	}
	return record, nil
}

// GetByGame returns every application attached to gameID. Sorted by
// created_at ASC then application_id ASC.
func (store *Store) GetByGame(ctx context.Context, gameID common.GameID) ([]application.Application, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get applications by game: nil store")
	}
	if err := gameID.Validate(); err != nil {
		return nil, fmt.Errorf("get applications by game: %w", err)
	}

	stmt := pg.SELECT(applicationSelectColumns).
		FROM(pgtable.Applications).
		WHERE(pgtable.Applications.GameID.EQ(pg.String(gameID.String()))).
		ORDER_BY(pgtable.Applications.CreatedAt.ASC(), pgtable.Applications.ApplicationID.ASC())

	return store.queryList(ctx, "get applications by game", stmt)
}

// GetByUser returns every application submitted by applicantUserID.
func (store *Store) GetByUser(ctx context.Context, applicantUserID string) ([]application.Application, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get applications by user: nil store")
	}
	trimmed := strings.TrimSpace(applicantUserID)
	if trimmed == "" {
		return nil, fmt.Errorf("get applications by user: applicant user id must not be empty")
	}

	stmt := pg.SELECT(applicationSelectColumns).
		FROM(pgtable.Applications).
		WHERE(pgtable.Applications.ApplicantUserID.EQ(pg.String(trimmed))).
		ORDER_BY(pgtable.Applications.CreatedAt.ASC(), pgtable.Applications.ApplicationID.ASC())

	return store.queryList(ctx, "get applications by user", stmt)
}

func (store *Store) queryList(ctx context.Context, operation string, stmt pg.SelectStatement) ([]application.Application, error) {
	operationCtx, cancel, err := sqlx.WithTimeout(ctx, operation, store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	query, args := stmt.Sql()
	rows, err := store.db.QueryContext(operationCtx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("%s: %w", operation, err)
	}
	defer rows.Close()

	records := make([]application.Application, 0)
	for rows.Next() {
		record, err := scanApplication(rows)
		if err != nil {
			return nil, fmt.Errorf("%s: scan: %w", operation, err)
		}
		records = append(records, record)
	}
	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("%s: %w", operation, err)
	}
	if len(records) == 0 {
		return nil, nil
	}
	return records, nil
}

// UpdateStatus applies one status transition with compare-and-swap on the
// current status column.
func (store *Store) UpdateStatus(ctx context.Context, input ports.UpdateApplicationStatusInput) error {
	if store == nil || store.db == nil {
		return errors.New("update application status: nil store")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update application status: %w", err)
	}
	if err := application.Transition(input.ExpectedFrom, input.To); err != nil {
		return err
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update application status", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	at := input.At.UTC()
	stmt := pgtable.Applications.UPDATE(pgtable.Applications.Status, pgtable.Applications.DecidedAt).
		SET(string(input.To), at).
		WHERE(pg.AND(
			pgtable.Applications.ApplicationID.EQ(pg.String(input.ApplicationID.String())),
			pgtable.Applications.Status.EQ(pg.String(string(input.ExpectedFrom))),
		))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("update application status: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("update application status: rows affected: %w", err)
	}
	if affected == 0 {
		probe := pg.SELECT(pgtable.Applications.Status).
			FROM(pgtable.Applications).
			WHERE(pgtable.Applications.ApplicationID.EQ(pg.String(input.ApplicationID.String())))
		probeQuery, probeArgs := probe.Sql()

		var current string
		row := store.db.QueryRowContext(operationCtx, probeQuery, probeArgs...)
		if err := row.Scan(&current); err != nil {
			if sqlx.IsNoRows(err) {
				return application.ErrNotFound
			}
			return fmt.Errorf("update application status: probe: %w", err)
		}
		return fmt.Errorf("update application status: %w", application.ErrConflict)
	}
	return nil
}

type rowScanner interface {
	Scan(dest ...any) error
}

func scanApplication(rs rowScanner) (application.Application, error) {
	var (
		applicationID   string
		gameID          string
		applicantUserID string
		raceName        string
		status          string
		createdAt       time.Time
		decidedAt       sql.NullTime
	)
	if err := rs.Scan(
		&applicationID,
		&gameID,
		&applicantUserID,
		&raceName,
		&status,
		&createdAt,
		&decidedAt,
	); err != nil {
		return application.Application{}, err
	}
	return application.Application{
		ApplicationID:   common.ApplicationID(applicationID),
		GameID:          common.GameID(gameID),
		ApplicantUserID: applicantUserID,
		RaceName:        raceName,
		Status:          application.Status(status),
		CreatedAt:       createdAt.UTC(),
		DecidedAt:       sqlx.TimePtrFromNullable(decidedAt),
	}, nil
}

// Ensure Store satisfies the ports.ApplicationStore interface at compile
// time.
var _ ports.ApplicationStore = (*Store)(nil)
@@ -0,0 +1,194 @@
package applicationstore_test

import (
	"context"
	"testing"
	"time"

	"galaxy/lobby/internal/adapters/postgres/applicationstore"
	"galaxy/lobby/internal/adapters/postgres/gamestore"
	"galaxy/lobby/internal/adapters/postgres/internal/pgtest"
	"galaxy/lobby/internal/domain/application"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/game"
	"galaxy/lobby/internal/ports"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestMain(m *testing.M) { pgtest.RunMain(m) }

func newStores(t *testing.T) (*gamestore.Store, *applicationstore.Store) {
	t.Helper()
	pgtest.TruncateAll(t)
	gs, err := gamestore.New(gamestore.Config{
		DB: pgtest.Ensure(t).Pool(), OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	as, err := applicationstore.New(applicationstore.Config{
		DB: pgtest.Ensure(t).Pool(), OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	return gs, as
}

func seedGame(t *testing.T, gs *gamestore.Store, id string) game.Game {
	t.Helper()
	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	g, err := game.New(game.NewGameInput{
		GameID:              common.GameID(id),
		GameName:            "Game " + id,
		GameType:            game.GameTypePublic,
		MinPlayers:          2,
		MaxPlayers:          8,
		StartGapHours:       12,
		StartGapPlayers:     2,
		EnrollmentEndsAt:    now.Add(7 * 24 * time.Hour),
		TurnSchedule:        "0 18 * * *",
		TargetEngineVersion: "v1.0.0",
		Now:                 now,
	})
	require.NoError(t, err)
	require.NoError(t, gs.Save(context.Background(), g))
	return g
}

func newApplication(t *testing.T, id, gameID, userID string) application.Application {
	t.Helper()
	a, err := application.New(application.NewApplicationInput{
		ApplicationID:   common.ApplicationID(id),
		GameID:          common.GameID(gameID),
		ApplicantUserID: userID,
		RaceName:        "Pilot " + id,
		Now:             time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC),
	})
	require.NoError(t, err)
	return a
}

func TestSaveAndGet(t *testing.T) {
	ctx := context.Background()
	gs, as := newStores(t)
	seedGame(t, gs, "game-001")

	rec := newApplication(t, "application-001", "game-001", "user-a")
	require.NoError(t, as.Save(ctx, rec))

	got, err := as.Get(ctx, rec.ApplicationID)
	require.NoError(t, err)
	assert.Equal(t, rec.ApplicationID, got.ApplicationID)
	assert.Equal(t, application.StatusSubmitted, got.Status)
	assert.Equal(t, "user-a", got.ApplicantUserID)
	assert.Nil(t, got.DecidedAt)
}

func TestSaveRejectsNonSubmittedRecord(t *testing.T) {
	ctx := context.Background()
	gs, as := newStores(t)
	seedGame(t, gs, "game-001")

	rec := newApplication(t, "application-001", "game-001", "user-a")
	rec.Status = application.StatusApproved
	require.Error(t, as.Save(ctx, rec))
}

func TestSavePartialUniqueRejectsSecondActiveForSameUserGame(t *testing.T) {
	ctx := context.Background()
	gs, as := newStores(t)
	seedGame(t, gs, "game-001")

	a1 := newApplication(t, "application-001", "game-001", "user-a")
	require.NoError(t, as.Save(ctx, a1))

	// second submission by the same user against the same game must fail.
	a2 := newApplication(t, "application-002", "game-001", "user-a")
	err := as.Save(ctx, a2)
	require.ErrorIs(t, err, application.ErrConflict)
}

func TestSavePartialUniqueAllowsResubmitAfterRejection(t *testing.T) {
	ctx := context.Background()
	gs, as := newStores(t)
	seedGame(t, gs, "game-001")

	a1 := newApplication(t, "application-001", "game-001", "user-a")
	require.NoError(t, as.Save(ctx, a1))

	require.NoError(t, as.UpdateStatus(ctx, ports.UpdateApplicationStatusInput{
		ApplicationID: a1.ApplicationID,
		ExpectedFrom:  application.StatusSubmitted,
		To:            application.StatusRejected,
		At:            a1.CreatedAt.Add(time.Minute),
	}))

	// after rejection a new submission for the same (user, game) is allowed.
	a2 := newApplication(t, "application-002", "game-001", "user-a")
	require.NoError(t, as.Save(ctx, a2))
}

func TestUpdateStatusReturnsConflictOnExpectedFromMismatch(t *testing.T) {
	ctx := context.Background()
	gs, as := newStores(t)
	seedGame(t, gs, "game-001")

	rec := newApplication(t, "application-001", "game-001", "user-a")
	require.NoError(t, as.Save(ctx, rec))

	// First, transition the row to approved.
	require.NoError(t, as.UpdateStatus(ctx, ports.UpdateApplicationStatusInput{
		ApplicationID: rec.ApplicationID,
		ExpectedFrom:  application.StatusSubmitted,
		To:            application.StatusApproved,
		At:            rec.CreatedAt.Add(time.Minute),
	}))

	// Second attempt claims status is still submitted: (submitted, rejected)
	// is a valid domain transition, but the row is already approved, so the
	// adapter must surface ErrConflict on the row-level mismatch.
	err := as.UpdateStatus(ctx, ports.UpdateApplicationStatusInput{
		ApplicationID: rec.ApplicationID,
		ExpectedFrom:  application.StatusSubmitted,
		To:            application.StatusRejected,
		At:            rec.CreatedAt.Add(2 * time.Minute),
	})
	require.ErrorIs(t, err, application.ErrConflict)
}

func TestUpdateStatusReturnsNotFoundForMissing(t *testing.T) {
	ctx := context.Background()
	_, as := newStores(t)
	err := as.UpdateStatus(ctx, ports.UpdateApplicationStatusInput{
		ApplicationID: common.ApplicationID("application-missing"),
		ExpectedFrom:  application.StatusSubmitted,
		To:            application.StatusApproved,
		At:            time.Now().UTC(),
	})
	require.ErrorIs(t, err, application.ErrNotFound)
}

func TestGetByGameAndGetByUser(t *testing.T) {
	ctx := context.Background()
	gs, as := newStores(t)
	seedGame(t, gs, "game-001")
	seedGame(t, gs, "game-002")

	require.NoError(t, as.Save(ctx, newApplication(t, "application-001", "game-001", "user-a")))
	require.NoError(t, as.Save(ctx, newApplication(t, "application-002", "game-001", "user-b")))
	require.NoError(t, as.Save(ctx, newApplication(t, "application-003", "game-002", "user-a")))

	g1, err := as.GetByGame(ctx, common.GameID("game-001"))
	require.NoError(t, err)
	assert.Len(t, g1, 2)

	userA, err := as.GetByUser(ctx, "user-a")
	require.NoError(t, err)
	assert.Len(t, userA, 2)
}

func TestGetMissingReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	_, as := newStores(t)
	_, err := as.Get(ctx, common.ApplicationID("application-missing"))
	require.ErrorIs(t, err, application.ErrNotFound)
}
@@ -0,0 +1,94 @@
package gamestore

import (
	"encoding/json"
	"fmt"
	"time"

	"galaxy/lobby/internal/domain/game"
)

// runtimeSnapshotJSON is the on-disk JSONB shape used for the denormalised
// runtime snapshot column on `games`. Keys mirror the field names in
// `game.RuntimeSnapshot` so a round-trip stays directly comparable with ==.
type runtimeSnapshotJSON struct {
	CurrentTurn         int    `json:"current_turn"`
	RuntimeStatus       string `json:"runtime_status,omitempty"`
	EngineHealthSummary string `json:"engine_health_summary,omitempty"`
}

func marshalRuntimeSnapshot(snapshot game.RuntimeSnapshot) ([]byte, error) {
	payload := runtimeSnapshotJSON{
		CurrentTurn:         snapshot.CurrentTurn,
		RuntimeStatus:       snapshot.RuntimeStatus,
		EngineHealthSummary: snapshot.EngineHealthSummary,
	}
	encoded, err := json.Marshal(payload)
	if err != nil {
		return nil, fmt.Errorf("marshal runtime snapshot: %w", err)
	}
	return encoded, nil
}

func unmarshalRuntimeSnapshot(payload []byte) (game.RuntimeSnapshot, error) {
	if len(payload) == 0 {
		return game.RuntimeSnapshot{}, nil
	}
	var stored runtimeSnapshotJSON
	if err := json.Unmarshal(payload, &stored); err != nil {
		return game.RuntimeSnapshot{}, fmt.Errorf("unmarshal runtime snapshot: %w", err)
	}
	return game.RuntimeSnapshot{
		CurrentTurn:         stored.CurrentTurn,
		RuntimeStatus:       stored.RuntimeStatus,
		EngineHealthSummary: stored.EngineHealthSummary,
	}, nil
}

// runtimeBindingJSON is the on-disk JSONB shape used for the optional
// runtime binding column on `games`. The `bound_at_ms` field stores Unix
// milliseconds, matching the previous Redis JSON shape, so no timezone
// information lives inside the payload itself; the adapter re-wraps the
// resulting time.Time with .UTC() before exposing it to callers.
type runtimeBindingJSON struct {
	ContainerID    string `json:"container_id"`
	EngineEndpoint string `json:"engine_endpoint"`
	RuntimeJobID   string `json:"runtime_job_id"`
	BoundAtMS      int64  `json:"bound_at_ms"`
}

// marshalRuntimeBinding returns nil bytes (SQL NULL) when binding is nil,
// otherwise the JSON encoding of the binding.
func marshalRuntimeBinding(binding *game.RuntimeBinding) ([]byte, error) {
	if binding == nil {
		return nil, nil
	}
	payload := runtimeBindingJSON{
		ContainerID:    binding.ContainerID,
		EngineEndpoint: binding.EngineEndpoint,
		RuntimeJobID:   binding.RuntimeJobID,
		BoundAtMS:      binding.BoundAt.UTC().UnixMilli(),
	}
	encoded, err := json.Marshal(payload)
	if err != nil {
		return nil, fmt.Errorf("marshal runtime binding: %w", err)
	}
	return encoded, nil
}

func unmarshalRuntimeBinding(payload []byte) (*game.RuntimeBinding, error) {
	if len(payload) == 0 {
		return nil, nil
	}
	var stored runtimeBindingJSON
	if err := json.Unmarshal(payload, &stored); err != nil {
		return nil, fmt.Errorf("unmarshal runtime binding: %w", err)
	}
	return &game.RuntimeBinding{
		ContainerID:    stored.ContainerID,
		EngineEndpoint: stored.EngineEndpoint,
		RuntimeJobID:   stored.RuntimeJobID,
		BoundAt:        time.UnixMilli(stored.BoundAtMS).UTC(),
	}, nil
}
@@ -0,0 +1,610 @@
// Package gamestore implements the PostgreSQL-backed adapter for
// `ports.GameStore`.
//
// The package owns the on-disk shape of the `games` table (defined in
// `galaxy/lobby/internal/adapters/postgres/migrations`) and translates the
// schema-agnostic GameStore interface declared in `internal/ports` into
// concrete go-jet/v2 statements driven by the pgx driver. Per-row
// lifecycle transitions (Save/UpdateStatus/UpdateRuntimeSnapshot/
// UpdateRuntimeBinding) use optimistic concurrency (a compare-and-swap on
// the row's current state) rather than retaining a `SELECT ... FOR UPDATE`
// lock across the caller's logic, mirroring the Notification Stage 5
// pattern.
//
// PG_PLAN.md §6A migrates Game Lobby Service away from Redis-backed durable
// game records; see `galaxy/lobby/docs/postgres-migration.md` for the full
// decision record.
package gamestore

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"strings"
	"time"

	"galaxy/lobby/internal/adapters/postgres/internal/sqlx"
	pgtable "galaxy/lobby/internal/adapters/postgres/jet/lobby/table"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/game"
	"galaxy/lobby/internal/ports"

	pg "github.com/go-jet/jet/v2/postgres"
)

// Config configures one PostgreSQL-backed game store instance. The store
// does not own the underlying *sql.DB lifecycle: the caller (typically the
// service runtime) opens, instruments, migrates, and closes the pool.
type Config struct {
	// DB stores the connection pool the store uses for every query.
	DB *sql.DB

	// OperationTimeout bounds one round trip. The store creates a derived
	// context for each operation so callers cannot starve the pool with an
	// unbounded ctx.
	OperationTimeout time.Duration
}

// Store persists Game Lobby game records in PostgreSQL.
type Store struct {
	db               *sql.DB
	operationTimeout time.Duration
}

// New constructs one PostgreSQL-backed game store from cfg.
func New(cfg Config) (*Store, error) {
	if cfg.DB == nil {
		return nil, errors.New("new postgres game store: db must not be nil")
	}
	if cfg.OperationTimeout <= 0 {
		return nil, errors.New("new postgres game store: operation timeout must be positive")
	}
	return &Store{
		db:               cfg.DB,
		operationTimeout: cfg.OperationTimeout,
	}, nil
}

// gameSelectColumns is the canonical SELECT list for the games table,
// matching scanGame's column order.
var gameSelectColumns = pg.ColumnList{
	pgtable.Games.GameID,
	pgtable.Games.GameName,
	pgtable.Games.Description,
	pgtable.Games.GameType,
	pgtable.Games.OwnerUserID,
	pgtable.Games.Status,
	pgtable.Games.MinPlayers,
	pgtable.Games.MaxPlayers,
	pgtable.Games.StartGapHours,
	pgtable.Games.StartGapPlayers,
	pgtable.Games.EnrollmentEndsAt,
	pgtable.Games.TurnSchedule,
	pgtable.Games.TargetEngineVersion,
	pgtable.Games.CreatedAt,
	pgtable.Games.UpdatedAt,
	pgtable.Games.StartedAt,
	pgtable.Games.FinishedAt,
	pgtable.Games.RuntimeSnapshot,
	pgtable.Games.RuntimeBinding,
}

// Save upserts record. Status lookups are served by the
// `games_status_created_idx` index, so callers see the same effect as the
// previous Redis adapter without an explicit secondary-index rewrite.
//
// The implementation is INSERT ... ON CONFLICT (game_id) DO UPDATE: the
// adapter cannot use plain INSERT because callers (notably the create-game
// service and admin updates) expect Save to be an upsert.
func (store *Store) Save(ctx context.Context, record game.Game) error {
	if store == nil || store.db == nil {
		return errors.New("save game: nil store")
	}
	if err := record.Validate(); err != nil {
		return fmt.Errorf("save game: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "save game", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	snapshot, err := marshalRuntimeSnapshot(record.RuntimeSnapshot)
	if err != nil {
		return fmt.Errorf("save game: %w", err)
	}
	binding, err := marshalRuntimeBinding(record.RuntimeBinding)
	if err != nil {
		return fmt.Errorf("save game: %w", err)
	}

	stmt := pgtable.Games.INSERT(
		pgtable.Games.GameID,
		pgtable.Games.GameName,
		pgtable.Games.Description,
		pgtable.Games.GameType,
		pgtable.Games.OwnerUserID,
		pgtable.Games.Status,
		pgtable.Games.MinPlayers,
		pgtable.Games.MaxPlayers,
		pgtable.Games.StartGapHours,
		pgtable.Games.StartGapPlayers,
		pgtable.Games.EnrollmentEndsAt,
		pgtable.Games.TurnSchedule,
		pgtable.Games.TargetEngineVersion,
		pgtable.Games.CreatedAt,
		pgtable.Games.UpdatedAt,
		pgtable.Games.StartedAt,
		pgtable.Games.FinishedAt,
		pgtable.Games.RuntimeSnapshot,
		pgtable.Games.RuntimeBinding,
	).VALUES(
		record.GameID.String(),
		record.GameName,
		record.Description,
		string(record.GameType),
		record.OwnerUserID,
		string(record.Status),
		record.MinPlayers,
		record.MaxPlayers,
		record.StartGapHours,
		record.StartGapPlayers,
		record.EnrollmentEndsAt.UTC(),
		record.TurnSchedule,
		record.TargetEngineVersion,
		record.CreatedAt.UTC(),
		record.UpdatedAt.UTC(),
		sqlx.NullableTimePtr(record.StartedAt),
		sqlx.NullableTimePtr(record.FinishedAt),
		snapshot,
		binding,
	).ON_CONFLICT(pgtable.Games.GameID).DO_UPDATE(
		pg.SET(
			pgtable.Games.GameName.SET(pgtable.Games.EXCLUDED.GameName),
			pgtable.Games.Description.SET(pgtable.Games.EXCLUDED.Description),
			pgtable.Games.GameType.SET(pgtable.Games.EXCLUDED.GameType),
			pgtable.Games.OwnerUserID.SET(pgtable.Games.EXCLUDED.OwnerUserID),
			pgtable.Games.Status.SET(pgtable.Games.EXCLUDED.Status),
			pgtable.Games.MinPlayers.SET(pgtable.Games.EXCLUDED.MinPlayers),
			pgtable.Games.MaxPlayers.SET(pgtable.Games.EXCLUDED.MaxPlayers),
			pgtable.Games.StartGapHours.SET(pgtable.Games.EXCLUDED.StartGapHours),
			pgtable.Games.StartGapPlayers.SET(pgtable.Games.EXCLUDED.StartGapPlayers),
			pgtable.Games.EnrollmentEndsAt.SET(pgtable.Games.EXCLUDED.EnrollmentEndsAt),
			pgtable.Games.TurnSchedule.SET(pgtable.Games.EXCLUDED.TurnSchedule),
			pgtable.Games.TargetEngineVersion.SET(pgtable.Games.EXCLUDED.TargetEngineVersion),
			pgtable.Games.UpdatedAt.SET(pgtable.Games.EXCLUDED.UpdatedAt),
			pgtable.Games.StartedAt.SET(pgtable.Games.EXCLUDED.StartedAt),
			pgtable.Games.FinishedAt.SET(pgtable.Games.EXCLUDED.FinishedAt),
			pgtable.Games.RuntimeSnapshot.SET(pgtable.Games.EXCLUDED.RuntimeSnapshot),
			pgtable.Games.RuntimeBinding.SET(pgtable.Games.EXCLUDED.RuntimeBinding),
		),
	)

	query, args := stmt.Sql()
	if _, err := store.db.ExecContext(operationCtx, query, args...); err != nil {
		return fmt.Errorf("save game: %w", err)
	}
	return nil
}

// Get returns the record identified by gameID. It returns
// game.ErrNotFound when no record exists.
func (store *Store) Get(ctx context.Context, gameID common.GameID) (game.Game, error) {
	if store == nil || store.db == nil {
		return game.Game{}, errors.New("get game: nil store")
	}
	if err := gameID.Validate(); err != nil {
		return game.Game{}, fmt.Errorf("get game: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "get game", store.operationTimeout)
	if err != nil {
		return game.Game{}, err
	}
	defer cancel()

	stmt := pg.SELECT(gameSelectColumns).
		FROM(pgtable.Games).
		WHERE(pgtable.Games.GameID.EQ(pg.String(gameID.String())))

	query, args := stmt.Sql()
	row := store.db.QueryRowContext(operationCtx, query, args...)
	record, err := scanGame(row)
	if sqlx.IsNoRows(err) {
		return game.Game{}, game.ErrNotFound
	}
	if err != nil {
		return game.Game{}, fmt.Errorf("get game: %w", err)
	}
	return record, nil
}

// GetByStatus returns every record whose status equals status. Records are
// sorted by created_at DESC then game_id DESC, matching the most-recent-first
// ordering Lobby's listing services expect.
func (store *Store) GetByStatus(ctx context.Context, status game.Status) ([]game.Game, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get games by status: nil store")
	}
	if !status.IsKnown() {
		return nil, fmt.Errorf("get games by status: status %q is unsupported", status)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "get games by status", store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	stmt := pg.SELECT(gameSelectColumns).
		FROM(pgtable.Games).
		WHERE(pgtable.Games.Status.EQ(pg.String(string(status)))).
		ORDER_BY(pgtable.Games.CreatedAt.DESC(), pgtable.Games.GameID.DESC())

	query, args := stmt.Sql()
	rows, err := store.db.QueryContext(operationCtx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("get games by status: %w", err)
	}
	defer rows.Close()

	records, err := scanAllGames(rows)
	if err != nil {
		return nil, fmt.Errorf("get games by status: %w", err)
	}
	return records, nil
}

// CountByStatus returns the number of records under each known status.
func (store *Store) CountByStatus(ctx context.Context) (map[game.Status]int, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("count games by status: nil store")
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "count games by status", store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	countAlias := pg.COUNT(pg.STAR).AS("count")
	stmt := pg.SELECT(pgtable.Games.Status, countAlias).
		FROM(pgtable.Games).
		GROUP_BY(pgtable.Games.Status)

	query, args := stmt.Sql()
	rows, err := store.db.QueryContext(operationCtx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("count games by status: %w", err)
	}
	defer rows.Close()

	counts := make(map[game.Status]int, len(game.AllStatuses()))
	for _, status := range game.AllStatuses() {
		counts[status] = 0
	}
	for rows.Next() {
		var status string
		var count int
		if err := rows.Scan(&status, &count); err != nil {
			return nil, fmt.Errorf("count games by status: scan: %w", err)
		}
		counts[game.Status(status)] = count
	}
	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("count games by status: %w", err)
	}
	return counts, nil
}

// GetByOwner returns every record whose owner_user_id equals userID. The
// underlying `games_owner_idx` is partial (game_type = 'private'); public
// games carry an empty owner_user_id and are excluded from the index, matching
// the Redis-backed behaviour.
func (store *Store) GetByOwner(ctx context.Context, userID string) ([]game.Game, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get games by owner: nil store")
	}
	trimmed := strings.TrimSpace(userID)
	if trimmed == "" {
		return nil, errors.New("get games by owner: user id must not be empty")
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "get games by owner", store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	stmt := pg.SELECT(gameSelectColumns).
		FROM(pgtable.Games).
		WHERE(pgtable.Games.OwnerUserID.EQ(pg.String(trimmed))).
		ORDER_BY(pgtable.Games.CreatedAt.DESC(), pgtable.Games.GameID.DESC())

	query, args := stmt.Sql()
	rows, err := store.db.QueryContext(operationCtx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("get games by owner: %w", err)
	}
	defer rows.Close()

	records, err := scanAllGames(rows)
	if err != nil {
		return nil, fmt.Errorf("get games by owner: %w", err)
	}
	return records, nil
}

// UpdateStatus applies one status transition with compare-and-swap on the
// current status column. The domain transition gate runs before any SQL
// touch.
func (store *Store) UpdateStatus(ctx context.Context, input ports.UpdateStatusInput) error {
	if store == nil || store.db == nil {
		return errors.New("update game status: nil store")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update game status: %w", err)
	}
	if err := game.Transition(input.ExpectedFrom, input.To, input.Trigger); err != nil {
		return err
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update game status", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	at := input.At.UTC()
	var startedAt, finishedAt any
	if input.To == game.StatusRunning {
		startedAt = at
	}
	if input.To == game.StatusFinished {
		finishedAt = at
	}

	// COALESCE keeps the existing started_at/finished_at when the new value
	// is NULL (the bind parameter is nil unless we are entering the
	// running/finished state for the first time).
	startedExpr := pg.COALESCE(pgtable.Games.StartedAt, pg.TimestampzT(at))
	if startedAt == nil {
		startedExpr = pgtable.Games.StartedAt
	}
	finishedExpr := pg.COALESCE(pgtable.Games.FinishedAt, pg.TimestampzT(at))
	if finishedAt == nil {
		finishedExpr = pgtable.Games.FinishedAt
	}

	stmt := pgtable.Games.UPDATE(
		pgtable.Games.Status,
		pgtable.Games.UpdatedAt,
		pgtable.Games.StartedAt,
		pgtable.Games.FinishedAt,
	).SET(
		pg.String(string(input.To)),
		pg.TimestampzT(at),
		startedExpr,
		finishedExpr,
	).WHERE(pg.AND(
		pgtable.Games.GameID.EQ(pg.String(input.GameID.String())),
		pgtable.Games.Status.EQ(pg.String(string(input.ExpectedFrom))),
	))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("update game status: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("update game status: rows affected: %w", err)
	}
	if affected == 0 {
		// distinguish "not found" from "status mismatch" with a follow-up read
		probe := pg.SELECT(pgtable.Games.Status).
			FROM(pgtable.Games).
			WHERE(pgtable.Games.GameID.EQ(pg.String(input.GameID.String())))
		probeQuery, probeArgs := probe.Sql()

		var current string
		row := store.db.QueryRowContext(operationCtx, probeQuery, probeArgs...)
		if err := row.Scan(&current); err != nil {
			if sqlx.IsNoRows(err) {
				return game.ErrNotFound
			}
			return fmt.Errorf("update game status: probe: %w", err)
		}
		return fmt.Errorf("update game status: %w", game.ErrConflict)
	}
	return nil
}

// UpdateRuntimeSnapshot overwrites the denormalised runtime snapshot fields.
func (store *Store) UpdateRuntimeSnapshot(ctx context.Context, input ports.UpdateRuntimeSnapshotInput) error {
	if store == nil || store.db == nil {
		return errors.New("update runtime snapshot: nil store")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update runtime snapshot: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update runtime snapshot", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	snapshot, err := marshalRuntimeSnapshot(input.Snapshot)
	if err != nil {
		return fmt.Errorf("update runtime snapshot: %w", err)
	}
	at := input.At.UTC()

	stmt := pgtable.Games.UPDATE(pgtable.Games.RuntimeSnapshot, pgtable.Games.UpdatedAt).
		SET(snapshot, at).
		WHERE(pgtable.Games.GameID.EQ(pg.String(input.GameID.String())))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("update runtime snapshot: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("update runtime snapshot: rows affected: %w", err)
	}
	if affected == 0 {
		return game.ErrNotFound
	}
	return nil
}

// UpdateRuntimeBinding overwrites the runtime binding metadata.
func (store *Store) UpdateRuntimeBinding(ctx context.Context, input ports.UpdateRuntimeBindingInput) error {
	if store == nil || store.db == nil {
		return errors.New("update runtime binding: nil store")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update runtime binding: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update runtime binding", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	binding := input.Binding
	encoded, err := marshalRuntimeBinding(&binding)
	if err != nil {
		return fmt.Errorf("update runtime binding: %w", err)
	}
	at := input.At.UTC()

	stmt := pgtable.Games.UPDATE(pgtable.Games.RuntimeBinding, pgtable.Games.UpdatedAt).
		SET(encoded, at).
		WHERE(pgtable.Games.GameID.EQ(pg.String(input.GameID.String())))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("update runtime binding: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("update runtime binding: rows affected: %w", err)
	}
	if affected == 0 {
		return game.ErrNotFound
	}
	return nil
}

// rowScanner abstracts *sql.Row and *sql.Rows so scanGame can be shared
// across both single-row reads and iterated reads.
type rowScanner interface {
	Scan(dest ...any) error
}

// scanGame scans one games row from rs. Returns sql.ErrNoRows verbatim so
// callers can distinguish "no row" from a hard error.
func scanGame(rs rowScanner) (game.Game, error) {
	var (
		gameID              string
		gameName            string
		description         string
		gameType            string
		ownerUserID         string
		status              string
		minPlayers          int
		maxPlayers          int
		startGapHours       int
		startGapPlayers     int
		enrollmentEndsAt    time.Time
		turnSchedule        string
		targetEngineVersion string
		createdAt           time.Time
		updatedAt           time.Time
		startedAt           sql.NullTime
		finishedAt          sql.NullTime
		runtimeSnapshot     []byte
		runtimeBinding      []byte
	)
	if err := rs.Scan(
		&gameID,
		&gameName,
		&description,
		&gameType,
		&ownerUserID,
		&status,
		&minPlayers,
		&maxPlayers,
		&startGapHours,
		&startGapPlayers,
		&enrollmentEndsAt,
		&turnSchedule,
		&targetEngineVersion,
		&createdAt,
		&updatedAt,
		&startedAt,
		&finishedAt,
		&runtimeSnapshot,
		&runtimeBinding,
	); err != nil {
		return game.Game{}, err
	}

	snapshot, err := unmarshalRuntimeSnapshot(runtimeSnapshot)
	if err != nil {
		return game.Game{}, err
	}
	binding, err := unmarshalRuntimeBinding(runtimeBinding)
	if err != nil {
		return game.Game{}, err
	}

	return game.Game{
		GameID:              common.GameID(gameID),
		GameName:            gameName,
		Description:         description,
		GameType:            game.GameType(gameType),
		OwnerUserID:         ownerUserID,
		Status:              game.Status(status),
		MinPlayers:          minPlayers,
		MaxPlayers:          maxPlayers,
		StartGapHours:       startGapHours,
		StartGapPlayers:     startGapPlayers,
		EnrollmentEndsAt:    enrollmentEndsAt.UTC(),
		TurnSchedule:        turnSchedule,
		TargetEngineVersion: targetEngineVersion,
		CreatedAt:           createdAt.UTC(),
		UpdatedAt:           updatedAt.UTC(),
		StartedAt:           sqlx.TimePtrFromNullable(startedAt),
		FinishedAt:          sqlx.TimePtrFromNullable(finishedAt),
		RuntimeSnapshot:     snapshot,
		RuntimeBinding:      binding,
	}, nil
}

func scanAllGames(rows *sql.Rows) ([]game.Game, error) {
	records := make([]game.Game, 0)
	for rows.Next() {
		record, err := scanGame(rows)
		if err != nil {
			return nil, fmt.Errorf("scan: %w", err)
		}
		records = append(records, record)
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
	if len(records) == 0 {
		return nil, nil
	}
	return records, nil
}

// Ensure Store satisfies the ports.GameStore interface at compile time.
var _ ports.GameStore = (*Store)(nil)
@@ -0,0 +1,338 @@
package gamestore_test

import (
	"context"
	"testing"
	"time"

	"galaxy/lobby/internal/adapters/postgres/gamestore"
	"galaxy/lobby/internal/adapters/postgres/internal/pgtest"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/game"
	"galaxy/lobby/internal/ports"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestMain(m *testing.M) { pgtest.RunMain(m) }

func newStore(t *testing.T) *gamestore.Store {
	t.Helper()
	pgtest.TruncateAll(t)
	store, err := gamestore.New(gamestore.Config{
		DB:               pgtest.Ensure(t).Pool(),
		OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	return store
}

func fixturePublicGame(t *testing.T, id string) game.Game {
	t.Helper()
	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	record, err := game.New(game.NewGameInput{
		GameID:              common.GameID(id),
		GameName:            "Spring Classic " + id,
		Description:         "first public game",
		GameType:            game.GameTypePublic,
		MinPlayers:          4,
		MaxPlayers:          8,
		StartGapHours:       24,
		StartGapPlayers:     2,
		EnrollmentEndsAt:    now.Add(7 * 24 * time.Hour),
		TurnSchedule:        "0 18 * * *",
		TargetEngineVersion: "v1.2.3",
		Now:                 now,
	})
	require.NoError(t, err)
	return record
}

func fixturePrivateGame(t *testing.T, id, ownerID string) game.Game {
	t.Helper()
	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	record, err := game.New(game.NewGameInput{
		GameID:              common.GameID(id),
		GameName:            "Private " + id,
		GameType:            game.GameTypePrivate,
		OwnerUserID:         ownerID,
		MinPlayers:          2,
		MaxPlayers:          6,
		StartGapHours:       12,
		StartGapPlayers:     2,
		EnrollmentEndsAt:    now.Add(7 * 24 * time.Hour),
		TurnSchedule:        "0 18 * * *",
		TargetEngineVersion: "v1.0.0",
		Now:                 now,
	})
	require.NoError(t, err)
	return record
}

func TestSaveAndGet(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	record := fixturePublicGame(t, "game-001")
	require.NoError(t, store.Save(ctx, record))

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, record.GameID, got.GameID)
	assert.Equal(t, record.GameName, got.GameName)
	assert.Equal(t, record.Status, got.Status)
	assert.Equal(t, record.MinPlayers, got.MinPlayers)
	assert.Equal(t, record.MaxPlayers, got.MaxPlayers)
	assert.True(t, record.EnrollmentEndsAt.Equal(got.EnrollmentEndsAt))
	assert.Equal(t, time.UTC, got.CreatedAt.Location())
	assert.Equal(t, time.UTC, got.UpdatedAt.Location())
}

func TestGetReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)
	_, err := store.Get(ctx, common.GameID("game-missing-x"))
	require.ErrorIs(t, err, game.ErrNotFound)
}

func TestSaveIsUpsert(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	record := fixturePublicGame(t, "game-001")
	require.NoError(t, store.Save(ctx, record))

	// edit a few fields, save again
	record.GameName = "Renamed"
	record.UpdatedAt = record.UpdatedAt.Add(time.Minute)
	require.NoError(t, store.Save(ctx, record))

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, "Renamed", got.GameName)
}

func TestUpdateStatusHappyPath(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	record := fixturePublicGame(t, "game-001")
	require.NoError(t, store.Save(ctx, record))

	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       record.GameID,
		ExpectedFrom: game.StatusDraft,
		To:           game.StatusEnrollmentOpen,
		Trigger:      game.TriggerCommand,
		At:           record.UpdatedAt.Add(time.Minute),
	}))

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, game.StatusEnrollmentOpen, got.Status)
}

func TestUpdateStatusReturnsConflictOnExpectedFromMismatch(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	record := fixturePublicGame(t, "game-001")
	require.NoError(t, store.Save(ctx, record))

	err := store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       record.GameID,
		ExpectedFrom: game.StatusEnrollmentOpen, // wrong
		To:           game.StatusReadyToStart,
		Trigger:      game.TriggerManual,
		At:           record.UpdatedAt.Add(time.Minute),
	})
	require.ErrorIs(t, err, game.ErrConflict)
}

func TestUpdateStatusReturnsNotFoundForMissing(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	err := store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       common.GameID("game-missing-x"),
		ExpectedFrom: game.StatusDraft,
		To:           game.StatusEnrollmentOpen,
		Trigger:      game.TriggerCommand,
		At:           time.Now().UTC(),
	})
	require.ErrorIs(t, err, game.ErrNotFound)
}

func TestUpdateStatusSetsStartedAtOnRunning(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	record := fixturePublicGame(t, "game-001")
	require.NoError(t, store.Save(ctx, record))
	advance := func(from, to game.Status, trigger game.Trigger, at time.Time) {
		require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
			GameID: record.GameID, ExpectedFrom: from, To: to, Trigger: trigger, At: at,
		}))
	}
	now := record.UpdatedAt.Add(time.Minute)
	advance(game.StatusDraft, game.StatusEnrollmentOpen, game.TriggerCommand, now)
	advance(game.StatusEnrollmentOpen, game.StatusReadyToStart, game.TriggerManual, now.Add(time.Minute))
	advance(game.StatusReadyToStart, game.StatusStarting, game.TriggerCommand, now.Add(2*time.Minute))
	startedAt := now.Add(3 * time.Minute)
	advance(game.StatusStarting, game.StatusRunning, game.TriggerRuntimeEvent, startedAt)

	got, err := store.Get(ctx, record.GameID)
	require.NoError(t, err)
	assert.Equal(t, game.StatusRunning, got.Status)
	require.NotNil(t, got.StartedAt)
	assert.True(t, got.StartedAt.Equal(startedAt))
}

func TestGetByStatusReturnsExpectedRecords(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	a := fixturePublicGame(t, "game-aaa")
	b := fixturePublicGame(t, "game-bbb")
	c := fixturePublicGame(t, "game-ccc")
	for _, r := range []game.Game{a, b, c} {
		require.NoError(t, store.Save(ctx, r))
	}
	require.NoError(t, store.UpdateStatus(ctx, ports.UpdateStatusInput{
		GameID:       b.GameID,
		ExpectedFrom: game.StatusDraft,
		To:           game.StatusEnrollmentOpen,
		Trigger:      game.TriggerCommand,
		At:           b.UpdatedAt.Add(time.Minute),
	}))

	drafts, err := store.GetByStatus(ctx, game.StatusDraft)
	require.NoError(t, err)
	gotIDs := map[common.GameID]struct{}{}
	for _, r := range drafts {
		gotIDs[r.GameID] = struct{}{}
	}
	assert.Contains(t, gotIDs, a.GameID)
	assert.Contains(t, gotIDs, c.GameID)
	assert.NotContains(t, gotIDs, b.GameID)

	open, err := store.GetByStatus(ctx, game.StatusEnrollmentOpen)
	require.NoError(t, err)
	require.Len(t, open, 1)
	assert.Equal(t, b.GameID, open[0].GameID)
}

func TestGetByOwnerOnlyReturnsPrivateGames(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)

	owner := "user-123"
	pub := fixturePublicGame(t, "game-pub-001")
	priv1 := fixturePrivateGame(t, "game-priv-001", owner)
	priv2 := fixturePrivateGame(t, "game-priv-002", owner)
	priv3 := fixturePrivateGame(t, "game-priv-003", "user-other")
	for _, r := range []game.Game{pub, priv1, priv2, priv3} {
		require.NoError(t, store.Save(ctx, r))
	}

	got, err := store.GetByOwner(ctx, owner)
	require.NoError(t, err)
	ids := map[common.GameID]struct{}{}
	for _, r := range got {
		ids[r.GameID] = struct{}{}
	}
	assert.Contains(t, ids, priv1.GameID)
	assert.Contains(t, ids, priv2.GameID)
	assert.NotContains(t, ids, priv3.GameID)
	assert.NotContains(t, ids, pub.GameID)
}

func TestCountByStatusIncludesAllBuckets(t *testing.T) {
	ctx := context.Background()
	store := newStore(t)
|
||||||
|
|
||||||
|
require.NoError(t, store.Save(ctx, fixturePublicGame(t, "game-aaa")))
|
||||||
|
require.NoError(t, store.Save(ctx, fixturePublicGame(t, "game-bbb")))
|
||||||
|
|
||||||
|
counts, err := store.CountByStatus(ctx)
|
||||||
|
require.NoError(t, err)
|
||||||
|
for _, status := range game.AllStatuses() {
|
||||||
|
_, ok := counts[status]
|
||||||
|
assert.Truef(t, ok, "missing bucket for %q", status)
|
||||||
|
}
|
||||||
|
assert.Equal(t, 2, counts[game.StatusDraft])
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestUpdateRuntimeSnapshotRoundTripsValues(t *testing.T) {
|
||||||
|
ctx := context.Background()
|
||||||
|
store := newStore(t)
|
||||||
|
|
||||||
|
record := fixturePublicGame(t, "game-001")
|
||||||
|
require.NoError(t, store.Save(ctx, record))
|
||||||
|
|
||||||
|
snapshot := game.RuntimeSnapshot{
|
||||||
|
CurrentTurn: 42,
|
||||||
|
RuntimeStatus: "running_accepting_commands",
|
||||||
|
EngineHealthSummary: "ok",
|
||||||
|
}
|
||||||
|
require.NoError(t, store.UpdateRuntimeSnapshot(ctx, ports.UpdateRuntimeSnapshotInput{
|
||||||
|
GameID: record.GameID,
|
||||||
|
Snapshot: snapshot,
|
||||||
|
At: record.UpdatedAt.Add(time.Minute),
|
||||||
|
}))
|
||||||
|
|
||||||
|
got, err := store.Get(ctx, record.GameID)
|
||||||
|
require.NoError(t, err)
|
||||||
|
assert.Equal(t, snapshot, got.RuntimeSnapshot)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestUpdateRuntimeBindingRoundTripsValues(t *testing.T) {
|
||||||
|
ctx := context.Background()
|
||||||
|
store := newStore(t)
|
||||||
|
|
||||||
|
record := fixturePublicGame(t, "game-001")
|
||||||
|
require.NoError(t, store.Save(ctx, record))
|
||||||
|
|
||||||
|
at := record.UpdatedAt.Add(time.Minute)
|
||||||
|
require.NoError(t, store.UpdateRuntimeBinding(ctx, ports.UpdateRuntimeBindingInput{
|
||||||
|
GameID: record.GameID,
|
||||||
|
Binding: game.RuntimeBinding{
|
||||||
|
ContainerID: "container-7",
|
||||||
|
EngineEndpoint: "10.0.0.5:9000",
|
||||||
|
RuntimeJobID: "1700000000-0",
|
||||||
|
BoundAt: at,
|
||||||
|
},
|
||||||
|
At: at,
|
||||||
|
}))
|
||||||
|
|
||||||
|
got, err := store.Get(ctx, record.GameID)
|
||||||
|
require.NoError(t, err)
|
||||||
|
require.NotNil(t, got.RuntimeBinding)
|
||||||
|
assert.Equal(t, "container-7", got.RuntimeBinding.ContainerID)
|
||||||
|
assert.Equal(t, "10.0.0.5:9000", got.RuntimeBinding.EngineEndpoint)
|
||||||
|
assert.Equal(t, "1700000000-0", got.RuntimeBinding.RuntimeJobID)
|
||||||
|
assert.True(t, got.RuntimeBinding.BoundAt.Equal(at))
|
||||||
|
assert.Equal(t, time.UTC, got.RuntimeBinding.BoundAt.Location())
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestUpdateRuntimeSnapshotReturnsNotFoundForMissing(t *testing.T) {
|
||||||
|
ctx := context.Background()
|
||||||
|
store := newStore(t)
|
||||||
|
err := store.UpdateRuntimeSnapshot(ctx, ports.UpdateRuntimeSnapshotInput{
|
||||||
|
GameID: common.GameID("game-missing-x"),
|
||||||
|
Snapshot: game.RuntimeSnapshot{CurrentTurn: 1},
|
||||||
|
At: time.Now().UTC(),
|
||||||
|
})
|
||||||
|
require.ErrorIs(t, err, game.ErrNotFound)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestNewRejectsNilDB(t *testing.T) {
|
||||||
|
_, err := gamestore.New(gamestore.Config{OperationTimeout: time.Second})
|
||||||
|
require.Error(t, err)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestNewRejectsNonPositiveTimeout(t *testing.T) {
|
||||||
|
_, err := gamestore.New(gamestore.Config{DB: pgtest.Ensure(t).Pool()})
|
||||||
|
require.Error(t, err)
|
||||||
|
}
|
||||||
@@ -0,0 +1,208 @@
// Package pgtest exposes the testcontainers-backed PostgreSQL bootstrap
// shared by every Game Lobby PG adapter test. The package is regular Go
// code — not a `_test.go` file — so it can be imported by the `_test.go`
// files in the four sibling store packages (`gamestore`, `applicationstore`,
// `invitestore`, `membershipstore`).
//
// No production code in `cmd/lobby` or in the runtime imports this package.
// The testcontainers-go dependency therefore stays out of the production
// binary's import graph.
package pgtest

import (
	"context"
	"database/sql"
	"net/url"
	"os"
	"sync"
	"testing"
	"time"

	"galaxy/lobby/internal/adapters/postgres/migrations"
	"galaxy/postgres"

	testcontainers "github.com/testcontainers/testcontainers-go"
	tcpostgres "github.com/testcontainers/testcontainers-go/modules/postgres"
	"github.com/testcontainers/testcontainers-go/wait"
)

const (
	postgresImage    = "postgres:16-alpine"
	superUser        = "galaxy"
	superPassword    = "galaxy"
	superDatabase    = "galaxy_lobby"
	serviceRole      = "lobbyservice"
	servicePassword  = "lobbyservice"
	serviceSchema    = "lobby"
	containerStartup = 90 * time.Second

	// OperationTimeout is the per-statement timeout the adapter tests pass
	// to each store constructor. Tests may pass a smaller value if they
	// need to assert deadline behaviour explicitly.
	OperationTimeout = 10 * time.Second
)

// Env holds the per-process container plus the *sql.DB pool already
// provisioned with the lobby schema, role, and migrations applied.
type Env struct {
	container *tcpostgres.PostgresContainer
	pool      *sql.DB
}

// Pool returns the shared pool. Tests truncate per-table state before each
// run via TruncateAll.
func (env *Env) Pool() *sql.DB { return env.pool }

var (
	once   sync.Once
	cur    *Env
	curErr error
)

// Ensure starts the PostgreSQL container on first invocation and applies
// the embedded goose migrations. Subsequent invocations reuse the same
// container/pool. When Docker is unavailable Ensure calls t.Skip with the
// underlying error so the test suite still passes on machines without
// Docker.
func Ensure(t testing.TB) *Env {
	t.Helper()
	once.Do(func() {
		cur, curErr = start()
	})
	if curErr != nil {
		t.Skipf("postgres container start failed (Docker unavailable?): %v", curErr)
	}
	return cur
}

// TruncateAll wipes every Game Lobby table inside the shared pool, leaving
// the schema and indexes intact. Use it from each test that needs a clean
// slate.
func TruncateAll(t testing.TB) {
	t.Helper()
	env := Ensure(t)
	const stmt = `TRUNCATE TABLE memberships, invites, applications, games, race_names RESTART IDENTITY CASCADE`
	if _, err := env.pool.ExecContext(context.Background(), stmt); err != nil {
		t.Fatalf("truncate lobby tables: %v", err)
	}
}

// Shutdown terminates the shared container and closes the pool. It is
// invoked from each test package's TestMain after `m.Run` returns so the
// container is released even if individual tests panic.
func Shutdown() {
	if cur == nil {
		return
	}
	if cur.pool != nil {
		_ = cur.pool.Close()
	}
	if cur.container != nil {
		_ = testcontainers.TerminateContainer(cur.container)
	}
	cur = nil
}

// RunMain is a convenience helper for each store package's TestMain: it
// runs the test main, captures the exit code, shuts the container down, and
// exits. Wiring it through one helper keeps every TestMain to a single line.
func RunMain(m *testing.M) {
	code := m.Run()
	Shutdown()
	os.Exit(code)
}

func start() (*Env, error) {
	ctx := context.Background()
	container, err := tcpostgres.Run(ctx, postgresImage,
		tcpostgres.WithDatabase(superDatabase),
		tcpostgres.WithUsername(superUser),
		tcpostgres.WithPassword(superPassword),
		testcontainers.WithWaitStrategy(
			wait.ForLog("database system is ready to accept connections").
				WithOccurrence(2).
				WithStartupTimeout(containerStartup),
		),
	)
	if err != nil {
		return nil, err
	}
	baseDSN, err := container.ConnectionString(ctx, "sslmode=disable")
	if err != nil {
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	if err := provisionRoleAndSchema(ctx, baseDSN); err != nil {
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	scopedDSN, err := dsnForServiceRole(baseDSN)
	if err != nil {
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = scopedDSN
	cfg.OperationTimeout = OperationTimeout
	pool, err := postgres.OpenPrimary(ctx, cfg)
	if err != nil {
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	if err := postgres.Ping(ctx, pool, OperationTimeout); err != nil {
		_ = pool.Close()
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	if err := postgres.RunMigrations(ctx, pool, migrations.FS(), "."); err != nil {
		_ = pool.Close()
		_ = testcontainers.TerminateContainer(container)
		return nil, err
	}
	return &Env{container: container, pool: pool}, nil
}

func provisionRoleAndSchema(ctx context.Context, baseDSN string) error {
	cfg := postgres.DefaultConfig()
	cfg.PrimaryDSN = baseDSN
	cfg.OperationTimeout = OperationTimeout
	db, err := postgres.OpenPrimary(ctx, cfg)
	if err != nil {
		return err
	}
	defer func() { _ = db.Close() }()

	statements := []string{
		`DO $$ BEGIN
			IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'lobbyservice') THEN
				CREATE ROLE lobbyservice LOGIN PASSWORD 'lobbyservice';
			END IF;
		END $$;`,
		`CREATE SCHEMA IF NOT EXISTS lobby AUTHORIZATION lobbyservice;`,
		`GRANT USAGE ON SCHEMA lobby TO lobbyservice;`,
	}
	for _, statement := range statements {
		if _, err := db.ExecContext(ctx, statement); err != nil {
			return err
		}
	}
	return nil
}

func dsnForServiceRole(baseDSN string) (string, error) {
	parsed, err := url.Parse(baseDSN)
	if err != nil {
		return "", err
	}
	values := url.Values{}
	values.Set("search_path", serviceSchema)
	values.Set("sslmode", "disable")
	scoped := url.URL{
		Scheme:   parsed.Scheme,
		User:     url.UserPassword(serviceRole, servicePassword),
		Host:     parsed.Host,
		Path:     parsed.Path,
		RawQuery: values.Encode(),
	}
	return scoped.String(), nil
}
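For reference, the DSN rewrite that `dsnForServiceRole` performs can be exercised standalone. This is a minimal sketch with the fixture role, password, and schema inlined as string literals (they mirror the constants above, but `serviceDSN` itself is a hypothetical name, not part of the package):

```go
package main

import (
	"fmt"
	"net/url"
)

// serviceDSN keeps the container's scheme, host, and database from the
// superuser DSN but swaps in the service-role credentials and pins
// search_path to the service schema via the query string.
func serviceDSN(baseDSN string) (string, error) {
	parsed, err := url.Parse(baseDSN)
	if err != nil {
		return "", err
	}
	values := url.Values{}
	values.Set("search_path", "lobby")
	values.Set("sslmode", "disable")
	scoped := url.URL{
		Scheme:   parsed.Scheme,
		User:     url.UserPassword("lobbyservice", "lobbyservice"),
		Host:     parsed.Host,
		Path:     parsed.Path,
		RawQuery: values.Encode(), // keys are sorted: search_path before sslmode
	}
	return scoped.String(), nil
}

func main() {
	dsn, err := serviceDSN("postgres://galaxy:galaxy@localhost:55432/galaxy_lobby?sslmode=disable")
	if err != nil {
		panic(err)
	}
	fmt.Println(dsn)
}
```

Any extra query parameters on the superuser DSN are intentionally dropped: the scoped DSN is rebuilt from scratch so only the two parameters the adapters rely on survive.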
@@ -0,0 +1,96 @@
// Package sqlx contains the small set of helpers shared by every Game Lobby
// PostgreSQL adapter (gamestore, applicationstore, invitestore,
// membershipstore). The helpers centralise two boundary translations: the
// per-service ARCHITECTURE.md timestamp-handling rules, and the pgx
// SQLSTATE codes the adapters interpret as domain conflicts.
package sqlx

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"time"

	"github.com/jackc/pgx/v5/pgconn"
)

// PgUniqueViolationCode identifies the SQLSTATE returned by PostgreSQL when
// a UNIQUE constraint is violated by INSERT or UPDATE.
const PgUniqueViolationCode = "23505"

// IsUniqueViolation reports whether err is a PostgreSQL unique-violation,
// regardless of constraint name.
func IsUniqueViolation(err error) bool {
	var pgErr *pgconn.PgError
	if !errors.As(err, &pgErr) {
		return false
	}
	return pgErr.Code == PgUniqueViolationCode
}

// IsNoRows reports whether err is sql.ErrNoRows.
func IsNoRows(err error) bool {
	return errors.Is(err, sql.ErrNoRows)
}

// NullableTime returns t.UTC() when non-zero, otherwise nil so the column
// is bound as SQL NULL. Several Lobby domain records use *time.Time to
// express absent timestamps; for those, callers translate the pointer with
// NullableTimePtr instead.
func NullableTime(t time.Time) any {
	if t.IsZero() {
		return nil
	}
	return t.UTC()
}

// NullableTimePtr returns t.UTC() when t is non-nil and non-zero, otherwise
// nil. The helper is the *time.Time companion of NullableTime: every Lobby
// domain record has at least one optional `*time.Time` field
// (`StartedAt`, `FinishedAt`, `DecidedAt`, `RemovedAt`) that maps to a
// nullable timestamptz column.
func NullableTimePtr(t *time.Time) any {
	if t == nil {
		return nil
	}
	return NullableTime(*t)
}

// TimeFromNullable copies an optional sql.NullTime read from PostgreSQL
// into a domain time.Time, applying the global UTC normalisation rule.
// Invalid (NULL) values become the zero time.Time.
func TimeFromNullable(value sql.NullTime) time.Time {
	if !value.Valid {
		return time.Time{}
	}
	return value.Time.UTC()
}

// TimePtrFromNullable copies an optional sql.NullTime into a domain
// *time.Time. NULL becomes nil; non-NULL values are wrapped after UTC
// normalisation.
func TimePtrFromNullable(value sql.NullTime) *time.Time {
	if !value.Valid {
		return nil
	}
	t := value.Time.UTC()
	return &t
}

// WithTimeout derives a child context bounded by timeout and prefixes
// context errors with operation. Callers must always invoke the returned
// cancel.
func WithTimeout(ctx context.Context, operation string, timeout time.Duration) (context.Context, context.CancelFunc, error) {
	if ctx == nil {
		return nil, nil, fmt.Errorf("%s: nil context", operation)
	}
	if err := ctx.Err(); err != nil {
		return nil, nil, fmt.Errorf("%s: %w", operation, err)
	}
	if timeout <= 0 {
		return nil, nil, fmt.Errorf("%s: operation timeout must be positive", operation)
	}
	bounded, cancel := context.WithTimeout(ctx, timeout)
	return bounded, cancel, nil
}
@@ -0,0 +1,348 @@
// Package invitestore implements the PostgreSQL-backed adapter for
// `ports.InviteStore`.
//
// PG_PLAN.md §6A migrates Game Lobby Service away from Redis-backed durable
// invite records.
package invitestore

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"strings"
	"time"

	"galaxy/lobby/internal/adapters/postgres/internal/sqlx"
	pgtable "galaxy/lobby/internal/adapters/postgres/jet/lobby/table"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/invite"
	"galaxy/lobby/internal/ports"

	pg "github.com/go-jet/jet/v2/postgres"
)

// Config configures one PostgreSQL-backed invite store instance.
type Config struct {
	DB               *sql.DB
	OperationTimeout time.Duration
}

// Store persists Game Lobby invite records in PostgreSQL.
type Store struct {
	db               *sql.DB
	operationTimeout time.Duration
}

// New constructs one PostgreSQL-backed invite store from cfg.
func New(cfg Config) (*Store, error) {
	if cfg.DB == nil {
		return nil, errors.New("new postgres invite store: db must not be nil")
	}
	if cfg.OperationTimeout <= 0 {
		return nil, errors.New("new postgres invite store: operation timeout must be positive")
	}
	return &Store{
		db:               cfg.DB,
		operationTimeout: cfg.OperationTimeout,
	}, nil
}

// inviteSelectColumns is the canonical SELECT list for the invites table,
// matching scanInvite's column order.
var inviteSelectColumns = pg.ColumnList{
	pgtable.Invites.InviteID,
	pgtable.Invites.GameID,
	pgtable.Invites.InviterUserID,
	pgtable.Invites.InviteeUserID,
	pgtable.Invites.RaceName,
	pgtable.Invites.Status,
	pgtable.Invites.CreatedAt,
	pgtable.Invites.ExpiresAt,
	pgtable.Invites.DecidedAt,
}

// Save persists a new created invite record. Save is create-only; a second
// save against the same invite id maps the unique-violation to
// invite.ErrConflict.
func (store *Store) Save(ctx context.Context, record invite.Invite) error {
	if store == nil || store.db == nil {
		return errors.New("save invite: nil store")
	}
	if err := record.Validate(); err != nil {
		return fmt.Errorf("save invite: %w", err)
	}
	if record.Status != invite.StatusCreated {
		return fmt.Errorf(
			"save invite: status must be %q, got %q",
			invite.StatusCreated, record.Status,
		)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "save invite", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	stmt := pgtable.Invites.INSERT(
		pgtable.Invites.InviteID,
		pgtable.Invites.GameID,
		pgtable.Invites.InviterUserID,
		pgtable.Invites.InviteeUserID,
		pgtable.Invites.RaceName,
		pgtable.Invites.Status,
		pgtable.Invites.CreatedAt,
		pgtable.Invites.ExpiresAt,
		pgtable.Invites.DecidedAt,
	).VALUES(
		record.InviteID.String(),
		record.GameID.String(),
		record.InviterUserID,
		record.InviteeUserID,
		record.RaceName,
		string(record.Status),
		record.CreatedAt.UTC(),
		record.ExpiresAt.UTC(),
		sqlx.NullableTimePtr(record.DecidedAt),
	)

	query, args := stmt.Sql()
	if _, err := store.db.ExecContext(operationCtx, query, args...); err != nil {
		if sqlx.IsUniqueViolation(err) {
			return fmt.Errorf("save invite: %w", invite.ErrConflict)
		}
		return fmt.Errorf("save invite: %w", err)
	}
	return nil
}

// Get returns the record identified by inviteID.
func (store *Store) Get(ctx context.Context, inviteID common.InviteID) (invite.Invite, error) {
	if store == nil || store.db == nil {
		return invite.Invite{}, errors.New("get invite: nil store")
	}
	if err := inviteID.Validate(); err != nil {
		return invite.Invite{}, fmt.Errorf("get invite: %w", err)
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "get invite", store.operationTimeout)
	if err != nil {
		return invite.Invite{}, err
	}
	defer cancel()

	stmt := pg.SELECT(inviteSelectColumns).
		FROM(pgtable.Invites).
		WHERE(pgtable.Invites.InviteID.EQ(pg.String(inviteID.String())))

	query, args := stmt.Sql()
	row := store.db.QueryRowContext(operationCtx, query, args...)
	record, err := scanInvite(row)
	if sqlx.IsNoRows(err) {
		return invite.Invite{}, invite.ErrNotFound
	}
	if err != nil {
		return invite.Invite{}, fmt.Errorf("get invite: %w", err)
	}
	return record, nil
}

// GetByGame returns every invite attached to gameID.
func (store *Store) GetByGame(ctx context.Context, gameID common.GameID) ([]invite.Invite, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get invites by game: nil store")
	}
	if err := gameID.Validate(); err != nil {
		return nil, fmt.Errorf("get invites by game: %w", err)
	}

	stmt := pg.SELECT(inviteSelectColumns).
		FROM(pgtable.Invites).
		WHERE(pgtable.Invites.GameID.EQ(pg.String(gameID.String()))).
		ORDER_BY(pgtable.Invites.CreatedAt.ASC(), pgtable.Invites.InviteID.ASC())

	return store.queryList(ctx, "get invites by game", stmt)
}

// GetByUser returns every invite addressed to inviteeUserID.
func (store *Store) GetByUser(ctx context.Context, inviteeUserID string) ([]invite.Invite, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get invites by user: nil store")
	}
	trimmed := strings.TrimSpace(inviteeUserID)
	if trimmed == "" {
		return nil, errors.New("get invites by user: invitee user id must not be empty")
	}

	stmt := pg.SELECT(inviteSelectColumns).
		FROM(pgtable.Invites).
		WHERE(pgtable.Invites.InviteeUserID.EQ(pg.String(trimmed))).
		ORDER_BY(pgtable.Invites.CreatedAt.ASC(), pgtable.Invites.InviteID.ASC())

	return store.queryList(ctx, "get invites by user", stmt)
}

// GetByInviter returns every invite created by inviterUserID.
func (store *Store) GetByInviter(ctx context.Context, inviterUserID string) ([]invite.Invite, error) {
	if store == nil || store.db == nil {
		return nil, errors.New("get invites by inviter: nil store")
	}
	trimmed := strings.TrimSpace(inviterUserID)
	if trimmed == "" {
		return nil, errors.New("get invites by inviter: inviter user id must not be empty")
	}

	stmt := pg.SELECT(inviteSelectColumns).
		FROM(pgtable.Invites).
		WHERE(pgtable.Invites.InviterUserID.EQ(pg.String(trimmed))).
		ORDER_BY(pgtable.Invites.CreatedAt.ASC(), pgtable.Invites.InviteID.ASC())

	return store.queryList(ctx, "get invites by inviter", stmt)
}

func (store *Store) queryList(ctx context.Context, operation string, stmt pg.SelectStatement) ([]invite.Invite, error) {
	operationCtx, cancel, err := sqlx.WithTimeout(ctx, operation, store.operationTimeout)
	if err != nil {
		return nil, err
	}
	defer cancel()

	query, args := stmt.Sql()
	rows, err := store.db.QueryContext(operationCtx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("%s: %w", operation, err)
	}
	defer rows.Close()

	records := make([]invite.Invite, 0)
	for rows.Next() {
		record, err := scanInvite(rows)
		if err != nil {
			return nil, fmt.Errorf("%s: scan: %w", operation, err)
		}
		records = append(records, record)
	}
	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("%s: %w", operation, err)
	}
	if len(records) == 0 {
		return nil, nil
	}
	return records, nil
}

// UpdateStatus applies one status transition with compare-and-swap on the
// current status column. When transitioning to redeemed the row's race_name
// column is replaced with the trimmed input value.
func (store *Store) UpdateStatus(ctx context.Context, input ports.UpdateInviteStatusInput) error {
	if store == nil || store.db == nil {
		return errors.New("update invite status: nil store")
	}
	if err := input.Validate(); err != nil {
		return fmt.Errorf("update invite status: %w", err)
	}
	if err := invite.Transition(input.ExpectedFrom, input.To); err != nil {
		return err
	}

	operationCtx, cancel, err := sqlx.WithTimeout(ctx, "update invite status", store.operationTimeout)
	if err != nil {
		return err
	}
	defer cancel()

	at := input.At.UTC()
	raceName := strings.TrimSpace(input.RaceName)

	// race_name is replaced only when the caller supplies a non-empty value;
	// otherwise the existing value is preserved (CASE WHEN '' THEN race_name).
	raceExpr := pg.CASE().
		WHEN(pg.String(raceName).EQ(pg.String(""))).THEN(pgtable.Invites.RaceName).
		ELSE(pg.String(raceName))

	stmt := pgtable.Invites.UPDATE(
		pgtable.Invites.Status,
		pgtable.Invites.DecidedAt,
		pgtable.Invites.RaceName,
	).SET(
		pg.String(string(input.To)),
		pg.TimestampzT(at),
		raceExpr,
	).WHERE(pg.AND(
		pgtable.Invites.InviteID.EQ(pg.String(input.InviteID.String())),
		pgtable.Invites.Status.EQ(pg.String(string(input.ExpectedFrom))),
	))

	query, args := stmt.Sql()
	result, err := store.db.ExecContext(operationCtx, query, args...)
	if err != nil {
		return fmt.Errorf("update invite status: %w", err)
	}
	affected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("update invite status: rows affected: %w", err)
	}
	if affected == 0 {
		probe := pg.SELECT(pgtable.Invites.Status).
			FROM(pgtable.Invites).
			WHERE(pgtable.Invites.InviteID.EQ(pg.String(input.InviteID.String())))
		probeQuery, probeArgs := probe.Sql()

		var current string
		row := store.db.QueryRowContext(operationCtx, probeQuery, probeArgs...)
		if err := row.Scan(&current); err != nil {
			if sqlx.IsNoRows(err) {
				return invite.ErrNotFound
			}
			return fmt.Errorf("update invite status: probe: %w", err)
		}
		return fmt.Errorf("update invite status: %w", invite.ErrConflict)
	}
	return nil
}

type rowScanner interface {
	Scan(dest ...any) error
}

func scanInvite(rs rowScanner) (invite.Invite, error) {
	var (
		inviteID      string
		gameID        string
		inviterUserID string
		inviteeUserID string
		raceName      string
		status        string
		createdAt     time.Time
		expiresAt     time.Time
		decidedAt     sql.NullTime
	)
	if err := rs.Scan(
		&inviteID,
		&gameID,
		&inviterUserID,
		&inviteeUserID,
		&raceName,
		&status,
		&createdAt,
		&expiresAt,
		&decidedAt,
	); err != nil {
		return invite.Invite{}, err
	}
	return invite.Invite{
		InviteID:      common.InviteID(inviteID),
		GameID:        common.GameID(gameID),
		InviterUserID: inviterUserID,
		InviteeUserID: inviteeUserID,
		RaceName:      raceName,
		Status:        invite.Status(status),
		CreatedAt:     createdAt.UTC(),
		ExpiresAt:     expiresAt.UTC(),
		DecidedAt:     sqlx.TimePtrFromNullable(decidedAt),
	}, nil
}

// Ensure Store satisfies the ports.InviteStore interface at compile time.
var _ ports.InviteStore = (*Store)(nil)
@@ -0,0 +1,199 @@
package invitestore_test

import (
	"context"
	"testing"
	"time"

	"galaxy/lobby/internal/adapters/postgres/gamestore"
	"galaxy/lobby/internal/adapters/postgres/internal/pgtest"
	"galaxy/lobby/internal/adapters/postgres/invitestore"
	"galaxy/lobby/internal/domain/common"
	"galaxy/lobby/internal/domain/game"
	"galaxy/lobby/internal/domain/invite"
	"galaxy/lobby/internal/ports"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestMain(m *testing.M) { pgtest.RunMain(m) }

func newStores(t *testing.T) (*gamestore.Store, *invitestore.Store) {
	t.Helper()
	pgtest.TruncateAll(t)
	gs, err := gamestore.New(gamestore.Config{
		DB: pgtest.Ensure(t).Pool(), OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	is, err := invitestore.New(invitestore.Config{
		DB: pgtest.Ensure(t).Pool(), OperationTimeout: pgtest.OperationTimeout,
	})
	require.NoError(t, err)
	return gs, is
}

func seedPrivateGame(t *testing.T, gs *gamestore.Store, id, ownerID string) game.Game {
	t.Helper()
	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	g, err := game.New(game.NewGameInput{
		GameID:              common.GameID(id),
		GameName:            "Private " + id,
		GameType:            game.GameTypePrivate,
		OwnerUserID:         ownerID,
		MinPlayers:          2,
		MaxPlayers:          6,
		StartGapHours:       12,
		StartGapPlayers:     2,
		EnrollmentEndsAt:    now.Add(7 * 24 * time.Hour),
		TurnSchedule:        "0 18 * * *",
		TargetEngineVersion: "v1.0.0",
		Now:                 now,
	})
	require.NoError(t, err)
	require.NoError(t, gs.Save(context.Background(), g))
	return g
}

func newInvite(t *testing.T, id, gameID, inviter, invitee string) invite.Invite {
	t.Helper()
	now := time.Date(2026, 4, 23, 12, 0, 0, 0, time.UTC)
	rec, err := invite.New(invite.NewInviteInput{
		InviteID:      common.InviteID(id),
		GameID:        common.GameID(gameID),
		InviterUserID: inviter,
		InviteeUserID: invitee,
		Now:           now,
		ExpiresAt:     now.Add(7 * 24 * time.Hour),
	})
	require.NoError(t, err)
	return rec
}

func TestSaveAndGet(t *testing.T) {
	ctx := context.Background()
	gs, is := newStores(t)
	seedPrivateGame(t, gs, "game-001", "owner-1")

	rec := newInvite(t, "invite-001", "game-001", "owner-1", "invitee-1")
	require.NoError(t, is.Save(ctx, rec))

	got, err := is.Get(ctx, rec.InviteID)
	require.NoError(t, err)
	assert.Equal(t, rec.InviteID, got.InviteID)
	assert.Equal(t, invite.StatusCreated, got.Status)
	assert.Equal(t, "invitee-1", got.InviteeUserID)
	assert.True(t, got.ExpiresAt.Equal(rec.ExpiresAt))
}

func TestSaveRejectsNonCreated(t *testing.T) {
	ctx := context.Background()
	gs, is := newStores(t)
	seedPrivateGame(t, gs, "game-001", "owner-1")

	rec := newInvite(t, "invite-001", "game-001", "owner-1", "invitee-1")
	rec.Status = invite.StatusRedeemed
	require.Error(t, is.Save(ctx, rec))
}

func TestSaveDuplicateReturnsConflict(t *testing.T) {
	ctx := context.Background()
	gs, is := newStores(t)
	seedPrivateGame(t, gs, "game-001", "owner-1")

	rec := newInvite(t, "invite-001", "game-001", "owner-1", "invitee-1")
	require.NoError(t, is.Save(ctx, rec))
	err := is.Save(ctx, rec)
	require.ErrorIs(t, err, invite.ErrConflict)
}

func TestUpdateStatusRedeemSetsRaceName(t *testing.T) {
	ctx := context.Background()
	gs, is := newStores(t)
	seedPrivateGame(t, gs, "game-001", "owner-1")

	rec := newInvite(t, "invite-001", "game-001", "owner-1", "invitee-1")
	require.NoError(t, is.Save(ctx, rec))

	require.NoError(t, is.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
		InviteID:     rec.InviteID,
		ExpectedFrom: invite.StatusCreated,
		To:           invite.StatusRedeemed,
		At:           rec.CreatedAt.Add(time.Minute),
		RaceName:     "PilotRedeemed",
	}))

	got, err := is.Get(ctx, rec.InviteID)
	require.NoError(t, err)
	assert.Equal(t, invite.StatusRedeemed, got.Status)
	assert.Equal(t, "PilotRedeemed", got.RaceName)
	require.NotNil(t, got.DecidedAt)
}

func TestUpdateStatusReturnsConflictOnExpectedFromMismatch(t *testing.T) {
	ctx := context.Background()
	gs, is := newStores(t)
	seedPrivateGame(t, gs, "game-001", "owner-1")

	rec := newInvite(t, "invite-001", "game-001", "owner-1", "invitee-1")
	require.NoError(t, is.Save(ctx, rec))

	// Move the row out of `created` so the next attempt's `WHERE status = ?`
	// fails on persistence even though the (created → revoked) transition is
	// itself valid in the domain table.
	require.NoError(t, is.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
		InviteID:     rec.InviteID,
		ExpectedFrom: invite.StatusCreated,
		To:           invite.StatusDeclined,
		At:           rec.CreatedAt.Add(time.Minute),
	}))
	err := is.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
		InviteID:     rec.InviteID,
		ExpectedFrom: invite.StatusCreated,
		To:           invite.StatusRevoked,
		At:           rec.CreatedAt.Add(2 * time.Minute),
	})
	require.ErrorIs(t, err, invite.ErrConflict)
}

func TestUpdateStatusReturnsNotFoundForMissing(t *testing.T) {
	ctx := context.Background()
	_, is := newStores(t)
	err := is.UpdateStatus(ctx, ports.UpdateInviteStatusInput{
		InviteID:     common.InviteID("invite-missing"),
		ExpectedFrom: invite.StatusCreated,
		To:           invite.StatusDeclined,
		At:           time.Now().UTC(),
	})
	require.ErrorIs(t, err, invite.ErrNotFound)
}

func TestGetByGameUserInviter(t *testing.T) {
	ctx := context.Background()
	gs, is := newStores(t)
	seedPrivateGame(t, gs, "game-001", "owner-1")
	seedPrivateGame(t, gs, "game-002", "owner-2")

	require.NoError(t, is.Save(ctx, newInvite(t, "invite-001", "game-001", "owner-1", "invitee-1")))
	require.NoError(t, is.Save(ctx, newInvite(t, "invite-002", "game-001", "owner-1", "invitee-2")))
	require.NoError(t, is.Save(ctx, newInvite(t, "invite-003", "game-002", "owner-2", "invitee-1")))

	g1, err := is.GetByGame(ctx, common.GameID("game-001"))
	require.NoError(t, err)
	assert.Len(t, g1, 2)

	user1, err := is.GetByUser(ctx, "invitee-1")
	require.NoError(t, err)
	assert.Len(t, user1, 2)

	by1, err := is.GetByInviter(ctx, "owner-1")
	require.NoError(t, err)
	assert.Len(t, by1, 2)
}

func TestGetMissingReturnsNotFound(t *testing.T) {
	ctx := context.Background()
	_, is := newStores(t)
	_, err := is.Get(ctx, common.InviteID("invite-missing"))
	require.ErrorIs(t, err, invite.ErrNotFound)
}
@@ -0,0 +1,22 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package model

import (
	"time"
)

type Applications struct {
	ApplicationID   string `sql:"primary_key"`
	GameID          string
	ApplicantUserID string
	RaceName        string
	Status          string
	CreatedAt       time.Time
	DecidedAt       *time.Time
}
@@ -0,0 +1,34 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package model

import (
	"time"
)

type Games struct {
	GameID              string `sql:"primary_key"`
	GameName            string
	Description         string
	GameType            string
	OwnerUserID         string
	Status              string
	MinPlayers          int32
	MaxPlayers          int32
	StartGapHours       int32
	StartGapPlayers     int32
	EnrollmentEndsAt    time.Time
	TurnSchedule        string
	TargetEngineVersion string
	CreatedAt           time.Time
	UpdatedAt           time.Time
	StartedAt           *time.Time
	FinishedAt          *time.Time
	RuntimeSnapshot     string
	RuntimeBinding      *string
}
@@ -0,0 +1,19 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package model

import (
	"time"
)

type GooseDbVersion struct {
	ID        int32 `sql:"primary_key"`
	VersionID int64
	IsApplied bool
	Tstamp    time.Time
}
@@ -0,0 +1,24 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package model

import (
	"time"
)

type Invites struct {
	InviteID      string `sql:"primary_key"`
	GameID        string
	InviterUserID string
	InviteeUserID string
	RaceName      string
	Status        string
	CreatedAt     time.Time
	ExpiresAt     time.Time
	DecidedAt     *time.Time
}
@@ -0,0 +1,23 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package model

import (
	"time"
)

type Memberships struct {
	MembershipID string `sql:"primary_key"`
	GameID       string
	UserID       string
	RaceName     string
	CanonicalKey string
	Status       string
	JoinedAt     time.Time
	RemovedAt    *time.Time
}
@@ -0,0 +1,20 @@
//
// Code generated by go-jet DO NOT EDIT.
//
// WARNING: Changes to this file may cause incorrect behavior
// and will be lost if the code is regenerated
//

package model

type RaceNames struct {
	CanonicalKey    string `sql:"primary_key"`
	GameID          string `sql:"primary_key"`
	HolderUserID    string
	RaceName        string
	BindingKind     string
	SourceGameID    string
	ReservedAtMs    int64
	EligibleUntilMs *int64
	RegisteredAtMs  *int64
}